Robust Regularized Least-Squares Beamforming Approach to Signal Estimation

Mohamed Suliman, Tarig Ballal, and Tareq Y. Al-Naffouri
King Abdullah University of Science and Technology (KAUST)
Emails: {mohamed.suliman, tarig.ahmed, tareq.alnaffouri}@kaust.edu.sa

1. Abstract

• We address the problem of robust adaptive beamforming of signals received by a linear array.
• The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of an ill-conditioned covariance matrix of the received signals. Secondly, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely.
• To tackle these two challenges, we manipulate the standard Capon beamformer into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors that are linearly related to the steering vector and the received signal snapshot. The linear operator, in both cases, is the square root of the covariance matrix.
• We propose a new regularized least-squares (RLS) approach to estimate these two vectors and to provide robustness without any prior information.

2. Background

• Consider the linear model
  r = A x + v,  (1)
  where A ∈ C^{m×n} is a Hermitian matrix and v is an AWGN vector with unknown variance σ_v².
• Estimating x using least squares (LS) leads to a solution that is very sensitive to perturbations in the data.
• To overcome this difficulty, regularization methods are frequently used. We are particularly interested in the RLS estimate
  x̂_RLS = (A^H A + γ I)^{-1} A^H r.  (2)
• Several methods have been proposed to select γ:
  – the L-curve,
  – generalized cross-validation (GCV),
  – the quasi-optimal criterion.

3. Proposed Beamforming Approach

• The output of a beamformer for an array with n_e elements, at time instant t, is
  y_BF[t] = w^H y[t],  (3)
  where w ∈ C^{n_e} is the weighting coefficients vector and y[t] ∈ C^{n_e} is the array observations (snapshots) vector.
• For the Capon/MVDR beamformer, w is given by
  w_MVDR = Ĉ_yy^{-1} a / (a^H Ĉ_yy^{-1} a),  (4)
  where a is the array steering vector and Ĉ_yy is the sample covariance matrix of y,
  Ĉ_yy = (1/n_s) Σ_{t=1}^{n_s} y[t] y[t]^H.  (5)
• The difficulty with the MVDR beamformer is due to the ill-conditioning of Ĉ_yy and the uncertainty in the steering vector a.
• Based on (3) and (4), we can write
  y_BF[t] = (a^H Ĉ_yy^{-1/2} Ĉ_yy^{-1/2} y) / (a^H Ĉ_yy^{-1/2} Ĉ_yy^{-1/2} a) = (b^H z) / (b^H b),  (6)
  where b ≜ Ĉ_yy^{-1/2} a and z ≜ Ĉ_yy^{-1/2} y.
• Equivalently, b and z can be thought of as the solutions of
  a = Ĉ_yy^{1/2} b  (7)
  and
  y = Ĉ_yy^{1/2} z.  (8)
• Since Ĉ_yy^{1/2} is ill-conditioned, direct inversion does not provide a viable solution.
• Given that a and y are noisy, we propose using a regularization algorithm to estimate b and z based on (7) and (8).
• Using (2) with A = Ĉ_yy^{1/2} and the eigenvalue decomposition (EVD) Ĉ_yy = U Σ² U^H, the beamformer output using RLS takes the form
  y_BF-RLS = [a^H U (Σ² + γ_b I)^{-1} (Σ² + γ_z I)^{-1} Σ² U^H y] / [a^H U (Σ² + γ_b I)^{-2} Σ² U^H a],  (9)
  where γ_b and γ_z are the regularization parameters pertaining to the linear systems (7) and (8), respectively.
• Equation (9) suggests that the weighting coefficients are given by
  w_BF-RLS = [a^H U (Σ² + γ_b I)^{-1} (Σ² + γ_z I)^{-1} Σ² U^H] / [a^H U (Σ² + γ_b I)^{-2} Σ² U^H a].  (10)
• Existing regularization methods can be used to find γ_b and γ_z in (10); a small numerical sketch of (9) is given below.
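To make (9) concrete, here is a minimal NumPy sketch that forms the sample covariance in (5), takes its EVD, and evaluates the RLS-regularized beamformer output for given regularization parameters. The array sizes, the toy snapshots, and the values of gamma_b and gamma_z are illustrative assumptions, not values prescribed by the poster.

```python
import numpy as np

def rls_beamformer_output(Y, a, gamma_b, gamma_z):
    """Evaluate the RLS-regularized beamformer output of Eq. (9).

    Y : (n_e, n_s) matrix of array snapshots y[t]
    a : (n_e,) presumed steering vector
    gamma_b, gamma_z : regularization parameters for (7) and (8)
    """
    n_e, n_s = Y.shape
    C = (Y @ Y.conj().T) / n_s                      # sample covariance, Eq. (5)
    Sig2, U = np.linalg.eigh(C)                     # EVD: C = U diag(Sig2) U^H
    d_a = U.conj().T @ a                            # U^H a
    D_y = U.conj().T @ Y                            # U^H y[t] for all snapshots
    w = (Sig2 / ((Sig2 + gamma_b) * (Sig2 + gamma_z)))[:, None]
    num = (d_a.conj()[:, None] * w * D_y).sum(axis=0)          # numerator of (9)
    den = np.sum(np.abs(d_a) ** 2 * Sig2 / (Sig2 + gamma_b) ** 2)  # denominator of (9)
    return num / den                                # beamformer outputs per snapshot

# toy usage with arbitrary placeholder values
rng = np.random.default_rng(0)
n_e, n_s = 10, 30
Y = (rng.standard_normal((n_e, n_s)) + 1j * rng.standard_normal((n_e, n_s))) / np.sqrt(2)
a = np.exp(-1j * np.pi * np.arange(n_e) * np.sin(np.deg2rad(20.0)))
print(rls_beamformer_output(Y, a, gamma_b=0.1, gamma_z=0.1)[:3])
```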
• We introduce a new regularization approach, the MVDR constrained perturbation regularization approach (MVDR-COPRA), which exploits the eigenvalue structure of Ĉ_yy^{1/2} in order to find γ_b and γ_z in (10).
• To this end, we replace A in (1) by Ĉ_yy^{1/2} to obtain the model
  r = Ĉ_yy^{1/2} x + v.  (11)

4. Proposed MVDR-COPRA

• As a form of regularization, we allow a perturbation Δ in Ĉ_yy^{1/2}.
• This perturbation is aimed at improving the eigenvalue structure of Ĉ_yy^{1/2}.
• To balance improving the eigenvalue structure against maintaining the fidelity of the model in (11), we add the constraint ||Δ||_2 ≤ λ, λ ∈ R^+.
• Thus, (11) is modified to
  r ≈ (Ĉ_yy^{1/2} + Δ) x + v.  (12)
• Assuming that we know the best choice of λ, we consider minimizing the worst-case residual of (12):
  min_{x̂} max_{Δ} ||r − (Ĉ_yy^{1/2} + Δ) x̂||_2  subject to  ||Δ||_2 ≤ λ.  (13)
• It can be shown that (13) is equivalent to
  min_{x̂} ||r − Ĉ_yy^{1/2} x̂||_2 + λ ||x̂||_2.  (14)
• The solution to (14) is given by
  x̂ = (Ĉ_yy + γ I)^{-1} Ĉ_yy^{1/2} r,  (15)
  where γ is obtained by solving
  G(γ) = y^T U (Σ² − λ² I)(Σ² + γ I)^{-2} U^T y = 0.  (16)
• The solution requires knowledge of λ, which we do not know.
• Taking the expectation of (16) and manipulating, we obtain
  λ_o² = [σ_v² tr(Σ²(Σ² + γ_o I)^{-2}) + tr(Σ²(Σ² + γ_o I)^{-2} Σ² U^H C_xx U)] / [σ_v² tr((Σ² + γ_o I)^{-2}) + tr((Σ² + γ_o I)^{-2} Σ² U^H C_xx U)].  (17)
• Divide Σ into n_1 large and n_2 small eigenvalues: write Σ = diag(Σ_1, Σ_2) and U = [U_1 U_2], where
  – Σ_1 ∈ C^{n_1×n_1} (large eigenvalues),
  – Σ_2 ∈ C^{n_2×n_2} (small eigenvalues),
  – ||Σ_2||_2 ≪ ||Σ_1||_2.
• Applying this partitioning to (17), with some manipulations and reasonable approximations, we obtain
  λ_o² ≈ tr[Σ_1²(Σ_1² + γ_o I_1)^{-2}(Σ_1² + (n_1 σ_v²/tr(C_xx)) I_1)] / { tr[(Σ_1² + γ_o I_1)^{-2}(Σ_1² + (n_1 σ_v²/tr(C_xx)) I_1)] + n_2 γ_o² / (n_1 σ_v²/tr(C_xx)) }.  (18)
• Problem: λ_o depends on σ_v² and C_xx, which are not known.
• We apply the MSE criterion to eliminate this dependency and to set λ_o so that it approximately minimizes the MSE.

5. Minimizing the MSE

• The MSE of an estimate x̂ of x is defined as
  MSE = tr{E[(x̂ − x)(x̂ − x)^H]}.  (19)
• We manipulate the MSE into the form
  MSE = σ_v² tr(Σ²(Σ² + γ I)^{-2}) + γ² tr((Σ² + γ I)^{-2} U^H C_xx U).  (20)
• Setting the first derivative to zero gives
  ∂(MSE)/∂γ = −2σ_v² tr(Σ²(Σ² + γ I)^{-3}) + 2γ tr(Σ²(Σ² + γ I)^{-3} U^H C_xx U) = 0.  (21)
• Solving (21) does not provide a closed-form expression for γ_o. Using an approximation, we obtain
  γ_o ≈ n_e σ_v² / tr(C_xx).  (22)
• Using (22), we can replace n_1 σ_v²/tr(C_xx) in (18) by (n_1/n_e) γ_o.
• Substituting this result in (18), and then substituting (18) in (16), yields the MVDR-COPRA characteristic equation
  S(γ_o) = tr(Σ²(Σ² + γ_o I)^{-2} d d^H) tr((Σ_1² + γ_o I_1)^{-2}(β Σ_1² + γ_o I_1)) − tr((Σ² + γ_o I)^{-2} d d^H) tr(Σ_1²(Σ_1² + γ_o I_1)^{-2}(β Σ_1² + γ_o I_1)) + n_2 γ_o tr(Σ²(Σ² + γ_o I)^{-2} d d^H) = 0,  (23)
  where d ≜ U^T y and β ≜ n_e/n_1.

6. Simulation Results

• Setup:
  – Uniform linear array with 10 elements spaced at half the wavelength of the signal of interest, and two interfering signals.
  – The directions of arrival (DOA) of the signal of interest and of the interference are generated from a uniform distribution over the interval [−90°, 90°].
  – The steering vector a is calculated from the true DOA of the signal of interest plus a uniformly distributed error in the interval [−5°, 5°] (a minimal sketch of this setup follows below).
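The poster does not spell out the exact signal model used in its simulations, so the following is only a hedged sketch of one way to generate the half-wavelength ULA steering vector with DOA mismatch and the snapshot matrix used to form Ĉ_yy; the interferer and noise powers and the random seed are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_e = 10                                   # ULA with 10 elements, lambda/2 spacing

def ula_steering(theta_deg, n_e):
    """Steering vector of a half-wavelength ULA for DOA theta (degrees)."""
    k = np.arange(n_e)
    return np.exp(-1j * np.pi * k * np.sin(np.deg2rad(theta_deg)))

theta_true = rng.uniform(-90.0, 90.0)      # true DOA of the signal of interest
theta_err = rng.uniform(-5.0, 5.0)         # uniformly distributed steering error
a_presumed = ula_steering(theta_true + theta_err, n_e)   # mismatched steering vector

# snapshots: signal of interest + two interferers + AWGN (assumed unit source powers)
n_s = 30
thetas_int = rng.uniform(-90.0, 90.0, size=2)
A = np.column_stack([ula_steering(theta_true, n_e)] +
                    [ula_steering(t, n_e) for t in thetas_int])
S = (rng.standard_normal((3, n_s)) + 1j * rng.standard_normal((3, n_s))) / np.sqrt(2)
N = (rng.standard_normal((n_e, n_s)) + 1j * rng.standard_normal((n_e, n_s))) / np.sqrt(2)
Y = A @ S + 0.1 * N                        # snapshot matrix used to form C_yy
```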
Figure 1: Output SINR vs. input SNR for n_s = 30 (methods: Optimal, MVDR-COPRA, RAB MVDR, RAB SDP, LCMV, RVO LCMV, Quasi MVDR).

Figure 2: Output SINR vs. number of snapshots at SNR = 20 dB (same set of methods).

7. Conclusions

• The robust MVDR beamforming problem is converted into a pair of linear estimation problems with ill-conditioned matrices, and a new regularization method is proposed to solve these problems.
• Simulations demonstrate that the proposed approach outperforms a number of benchmark methods in terms of SINR.


Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques

Alexander Litvinenko¹, Wolfgang Nowak²
¹ CEMSE Division, King Abdullah University of Science and Technology; ² SimTech, Universität Stuttgart, Germany
Emails: [email protected], [email protected]


Abstract

Kriging algorithms based on FFT, on the separability of certain covariance functions, and on low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all of them. The reduced computational complexity is O(d L log L), where L := max_i n_i, i = 1..d. For separable covariance functions the results are exact, and non-separable covariance functions can be approximated through sums of separable components. The speedup factor is about 10^8, with problem sizes of 15 · 10^12 and 2 · 10^15 estimation points for Kriging and spatial design, respectively.

1. Kriging

Task 1: Let m be the number of measurement points and n the number of estimation points. Let s ∈ R^n be the kriging vector to be estimated, with mean μ_s = 0 and covariance matrix Q_ss ∈ R^{n×n}:

s = Q_sy Q_yy^{-1} y = Q_sy ξ, with ξ := Q_yy^{-1} y,

where y ∈ R^m is the vector of measurement values, Q_sy ∈ R^{n×m} the cross-covariance matrix, and Q_yy ∈ R^{m×m} the auto-covariance matrix.

Task 2: Estimation of the variance σ_s ∈ R^n. Let Q_{ss|y} be the conditional covariance matrix. Then

σ_s = diag(Q_{ss|y}) = diag(Q_ss − Q_sy Q_yy^{-1} Q_ys)
    = diag(Q_ss) − Σ_{i=1}^{m} [(Q_sy Q_yy^{-1} e_i) ∘ Q_ys^T(i)],

where Q_ys^T(i) is the transpose of the i-th row of Q_ys and ∘ denotes the elementwise product.
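To make Tasks 1 and 2 concrete, the small dense NumPy sketch below evaluates s = Q_sy Q_yy^{-1} y and diag(Q_{ss|y}) directly; the exponential covariance model, the nugget term, and the sizes are placeholder assumptions. The FFT/low-rank machinery described in the rest of the poster accelerates exactly these products.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 15                                  # estimation and measurement points
xs = np.linspace(0.0, 1.0, n)                   # estimation grid
idx = rng.choice(n, size=m, replace=False)      # measurement locations (subset of grid)

def cov(x1, x2, ell=0.1):
    """Assumed exponential covariance model (placeholder choice)."""
    return np.exp(-np.abs(x1[:, None] - x2[None, :]) / ell)

Qss = cov(xs, xs)
Qsy = Qss[:, idx]                               # cross-covariance  Q_sy = Q_ss H^T
Qyy = Qss[np.ix_(idx, idx)] + 1e-6 * np.eye(m)  # auto-covariance (+ tiny nugget R)

y = rng.multivariate_normal(np.zeros(m), Qyy)   # synthetic measurements
xi = np.linalg.solve(Qyy, y)                    # xi = Q_yy^{-1} y
s_hat = Qsy @ xi                                # Task 1: kriging estimate

# Task 2: estimation variance, sigma_s = diag(Q_ss - Q_sy Q_yy^{-1} Q_ys)
sigma_s = np.diag(Qss) - np.einsum('ij,ij->i', Qsy, np.linalg.solve(Qyy, Qsy.T).T)
print(s_hat[:3], sigma_s[:3])
```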

Task 3: Geostatistical optimal design. The goal is to optimize the sampling patterns from which the data values in y are to be obtained. The two most common objective functions to be minimized are

φ_A = n^{-1} trace[Q_{ss|y}]

and

φ_C = c^T Q_{ss|y} c = c^T (Q_ss − Q_sy Q_yy^{-1} Q_ys) c = σ_z² − (c^T Q_sy) Q_yy^{-1} (Q_ys c),

with σ_z² = c^T Q_ss c.
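A self-contained toy evaluation of the two design criteria φ_A and φ_C (again with an assumed covariance model and an assumed averaging vector c) could look as follows:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 120, 10
xs = np.linspace(0.0, 1.0, n)
idx = rng.choice(n, size=m, replace=False)
Qss = np.exp(-np.abs(xs[:, None] - xs[None, :]) / 0.1)    # assumed covariance model
Qsy = Qss[:, idx]
Qyy = Qss[np.ix_(idx, idx)] + 1e-6 * np.eye(m)

Qss_y = Qss - Qsy @ np.linalg.solve(Qyy, Qsy.T)           # conditional covariance Q_ss|y
phi_A = np.trace(Qss_y) / n                               # A-criterion: n^{-1} trace(Q_ss|y)

c = np.ones(n) / n                                        # assumed weighting vector c
sigma_z2 = c @ Qss @ c
phi_C = sigma_z2 - (c @ Qsy) @ np.linalg.solve(Qyy, Qsy.T @ c)   # C-criterion
print(phi_A, phi_C)
```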

Any Toeplitz covariance matrix Q_ss ∈ R^{n×n} (with first column denoted by q) can be embedded in a larger circulant matrix Q̄ (with first column denoted by q̄).

Sampling and injection: Consider the m × n sampling matrix H with

H_{i,j} = 1 if x_i = x_j and 0 otherwise,

where x_i are the coordinates of the i-th measurement location in y, and x_j are the coordinates of the j-th estimation point in s. Then

sampling: ξ_{m×1} = H u_{n×1},
injection: u_{n×1} = H^T ξ_{m×1},
Q_ys = H Q_ss and Q_sy = Q_ss H^T,
Q_yy = H Q_ss H^T + R,
Q_sy ξ = Q_ss H^T ξ = Q_ss u, with u := H^T ξ.

Embedding and extraction: Let M map the entries of the finite domain onto the periodic embedding domain; M has a single unit entry per column. The embedded Toeplitz matrix Q_ss is extracted from the embedding circulant matrix Q̄ as

Q_ss = M^T Q̄ M,  q = M^T q̄.

The kriging vector can then be estimated efficiently via the d-dimensional FFT:

(Q_sy ξ =) Q_ss u = M^T Q̄ M u = M^T F^{[-d]}( F^{[d]}(q̄) ∘ F^{[d]}(M u) ).
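The following 1-D NumPy sketch illustrates the embedding idea: a symmetric Toeplitz covariance matrix is embedded in a circulant matrix whose action reduces to an FFT-based circular convolution. The exponential first column q and the sizes are assumptions made only for this demonstration.

```python
import numpy as np

n = 64
q = np.exp(-np.arange(n) / 8.0)                    # first column of Toeplitz Q_ss (assumed model)
i = np.arange(n)
Qss = q[np.abs(i[:, None] - i[None, :])]           # symmetric Toeplitz covariance

# circulant embedding: first column of the periodic extension (length 2n - 2)
qc = np.concatenate([q, q[1:-1][::-1]])

def circ_matvec(qc, u_emb):
    """Multiply the circulant matrix with first column qc by u_emb via FFT."""
    return np.fft.ifft(np.fft.fft(qc) * np.fft.fft(u_emb)).real

u = np.random.default_rng(4).standard_normal(n)
u_emb = np.concatenate([u, np.zeros(len(qc) - n)]) # M u: inject into the periodic domain
v = circ_matvec(qc, u_emb)[:n]                     # M^T Qbar M u: extract the first n entries
print(np.allclose(v, Qss @ u))                     # matches the dense product
```

The same pattern carries over dimension by dimension in d dimensions, which is where the O(d L log L) complexity quoted in the abstract comes from.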

2. Low-rank kriging via FFT

Lemma 1. Let u = Σ_{j=1}^{k_u} ⊗_{i=1}^{d} u_{ji}, where u ∈ R^n, u_{ji} ∈ R^{n_i} and n = Π_{i=1}^{d} n_i. Then the d-dimensional Fourier transform F^{[d]}(u) is

F^{[d]}(u) = Σ_{j=1}^{k_u} ⊗_{i=1}^{d} F_i(u_{ji}).
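Lemma 1 can be checked numerically in a few lines for d = 2 and a single rank-1 term (sizes chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(5)
n1, n2 = 6, 5
u1, u2 = rng.standard_normal(n1), rng.standard_normal(n2)
u = np.kron(u1, u2)                                   # u = u1 (x) u2

lhs = np.fft.fftn(u.reshape(n1, n2)).ravel()          # d-dimensional FFT of u
rhs = np.kron(np.fft.fft(u1), np.fft.fft(u2))         # Kronecker product of 1-D FFTs
print(np.allclose(lhs, rhs))                          # Lemma 1 for a rank-1 term
```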

Lemma 2. Let u and q have CP representations. Then

u ∘ q = (Σ_{j=1}^{k_u} ⊗_{i=1}^{d} u_{ji}) ∘ (Σ_{ℓ=1}^{k_q} ⊗_{i=1}^{d} q_{ℓi}) = Σ_{j=1}^{k_u} Σ_{ℓ=1}^{k_q} ⊗_{i=1}^{d} (u_{ji} ∘ q_{ℓi}),

and

F^{[d]}(u ∘ q) = Σ_{j=1}^{k_u} Σ_{ℓ=1}^{k_q} ⊗_{i=1}^{d} F_i(u_{ji} ∘ q_{ℓi}).
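Likewise, Lemma 2 (the Hadamard product of two CP representations is again a CP representation, with the ranks multiplied) can be verified for d = 2 with small, arbitrarily chosen factors:

```python
import numpy as np

rng = np.random.default_rng(6)
n1, n2, ku, kq = 4, 3, 2, 3
U = [(rng.standard_normal(n1), rng.standard_normal(n2)) for _ in range(ku)]
Q = [(rng.standard_normal(n1), rng.standard_normal(n2)) for _ in range(kq)]

u = sum(np.kron(a, b) for a, b in U)                  # CP representation of u
q = sum(np.kron(a, b) for a, b in Q)                  # CP representation of q

# Lemma 2: the elementwise product has CP rank at most ku * kq
uq = sum(np.kron(a1 * a2, b1 * b2) for a1, b1 in U for a2, b2 in Q)
print(np.allclose(u * q, uq))
```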

2.1 Accuracy

If ||q − q^{(k_q)}||_F ≤ ε_q and ||q − q^{(k_q)}||_F / ||q||_F ≤ ε_rel,q, then

||Q_ss − Q_ss^{(k_q)}||_F ≤ √n ε_q and ||Q_ss − Q_ss^{(k_q)}||_F / ||Q_ss||_F ≤ ε_rel,q.

2.2 d-dimensional embedding/extraction

Since M^{[d]} = ⊗_{i=1}^{d} M_i, we have

ū = M^{[d]} u = (⊗_{i=1}^{d} M_i) Σ_{j=1}^{k_u} ⊗_{i=1}^{d} u_{ji} = Σ_{j=1}^{k_u} ⊗_{i=1}^{d} M_i u_{ji} =: Σ_{j=1}^{k_u} ⊗_{i=1}^{d} ū_{ji}.
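In code, one possible realization of each per-dimension map M_i is zero-padding into the periodic length, with M_i^T acting as truncation; this assumes the original domain occupies the leading entries of the embedding, which is a choice made here for illustration only.

```python
import numpy as np

def embed(u_i, N_i):
    """M_i u_i: zero-pad a length-n_i vector into the periodic length N_i."""
    out = np.zeros(N_i, dtype=u_i.dtype)
    out[: len(u_i)] = u_i
    return out

def extract(v_i, n_i):
    """M_i^T v_i: keep the first n_i entries of the embedded vector."""
    return v_i[:n_i]

# example: embed a length-5 vector into a periodic length of 8, then extract it back
v = embed(np.arange(5.0), 8)
print(extract(v, 5))
```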

2.3 Low-rank kriging estimate Q_ss u

Q_ss u ≈ Q_ss^{(k_q)} u^{(k_u)} = Σ_{ℓ=1}^{k_q} Σ_{j=1}^{k_u} ⊗_{i=1}^{d} M_i^T F_i^{-1}[ (F_i q̄_{ℓi}) ∘ (F_i ū_{ji}) ],

with accuracy

||Q_ss u − Q_ss^{(k_q)} u^{(k_u)}||_F ≤ √n ε_q ||u|| + ||Q_ss|| · ε_u.

The total cost is O(k_u k_q d L* log L*) instead of O(n log n), with L* = max_{i=1..d} n_i and n = Π_{i=1}^{d} n_i.

The kriging estimator

s = Q_sy ξ = Q_ss H^T ξ = M^T Q̄ M H^T ξ

can be written in the CP tensor format

s = Q_sy ξ = Σ_{j=1}^{m} ξ_j Σ_{ℓ=1}^{k_q} ⊗_{i=1}^{d} M_i^T F_i^{-1}[ (F_i q̄_{ℓi}) ∘ (F_i h̄_{ji}) ].
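The sketch below assembles these pieces for d = 2 with a rank-1 (separable) covariance and a rank-1 vector u, checking the per-dimension FFT product M_i^T F_i^{-1}[(F_i q̄_{ℓi}) ∘ (F_i ū_{ji})] against the dense Toeplitz product. The covariance model and sizes are placeholders; the general case simply sums such terms over ℓ = 1..k_q and j = 1..k_u.

```python
import numpy as np

def toeplitz_fft_matvec(q_i, u_i):
    """One dimension: M_i^T F_i^{-1}[(F_i qbar_i) o (F_i M_i u_i)]."""
    qc = np.concatenate([q_i, q_i[1:-1][::-1]])                   # circulant first column
    u_emb = np.concatenate([u_i, np.zeros(len(qc) - len(u_i))])   # M_i u_i (zero-padding)
    return np.fft.ifft(np.fft.fft(qc) * np.fft.fft(u_emb)).real[: len(u_i)]

rng = np.random.default_rng(7)
n1, n2 = 8, 7
q1 = np.exp(-np.arange(n1) / 3.0)                          # assumed separable covariance:
q2 = np.exp(-np.arange(n2) / 2.0)                          # Q_ss = T(q1) kron T(q2), k_q = 1
u1, u2 = rng.standard_normal(n1), rng.standard_normal(n2)  # rank-1 CP vector u = u1 kron u2

# low-rank / FFT evaluation of Q_ss u, dimension by dimension
s_fft = np.kron(toeplitz_fft_matvec(q1, u1), toeplitz_fft_matvec(q2, u2))

# dense reference
i1, i2 = np.arange(n1), np.arange(n2)
T1 = q1[np.abs(i1[:, None] - i1[None, :])]
T2 = q2[np.abs(i2[:, None] - i2[None, :])]
print(np.allclose(s_fft, np.kron(T1, T2) @ np.kron(u1, u2)))
```

Only 1-D FFTs of the per-dimension factors are ever taken, which is what keeps the cost at O(k_u k_q d L* log L*) rather than O(n log n).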

3. Numerics: CPU time and storage

Figure 1: CPU time of the four different methods depending on the number of lattice points in a rectangular domain.

Figure 2: Memory requirements of the four different methods depending on the number of lattice points in a rectangular domain.

Example: concentration of an ore mineral

Domain: 20 m × 20 m × 20 m, n = 25000³ dofs, m = 4000 measurements randomly distributed within the volume, with increasing data density towards the lower left back corner of the domain. The covariance model is anisotropic Gaussian with unit variance, with 32 correlation lengths fitting into the domain in the horizontal directions and 64 correlation lengths fitting into the vertical direction.

Figure 3: The top left panel shows the entire domain at a sampling rate of 1:64 per direction, followed by a series of zooms into the lower left back corner with zoom factors (sampling rates) of 4 (1:16), 16 (1:4), and 64 (1:1) for the top right, bottom left, and bottom right plots, respectively. Color scale: the 95% confidence interval [μ − 2σ, μ + 2σ].

4. Conclusion

We develop new algorithms for large-scale Kriging problems (including the estimation variance and measures for the optimality of sampling patterns), combining low-rank tensor approximations with existing fast methods based on the FFT. The computational cost is O(k_q m d L* log L*), where k_q is the rank, m the number of measurements, d the dimension, and L* = max(n_i) with n = Π_{i=1}^{d} n_i. The memory cost is O([k_q + m] d L), where L = Σ_{i=1}^{d} n_i.

• 2D Kriging with 2.7 · 10^7 estimation points and 100 measurement values takes 0.25 sec.,
• the estimation variance takes < 1 sec.,
• the spatial average of the estimation variance (the A-criterion of geostatistical optimal design) for 2 · 10^12 estimation points takes 30 sec.,
• the C-criterion of geostatistical optimal design for 2 · 10^15 estimation points takes 30 sec.,
• a 3D Kriging problem with 15 · 10^12 estimation points and 4000 measurement data values takes 20 sec.

Acknowledgements

A. Litvinenko is a member of the KAUST SRI Center for Uncertainty Quantification in Computational Science and Engineering. Additionally, this research has been funded by the Cluster of Excellence in Simulation Technology (EXC 310/1) at the University of Stuttgart and by the joint DFG project (NO 805/3-1 and MA 2236/16-1).

