
A heuristic search algorithm for the multiple measurement vectors problem

Xinpeng Du a (corresponding author), Lizhi Cheng b,c, Guangquan Cheng d

a State Key Laboratory of Astronautic Dynamics, Xi'an Satellite Control Center, Xi'an, 710043 Shanxi, China
b Department of Mathematics and System Science, College of Science, National University of Defense Technology, Changsha, 410073 Hunan, China
c The State Key Laboratory for High Performance Computation, National University of Defense Technology, Changsha, 410073 Hunan, China
d Science and Technology on Information Systems Engineering Laboratory, College of Information System and Management, National University of Defense Technology, Changsha, 410073 Hunan, China

Article info

Article history: Received 2 April 2013; Received in revised form 15 December 2013; Accepted 2 January 2014; Available online 10 January 2014

Keywords: Multiple measurement vectors; Compressed sensing; Heuristic search; Simulated annealing; Greedy pursuit


Abstract

In this paper, we address the multiple measurement vectors problem, which is now a hot topic in compressed sensing theory and its various applications. We propose a novel heuristic search algorithm called HSAMMV to solve the problem, which is modeled as a combinatorial optimization. HSAMMV is proposed in the framework of the simulated annealing algorithm. The main innovation is to take advantage of some greedy pursuit algorithms for designing the initial solution and the generating mechanism of HSAMMV. Compared with some state-of-the-art algorithms, the numerical simulation results illustrate that HSAMMV has strong global search ability and quite good recovery performance.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

The sparse recovery problem is of great importance in compressed sensing (CS) theory [1-3], and it focuses on finding the sparsest possible solution of the underdetermined linear system of equations $Ax = b$ for a given matrix $A \in \mathbb{R}^{m \times n}$ ($m < n$) and vector $b \in \mathbb{R}^m$. The problem can be written as

$$\min_x \|x\|_0 \quad \text{s.t.} \quad Ax = b \qquad (1)$$

where $\|x\|_0$ denotes the number of nonzero entries of $x$. There is only a single measurement (i.e., the vector $b$) in problem (1), so it is referred to as the single measurement vector (SMV) problem. A natural extension of SMV, multiple measurement vectors (MMV, also called joint sparse recovery), has attracted increasing attention from the research community and can be used in source localization [4-6], multi-task learning [7] and neuro-magnetic imaging [8], among other applications. In MMV, we are given multiple measurements $B \in \mathbb{R}^{m \times l}$ with the number of snapshots $l > 1$, and we aim to solve the linear system of equations $AX = B$ in which $X$ is assumed to be jointly sparse (i.e., only a few rows are nonzero). The noiseless MMV problem can be modeled as

$$\min_X |R(X)| \quad \text{s.t.} \quad AX = B \qquad (2)$$

where $R(X) \triangleq \{1 \le i \le n \mid X_{i,\cdot} \ne 0\}$ denotes the row support of $X$, $X_{i,\cdot}$ denotes the $i$-th row of $X$, and $|\cdot|$ represents the cardinality of a set.

In [8-10], the authors proved that a solution $X$ of $AX = B$ is the unique solution of (2) if
$$|R(X)| < \frac{\mathrm{spark}(A) + \mathrm{rank}(B) - 1}{2} \qquad (3)$$


where the quantity $\mathrm{spark}(A)$ denotes the smallest number of columns of $A$ that are linearly dependent.

The model (2) is noiseless and therefore oversimplified: in practice, modeling error and measurement error can rarely be avoided. We therefore modify the model with an additive noise term to handle both situations. An MMV problem with additive noise can be stated as

$$\min_X |R(X)| \quad \text{s.t.} \quad \|AX - B\|_F \le \varepsilon \qquad (4)$$

where $B = AX + N$, $N$ is the additive noise and $\varepsilon \ge 0$ is the error bound. Obviously, problem (2) is the particular case of problem (4) with $N = 0$.
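As a concrete illustration of this setup (our own sketch, not part of the original paper; it assumes NumPy and the generic property spark(A) = m + 1 for Gaussian A, which the simulation settings in Section 3 also rely on), the following Python snippet builds a jointly sparse instance of AX = B and checks the sufficient condition (3):

```python
import numpy as np

# Build a random jointly sparse MMV instance and test condition (3).
rng = np.random.default_rng(0)
m, n, l, K = 20, 100, 4, 10
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                  # unit l2-norm columns
support = np.sort(rng.choice(n, K, replace=False))
X = np.zeros((n, l))
X[support, :] = rng.standard_normal((K, l))     # only K nonzero rows
B = A @ X                                       # noiseless measurements, problem (2)

spark_A = m + 1                                 # generic value for Gaussian A (assumption)
bound = (spark_A + np.linalg.matrix_rank(B) - 1) / 2
print("|R(X)| =", K, "< bound =", bound, "->", K < bound)
```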

Problem (4) is NP-hard in general [2], and many efficient algorithms have been proposed to solve it, such as l1-SVD [4], M-FOCUSS (FOCal Underdetermined System Solver for MMV) [8], M-OMP (Orthogonal Matching Pursuit for MMV) [9], ReMBo (Reduce MMV and Boost) [10], RA-ORMP (Rank Aware Order Recursive Matching Pursuit) [11], CS-MUSIC (Compressive MUSIC) [12], RPMB (Randomly Project MMV and Boost) [13], the q-thresholding algorithm ($q \ge 1$) [14], SA-MUSIC (Subspace-Augmented MUSIC) [15], T-MSBL (a Temporal extension of the Sparse Bayesian Learning algorithm for MMV) [16], AMPMMV (Approximate Message Passing based MMV algorithm) [17], and ZAPMMV (Zero-point Attracting Projection algorithm for MMV) [18].

Although the existing algorithms can achieve satisfactory recovery under specific conditions, they perform well only when the number of snapshots is relatively large or the sparsity level is relatively small [12,13]. Moreover, many of the existing algorithms, such as M-OMP, M-FOCUSS and l1-SVD, perform unsatisfactorily in the rank-defective case (i.e., $\mathrm{rank}(X) < |R(X)|$ [15]).

One main reason for the aforementioned shortcomings is that the existing MMV algorithms often return suboptimal solutions of problem (4). The sparse recovery problem is essentially a combinatorial optimization [2], and the simulated annealing (SA) algorithm [19] is known to be very efficient at finding global optima of combinatorial optimization problems. Therefore, in this paper we take advantage of SA to propose a new MMV algorithm that addresses these shortcomings. We first model the MMV problem as a combinatorial optimization, and then propose a novel heuristic search algorithm, termed HSAMMV, to solve the modeled problem based on SA and some existing CS algorithms. In HSAMMV, the initial solution is designed using the q-thresholding algorithm ($q \ge 1$) [14], and the generating mechanism is designed using the pruning technique used in SP (Subspace Pursuit) [20] and CoSaMP (Compressive Sampling Matching Pursuit) [21]. Compared with some state-of-the-art algorithms, the numerical simulation results illustrate that HSAMMV has strong global search ability and quite good recovery performance. Specifically, HSAMMV still performs well when the number of snapshots is relatively small or the sparsity level is relatively large, it is effective in the rank-defective case, and its performance is robust to the sparsity level. In a word, HSAMMV can overcome the aforementioned shortcomings of the existing MMV algorithms to some extent.

Throughout the paper, we use the following notation. For any matrix $M \in \mathbb{R}^{m \times n}$, $M_{i,\cdot}$ denotes the $i$-th row of $M$, and $M_{\cdot,j}$ denotes the $j$-th column of $M$. The row support of $M$ is defined as $R(M) \triangleq \{1 \le i \le n \mid M_{i,\cdot} \ne 0\}$. For a matrix $H$ with full column rank, its pseudo-inverse is defined by $H^{\dagger} = (H^T H)^{-1} H^T$, where $T$ denotes matrix transposition. $|\cdot|$ represents the cardinality of a set, and $\lfloor \cdot \rfloor$ denotes the flooring operation for a real number (i.e., $\lfloor \varsigma \rfloor$ equals the nearest integer less than or equal to $\varsigma$). The matrix $M$ is called $K$-jointly sparse if $|R(M)| \le K < m$, where $K$ is called the sparsity level of $M$. $\|M\|_F$ denotes the Frobenius norm of the matrix $M$, and $\|x\|_q$ ($q \ge 1$) represents the $\ell_q$ norm of the vector $x \in \mathbb{R}^n$. Suppose $G \subseteq \{1,2,\dots,n\}$ is a nonempty subset; then the vector $x_G$ consists of the entries indexed by $i \in G$, the matrix $M_G$ is composed of the columns $\{M_{\cdot,j}\}_{j \in G}$, and the matrix $M_{G,\cdot}$ is composed of the rows $\{M_{j,\cdot}\}_{j \in G}$.
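The following small Python helpers (ours; the function names are illustrative) show how this notation translates into code: the row support $R(M)$, the column submatrix $M_G$, the row submatrix $M_{G,\cdot}$, and the pseudo-inverse of a full-column-rank matrix.

```python
import numpy as np

def row_support(M, tol=1e-12):
    """R(M): indices of the nonzero rows of M."""
    return np.flatnonzero(np.linalg.norm(M, axis=1) > tol)

def cols(M, G):
    """M_G: submatrix formed by the columns of M indexed by G."""
    return M[:, sorted(G)]

def rows(M, G):
    """M_{G,.}: submatrix formed by the rows of M indexed by G."""
    return M[sorted(G), :]

def pinv_tall(H):
    """H^dagger = (H^T H)^{-1} H^T for H with full column rank."""
    return np.linalg.solve(H.T @ H, H.T)
```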

The rest of the paper is organized as follows. In Section 2, we propose the HSAMMV algorithm and give some theoretical analysis. Simulation results are reported in Section 3, and the conclusions are drawn in Section 4.

2. HSAMMV: a heuristic search algorithm for MMV

In this section, we first give a brief introduction to SA in Section 2.1, then design the main elements of HSAMMV in Section 2.2, give the computational complexity analysis of HSAMMV in Section 2.3, and finally compare HSAMMV with some existing works in Section 2.4.

2.1. A short review of SA

The simulated annealing (SA) algorithm was proposed by Kirkpatrick et al. [19] in 1983, inspired by the annealing of metals. The main advantage of SA is its ability to avoid getting stuck at local minima, because it accepts a less optimal solution (compared with the current one) with a positive probability, and hence it has strong global search ability [22-24]. SA has attracted much attention due to its success in solving several large-scale and complex problems, including NP-complete problems such as the Traveling Salesman Problem (TSP) [25] and the Flow Shop Scheduling Problem (FSSP) [26].

Consider a minimization problem $\min_{t \in \Omega} g(t)$, where $\Omega$ is the solution set and $g : \Omega \to \mathbb{R}$ is a cost function. The procedure of SA is summarized in Algorithm 1. In the algorithm, the generating mechanism is a method for selecting a new solution from the neighborhood of the current solution, and the annealing schedule is a sequence of positive real numbers, decreasing to zero, used to set the temperatures.

Algorithm 1. SA [27] for solving $\min_{t \in \Omega} g(t)$.
Initialization: the initial solution $t_0 \in \Omega$, the number of the outer-loop iteration $k = 1$.
The $k$-th outer-loop iteration ($k \ge 1$):
  Step 1: (The inner loop at $T_k$) First, set $t_k^0 = t_{k-1}$ and $j = 1$; next, repeat steps 1.1-1.3 until some inner-loop stopping criterion is met and obtain $t_k$; then go to step 2.
    Step 1.1: Randomly generate a neighboring solution $y_k^j$ of the current solution $t_k^{j-1}$ according to some generating mechanism and compute $g(y_k^j)$.
    Step 1.2: Calculate $\Delta_k^j = g(y_k^j) - g(t_k^{j-1})$, and accept $y_k^j$ if $p_k^j \ge \eta$, where $\eta$ is a random number uniformly chosen in $[0,1]$ and $p_k^j = \min(1, \exp(-\Delta_k^j / T_k))$.
    Step 1.3: If $y_k^j$ is accepted in step 1.2, set $t_k^j = y_k^j$; otherwise, set $t_k^j = t_k^{j-1}$.
  Step 2: If the outer-loop stopping criterion is satisfied, set $\hat{t} = t_k$ and stop; otherwise, go to step 3.
  Step 3: (annealing) Set $k = k+1$, and determine $T_k$ according to some annealing schedule, then go to step 1.
Output: $\hat{t}$.
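For concreteness, Algorithm 1 can be transcribed into the following generic Python skeleton (our sketch; the arguments cost, neighbor, temperature, inner_stop and outer_stop are placeholders for the cost function, generating mechanism, annealing schedule and stopping criteria that a concrete SA algorithm, such as HSAMMV below, has to supply):

```python
import math
import random

def simulated_annealing(t0, cost, neighbor, temperature, inner_stop, outer_stop):
    """Generic transcription of Algorithm 1.

    t0          -- initial solution in Omega
    cost        -- the function g: Omega -> R to minimize
    neighbor    -- generating mechanism: current solution -> random neighbor
    temperature -- annealing schedule: outer iteration k -> T_k
    inner_stop  -- inner_stop(j, cost_history): stop the inner loop?
    outer_stop  -- outer_stop(k, current_cost): stop the algorithm?
    """
    t, k = t0, 1
    while True:
        Tk = temperature(k)
        history = [cost(t)]
        j = 1
        while True:                                        # inner loop at T_k
            y = neighbor(t)                                # step 1.1
            delta = cost(y) - history[-1]                  # step 1.2
            if delta <= 0 or random.random() <= math.exp(-delta / Tk):
                t = y                                      # step 1.3 (accepted)
            history.append(cost(t))
            if inner_stop(j, history):
                break
            j += 1
        if outer_stop(k, history[-1]):                     # step 2
            return t
        k += 1                                             # step 3: anneal and repeat
```

HSAMMV, described next, is obtained by instantiating each of these ingredients for problem (8).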

2.2. HSAMMV

Algorithm 1 gives a standard procedure of SA, and we note that there are several main elements in the algorithm, such as the cost function, the initial solution, the generating mechanism, the annealing schedule and the stopping criteria. In the following, we focus on problem (4) and propose the HSAMMV algorithm in the framework of SA. Before presenting HSAMMV, we first introduce its main elements.

• Cost function and its solution set: If the sparsity level $K$ satisfying $K < \mathrm{spark}(A)$ is known a priori (the condition $K < \mathrm{spark}(A)$ is a basic assumption in CS theory; in practice the sparsity level is often unknown, but it can be estimated using the method proposed in [21]), problem (4) can be approximated by
$$\min_X \|AX - B\|_F \quad \text{s.t.} \quad |R(X)| \le K. \qquad (5)$$

An approach for solving (5) is to first seek a row support $\hat{I}$ such that $|\hat{I}| = K$, and then obtain an estimated solution using the least squares method
$$\hat{X}_{\hat{I},\cdot} = A_{\hat{I}}^{\dagger} B \quad \text{and} \quad \hat{X}_{S - \hat{I},\cdot} = 0, \qquad (6)$$
where $S \triangleq \{1,2,\dots,n\}$. The q-thresholding algorithm, M-OMP, ReMBo, RA-ORMP, CS-MUSIC, RPMB and SA-MUSIC all work according to this approach. The approach essentially aims to find a row support $\hat{I}$ such that $\hat{I} = \arg\min_{|I| = K} \|A_I A_I^{\dagger} B - B\|_F$.

Inspired by the above illustrations, we define
$$f(I) = \|A_I A_I^{\dagger} B - B\|_F / \|B\|_F \qquad (7)$$
as the cost function of HSAMMV, where the denominator $\|B\|_F$ is used to normalize the numerator $\|A_I A_I^{\dagger} B - B\|_F$.

Then we estimate the row support by solving the following problem:
$$\min_{I \in \Theta} f(I) \qquad (8)$$
where $\Theta$ is the solution set consisting of all $K$-cardinality subsets of $S$. It is easy to see that problem (8) is a combinatorial optimization.

In the following, we give a theorem which reveals the relationship between the solutions of problems (4) and (8). The theorem can be proven using the uniqueness result in [8-10], which has been mentioned in Section 1.

Theorem 1. For problem (4) in the noiseless case, suppose that the sparsity level satisfies $K < (\mathrm{spark}(A) + \mathrm{rank}(B) - 1)/2$, and assume that $X^*$ with row support $I^*$ is the unique solution. Then $\hat{X} = X^*$ (or equivalently $I^* \subseteq \hat{I}$) if and only if $f(\hat{I}) = 0$, where $\hat{I} \in \Theta$ and $\hat{X}$ is defined using (6).
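In code, the estimate (6) and the cost (7) reduce to a least-squares fit restricted to the candidate support (our sketch; numpy.linalg.lstsq is used instead of forming the pseudo-inverse explicitly, which is equivalent when A_I has full column rank). In the noiseless case, Theorem 1 says that f(I) = 0 exactly characterizes the supports that yield the unique solution.

```python
import numpy as np

def f_cost(A, B, I):
    """Cost (7): ||A_I A_I^dagger B - B||_F / ||B||_F, via least squares."""
    idx = sorted(I)
    coef = np.linalg.lstsq(A[:, idx], B, rcond=None)[0]
    return np.linalg.norm(A[:, idx] @ coef - B) / np.linalg.norm(B)

def estimate_X(A, B, I):
    """Estimate (6): rows indexed by I are A_I^dagger B, all other rows are zero."""
    idx = sorted(I)
    X = np.zeros((A.shape[1], B.shape[1]))
    X[idx, :] = np.linalg.lstsq(A[:, idx], B, rcond=None)[0]
    return X
```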

• Initial solution: The initial solution $I_0$ in HSAMMV is determined using the q-thresholding algorithm ($q \ge 1$) [14], i.e.,
$$I_0 \triangleq \{\text{indices of the } K \text{ largest values in } \{\|A_{\cdot,j}^T B\|_q\}_{j=1}^{n}\}. \qquad (9)$$
This initial solution is easily obtained, and its efficiency has been proven in [14].
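In code, the initial support (9) is a simple ranking of the correlations between the columns of A and the measurements B (our sketch; the function name is illustrative):

```python
import numpy as np

def q_thresholding_init(A, B, K, q=2):
    """Initial support I_0 from (9): indices of the K largest ||A_{:,j}^T B||_q."""
    scores = np.linalg.norm(A.T @ B, ord=q, axis=1)   # one l_q norm per column j
    return set(np.argsort(scores)[-K:].tolist())
```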

• Generating mechanism: In HSAMMV, for the $j$-th inner-loop iteration at $T_k$ ($k \ge 1$), a new neighboring solution $\tilde{I}_k^j$ of the current solution $I_k^{j-1}$ is generated by the following three steps.

First, randomly choose $N_c$ elements from the difference set $S - I_k^{j-1}$ and form them into a subset $Q$, where the constant $N_c$ satisfying $1 \le N_c < \mathrm{spark}(A) - K$ is the chosen number. (The condition $N_c < \mathrm{spark}(A) - K$ is used to ensure that $A_{J_k^{j-1}}$ ($k \ge 1$, $j \ge 1$) has full column rank. $\mathrm{spark}(A)$ is very difficult to determine in practice, but we can use the inequalities $\mathrm{rank}(A) + 1 \ge \mathrm{spark}(A) \ge 1 + 1/\mu(A)$ to give a rough estimate, where $\mu(A) \triangleq \max_{1 \le i < j \le n} |A_{\cdot,i}^T A_{\cdot,j}|$ is the mutual coherence of $A$ [1].) Next, denote $J_k^{j-1} = I_k^{j-1} \cup Q$ and obtain a temporary solution $X_k^{j-1}$ of problem (4) using the least squares method
$$(X_k^{j-1})_{J_k^{j-1},\cdot} = A_{J_k^{j-1}}^{\dagger} B \quad \text{and} \quad (X_k^{j-1})_{S - J_k^{j-1},\cdot} = 0. \qquad (10)$$
Then, define
$$\tilde{I}_k^j = \{\text{indices of the } K \text{ largest values in } \{\|(X_k^{j-1})_{i,\cdot}\|_2\}_{i=1}^{n}\}. \qquad (11)$$
In the first step, we add some random components to the current solution, which makes it possible to find a better neighboring solution. In the last two steps, we apply the pruning technique, which yields the best solution on the union set in the sense of the maximum row norm.
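The three steps of the generating mechanism translate directly into code (our sketch; it assumes the current solution is a set of K column indices and uses a least-squares fit in place of the explicit pseudo-inverse in (10)):

```python
import numpy as np

def generate_neighbor(A, B, I_current, Nc, rng=None):
    """One neighbor move: augment by Nc random indices (step 1), least-squares
    fit on the union support (step 2, eq. (10)), keep the K rows with the
    largest l2 norm (step 3, eq. (11))."""
    rng = np.random.default_rng(rng)
    n = A.shape[1]
    K = len(I_current)
    complement = np.setdiff1d(np.arange(n), sorted(I_current))
    Q = rng.choice(complement, size=Nc, replace=False)
    J = sorted(set(I_current) | set(Q.tolist()))            # union support
    coef = np.linalg.lstsq(A[:, J], B, rcond=None)[0]       # rows of X on J
    row_norms = np.linalg.norm(coef, axis=1)
    keep = np.argsort(row_norms)[-K:]                        # prune to K rows
    return set(int(j) for j in np.asarray(J)[keep])
```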

• Annealing schedule: In HSAMMV, we apply the frequently used annealing schedule $T_k = T_1 / \log(k+1)$ ($k \ge 2$), where the initial temperature $T_1 > 0$.

• Stopping criteria: There are two stopping criteria in HSAMMV (as for other SA algorithms): the inner-loop stopping criterion and the outer-loop stopping criterion. They are designed as follows.

The inner-loop stopping criterion (for the inner loop at $T_k$) is satisfied when the sequence $\{I_k^i\}_{i=1}^{j}$ ($j \ge 2$) satisfies $\mathrm{var}([f(I_k^{j-2}), f(I_k^{j-1}), f(I_k^{j})]) \le \tau$ for some small constant $\tau > 0$, or when $j$ attains the maximum allowable number of inner-loop iterations $IM_{\max} \in \mathbb{N}$, where 'var' is defined as $\mathrm{var}(a_1, a_2, \dots, a_n) \triangleq \sum_{t=1}^{n} (a_t - \bar{a})^2 / (n-1)$ (with $\bar{a} \triangleq \sum_{t=1}^{n} a_t / n$) for $n$ real values.

The outer-loop stopping criterion is satisfied if $f(I_k) \le \varepsilon$ for some constant $\varepsilon \ge 0$, or if $k$ attains the maximum allowable number of outer-loop iterations $NA_{\max} \in \mathbb{N}$.

HSAMMV is summarized in Algorithm 2.

Algorithm 2. HSAMMV for solving $\min_{I \in \Theta} f(I)$.
Choose: the initial temperature $T_1$; the sparsity level $K$; the chosen number $N_c$; the stopping parameters $\varepsilon$, $\tau$, $IM_{\max}$ and $NA_{\max}$.
Initialization: the initial solution $I_0 \in \Theta$ determined by (9), the number of the outer-loop iteration $k = 1$.
Judgement: If $f(I_0) \le \varepsilon$, set $\hat{I} = I_0$ and stop; otherwise, go to the outer-loop iterations.
The $k$-th ($k \ge 1$) outer-loop iteration:
  Step 1: (The inner loop at $T_k$) First, set $I_k^0 = I_{k-1}$ and $j = 1$; next, repeat steps 1.1-1.5 until $\mathrm{var}([f(I_k^{j-2}), f(I_k^{j-1}), f(I_k^{j})]) \le \tau$ ($j \ge 2$) or $j = IM_{\max}$ and obtain $I_k$; lastly, go to step 2.
    Step 1.1: Randomly select $N_c$ elements from $S - I_k^{j-1}$ and form them into a subset $Q$.
    Step 1.2: Denote $J_k^{j-1} = I_k^{j-1} \cup Q$ and obtain $X_k^{j-1}$ by (10).
    Step 1.3: Generate a neighboring solution $\tilde{I}_k^j$ according to (11) and compute $f(\tilde{I}_k^j)$.
    Step 1.4: Calculate $\Delta_k^j = f(\tilde{I}_k^j) - f(I_k^{j-1})$, and accept $\tilde{I}_k^j$ if $p_k^j \ge \eta$, where $\eta$ is a random number uniformly chosen in $[0,1]$ and $p_k^j = \min(1, \exp(-\Delta_k^j / T_k))$.
    Step 1.5: If $\tilde{I}_k^j$ is accepted in step 1.4, set $I_k^j = \tilde{I}_k^j$; otherwise, set $I_k^j = I_k^{j-1}$.
  Step 2: If $f(I_k) \le \varepsilon$ or $k = NA_{\max}$, set $\hat{I} = I_k$ and stop; otherwise, go to step 3.
  Step 3: (annealing) Set $k = k+1$ and $T_k = T_1 / \log(k+1)$, then go to step 1.
Output: $\hat{I}$ (an estimated solution for (8)); $\hat{X}$ (obtained using (6), an estimated solution for (4)).
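Putting the previous ingredients together, a compact end-to-end sketch of Algorithm 2 might look as follows (ours, under the same assumptions as the earlier snippets; the recording and reannealing refinements discussed next, as well as the NR_max reannealing limit, are omitted for brevity):

```python
import math
import numpy as np

def hsammv(A, B, K, Nc, T1=None, eps=1e-5, tau=1e-8,
           IM_max=100, NA_max=50, q=2, rng=None):
    """Compact sketch of HSAMMV (Algorithm 2); returns (support set, estimated X)."""
    rng = np.random.default_rng(rng)
    n, l = A.shape[1], B.shape[1]

    def f(I):                                    # cost function (7)
        idx = sorted(I)
        coef = np.linalg.lstsq(A[:, idx], B, rcond=None)[0]
        return np.linalg.norm(A[:, idx] @ coef - B) / np.linalg.norm(B)

    def estimate(I):                             # least-squares estimate (6)
        idx = sorted(I)
        X = np.zeros((n, l))
        X[idx, :] = np.linalg.lstsq(A[:, idx], B, rcond=None)[0]
        return X

    def neighbor(I):                             # generating mechanism, (10)-(11)
        comp = np.setdiff1d(np.arange(n), sorted(I))
        J = sorted(set(I) | set(rng.choice(comp, Nc, replace=False).tolist()))
        coef = np.linalg.lstsq(A[:, J], B, rcond=None)[0]
        keep = np.argsort(np.linalg.norm(coef, axis=1))[-K:]
        return set(int(j) for j in np.asarray(J)[keep])

    # initial solution (9): q-thresholding
    I = set(np.argsort(np.linalg.norm(A.T @ B, ord=q, axis=1))[-K:].tolist())
    if T1 is None:
        T1 = 10.0 * f(I)                         # T1 = 10 f(I_0), as in Section 3
    if f(I) <= eps:                              # judgement step
        return I, estimate(I)
    for k in range(1, NA_max + 1):
        Tk = T1 if k == 1 else T1 / math.log(k + 1)   # annealing schedule
        costs = [f(I)]
        for _ in range(IM_max):                  # inner loop at T_k
            cand = neighbor(I)
            delta = f(cand) - costs[-1]
            if delta <= 0 or rng.random() <= math.exp(-delta / Tk):
                I = cand                         # accept the neighbor
            costs.append(f(I))
            if len(costs) >= 3 and np.var(costs[-3:], ddof=1) <= tau:
                break                            # inner-loop stopping criterion
        if f(I) <= eps:                          # outer-loop stopping criterion
            break
    return I, estimate(I)
```

Calling hsammv(A, B, K, Nc) on an instance like the one sketched in Section 1 returns an estimated row support and the corresponding estimate of X.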

From the procedure illustrated in Algorithm 2, it is easy to see that HSAMMV is also an SA algorithm. In practice, there are many techniques to improve the performance of SA, such as the recording technique [28] and the reannealing technique [29]. In the rest of this section, we discuss these two techniques and apply them to HSAMMV. The recording technique records the best-so-far solution (BSFS) [28] while the SA algorithm is running. For HSAMMV, in the $j$-th inner-loop iteration at $T_k$, the BSFS $I_{BSF}^{k,j}$ is defined as
$$I_{BSF}^{k,j} = \arg\min_{I_{k'}^{j'}} \{f(I_{k'}^{j'})\}, \quad 1 \le k' \le k \ \text{and} \ 1 \le j' \le j. \qquad (12)$$
In the $k$-th outer-loop iteration of HSAMMV, if the number of inner-loop iterations attains $IM_{\max}$, we use the BSFS to set $I_k$ (i.e., $I_k = I_{BSF}^{k,IM_{\max}}$). The reannealing technique was introduced in [29] for multivariate function minimization, and its idea is to rescale the temperature under a certain condition. In the implementation of HSAMMV, when the number of outer-loop iterations attains $NA_{\max}$, we double the initial temperature and run HSAMMV again. In order to avoid an excessive number of reannealing operations, we set a maximum allowable number of reannealing operations, denoted by $NR_{\max}$, to limit the runtime. In the implementation of HSAMMV, $NR_{\max}$ is often set to a relatively small natural number.


2.3. Computational complexity analysis

From Algorithm 2, we find that the main computational cost of HSAMMV lies in computing the initial solution and in the iterations. The computational complexity for computing the initial solution is about $O(mnl)$. In each inner-loop iteration, the computational cost mainly lies in steps 1.2 and 1.3, whose complexities are upper bounded by $O(m^3) + O(m^2 l)$ and $O(mK^2) + O(mKl)$, respectively. Noting that $K \le m$, the computational complexity of each inner-loop iteration is upper bounded by $O(m^3) + O(m^2 l)$. Obviously, the total number of inner-loop iterations is upper bounded by $IM_{\max} NA_{\max}$. Therefore, the computational complexity of HSAMMV is upper bounded by $O(mnl) + O(m^3 IM_{\max} NA_{\max}) + O(m^2 l\, IM_{\max} NA_{\max})$.

From the above analysis, we find that the computational complexity of HSAMMV is mainly influenced by the total number of inner-loop iterations. Because a good initial solution can speed up the search process of SA and thereby reduce the number of inner-loop iterations, we can use a better solution (such as the one obtained by M-OMP or SA-MUSIC) as the initial solution to reduce the computational complexity of HSAMMV when the initial solution obtained by the q-thresholding algorithm is not good enough. A considerable part of the computational cost of HSAMMV comes from computing the pseudo-inverse, which can be accelerated by a gradient scheme or the Cholesky decomposition technique [30]. Moreover, as an SA algorithm, HSAMMV can be implemented in parallel, which can reduce its runtime [31].
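For example, the pseudo-inverse products in (6) and (10) can be replaced by a Cholesky solve of the normal equations, as suggested above (a sketch, ours, using SciPy's cho_factor/cho_solve; it assumes A_I has full column rank so that the Gram matrix is positive definite):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def ls_rows_via_cholesky(A_I, B):
    """Solve min_W ||A_I W - B||_F via the normal equations
    (A_I^T A_I) W = A_I^T B using a Cholesky factorization,
    avoiding the explicit pseudo-inverse (A_I^T A_I)^{-1} A_I^T."""
    G = A_I.T @ A_I            # Gram matrix, SPD when A_I has full column rank
    c, low = cho_factor(G)
    return cho_solve((c, low), A_I.T @ B)
```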

2.4. Connections between HSAMMV and existing works

In this subsection, we discuss the connections between this work and some existing works. The initial solution of HSAMMV is designed using the q-thresholding algorithm, so HSAMMV reduces to the q-thresholding algorithm when the initial solution satisfies the outer-loop stopping criterion. In our recent work [32], we proposed another heuristic search algorithm named M-SISR (a Swarm Intelligence Sparse Recovery algorithm for MMV). Although the update strategy of M-SISR is very similar to the generating mechanism of HSAMMV, the two algorithms are still very different: HSAMMV is proposed in the framework of SA, whereas M-SISR is designed using the ideas of the particle swarm optimization algorithm [33]. In [34], a hybrid SA thresholding algorithm (HSAT) was proposed to solve the sparse recovery problem. Like HSAMMV, HSAT is also an SA algorithm; however, HSAMMV is very different from HSAT, with two main differences. First, the addressed problems are different: HSAMMV deals with MMV, whereas HSAT focuses on SMV, which can be viewed as the particular case of MMV with l = 1. Second, the generating mechanisms are very different: the newly generated random solution in HSAMMV is improved using the pruning technique used in SP and CoSaMP, whereas the newly generated solution in HSAT is improved using the iterative hard thresholding algorithm [35] or the iterative soft thresholding algorithm [36].

3. Simulation results

In this section, we give several numerical simulations to evaluate the recovery performance of HSAMMV. The performance of HSAMMV is compared with those of l1-SVD [4] (code obtained through personal correspondence with the authors), M-FOCUSS [8] (code available at http://dsp.ucsd.edu/~zhilin/MFOCUSS.m), M-OMP [9], SA-MUSIC [15] (the OSMP (Orthogonal Subspace Matching Pursuit) algorithm is used for recovering the partial support when using SA-MUSIC; code available at http://nuitblanche.blogspot.com/) and T-MSBL [16] (code available at http://dsp.ucsd.edu/~zhilin/TMSBL_code.zip). The other MMV algorithms mentioned in Section 1 (such as ReMBo, RA-ORMP, CS-MUSIC, RPMB, the q-thresholding algorithm, AMPMMV and ZAPMMV) have been compared with one or more of the compared algorithms, so we do not compare with them again in the simulations; the interested reader is referred to [10-14,17,18].

[Fig. 1. The exact recovery rates of HSAMMV, M-OMP, l1-SVD, SA-MUSIC, M-FOCUSS, and T-MSBL with l changing from 2 to 20 for fixed m = 20, n = 100 and K = 10 in the case of (a) noiseless, (b) SNR = 15 dB.]

In all simulations, we have the following settings and assumptions unless stated otherwise:

1. The entries in the measurement matrix A and the nonzero entries in the jointly sparse matrix X are independently drawn from the standard normal distribution, and each column of A is normalized to have unit $\ell_2$-norm. It is worth noting that $\mathrm{spark}(A) = m + 1$ generally holds in this case. The noise is additive i.i.d. Gaussian noise.

2. We use the HSAMMV algorithm equipped with the two improvement techniques of Section 2.2 (recording and reannealing), and set $T_1 = 10 f(I_0)$, $IM_{\max} = 100$, $NA_{\max} = 50$, $NR_{\max} = 10$, $\tau = 10^{-8}$ and $N_c = \lfloor 0.7m \rfloor - 7K/9$ (the sparsity level should satisfy $K < 0.9m$ when using this equation to determine $N_c$). We set $\varepsilon = 1.0 \times 10^{-5}$ in the noiseless case and $\varepsilon = 10^{-\mathrm{SNR}/20}$ in the noisy case, where SNR (Signal-to-Noise Ratio) is defined as $\mathrm{SNR} = 10 \log_{10}(\|AX\|_F^2 / \|N\|_F^2)$. The pseudo-inverse in HSAMMV is computed using the Cholesky decomposition technique.

3. The sparsity level is taken to be exactly the size of the row support of X (i.e., $K = |R(X)|$). The outputs of l1-SVD, M-FOCUSS and T-MSBL are estimates of X; for these algorithms we use the index set of the K rows of the estimate with the largest $\ell_2$ norms as the row support estimate.

4. An exact recovery is declared if the row support of X is exactly recovered. All curves in the following figures are obtained by averaging the results over 500 independent trials. If the number of exact recoveries over the 500 trials is NoE, the exact recovery rate is defined as NoE/500 (a code sketch of this evaluation loop is given after the list).

5. All simulations are implemented using Matlab 7.11 on a PC with a 3.2 GHz Intel Core i3 processor and 4.0 GB memory, running the Windows XP system.
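Under these settings, estimating an exact recovery rate is a plain Monte Carlo loop. The sketch below (ours) regenerates a trial according to setting 1, calls a solver, and checks setting 4; it uses far fewer trials than the 500 used in the paper, and a q-thresholding stand-in solver, so that it runs quickly. The hsammv sketch from Section 2 can be plugged in the same way.

```python
import numpy as np

def exact_recovery_rate(solver, m=20, n=100, l=4, K=10, snr_db=None,
                        trials=50, seed=0):
    """Monte Carlo estimate of the exact recovery rate (setting 4).
    Each trial follows setting 1; 'trials' is kept small for the demo."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n))
        A /= np.linalg.norm(A, axis=0)                    # unit-norm columns
        support = set(rng.choice(n, K, replace=False).tolist())
        X = np.zeros((n, l))
        X[sorted(support), :] = rng.standard_normal((K, l))
        B = A @ X
        if snr_db is not None:                            # add noise at the given SNR
            N = rng.standard_normal((m, l))
            B = B + N * np.linalg.norm(A @ X) / (np.linalg.norm(N) * 10 ** (snr_db / 20))
        hits += set(solver(A, B, K)) == support           # exact support recovery?
    return hits / trials

if __name__ == "__main__":
    # q-thresholding (9) as a stand-in solver for the demo.
    qthr = lambda A, B, K: set(np.argsort(np.linalg.norm(A.T @ B, axis=1))[-K:].tolist())
    print("exact recovery rate:", exact_recovery_rate(qthr, l=10, K=5))
```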

In the first simulation, for fixed m = 20, n = 100 and K = 10, we evaluate the recovery performances of the six algorithms with l changing from 2 to 20. The simulation results in the noiseless case and in the case of 15 dB noise are illustrated in Fig. 1(a) and (b), respectively. We see from Fig. 1 that HSAMMV performs best in the noiseless case, and that HSAMMV performs similarly to T-MSBL in the case of SNR = 15 dB.

In the second simulation, for fixed m = 20, n = 100 and l = 4, we evaluate the recovery performances of the six algorithms with K varying from 1 to 16. The simulation results in the noiseless case and in the case of 15 dB noise are illustrated in Fig. 2(a) and (b), respectively. We again see from Fig. 2 that HSAMMV performs best in the noiseless case, and that HSAMMV performs similarly to T-MSBL in the case of SNR = 15 dB.

From Eq. (3), we know that l ≥ 2 ensures that problem (2) has a unique solution under the parameter setting of the first simulation. As illustrated in Fig. 1(a), the exact recovery rate of HSAMMV is 1 or very close to 1 when l ≥ 2. The same behavior can be observed in the second simulation from Fig. 2(a). This means that HSAMMV can find the exact solution with a very high rate whenever problem (2) has a unique solution; that is, HSAMMV has strong global search ability.

Here, we consider the runtime performance of the six algorithms under the settings of the above two simulations. The average runtimes illustrated in Fig. 3(a) and (b) correspond to the results shown in Figs. 1(a) and 2(a), respectively. From Fig. 3, we find that HSAMMV is slower than M-OMP, SA-MUSIC and M-FOCUSS, but faster than l1-SVD and T-MSBL when the sparsity level is relatively small.

As mentioned in Section 2.3, a good initial solution can reduce the search time of HSAMMV, and many simulation results show that the smaller the sparsity level, the better (or more accurate) the initial solution. This rough analysis explains why HSAMMV is relatively fast when the sparsity level is relatively small. We also find from Fig. 3(b) that the runtime of HSAMMV increases quickly as the sparsity level increases; the approaches introduced in Section 2.3 can be used to reduce the runtime when the sparsity level is relatively large.

In the third simulation, we consider the rank-defective case. For fixed m = 20, n = 100 and l = K = 10, we study the recovery performances of the six algorithms with SNR ranging from 10 dB to 50 dB in both cases of rank(X) = 10 and rank(X) = 5. The simulation results are illustrated in Fig. 4. We find from Fig. 4 that HSAMMV, M-FOCUSS and T-MSBL (especially T-MSBL) perform very well in the full row rank case (i.e., rank(X) = 10), and that only HSAMMV and T-MSBL (especially HSAMMV) perform very well in the rank-defective case (i.e., rank(X) = 5). This shows that, similar to T-MSBL, HSAMMV is also effective in the rank-defective case.

[Fig. 2. The exact recovery rates of HSAMMV, M-OMP, l1-SVD, SA-MUSIC, M-FOCUSS, and T-MSBL with K varying from 1 to 16 for fixed m = 20, n = 100 and l = 4 in the case of (a) noiseless, (b) SNR = 15 dB.]

[Fig. 3. Average runtimes of HSAMMV, M-OMP, l1-SVD, SA-MUSIC, M-FOCUSS, and T-MSBL in the noiseless case for fixed (a) m = 20, n = 100 and K = 10, (b) m = 20, n = 100 and l = 4.]

[Fig. 4. The exact recovery rates of HSAMMV, M-OMP, l1-SVD, SA-MUSIC, M-FOCUSS, and T-MSBL with SNR ranging from 10 dB to 50 dB for fixed m = 20, n = 100, l = K = 10 and (a) rank(X) = 10, (b) rank(X) = 5.]

In the above simulations, we have assumed that the sparsity level equals $|R(X)|$. In practice, however, the sparsity level is often overestimated (i.e., $K > |R(X)|$). In the last simulation, we study the performance of HSAMMV in this case. Since M-OMP and SA-MUSIC also assume K is known a priori (although M-OMP and SA-MUSIC can work with unknown K, we use them with known K in this simulation), we compare HSAMMV only with them here. We set m = 20, n = 100, l = 4 and $|R(X)| = 4$, and then study the performances of the three algorithms with K changing from 4 to 9 in both the noiseless case and the case of SNR = 15 dB.

Table 1
Robust performance of HSAMMV to the sparsity level.

Performances in the noiseless case
K  Algorithm  Error      Time (s)
4  HSAMMV     7.69e-16   6.12e-3
   M-OMP      0.0225     1.05e-3
   SA-MUSIC   2.37e-16   4.38e-4
5  HSAMMV     1.03e-15   4.32e-3
   M-OMP      0.0087     1.28e-3
   SA-MUSIC   3.52e-16   5.63e-4
6  HSAMMV     1.25e-15   4.28e-3
   M-OMP      0.097      1.52e-3
   SA-MUSIC   4.45e-16   7.64e-4
7  HSAMMV     1.74e-15   3.54e-3
   M-OMP      0.0067     1.78e-3
   SA-MUSIC   5.31e-16   7.56e-4
8  HSAMMV     2.29e-15   3.95e-3
   M-OMP      0.068      2.04e-3
   SA-MUSIC   6.69e-16   1.10e-3
9  HSAMMV     2.87e-15   3.58e-3
   M-OMP      0.0077     2.31e-3
   SA-MUSIC   8.13e-16   1.27e-3

Performances in the case of SNR = 15 dB
K  Algorithm  Error    Time (s)
4  HSAMMV     0.1141   0.0368
   M-OMP      0.1115   1.18e-3
   SA-MUSIC   0.1693   5.02e-4
5  HSAMMV     0.1429   0.0292
   M-OMP      0.1309   1.39e-3
   SA-MUSIC   0.1486   7.42e-4
6  HSAMMV     0.1696   0.0275
   M-OMP      0.1560   1.68e-3
   SA-MUSIC   0.1547   9.31e-4
7  HSAMMV     0.2017   0.0240
   M-OMP      0.1748   1.96e-3
   SA-MUSIC   0.1722   1.14e-3
8  HSAMMV     0.2200   0.0225
   M-OMP      0.1826   2.21e-3
   SA-MUSIC   0.1881   1.30e-3
9  HSAMMV     0.2452   0.0179
   M-OMP      0.1977   2.49e-3
   SA-MUSIC   0.2093   1.47e-3

The average errors of the estimates and the average runtimes for the noiseless case and the case of SNR = 15 dB are shown in Table 1, where the average error is defined as $\frac{1}{500}\sum_{k=1}^{500} \|\hat{X}_k - X\|_F / \|X\|_F$ and $\hat{X}_k$ is the estimated solution in the $k$-th trial. In the noiseless case, we find from Table 1 that both SA-MUSIC and HSAMMV (especially SA-MUSIC) have very good recovery performance. In the case of SNR = 15 dB, we find from Table 1 that HSAMMV has recovery performance similar to that of M-OMP and SA-MUSIC. In other words, the results in Table 1 show that HSAMMV is robust to the sparsity level: when the sparsity level is overestimated, the performance of HSAMMV does not change much in either speed or recovery accuracy.

4. Conclusion

In this paper, we considered the MMV problem and proposed a novel heuristic search algorithm named HSAMMV based on SA and some CS algorithms. HSAMMV is an SA algorithm: its initial solution is determined by the q-thresholding ($q \ge 1$) algorithm, and its generating mechanism is designed using the pruning technique used in CoSaMP and SP. The simulation results illustrate that HSAMMV has strong global search ability and performs quite well in terms of exact recovery rate, and that HSAMMV is effective in the rank-defective case and robust to the sparsity level. A rigorous performance analysis of HSAMMV is still lacking, and we shall develop one in future work.

Acknowledgments

The work was supported in part by the National Natural Science Foundation of China under Grants 61271014 and 61201328. We are grateful to the anonymous reviewers for their valuable comments, which have led to great improvements in the paper.

References

[1] D.L. Donoho, M. Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization, Proc. Natl. Acad. Sci. USA 100 (5) (2003) 2197-2202.
[2] E.J. Candès, T. Tao, Decoding by linear programming, IEEE Trans. Inf. Theory 51 (2005) 4203-4215.
[3] D.L. Donoho, Compressed sensing, IEEE Trans. Inf. Theory 52 (4) (2006) 1289-1306.
[4] D. Malioutov, M. Cetin, A.S. Willsky, A sparse signal reconstruction perspective for source localization with sensor arrays, IEEE Trans. Signal Process. 53 (8) (2005) 3010-3022.
[5] X. Du, L. Cheng, Three stochastic measurement schemes for direction-of-arrival estimation using compressed sensing method, Multidimens. Syst. Signal Process., 2013, http://dx.doi.org/10.1007/s11045-012-0220-5.
[6] X. Du, D. Chen, L. Cheng, A reduced l2-l1 model with an alternating minimisation algorithm for support recovery of multiple measurement vectors, IET Signal Process. 7 (2) (2013) 112-119.
[7] A. Rakotomamonjy, R. Flamary, G. Gasso, S. Canu, lp-lq penalty for sparse linear and sparse multiple kernel multitask learning, IEEE Trans. Neural Netw. 22 (1) (2011) 1307-1320.
[8] S.F. Cotter, B.D. Rao, K. Engan, K. Kreutz-Delgado, Sparse solutions to linear inverse problems with multiple measurement vectors, IEEE Trans. Signal Process. 53 (7) (2005) 2477-2488.
[9] J. Chen, X.M. Huo, Theoretical results on sparse representations of multiple-measurement vectors, IEEE Trans. Signal Process. 54 (12) (2006) 4634-4643.
[10] M. Mishali, Y.C. Eldar, Reduce and Boost: recovering arbitrary sets of jointly sparse vectors, IEEE Trans. Signal Process. 56 (10) (2008) 4692-4701.
[11] M.E. Davies, Y.C. Eldar, Rank awareness for joint sparse recovery, IEEE Trans. Inf. Theory 58 (2) (2012) 1135-1146.
[12] J.M. Kim, O.K. Lee, J.C. Ye, Compressive MUSIC: revisiting the link between compressive sensing and array signal processing, IEEE Trans. Inf. Theory 58 (1) (2012) 278-301.
[13] J. Gai, P. Fu, Z. Li, J. Qiao, Signal recovery from multiple measurement vectors via tunable random projection and boost, Signal Process. 92 (2012) 2901-2908.
[14] R. Gribonval, H. Rauhut, K. Schnass, P. Vandergheynst, Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms, J. Fourier Anal. Appl. 14 (5) (2008) 655-687.
[15] K. Lee, Y. Bresler, M. Junge, Subspace methods for joint sparse recovery, IEEE Trans. Inf. Theory 58 (6) (2012) 3613-3641.
[16] Z. Zhang, B.D. Rao, Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning, IEEE J. Sel. Topics Signal Process. 5 (5) 912-926.
[17] J. Ziniel, P. Schniter, Efficient high-dimensional inference in the multiple measurement vector problem, IEEE Trans. Signal Process. 61 (2) (2013) 340-354.
[18] Y. You, L. Chen, Y. Gu, W. Feng, H. Dai, Retrieval of sparse solutions of multiple-measurement vectors via zero-point attracting projection, Signal Process. 92 (2012) 3075-3079.
[19] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671-680.
[20] W. Dai, O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction, IEEE Trans. Inf. Theory 55 (5) (2009) 2230-2249.
[21] D. Needell, J.A. Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples, Appl. Comput. Harmon. Anal. 26 (3) (2009) 301-321.


[22] D. Bertsimas, J. Tsitsiklis, Simulated annealing, Stat. Sci. 8 (1) (1993) 10-15.
[23] B. Hajek, Cooling schedules for optimal annealing, Math. Oper. Res. 13 (2) 311-329.
[24] S. Geman, D. Geman, Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images, IEEE Trans. Pattern Anal. Mach. Intell. 6 (1984) 721-741.
[25] J.W. Pepper, B.L. Golden, E.A. Wasil, Solving the traveling salesman problem with annealing-based heuristics: a computational study, IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 32 (1) (2002) 72-77.
[26] I. Osman, C. Potts, Simulated annealing for permutation flow-shop scheduling, Omega 17 (6) (1989) 551-557.
[27] R.W. Eglese, Simulated annealing: a tool for operational research, Eur. J. Oper. Res. 46 (3) (1990) 271-281.
[28] G. Ruppeiner, J.M. Pedersen, P. Salamon, Ensemble approach to simulated annealing, J. Phys. I 1 (1991) 455-470.
[29] L. Ingber, B. Rosen, Genetic algorithms and very fast simulated reannealing: a comparison, Math. Comput. Model. 16 (11) (1992) 87-100.
[30] G.H. Golub, C.F. Van Loan, Matrix Computations, 3rd ed., The Johns Hopkins University Press, Baltimore and London, 2007.
[31] D.J. Ram, T.H. Sreenivas, K.G. Subramaniam, Parallel simulated annealing algorithms, J. Parallel Distrib. Comput. 37 (1996) 207-212.
[32] X. Du, L. Cheng, L. Liu, A swarm intelligence algorithm for joint sparse recovery, IEEE Signal Process. Lett. 20 (6) (2013) 611-614.
[33] R. Poli, J. Kennedy, T. Blackwell, Particle swarm optimization: an overview, Swarm Intell. 1 (2007) 33-57.
[34] F. Xu, S. Wang, A hybrid simulated annealing thresholding algorithm for compressed sensing, Signal Process. 93 (6) (2013) 1577-1585.
[35] T. Blumensath, M.E. Davies, Iterative thresholding for sparse approximations, J. Fourier Anal. Appl. 14 (2008) 629-654.
[36] Z. Xu, X. Chang, F. Xu, H. Zhang, L1/2 regularization: a thresholding representation theory and a fast solver, IEEE Trans. Neural Netw. Learn. Syst. 23 (7) (2012) 1013-1027.