11
Research Article Improved Generalized Sparsity Adaptive Matching Pursuit Algorithm Based on Compressive Sensing Zhao Liquan , 1 Ma Ke, 1 and Jia Yanfei 2 1 Key Laboratory of Modern Power System Simulation and Control & Renewable Energy Technology, Ministry of Education (Northeast Electric Power University), Jilin, China 2 College of Electrical and Information Engineering, Beihua University, Jilin 132013, China Correspondence should be addressed to Zhao Liquan; [email protected] Received 4 July 2019; Accepted 20 March 2020; Published 10 April 2020 Academic Editor: Sos S. Agaian Copyright © 2020 Zhao Liquan et al. is is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. e modified adaptive orthogonal matching pursuit algorithm has a lower convergence speed. To overcome this problem, an improved method with faster convergence speed is proposed. In respect of atomic selection, the proposed method computes the correlation between the measurement matrix and residual and then selects the atoms most related to residual to construct the candidate atomic set. e number of selected atoms is the integral multiple of initial step size. In respect of sparsity estimation, the proposed method introduces the exponential function to sparsity estimation. It uses a larger step size to estimate sparsity at the beginning of iteration to accelerate the algorithm convergence speed and a smaller step size to improve the reconstruction accuracy. Simulations show that the proposed method has better performance in terms of convergence speed and reconstruction accuracy for one-dimension signal and two-dimension signal. 1. Introduction Compressed sensing (CS) [1] has become a popular research topic in recent years. Compared with traditional compres- sion methods, the CS can be sampled at a rate far below the Nyquist sampling theorem, and the signal can be recon- structed with high probability. It can be used to reduce the amount of data transferred. Compressed sensing method has been applied in the context of medical imaging, radar im- aging, and video transmission[2–5]. Signal reconstruction is one of the most important parts of compressed sensing. A good reconstruction algorithm can improve the accuracy and time of signal recovery. In the design of the reconstruction algorithm, the signal recon- struction based on the l 2 -norm optimization is firstly adopted. However, the reconstructed signal obtained by l 2 -norm optimization is not sparse, and the reconstruction accuracy is large. erefore, many researchers pay more attention to the use of the optimization algorithms based on l 1 -norm or l 0 -norm to reconstruct sparse signal in com- pressed sensing. e sparse signal reconstruction methods based on l 1 -norm include the basis pursuit (BP) method [6], iterative thresholding (IT) method [7], and homotopy method [8]. Some researchers also proposed the sparse signal recovery method with minimization of l 1 -norm minus l 2 -norm [9]. e matching pursuit algorithm is also a reconstruction algorithm based on l 0 -norm. Compared with convex opti- mization algorithm, it has lower computational complexity. erefore, it is the most commonly used algorithm. e orthogonal matching pursuit algorithm (OMP) [10] is the earliest matching pursuit algorithm. On the basis of OMP, regularized orthogonal matching pursuit (ROMP) [11] is proposed using regularized rule to refine the selected col- umns of the measurement matrix. Researchers also propose the generalized orthogonal matching pursuit (gOMP) al- gorithm [12]. e difference between the OMP and gOMP algorithm is that the OMP algorithm selects one atom with the highest correlation in each iteration and the gOMP algorithm selects the most relevant atom for reconstruction, S(S > 1). erefore, the gOMP reduces the running time. However, if the gOMP algorithm selects the atoms which do Hindawi Journal of Electrical and Computer Engineering Volume 2020, Article ID 2782149, 11 pages https://doi.org/10.1155/2020/2782149

ImprovedGeneralizedSparsityAdaptiveMatchingPursuit ...downloads.hindawi.com/journals/jece/2020/2782149.pdfMinistry of Education (Northeast Electric Power University), Jilin, China

  • Upload
    others

  • View
    4

  • Download
    0

Embed Size (px)

Citation preview

Page 1: ImprovedGeneralizedSparsityAdaptiveMatchingPursuit ...downloads.hindawi.com/journals/jece/2020/2782149.pdfMinistry of Education (Northeast Electric Power University), Jilin, China

Research ArticleImproved Generalized Sparsity Adaptive Matching PursuitAlgorithm Based on Compressive Sensing

Zhao Liquan 1 Ma Ke1 and Jia Yanfei 2

1Key Laboratory of Modern Power System Simulation and Control amp Renewable Energy TechnologyMinistry of Education (Northeast Electric Power University) Jilin China2College of Electrical and Information Engineering Beihua University Jilin 132013 China

Correspondence should be addressed to Zhao Liquan zhao_liquan163com

Received 4 July 2019 Accepted 20 March 2020 Published 10 April 2020

Academic Editor Sos S Agaian

Copyright copy 2020 Zhao Liquan et al is is an open access article distributed under the Creative Commons Attribution Licensewhich permits unrestricted use distribution and reproduction in any medium provided the original work is properly cited

e modified adaptive orthogonal matching pursuit algorithm has a lower convergence speed To overcome this problem animproved method with faster convergence speed is proposed In respect of atomic selection the proposed method computes thecorrelation between the measurement matrix and residual and then selects the atoms most related to residual to construct thecandidate atomic sete number of selected atoms is the integral multiple of initial step size In respect of sparsity estimation theproposed method introduces the exponential function to sparsity estimation It uses a larger step size to estimate sparsity at thebeginning of iteration to accelerate the algorithm convergence speed and a smaller step size to improve the reconstructionaccuracy Simulations show that the proposed method has better performance in terms of convergence speed and reconstructionaccuracy for one-dimension signal and two-dimension signal

1 Introduction

Compressed sensing (CS) [1] has become a popular researchtopic in recent years Compared with traditional compres-sion methods the CS can be sampled at a rate far below theNyquist sampling theorem and the signal can be recon-structed with high probability It can be used to reduce theamount of data transferred Compressed sensingmethod hasbeen applied in the context of medical imaging radar im-aging and video transmission[2ndash5]

Signal reconstruction is one of the most important partsof compressed sensing A good reconstruction algorithm canimprove the accuracy and time of signal recovery In thedesign of the reconstruction algorithm the signal recon-struction based on the l2-norm optimization is firstlyadopted However the reconstructed signal obtained byl2-norm optimization is not sparse and the reconstructionaccuracy is large erefore many researchers pay moreattention to the use of the optimization algorithms based onl1-norm or l0-norm to reconstruct sparse signal in com-pressed sensing e sparse signal reconstruction methods

based on l1-norm include the basis pursuit (BP) method [6]iterative thresholding (IT) method [7] and homotopymethod [8] Some researchers also proposed the sparsesignal recovery method withminimization of l1-normminusl2-norm [9]

e matching pursuit algorithm is also a reconstructionalgorithm based on l0-norm Compared with convex opti-mization algorithm it has lower computational complexityerefore it is the most commonly used algorithm eorthogonal matching pursuit algorithm (OMP) [10] is theearliest matching pursuit algorithm On the basis of OMPregularized orthogonal matching pursuit (ROMP) [11] isproposed using regularized rule to refine the selected col-umns of the measurement matrix Researchers also proposethe generalized orthogonal matching pursuit (gOMP) al-gorithm [12] e difference between the OMP and gOMPalgorithm is that the OMP algorithm selects one atom withthe highest correlation in each iteration and the gOMPalgorithm selects the most relevant atom for reconstructionS(Sgt 1) erefore the gOMP reduces the running timeHowever if the gOMP algorithm selects the atoms which do

HindawiJournal of Electrical and Computer EngineeringVolume 2020 Article ID 2782149 11 pageshttpsdoiorg10115520202782149

not contain the signal information the gOMP algorithmcannot delete these atoms in the following iteration isaffects the reconstruction performance erefore the re-searchers propose compressed sampling matching pursuit(CoSaMP) [13] and subspace pursuit (SP) [14] Both of themuse backtracking strategy to select the most relevant atomsfirstly and then check the atomic correlation again to removeunrelated atoms in the end us they can improve thereconstruction accuracy Besides block orthogonal match-ing pursuit algorithm (BOMP) is also proposed for blocksparse signal [15 16] and sharp sufficient conditions forstable recovery are also given [17] In this paper we mainlyconsider the normal sparse signal

However these algorithms have the common limitationthat sparsity information needs be known but the sparsity isoften unknown in the practical application To solve thisproblem ong proposes the sparsity adaptive matchingpursuit (SAMP) [18] that can recover signals withoutknowing the sparsity value It firstly sets a small estimatedsparsity value and then uses a fixed step size to make theestimated sparsity value increase after each iteration andfinally approach the true sparsity value to reconstruct thesignal However the fixed step size may cause the estimatedsparsity value to be inaccurate and affect the accuracy of thereconstruction To overcome the problem the modifiedadaptive orthogonal matching pursuit algorithm (MAOMP)[19] was proposed It used a smaller factor less than 1 tomodify the step size and made the step size become smallerwith the increase of iterationse factor used inMAOMP isfixed If the initial step size is too small it requires a largenumber of iterations is will affect the convergence speedof MAOMP algorithm

To improve the convergence speed of MAOMP algo-rithm we use a nonlinear function to modify the factor usedinMAOMP algorithme factor is variable in our proposedmethode factor is larger at the beginning of iteration andgradually decreases with the increase of iteration numberis can accelerate the convergence speed Besides thegeneralized atom selection strategy is also used to select theatoms that are most related to residual for constructing thecandidateatomic set and thus improving the accuracy ofreconstruction

2 Compressed Sensing Theory

e basic assumption of compressed sensing is that thesignal is sparse If a signal s with length N has K nonzerovalues K≪N it is called K sparse signal In some realapplications the signal s is not sparse so the signal s is notcompressible In order to make the original signal sparse thesparse basis matrix Ψ φ1φ2 φN1113864 1113865 is used It can beexpressed as

s 1113944N

i1φiαi Ψα (1)

where α is the sparse signal with K nonzero values (K≪N)e sparse basis methods include fast Fourier transformbasis discrete Fourier transform basis wavelet transform

basis and redundant dictionary In order to compress theobserved signal s a measurement matrix Φ is designed todeal with the signal s e compressed signal is

y Φs (2)

where Φ isin RMtimesN y isin Rmtimes1 and M≪N e length ofobserved signal is N and the length of compressed signal isM Mixed with (1) (2) can be expressed as

y ΦΨα Aα (3)

where A ΦΨ is a matrix (M times N) called sensing matrixe compressed signal measurement matrix and sparse

basis are known e aim of reconstruction is to recover thesparse signal and original signal under the above unknowninformation erefore the number of sensing matrix rowsis smaller than the number of columns in (3) It cannot besolved by traditional method In the compressed sensingmethod the l0 minimum norm method is used to solve (3)at is

s

argmin α0

st y Aα(4)

Some researchers also proposed to use l1 minimumnorm method to solve (3) at is

s

argmin α1

st y Aα(5)

3 MAOMP Algorithm

e SAMP algorithm uses a fixed step size method to es-timate the sparsity which is expressed in as follows

k k + 1 (6)

L k times s (7)

where k is the iteration number L is the estimated sparsityand s is fixed step size From (6) and (7) the estimatedsparsity becomes larger with the increasing iterationnumber but the step size is fixed erefore the estimatedsparsity is often insufficient or overestimated which hasnegative influence on signal reconstruction accuracy Inorder to solve this question MAOMP algorithm was pro-posed e MAOMP algorithm uses (8) (9) and (10) toestimate sparsity

ATy

2 lt

1 minus δs1 + δs

1113968 y2

L0 L0 + 1

(8)

sk+1 lceilβ times skrceil (9)

Lk+1 Lk + sk+1 (10)

where δs is a constant value between 0 and 1 A is the sensingmatrix y is compressed vector k is the number of iterations

2 Journal of Electrical and Computer Engineering

Lk is the estimated sparsity β is also a constant value be-tween 0 and 1 and lceilarceil denotes the smallest integer that isnot smaller than a If (8) is satisfied the initial sparsity will beincreased by 1 If (8) is not satisfied MAOMP will use (9)and (10) to continue estimating the sparsity e variablestep method can be expressed as (9) and (10) In (9) and (10)it can be seen that the step size is gradually reduced to 1 withthe increase of iteration number is can make the sparsityvalue more accurate so the algorithm can improve recon-struction accuracye detailed steps of MAOMP are shownin Algorithm 1

4 Proposed Method

Although theMAOMPuses (8) tomodify the initial sparsity toreduce the iteration number of sparsity estimation it alsoincreases the computational complexity of sparsity initialestimation Besides if the initial step size is smaller the stepsize based on (9) will be rapidly reduced to 1 When the initialsparsity is far away from the real sparsity it will costmuch timeto converge And in [19] the researcher has shown that whenthe real sparsity is relatively larger whatever the value of δs isthe optimal result of the estimated initial sparsity is only abouthalf of the real sparsity value ere is still a large distancebetween the estimated initial sparsity and the real sparsityerefore the algorithm may require more iterations to makethe estimated sparsity value approach the real sparsity value

To overcome these problems in MAOMP we improve theMAOMPmethod in terms of estimating sparsity and selectingatom Firstly we directly set the initial sparsity as 1 and use anonlinear function to adjust the step size to make it have largervalue at the beginning phase of iteration process and smallervalue with the increasing of iteration It is expressed as follows

sk+1 lceilsk timesk

a1113888 1113889

minus krceil (11)

Lk+1 Lk + sk+1 (12)

where k is the number of iterations sk is the step size a isconstant that is larger than 1 lceilarceil denotes the smallest integerthat is not smaller than a and Lk is the estimated sparsityAccording to (11) at the beginning of iteration the step size islarger As the number of iterations increases the step sizebecomes smaller and gradually decreases to 1is means thatif the distance between the real sparsity and initial sparsity islarger the proposed method can quickly make the estimatedsparsity close to the real sparsity to reduce the consumptiontime for estimating the sparsity As seen from (11) with in-creasing the number of iterations the distance between the realsparsity and estimated sparsity is close and we use a smallerstep size to adjust the estimated sparsity to prevent the esti-mated sparsity from being insufficient or overestimated iscan make the sparsity estimation more accurate and faster

e value of nonlinear function (ka)minus k is shown inFigure 1 where k is the number of iterations and a 2 FromFigure 1 we can see that the function has faster descentspeed for smaller iteration and slower descent speed for

larger iteration is will make the estimated sparsity movetoward the real sparsity faster at the beginning phase ofiteration process and move toward the real sparsity slower atthe end phase of iteration process erefore it makes theproposed method have faster convergence speed and lowerreconstruction error However if a is too large the step sizewill be also very large at the beginning phase of iterationprocessis can lead to overestimation of the sparsity valuethus affecting the accuracy of the algorithm

Secondly we use the generalized orthogonal matchingpursuit to select the atoms according to the correlation be-tween the sensingmatrix and residual vector It is expressed as

InputSensing matrix Amtimesn

Observation vector ymtimes1

Initial step size s0Constant parameter βStop threshold εInitialization parameter

1113954x 0initialize signal approximationk 0loop indexL0 1initial sparsity estimateS0 empty empty preliminary index setC0 emptyempty candidate index setF0 emptyempty support index setr0 yresidual vectordone 0 while loop flag

While (simdone)(1) Compute the projective set

S0 max(|ATy| L0)

(2) Compute the estimated sparsityIf AT

S0y2lt 1 minus δs

1 + δs

1113968y2

en L0 L0 + 1 return on step (1)Else L1 L0 c0 y minus As0

A+

s0y k 1 return on step (3)

(3) Compute a new projective setSk max(|ATrkminus1| Lk)

(4) Merge to update the candidate index setCk Fkminus1cup Sk

(5) Get the estimate signal value by least squares algorithmx

Ck A+

Cky (AT

CkACk

)minus 1ATCk

y

(6) Prune to obtain current support index setF max(|1113954xCk

| Lk)

(7) Update signal final estimatex

k (ATFAF)minus 1AT

Fy

residual error rc y minus AFx

k

(8) check the iteration conditionIf rc2le ε

done 1 quit iterationelse if rc2le rkminus12sk+1 1113862β times sk1113863

Lk+1 Lk + sk+1k k + 1elseFk F

rk rc

endendOutput 1113954x xk( s-sparse approximation of signal x)

ALGORITHM 1 MAOMP algorithm

Journal of Electrical and Computer Engineering 3

Sk max ATrkminus1

11138681113868111386811138681113868

11138681113868111386811138681113868 num1113874 1113875 (13)

where Sk is projective set num is the fixed number to selectatomics A is sensing matrix and rk is residual errorCompared with MAOMP our proposed method has thenumber of selected atomics num fixed at each iteration isreduces the algorithm complexity e more the correlationatomics are selected to extend the candidate atomic set thebetter the accuracy is However if the number of selectedatomics is larger it will contain some atomics with lowercorrelation and reduce algorithm accuracy How to choose asuitable num is discussed in the simulation

e proposed algorithm begins to select atoms using thegeneralized orthogonal matching pursuit method thenupdates the step size and estimates sparsity using the newvariable step method e detailed steps of the proposedmethod are shown in Algorithm 2

5 Results and Discussion

51 Parameters Selection e selection of parameter a andthe number of selected atomics num directly affect the al-gorithm performance In this section we use the sourcesignal as a Gaussian signal with length N 256 and mea-surement value M 128 And the stop iteration parameterε 10minus 6 erefore we firstly set the num as 25 and the a asvariable to search the optimal a e relationship betweenthe a and reconstruction probability is shown in Figure 2From Figure 2(a) we can see that the reconstruction prob-ability decreases with the increasing of sparsity level Whenthe sparsity level is between 40 and 45 the reconstructionprobability for the proposed method with a 2 is the highestnext is a 3 When the sparsity level is greater than 50 thereconstruction probability for the proposed method with a

3 is the highest next is a 2 Because the large step size leadsto overestimation the reconstruction probability drops rap-idly at K 30 when a 4 And when a 32 the recon-struction probability is not as good as a 3 Based on theabove analysis the reconstruction probability for a 3 ishigher than the other values for the larger sparsity levelHowever the reconstruction probability becomes poor when

agt 3us we select the value of a between 2 and 3 as shownin Figure 2(b) From Figure 2(b) the reconstruction proba-bility for the proposed method with a 25 is the highestvalue us we select a 25 for the experiments

At the next experiment we set the a as constant and thenumber of selected atomics num as variable e result can beseen in Figures 3 and 4 From Figure 3 the initial step size is 5and we can see that as the sparsity level increases the recon-struction probability of proposed method for num 10 is thelowest compared with the other valuesWhen the sparsity level isbetween 45 and 50 the reconstruction probability for the pro-posed method with num 30 is the highest next is num 20When the sparsity level is greater than 55 the reconstructionprobability of proposed method for num 20 is the highestMoreover from Figure 4 when initial step size is 10 we can seethat the reconstruction probability of proposed method fornum 20 is the lowest compared with the other values Fur-thermore the reconstruction probability of proposedmethod fornum 40 is the highest for all the sparsity levels ComparingFigures 3 and 4 we can conclude that when num is four times theinitial step size the reconstruction quality is the best

Based on the above analysis we set the number of se-lected atomics as four times the initial step size and theparameter a as 25 in the flowing experiments e exper-iment conditions are as follows the CPU is Intelreg Coretrade i5-8300H at 230GHz and the size of RAM is 8GB

52 One-Dimensional Signal Reconstruction In this sectionwe use one-dimensional signal as source signal to test thereconstruction performance of different methods (SAMPalgorithm MAOMP algorithm and our proposed method)e source signal is a Gaussian signal with length N 256and sparsity level K 40 And the stop iteration parameterε 10minus 6e initial step size of all methods is set as 5 and 10As known in [19] in MAOMP algorithm when the pa-rameter δs is 02 and β is 06 the result of signal recon-struction is the best erefore we select the parameter δs as02 and β as 06 in these experiments In our proposed al-gorithm the parameter a 25 and num are four times theinitial step size Figure 5 shows the reconstruction proba-bility of signals under different measurement values When

(ka)minus

k

15 2 25 3 35 4 45 5 55 61k

0020406081

121416182

Figure 1 e value of (ka)minus k increases with k where a 2

4 Journal of Electrical and Computer Engineering

the measurement value increases the reconstructionprobability becomes higher Our proposed method is sig-nificantly better than the other two algorithms When themeasurement value is 70 the algorithm we proposed canreconstruct the signal but the reconstruction probability ofthe other two algorithms is still 0 Moreover whatever themeasurement value is our proposed algorithm has higherreconstruction probability than the other two algorithmsis means that the proposed method has higher recon-struction probability under different measurement values

Next when comparing the reconstruction probability ofthe algorithm under different sparsity levels the fixedmeasurement value M 128 and the other experimentalconditions are the same as the previous experiment Figure 6shows the reconstruction probability with different sparsitylevels

It can be seen from Figure 6 that the reconstructionprobability decreases with the increasing of sparsity levelWhen 45leKle 50 the SAMP with s 5 begins to declinebut other algorithms still maintain almost 100 recon-struction probability When 50leKle 70 all algorithm re-construction probability values begin to decline Inparticular in K 65 the reconstruction probability ofSAMP and MAOMP in which the initial step size is 10dropped to 0 However the reconstruction probability of ourproposed method is higher than that of the other two al-gorithms And when the sparsity level is 70 the proposedmethod with s 10 still can successfully reconstruct thesignal with probability 3754 but the reconstructionprobability of the other two algorithms has dropped to 0is shows that the proposed method has higher recon-struction probability under different sparsity levels By

InputSensing matrix Amtimesn

Observation vector ymtimes1

Constant parameter a

Initial step size s0e number of atoms selected each time numTolerance used to exit loop ε

Initialize parameter1113954x 0 initialize signal approximationk 1 loop indexL0 1 initial sparsity estimatedone 0 while loop flagS0 empty empty preliminary index setC0 empty empty candidate index setF0 empty empty support index set

While (simdone)(1) Compute the projective set

S0 max(|ATrkminus1| num)

(2) Merge to update the candidate index setCk Fkminus1cup Sk

(3) Get the estimate signal value and residual error by least squares algorithmx

Ck A+

Cky (AT

CkACk

)minus 1ATCk

y

rk y minus ACkx

Ck

(4) prune to obtain current support index setF max(|1113954xCk

| Lk)

(5) update signal final estimate by least squares algorithm and compute residual errorx

k (ATFAF)minus 1AT

Fy

residual error rc y minus AFx

k

(6) Check the iteration conditionIf rc2 le ε

done 1 quit iterationelse if rc2le rkminus12k k + 1sk+1 1113964sk times (ka)minus k

1113965

Lk+1 Lk + sk+1elseFk F

rk rc

endEndOutput1113954x xk(s-sparse approximation of signal x)

ALGORITHM 2 Proposed algorithm

Journal of Electrical and Computer Engineering 5

comparing the one-dimensional signal with different mea-surement values and different sparsity levels it can be seenthat our proposed method has obvious advantage in one-dimensional signal reconstruction

53 Two-Dimensional Image Reconstruction In this sectionwe use a grayscale image of size 256 times 256 called Lena assource signal to test the reconstruction performance ofdifferent methods (SAMP algorithm MAOMP algo-rithm and our proposed method) e wavelet basis is

used as the sparse basis to sparse the image e initialstep sizes of the three algorithms are 1 5 and 10 re-spectively e stop iteration parameter ε 10minus 6 the β ofthe MAOMP algorithm is 06 and δs takes 02 In ourproposed method parameter a 25 and num are fourtimes the initial step size Figure 7 shows the two-di-mensional signal Lena that is reconstructed by ourproposed method SAMP method and MAOMP methodwith sampling rate 06 We use the peak signal-to-noiseratio (PSNR) to represent the quality of image recon-struction and it can be expressed as

20 25 30 35 40 45 50 55 60 65 700

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

a = 1a = 2a = 3

a = 32a = 4

(a)

Sparsity level K40 45 50 55 60 65

0102030405060708090

100

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

a = 28a = 29a = 3

a = 21a = 22a = 23a = 24

a = 25a = 26a = 27

(b)

Figure 2 e effect of different a on reconstruction probability under different sparsity levels

45 50 55 60 65 7020

30

40

50

60

70

80

90

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

num = 20num = 30num = 40

num = 50num = 60

Figure 4 e effect of num on reconstruction probability underdifferent sparsity levels with initial step size of 10

40 45 50 55 60 650

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

num = 10num = 15num = 20

num = 25num = 30

Figure 3 e effect of num on reconstruction probability underdifferent sparsity levels with initial step size of 5

6 Journal of Electrical and Computer Engineering

MSE 1

M times N1113944

Mminus1

i01113944

Nminus1

j0|1113954x(i j) minus x(i j)|

2

PSNR 10 times log102n minus 1( )2

MSE1113888 1113889

(14)

where M N 256 1113954x(i j) represents the reconstructedvalue of the corresponding position of the test imageand x(i j) represents the original value of the corre-sponding position of the test picture MSE representsmean squared error MAX1113954x represents the maximumvalue of the image e size of the image is 256 because28 256 so that the sampling point is 8 bits and the

value of n is 8 e larger the PSNR value the better theimage quality

Figures 7 to 9 show the original signal and the recon-structed signals by SAMP MAOMP and the proposedmethod with initial step sizes of 1 5 and 10 respectively Itcan be seen from Figures 7 and 8 that the PSNR of SAMPand MAOMP decreases when the initial step size becomeslarger e SAMP cannot reconstruct signal when the initialstep size is 5 and 10 e PSNR is also lower for MAOMPalgorithm when the initial step size is 10 is is because alarge initial step size causes the sparsity value to be over-estimated and affects accuracy However in Figure 9 ourproposedmethod uses (11) to adjust the step size to make thestep size gradually decreaseis can prevent overestimation

70 75 80 85 90 95 100 105 110 115 1200

102030405060708090

100

No of measurements

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the measurementsK(M = 40 N = 256)(Gaussian)

SAMP s = 5SAMP s = 10MAOMP s = 5

MAOMP s = 10Proposed s = 5Proposed s = 10

Figure 5 e reconstruction probability of signals under different measurement values

45 50 55 60 65 700

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

SAMP s = 5SAMP s = 10MAOMP s = 5

MAOMP s = 10Proposed s = 5Proposed s = 10

Figure 6 e reconstruction probability of the algorithm with different sparsity levels

Journal of Electrical and Computer Engineering 7

Original image

(a)

SAMP s = 1 PSNR = 282053

(b)

SAMP s = 5 PSNR = 72404

(c)

SAMP s = 10 PSNR = 72404

(d)

Figure 7 e PSNR value with different initial step sizes in SAMP (a) e original image (b) SAMP when initial step size is 1 (c) SAMPwhen initial step size is 5 (d) SAMP when initial step size is 10

Original image

(a)

MAOMP s = 1 PSNR = 288606

(b)

Figure 8 Continued

8 Journal of Electrical and Computer Engineering

MAOMP s = 5 PSNR = 163367

(c)

MAOMP s = 10 PSNR = 116794

(d)

Figure 8 e PSNR value with different initial step sizes in MAOMP (a) e original image (b) MAOMP when initial step size is 1 (c)MAOMP when initial step size is 5 (d) MAOMP when initial step size is 10

Original image

(a)

Proposed s = 1 PSNR = 291934

(b)

Proposed s = 5 PSNR = 292986

(c)

Proposed s = 10 PSNR = 298454

(d)

Figure 9 e PSNR value with different initial step sizes in the proposed method (a) e original image (b) the proposed method wheninitial step size is 1 (c) the proposed method when initial step size is 5 (d) the proposed method when initial step size is 10

Journal of Electrical and Computer Engineering 9

for larger initial step size and reduce error erefore theproposed method has better performance in terms of errorand stability than SAMP and MAOMP algorithm for thetwo-dimensional image based on analysis of Figures 7 to 9

And can be seen from Figures 7 to 9 when the initial stepsize is 1 SAMP and MAOMP have better reconstructedresult erefore we select s 1 in the following experi-ments Figure 10 shows the PSNR value with differentsampling rates In this experiment we choose sampling rateof 03 04 05 06 07 and 08 and select the initial step sizeas 1 e test image is Lena with 256 times 256 size e otherexperimental conditions are the same as the previous ex-periment As can be seen from Figure 8 as the sampling rateincreases the PSNR value becomes larger Compared withthe other two algorithms the PSNR value of our proposedmethod is the highest is shows that the proposed methodhas smaller error in reconstructing images

Figure 11 shows the recovery times of the three algo-rithms with different sampling rates When the samplingrate is 03 the three algorithms have almost the same re-covery time With the sampling rate increasing recoverytime becomes longer Our proposed method still consumes

less time than the other methods for different sampling ratesWhen the sampling rate is 08 the proposed method runs1028 seconds shorter than SAMP and 446 seconds shorterthan MAOMP is proves that the proposed method hasbetter performance than the other two algorithms in termsof convergence speed Based on the above analysis theproposed method has smaller error better stability andfaster convergence speed than the SAMP and MAOMPalgorithm

6 Conclusions

In this paper we proposed a generalized sparsity adaptivematching pursuit algorithm with variable step size isalgorithm uses the idea of generalized atom selection tochoose more atoms at the beginning of the iteration and thesignal estimation solution is more accurate by backtrackingRegarding variable step size an idea of variable step size of anonlinear function is proposed is can make the step sizelarge at the beginning which is used to speed up the con-vergence of the algorithmen the step length is reduced to1 so the sparsity estimation value is more accurate therebyimproving the reconstruction accuracy and reducing therunning time of the algorithm

Simulation results demonstrate that our proposedmethod has a better reconstruction performance comparedwith SAMP algorithm and MAOMP algorithm For one-dimensional Gaussian signal among the different mea-surement values and different sparsity levels the recon-struction probability of our proposedmethod is the best andthe signal can be reconstructed at low observation or highsparsity For two-dimensional image our proposed methodhas better reconstruction quality which is measured byPSNR Compared to MAOMP and SAMP our proposedmethod has faster convergence speed Moreover as theinitial step size increases our proposed method can stillreconstruct images with high quality In a word our pro-posed method is better than similar algorithms in terms ofconvergence speed and reconstruction accuracy

Data Availability

All the data used for the simulations can be obtained fromthe corresponding author upon request

Conflicts of Interest

e authors declare that they have no conflicts of interest

Acknowledgments

is work was supported by the National Natural ScienceFoundation of China (61271115) and the Science andTechnology Innovation and Entrepreneurship Talent Cul-tivation Program of Jilin (20190104124)

References

[1] D L Donoho ldquoCompressed sensingrdquo IEEE Transactions onInformation 6eory vol 52 no 4 pp 1289ndash1306 2006

03 04 05 06 07 0820

22

24

26

28

30

32

34

Sampling rate

PSN

R (d

B)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 10 e PSNR value with different sampling rates

03 04 05 06 07 085

10

15

20

25

30

35

40

Sampling rate

Reco

very

tim

e (s)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 11 e recovery time with different sampling rates

10 Journal of Electrical and Computer Engineering

[2] G Yang S Yu H Dong et al ldquoDAGAN deep de-aliasinggenerative adversarial networks for fast compressed sensingMRI reconstructionrdquo IEEE Transactions on Medical Imagingvol 37 no 6 pp 1310ndash1321 2018

[3] L Yuanqi S Renjie W Yongli and B Long ldquoDesign of Real-time communication system for portable smart glasses basedon Raspberry PIrdquo Journal of Northeast Electric Power Uni-versity vol 39 no 4 pp 81ndash85 2019

[4] B Li F Liu C Zhou and Y Lv ldquoPhase error correction forapproximated observation-based compressed sensing radarimagingrdquo Sensors vol 17 no 31 pp 1ndash21 2017

[5] L Changyin S Renjie H Yanquan and Z Yandong ldquoDesignand implementation of high voltage transmission equipmentauxiliary management systemrdquo Journal of Northeast ElectricPower University vol 39 no 4 pp 86ndash90 2019

[6] S Osher and Y Li ldquoldquoCoordinate descent optimization forl1minimization with application to compressed sensingrdquo agreedy algorithmrdquo Inverse Problems and Imaging (IPI) vol 3no 3 pp 487ndash503 2017

[7] H Rauhut R Schneiderv and Stojanac ldquoLow rank tensorrecovery via iterative hard thresholdingrdquo Linear Algebra andIts Applications vol 523 pp 220ndash262 2017

[8] Z Dong and W Zhu ldquoHomotopy methods based on $l_0$-norm for compressed sensingrdquo IEEE Transactions on NeuralNetworks and Learning Systems vol 29 no 4 pp 1132ndash11462018

[9] J Wen J Weng C Tong C Ren and Z Zhou ldquoSparse signalrecovery with minimization of 1-norm minus 2-normrdquo IEEETransactions on Vehicular Technology vol 68 no 7pp 6847ndash6854 2019

[10] J Wen Z Zhou J Wang X Tang and Q Mo ldquoA sharpcondition for exact support recovery with orthogonalmatching pursuitrdquo IEEE Transactions on Signal Processingvol 65 no 6 pp 1370ndash1382 2017

[11] Y Liao X Zhou X Shen and G Hong ldquoA channel esti-mation method based on improved regularized orthogonalmatching pursuit for MIMO-OFDM systemsrdquo Acta Elec-tronica Sinica vol 45 no 12 pp 2848ndash2854 2017

[12] D Park ldquoImproved sufficient condition for performanceguarantee in generalized orthogonal matching pursuitrdquo IEEESignal Processing Letters vol 24 no 9 pp 1308ndash1312 2017

[13] F Huang J Tao Y Xiang P Liu L Dong and L WangldquoParallel compressive sampling matching pursuit algorithmfor compressed sensing signal reconstruction with OpenCLrdquoJournal of Systems Architecture vol 72 no 1 pp 51ndash60 2017

[14] P Goyal and B Singh ldquoSubspace pursuit for sparse signalreconstruction in wireless sensor networksrdquo Procedia Com-puter Science vol 125 pp 228ndash233 2018

[15] A M Huang G Gui Q Wan and A Mehbodniya ldquoA blockorthogonal matching pursuit algorithm based on sensingdictionaryrdquordquo International Journal of the Physical Sciencesvol 6 no 5 pp 992ndash999 2011

[16] H Li and J Wen ldquoA new analysis for support recovery withblock orthogonal matching pursuitrdquo IEEE Signal ProcessingLetters vol 26 no 2 pp 247ndash251 2018

[17] J Wen Z Zhou Z Liu M-J Lai and X Tang ldquoSharpsufficient conditions for stable recovery of block sparse signalsby block orthogonal matching pursuitrdquo Applied and Com-putational Harmonic Analysis vol 47 no 3 pp 948ndash9742019

[18] R Shoitan Z Nossair I I Ibrahim and A Tobal ldquoImprovingthe reconstruction efficiency of sparsity adaptive matchingpursuit based on the Wilkinson matrixrdquo Frontiers of

Information Technology amp Electronic Engineering vol 19no 4 pp 503ndash512 2018

[19] B Li Y Sun G Li et al ldquoGesture recognition based onmodified adaptive orthogonal matching pursuit algorithmrdquoCluster Computing vol 3 pp 1ndash10 2017

Journal of Electrical and Computer Engineering 11

Page 2: ImprovedGeneralizedSparsityAdaptiveMatchingPursuit ...downloads.hindawi.com/journals/jece/2020/2782149.pdfMinistry of Education (Northeast Electric Power University), Jilin, China

not contain the signal information the gOMP algorithmcannot delete these atoms in the following iteration isaffects the reconstruction performance erefore the re-searchers propose compressed sampling matching pursuit(CoSaMP) [13] and subspace pursuit (SP) [14] Both of themuse backtracking strategy to select the most relevant atomsfirstly and then check the atomic correlation again to removeunrelated atoms in the end us they can improve thereconstruction accuracy Besides block orthogonal match-ing pursuit algorithm (BOMP) is also proposed for blocksparse signal [15 16] and sharp sufficient conditions forstable recovery are also given [17] In this paper we mainlyconsider the normal sparse signal

However these algorithms have the common limitationthat sparsity information needs be known but the sparsity isoften unknown in the practical application To solve thisproblem ong proposes the sparsity adaptive matchingpursuit (SAMP) [18] that can recover signals withoutknowing the sparsity value It firstly sets a small estimatedsparsity value and then uses a fixed step size to make theestimated sparsity value increase after each iteration andfinally approach the true sparsity value to reconstruct thesignal However the fixed step size may cause the estimatedsparsity value to be inaccurate and affect the accuracy of thereconstruction To overcome the problem the modifiedadaptive orthogonal matching pursuit algorithm (MAOMP)[19] was proposed It used a smaller factor less than 1 tomodify the step size and made the step size become smallerwith the increase of iterationse factor used inMAOMP isfixed If the initial step size is too small it requires a largenumber of iterations is will affect the convergence speedof MAOMP algorithm

To improve the convergence speed of MAOMP algo-rithm we use a nonlinear function to modify the factor usedinMAOMP algorithme factor is variable in our proposedmethode factor is larger at the beginning of iteration andgradually decreases with the increase of iteration numberis can accelerate the convergence speed Besides thegeneralized atom selection strategy is also used to select theatoms that are most related to residual for constructing thecandidateatomic set and thus improving the accuracy ofreconstruction

2 Compressed Sensing Theory

e basic assumption of compressed sensing is that thesignal is sparse If a signal s with length N has K nonzerovalues K≪N it is called K sparse signal In some realapplications the signal s is not sparse so the signal s is notcompressible In order to make the original signal sparse thesparse basis matrix Ψ φ1φ2 φN1113864 1113865 is used It can beexpressed as

s 1113944N

i1φiαi Ψα (1)

where α is the sparse signal with K nonzero values (K≪N)e sparse basis methods include fast Fourier transformbasis discrete Fourier transform basis wavelet transform

basis and redundant dictionary In order to compress theobserved signal s a measurement matrix Φ is designed todeal with the signal s e compressed signal is

y Φs (2)

where Φ isin RMtimesN y isin Rmtimes1 and M≪N e length ofobserved signal is N and the length of compressed signal isM Mixed with (1) (2) can be expressed as

y ΦΨα Aα (3)

where A ΦΨ is a matrix (M times N) called sensing matrixe compressed signal measurement matrix and sparse

basis are known e aim of reconstruction is to recover thesparse signal and original signal under the above unknowninformation erefore the number of sensing matrix rowsis smaller than the number of columns in (3) It cannot besolved by traditional method In the compressed sensingmethod the l0 minimum norm method is used to solve (3)at is

s

argmin α0

st y Aα(4)

Some researchers also proposed to use l1 minimumnorm method to solve (3) at is

s

argmin α1

st y Aα(5)

3 MAOMP Algorithm

e SAMP algorithm uses a fixed step size method to es-timate the sparsity which is expressed in as follows

k k + 1 (6)

L k times s (7)

where k is the iteration number L is the estimated sparsityand s is fixed step size From (6) and (7) the estimatedsparsity becomes larger with the increasing iterationnumber but the step size is fixed erefore the estimatedsparsity is often insufficient or overestimated which hasnegative influence on signal reconstruction accuracy Inorder to solve this question MAOMP algorithm was pro-posed e MAOMP algorithm uses (8) (9) and (10) toestimate sparsity

ATy

2 lt

1 minus δs1 + δs

1113968 y2

L0 L0 + 1

(8)

sk+1 lceilβ times skrceil (9)

Lk+1 Lk + sk+1 (10)

where δs is a constant value between 0 and 1 A is the sensingmatrix y is compressed vector k is the number of iterations

2 Journal of Electrical and Computer Engineering

Lk is the estimated sparsity β is also a constant value be-tween 0 and 1 and lceilarceil denotes the smallest integer that isnot smaller than a If (8) is satisfied the initial sparsity will beincreased by 1 If (8) is not satisfied MAOMP will use (9)and (10) to continue estimating the sparsity e variablestep method can be expressed as (9) and (10) In (9) and (10)it can be seen that the step size is gradually reduced to 1 withthe increase of iteration number is can make the sparsityvalue more accurate so the algorithm can improve recon-struction accuracye detailed steps of MAOMP are shownin Algorithm 1

4 Proposed Method

Although theMAOMPuses (8) tomodify the initial sparsity toreduce the iteration number of sparsity estimation it alsoincreases the computational complexity of sparsity initialestimation Besides if the initial step size is smaller the stepsize based on (9) will be rapidly reduced to 1 When the initialsparsity is far away from the real sparsity it will costmuch timeto converge And in [19] the researcher has shown that whenthe real sparsity is relatively larger whatever the value of δs isthe optimal result of the estimated initial sparsity is only abouthalf of the real sparsity value ere is still a large distancebetween the estimated initial sparsity and the real sparsityerefore the algorithm may require more iterations to makethe estimated sparsity value approach the real sparsity value

To overcome these problems in MAOMP we improve theMAOMPmethod in terms of estimating sparsity and selectingatom Firstly we directly set the initial sparsity as 1 and use anonlinear function to adjust the step size to make it have largervalue at the beginning phase of iteration process and smallervalue with the increasing of iteration It is expressed as follows

sk+1 lceilsk timesk

a1113888 1113889

minus krceil (11)

Lk+1 Lk + sk+1 (12)

where k is the number of iterations sk is the step size a isconstant that is larger than 1 lceilarceil denotes the smallest integerthat is not smaller than a and Lk is the estimated sparsityAccording to (11) at the beginning of iteration the step size islarger As the number of iterations increases the step sizebecomes smaller and gradually decreases to 1is means thatif the distance between the real sparsity and initial sparsity islarger the proposed method can quickly make the estimatedsparsity close to the real sparsity to reduce the consumptiontime for estimating the sparsity As seen from (11) with in-creasing the number of iterations the distance between the realsparsity and estimated sparsity is close and we use a smallerstep size to adjust the estimated sparsity to prevent the esti-mated sparsity from being insufficient or overestimated iscan make the sparsity estimation more accurate and faster

e value of nonlinear function (ka)minus k is shown inFigure 1 where k is the number of iterations and a 2 FromFigure 1 we can see that the function has faster descentspeed for smaller iteration and slower descent speed for

larger iteration is will make the estimated sparsity movetoward the real sparsity faster at the beginning phase ofiteration process and move toward the real sparsity slower atthe end phase of iteration process erefore it makes theproposed method have faster convergence speed and lowerreconstruction error However if a is too large the step sizewill be also very large at the beginning phase of iterationprocessis can lead to overestimation of the sparsity valuethus affecting the accuracy of the algorithm

Secondly we use the generalized orthogonal matchingpursuit to select the atoms according to the correlation be-tween the sensingmatrix and residual vector It is expressed as

InputSensing matrix Amtimesn

Observation vector ymtimes1

Initial step size s0Constant parameter βStop threshold εInitialization parameter

1113954x 0initialize signal approximationk 0loop indexL0 1initial sparsity estimateS0 empty empty preliminary index setC0 emptyempty candidate index setF0 emptyempty support index setr0 yresidual vectordone 0 while loop flag

While (simdone)(1) Compute the projective set

S0 max(|ATy| L0)

(2) Compute the estimated sparsityIf AT

S0y2lt 1 minus δs

1 + δs

1113968y2

en L0 L0 + 1 return on step (1)Else L1 L0 c0 y minus As0

A+

s0y k 1 return on step (3)

(3) Compute a new projective setSk max(|ATrkminus1| Lk)

(4) Merge to update the candidate index setCk Fkminus1cup Sk

(5) Get the estimate signal value by least squares algorithmx

Ck A+

Cky (AT

CkACk

)minus 1ATCk

y

(6) Prune to obtain current support index setF max(|1113954xCk

| Lk)

(7) Update signal final estimatex

k (ATFAF)minus 1AT

Fy

residual error rc y minus AFx

k

(8) check the iteration conditionIf rc2le ε

done 1 quit iterationelse if rc2le rkminus12sk+1 1113862β times sk1113863

Lk+1 Lk + sk+1k k + 1elseFk F

rk rc

endendOutput 1113954x xk( s-sparse approximation of signal x)

ALGORITHM 1 MAOMP algorithm

Journal of Electrical and Computer Engineering 3

Sk max ATrkminus1

11138681113868111386811138681113868

11138681113868111386811138681113868 num1113874 1113875 (13)

where Sk is projective set num is the fixed number to selectatomics A is sensing matrix and rk is residual errorCompared with MAOMP our proposed method has thenumber of selected atomics num fixed at each iteration isreduces the algorithm complexity e more the correlationatomics are selected to extend the candidate atomic set thebetter the accuracy is However if the number of selectedatomics is larger it will contain some atomics with lowercorrelation and reduce algorithm accuracy How to choose asuitable num is discussed in the simulation

e proposed algorithm begins to select atoms using thegeneralized orthogonal matching pursuit method thenupdates the step size and estimates sparsity using the newvariable step method e detailed steps of the proposedmethod are shown in Algorithm 2

5 Results and Discussion

51 Parameters Selection e selection of parameter a andthe number of selected atomics num directly affect the al-gorithm performance In this section we use the sourcesignal as a Gaussian signal with length N 256 and mea-surement value M 128 And the stop iteration parameterε 10minus 6 erefore we firstly set the num as 25 and the a asvariable to search the optimal a e relationship betweenthe a and reconstruction probability is shown in Figure 2From Figure 2(a) we can see that the reconstruction prob-ability decreases with the increasing of sparsity level Whenthe sparsity level is between 40 and 45 the reconstructionprobability for the proposed method with a 2 is the highestnext is a 3 When the sparsity level is greater than 50 thereconstruction probability for the proposed method with a

3 is the highest next is a 2 Because the large step size leadsto overestimation the reconstruction probability drops rap-idly at K 30 when a 4 And when a 32 the recon-struction probability is not as good as a 3 Based on theabove analysis the reconstruction probability for a 3 ishigher than the other values for the larger sparsity levelHowever the reconstruction probability becomes poor when

agt 3us we select the value of a between 2 and 3 as shownin Figure 2(b) From Figure 2(b) the reconstruction proba-bility for the proposed method with a 25 is the highestvalue us we select a 25 for the experiments

At the next experiment we set the a as constant and thenumber of selected atomics num as variable e result can beseen in Figures 3 and 4 From Figure 3 the initial step size is 5and we can see that as the sparsity level increases the recon-struction probability of proposed method for num 10 is thelowest compared with the other valuesWhen the sparsity level isbetween 45 and 50 the reconstruction probability for the pro-posed method with num 30 is the highest next is num 20When the sparsity level is greater than 55 the reconstructionprobability of proposed method for num 20 is the highestMoreover from Figure 4 when initial step size is 10 we can seethat the reconstruction probability of proposed method fornum 20 is the lowest compared with the other values Fur-thermore the reconstruction probability of proposedmethod fornum 40 is the highest for all the sparsity levels ComparingFigures 3 and 4 we can conclude that when num is four times theinitial step size the reconstruction quality is the best

Based on the above analysis we set the number of se-lected atomics as four times the initial step size and theparameter a as 25 in the flowing experiments e exper-iment conditions are as follows the CPU is Intelreg Coretrade i5-8300H at 230GHz and the size of RAM is 8GB

52 One-Dimensional Signal Reconstruction In this sectionwe use one-dimensional signal as source signal to test thereconstruction performance of different methods (SAMPalgorithm MAOMP algorithm and our proposed method)e source signal is a Gaussian signal with length N 256and sparsity level K 40 And the stop iteration parameterε 10minus 6e initial step size of all methods is set as 5 and 10As known in [19] in MAOMP algorithm when the pa-rameter δs is 02 and β is 06 the result of signal recon-struction is the best erefore we select the parameter δs as02 and β as 06 in these experiments In our proposed al-gorithm the parameter a 25 and num are four times theinitial step size Figure 5 shows the reconstruction proba-bility of signals under different measurement values When

(ka)minus

k

15 2 25 3 35 4 45 5 55 61k

0020406081

121416182

Figure 1 e value of (ka)minus k increases with k where a 2

4 Journal of Electrical and Computer Engineering

the measurement value increases the reconstructionprobability becomes higher Our proposed method is sig-nificantly better than the other two algorithms When themeasurement value is 70 the algorithm we proposed canreconstruct the signal but the reconstruction probability ofthe other two algorithms is still 0 Moreover whatever themeasurement value is our proposed algorithm has higherreconstruction probability than the other two algorithmsis means that the proposed method has higher recon-struction probability under different measurement values

Next when comparing the reconstruction probability ofthe algorithm under different sparsity levels the fixedmeasurement value M 128 and the other experimentalconditions are the same as the previous experiment Figure 6shows the reconstruction probability with different sparsitylevels

It can be seen from Figure 6 that the reconstructionprobability decreases with the increasing of sparsity levelWhen 45leKle 50 the SAMP with s 5 begins to declinebut other algorithms still maintain almost 100 recon-struction probability When 50leKle 70 all algorithm re-construction probability values begin to decline Inparticular in K 65 the reconstruction probability ofSAMP and MAOMP in which the initial step size is 10dropped to 0 However the reconstruction probability of ourproposed method is higher than that of the other two al-gorithms And when the sparsity level is 70 the proposedmethod with s 10 still can successfully reconstruct thesignal with probability 3754 but the reconstructionprobability of the other two algorithms has dropped to 0is shows that the proposed method has higher recon-struction probability under different sparsity levels By

InputSensing matrix Amtimesn

Observation vector ymtimes1

Constant parameter a

Initial step size s0e number of atoms selected each time numTolerance used to exit loop ε

Initialize parameter1113954x 0 initialize signal approximationk 1 loop indexL0 1 initial sparsity estimatedone 0 while loop flagS0 empty empty preliminary index setC0 empty empty candidate index setF0 empty empty support index set

While (simdone)(1) Compute the projective set

S0 max(|ATrkminus1| num)

(2) Merge to update the candidate index setCk Fkminus1cup Sk

(3) Get the estimate signal value and residual error by least squares algorithmx

Ck A+

Cky (AT

CkACk

)minus 1ATCk

y

rk y minus ACkx

Ck

(4) prune to obtain current support index setF max(|1113954xCk

| Lk)

(5) update signal final estimate by least squares algorithm and compute residual errorx

k (ATFAF)minus 1AT

Fy

residual error rc y minus AFx

k

(6) Check the iteration conditionIf rc2 le ε

done 1 quit iterationelse if rc2le rkminus12k k + 1sk+1 1113964sk times (ka)minus k

1113965

Lk+1 Lk + sk+1elseFk F

rk rc

endEndOutput1113954x xk(s-sparse approximation of signal x)

ALGORITHM 2 Proposed algorithm

Journal of Electrical and Computer Engineering 5

comparing the one-dimensional signal with different mea-surement values and different sparsity levels it can be seenthat our proposed method has obvious advantage in one-dimensional signal reconstruction

53 Two-Dimensional Image Reconstruction In this sectionwe use a grayscale image of size 256 times 256 called Lena assource signal to test the reconstruction performance ofdifferent methods (SAMP algorithm MAOMP algo-rithm and our proposed method) e wavelet basis is

used as the sparse basis to sparse the image e initialstep sizes of the three algorithms are 1 5 and 10 re-spectively e stop iteration parameter ε 10minus 6 the β ofthe MAOMP algorithm is 06 and δs takes 02 In ourproposed method parameter a 25 and num are fourtimes the initial step size Figure 7 shows the two-di-mensional signal Lena that is reconstructed by ourproposed method SAMP method and MAOMP methodwith sampling rate 06 We use the peak signal-to-noiseratio (PSNR) to represent the quality of image recon-struction and it can be expressed as

20 25 30 35 40 45 50 55 60 65 700

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

a = 1a = 2a = 3

a = 32a = 4

(a)

Sparsity level K40 45 50 55 60 65

0102030405060708090

100

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

a = 28a = 29a = 3

a = 21a = 22a = 23a = 24

a = 25a = 26a = 27

(b)

Figure 2 e effect of different a on reconstruction probability under different sparsity levels

45 50 55 60 65 7020

30

40

50

60

70

80

90

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

num = 20num = 30num = 40

num = 50num = 60

Figure 4 e effect of num on reconstruction probability underdifferent sparsity levels with initial step size of 10

40 45 50 55 60 650

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

num = 10num = 15num = 20

num = 25num = 30

Figure 3 e effect of num on reconstruction probability underdifferent sparsity levels with initial step size of 5

6 Journal of Electrical and Computer Engineering

MSE 1

M times N1113944

Mminus1

i01113944

Nminus1

j0|1113954x(i j) minus x(i j)|

2

PSNR 10 times log102n minus 1( )2

MSE1113888 1113889

(14)

where M N 256 1113954x(i j) represents the reconstructedvalue of the corresponding position of the test imageand x(i j) represents the original value of the corre-sponding position of the test picture MSE representsmean squared error MAX1113954x represents the maximumvalue of the image e size of the image is 256 because28 256 so that the sampling point is 8 bits and the

value of n is 8 e larger the PSNR value the better theimage quality

Figures 7 to 9 show the original signal and the recon-structed signals by SAMP MAOMP and the proposedmethod with initial step sizes of 1 5 and 10 respectively Itcan be seen from Figures 7 and 8 that the PSNR of SAMPand MAOMP decreases when the initial step size becomeslarger e SAMP cannot reconstruct signal when the initialstep size is 5 and 10 e PSNR is also lower for MAOMPalgorithm when the initial step size is 10 is is because alarge initial step size causes the sparsity value to be over-estimated and affects accuracy However in Figure 9 ourproposedmethod uses (11) to adjust the step size to make thestep size gradually decreaseis can prevent overestimation

70 75 80 85 90 95 100 105 110 115 1200

102030405060708090

100

No of measurements

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the measurementsK(M = 40 N = 256)(Gaussian)

SAMP s = 5SAMP s = 10MAOMP s = 5

MAOMP s = 10Proposed s = 5Proposed s = 10

Figure 5 e reconstruction probability of signals under different measurement values

45 50 55 60 65 700

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

SAMP s = 5SAMP s = 10MAOMP s = 5

MAOMP s = 10Proposed s = 5Proposed s = 10

Figure 6 e reconstruction probability of the algorithm with different sparsity levels

Journal of Electrical and Computer Engineering 7

Original image

(a)

SAMP s = 1 PSNR = 282053

(b)

SAMP s = 5 PSNR = 72404

(c)

SAMP s = 10 PSNR = 72404

(d)

Figure 7 e PSNR value with different initial step sizes in SAMP (a) e original image (b) SAMP when initial step size is 1 (c) SAMPwhen initial step size is 5 (d) SAMP when initial step size is 10

Original image

(a)

MAOMP s = 1 PSNR = 288606

(b)

Figure 8 Continued

8 Journal of Electrical and Computer Engineering

MAOMP s = 5 PSNR = 163367

(c)

MAOMP s = 10 PSNR = 116794

(d)

Figure 8 e PSNR value with different initial step sizes in MAOMP (a) e original image (b) MAOMP when initial step size is 1 (c)MAOMP when initial step size is 5 (d) MAOMP when initial step size is 10

Original image

(a)

Proposed s = 1 PSNR = 291934

(b)

Proposed s = 5 PSNR = 292986

(c)

Proposed s = 10 PSNR = 298454

(d)

Figure 9 e PSNR value with different initial step sizes in the proposed method (a) e original image (b) the proposed method wheninitial step size is 1 (c) the proposed method when initial step size is 5 (d) the proposed method when initial step size is 10

Journal of Electrical and Computer Engineering 9

for larger initial step size and reduce error erefore theproposed method has better performance in terms of errorand stability than SAMP and MAOMP algorithm for thetwo-dimensional image based on analysis of Figures 7 to 9

And can be seen from Figures 7 to 9 when the initial stepsize is 1 SAMP and MAOMP have better reconstructedresult erefore we select s 1 in the following experi-ments Figure 10 shows the PSNR value with differentsampling rates In this experiment we choose sampling rateof 03 04 05 06 07 and 08 and select the initial step sizeas 1 e test image is Lena with 256 times 256 size e otherexperimental conditions are the same as the previous ex-periment As can be seen from Figure 8 as the sampling rateincreases the PSNR value becomes larger Compared withthe other two algorithms the PSNR value of our proposedmethod is the highest is shows that the proposed methodhas smaller error in reconstructing images

Figure 11 shows the recovery times of the three algo-rithms with different sampling rates When the samplingrate is 03 the three algorithms have almost the same re-covery time With the sampling rate increasing recoverytime becomes longer Our proposed method still consumes

less time than the other methods for different sampling ratesWhen the sampling rate is 08 the proposed method runs1028 seconds shorter than SAMP and 446 seconds shorterthan MAOMP is proves that the proposed method hasbetter performance than the other two algorithms in termsof convergence speed Based on the above analysis theproposed method has smaller error better stability andfaster convergence speed than the SAMP and MAOMPalgorithm

6 Conclusions

In this paper we proposed a generalized sparsity adaptivematching pursuit algorithm with variable step size isalgorithm uses the idea of generalized atom selection tochoose more atoms at the beginning of the iteration and thesignal estimation solution is more accurate by backtrackingRegarding variable step size an idea of variable step size of anonlinear function is proposed is can make the step sizelarge at the beginning which is used to speed up the con-vergence of the algorithmen the step length is reduced to1 so the sparsity estimation value is more accurate therebyimproving the reconstruction accuracy and reducing therunning time of the algorithm

Simulation results demonstrate that our proposedmethod has a better reconstruction performance comparedwith SAMP algorithm and MAOMP algorithm For one-dimensional Gaussian signal among the different mea-surement values and different sparsity levels the recon-struction probability of our proposedmethod is the best andthe signal can be reconstructed at low observation or highsparsity For two-dimensional image our proposed methodhas better reconstruction quality which is measured byPSNR Compared to MAOMP and SAMP our proposedmethod has faster convergence speed Moreover as theinitial step size increases our proposed method can stillreconstruct images with high quality In a word our pro-posed method is better than similar algorithms in terms ofconvergence speed and reconstruction accuracy

Data Availability

All the data used for the simulations can be obtained fromthe corresponding author upon request

Conflicts of Interest

e authors declare that they have no conflicts of interest

Acknowledgments

is work was supported by the National Natural ScienceFoundation of China (61271115) and the Science andTechnology Innovation and Entrepreneurship Talent Cul-tivation Program of Jilin (20190104124)

References

[1] D L Donoho ldquoCompressed sensingrdquo IEEE Transactions onInformation 6eory vol 52 no 4 pp 1289ndash1306 2006

03 04 05 06 07 0820

22

24

26

28

30

32

34

Sampling rate

PSN

R (d

B)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 10 e PSNR value with different sampling rates

03 04 05 06 07 085

10

15

20

25

30

35

40

Sampling rate

Reco

very

tim

e (s)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 11 e recovery time with different sampling rates

10 Journal of Electrical and Computer Engineering

[2] G Yang S Yu H Dong et al ldquoDAGAN deep de-aliasinggenerative adversarial networks for fast compressed sensingMRI reconstructionrdquo IEEE Transactions on Medical Imagingvol 37 no 6 pp 1310ndash1321 2018

[3] L Yuanqi S Renjie W Yongli and B Long ldquoDesign of Real-time communication system for portable smart glasses basedon Raspberry PIrdquo Journal of Northeast Electric Power Uni-versity vol 39 no 4 pp 81ndash85 2019

[4] B Li F Liu C Zhou and Y Lv ldquoPhase error correction forapproximated observation-based compressed sensing radarimagingrdquo Sensors vol 17 no 31 pp 1ndash21 2017

[5] L Changyin S Renjie H Yanquan and Z Yandong ldquoDesignand implementation of high voltage transmission equipmentauxiliary management systemrdquo Journal of Northeast ElectricPower University vol 39 no 4 pp 86ndash90 2019

[6] S Osher and Y Li ldquoldquoCoordinate descent optimization forl1minimization with application to compressed sensingrdquo agreedy algorithmrdquo Inverse Problems and Imaging (IPI) vol 3no 3 pp 487ndash503 2017

[7] H Rauhut R Schneiderv and Stojanac ldquoLow rank tensorrecovery via iterative hard thresholdingrdquo Linear Algebra andIts Applications vol 523 pp 220ndash262 2017

[8] Z Dong and W Zhu ldquoHomotopy methods based on $l_0$-norm for compressed sensingrdquo IEEE Transactions on NeuralNetworks and Learning Systems vol 29 no 4 pp 1132ndash11462018

[9] J Wen J Weng C Tong C Ren and Z Zhou ldquoSparse signalrecovery with minimization of 1-norm minus 2-normrdquo IEEETransactions on Vehicular Technology vol 68 no 7pp 6847ndash6854 2019

[10] J Wen Z Zhou J Wang X Tang and Q Mo ldquoA sharpcondition for exact support recovery with orthogonalmatching pursuitrdquo IEEE Transactions on Signal Processingvol 65 no 6 pp 1370ndash1382 2017

[11] Y Liao X Zhou X Shen and G Hong ldquoA channel esti-mation method based on improved regularized orthogonalmatching pursuit for MIMO-OFDM systemsrdquo Acta Elec-tronica Sinica vol 45 no 12 pp 2848ndash2854 2017

[12] D Park ldquoImproved sufficient condition for performanceguarantee in generalized orthogonal matching pursuitrdquo IEEESignal Processing Letters vol 24 no 9 pp 1308ndash1312 2017

[13] F Huang J Tao Y Xiang P Liu L Dong and L WangldquoParallel compressive sampling matching pursuit algorithmfor compressed sensing signal reconstruction with OpenCLrdquoJournal of Systems Architecture vol 72 no 1 pp 51ndash60 2017

[14] P Goyal and B Singh ldquoSubspace pursuit for sparse signalreconstruction in wireless sensor networksrdquo Procedia Com-puter Science vol 125 pp 228ndash233 2018

[15] A M Huang G Gui Q Wan and A Mehbodniya ldquoA blockorthogonal matching pursuit algorithm based on sensingdictionaryrdquordquo International Journal of the Physical Sciencesvol 6 no 5 pp 992ndash999 2011

[16] H Li and J Wen ldquoA new analysis for support recovery withblock orthogonal matching pursuitrdquo IEEE Signal ProcessingLetters vol 26 no 2 pp 247ndash251 2018

[17] J Wen Z Zhou Z Liu M-J Lai and X Tang ldquoSharpsufficient conditions for stable recovery of block sparse signalsby block orthogonal matching pursuitrdquo Applied and Com-putational Harmonic Analysis vol 47 no 3 pp 948ndash9742019

[18] R Shoitan Z Nossair I I Ibrahim and A Tobal ldquoImprovingthe reconstruction efficiency of sparsity adaptive matchingpursuit based on the Wilkinson matrixrdquo Frontiers of

Information Technology amp Electronic Engineering vol 19no 4 pp 503ndash512 2018

[19] B Li Y Sun G Li et al ldquoGesture recognition based onmodified adaptive orthogonal matching pursuit algorithmrdquoCluster Computing vol 3 pp 1ndash10 2017

Journal of Electrical and Computer Engineering 11

Page 3: ImprovedGeneralizedSparsityAdaptiveMatchingPursuit ...downloads.hindawi.com/journals/jece/2020/2782149.pdfMinistry of Education (Northeast Electric Power University), Jilin, China

Lk is the estimated sparsity β is also a constant value be-tween 0 and 1 and lceilarceil denotes the smallest integer that isnot smaller than a If (8) is satisfied the initial sparsity will beincreased by 1 If (8) is not satisfied MAOMP will use (9)and (10) to continue estimating the sparsity e variablestep method can be expressed as (9) and (10) In (9) and (10)it can be seen that the step size is gradually reduced to 1 withthe increase of iteration number is can make the sparsityvalue more accurate so the algorithm can improve recon-struction accuracye detailed steps of MAOMP are shownin Algorithm 1

4 Proposed Method

Although theMAOMPuses (8) tomodify the initial sparsity toreduce the iteration number of sparsity estimation it alsoincreases the computational complexity of sparsity initialestimation Besides if the initial step size is smaller the stepsize based on (9) will be rapidly reduced to 1 When the initialsparsity is far away from the real sparsity it will costmuch timeto converge And in [19] the researcher has shown that whenthe real sparsity is relatively larger whatever the value of δs isthe optimal result of the estimated initial sparsity is only abouthalf of the real sparsity value ere is still a large distancebetween the estimated initial sparsity and the real sparsityerefore the algorithm may require more iterations to makethe estimated sparsity value approach the real sparsity value

To overcome these problems in MAOMP we improve theMAOMPmethod in terms of estimating sparsity and selectingatom Firstly we directly set the initial sparsity as 1 and use anonlinear function to adjust the step size to make it have largervalue at the beginning phase of iteration process and smallervalue with the increasing of iteration It is expressed as follows

sk+1 lceilsk timesk

a1113888 1113889

minus krceil (11)

Lk+1 Lk + sk+1 (12)

where k is the number of iterations sk is the step size a isconstant that is larger than 1 lceilarceil denotes the smallest integerthat is not smaller than a and Lk is the estimated sparsityAccording to (11) at the beginning of iteration the step size islarger As the number of iterations increases the step sizebecomes smaller and gradually decreases to 1is means thatif the distance between the real sparsity and initial sparsity islarger the proposed method can quickly make the estimatedsparsity close to the real sparsity to reduce the consumptiontime for estimating the sparsity As seen from (11) with in-creasing the number of iterations the distance between the realsparsity and estimated sparsity is close and we use a smallerstep size to adjust the estimated sparsity to prevent the esti-mated sparsity from being insufficient or overestimated iscan make the sparsity estimation more accurate and faster

e value of nonlinear function (ka)minus k is shown inFigure 1 where k is the number of iterations and a 2 FromFigure 1 we can see that the function has faster descentspeed for smaller iteration and slower descent speed for

larger iteration is will make the estimated sparsity movetoward the real sparsity faster at the beginning phase ofiteration process and move toward the real sparsity slower atthe end phase of iteration process erefore it makes theproposed method have faster convergence speed and lowerreconstruction error However if a is too large the step sizewill be also very large at the beginning phase of iterationprocessis can lead to overestimation of the sparsity valuethus affecting the accuracy of the algorithm

Secondly we use the generalized orthogonal matchingpursuit to select the atoms according to the correlation be-tween the sensingmatrix and residual vector It is expressed as

InputSensing matrix Amtimesn

Observation vector ymtimes1

Initial step size s0Constant parameter βStop threshold εInitialization parameter

1113954x 0initialize signal approximationk 0loop indexL0 1initial sparsity estimateS0 empty empty preliminary index setC0 emptyempty candidate index setF0 emptyempty support index setr0 yresidual vectordone 0 while loop flag

While (simdone)(1) Compute the projective set

S0 max(|ATy| L0)

(2) Compute the estimated sparsityIf AT

S0y2lt 1 minus δs

1 + δs

1113968y2

en L0 L0 + 1 return on step (1)Else L1 L0 c0 y minus As0

A+

s0y k 1 return on step (3)

(3) Compute a new projective setSk max(|ATrkminus1| Lk)

(4) Merge to update the candidate index setCk Fkminus1cup Sk

(5) Get the estimate signal value by least squares algorithmx

Ck A+

Cky (AT

CkACk

)minus 1ATCk

y

(6) Prune to obtain current support index setF max(|1113954xCk

| Lk)

(7) Update signal final estimatex

k (ATFAF)minus 1AT

Fy

residual error rc y minus AFx

k

(8) check the iteration conditionIf rc2le ε

done 1 quit iterationelse if rc2le rkminus12sk+1 1113862β times sk1113863

Lk+1 Lk + sk+1k k + 1elseFk F

rk rc

endendOutput 1113954x xk( s-sparse approximation of signal x)

ALGORITHM 1 MAOMP algorithm

Journal of Electrical and Computer Engineering 3

Sk max ATrkminus1

11138681113868111386811138681113868

11138681113868111386811138681113868 num1113874 1113875 (13)

where Sk is projective set num is the fixed number to selectatomics A is sensing matrix and rk is residual errorCompared with MAOMP our proposed method has thenumber of selected atomics num fixed at each iteration isreduces the algorithm complexity e more the correlationatomics are selected to extend the candidate atomic set thebetter the accuracy is However if the number of selectedatomics is larger it will contain some atomics with lowercorrelation and reduce algorithm accuracy How to choose asuitable num is discussed in the simulation

e proposed algorithm begins to select atoms using thegeneralized orthogonal matching pursuit method thenupdates the step size and estimates sparsity using the newvariable step method e detailed steps of the proposedmethod are shown in Algorithm 2

5 Results and Discussion

51 Parameters Selection e selection of parameter a andthe number of selected atomics num directly affect the al-gorithm performance In this section we use the sourcesignal as a Gaussian signal with length N 256 and mea-surement value M 128 And the stop iteration parameterε 10minus 6 erefore we firstly set the num as 25 and the a asvariable to search the optimal a e relationship betweenthe a and reconstruction probability is shown in Figure 2From Figure 2(a) we can see that the reconstruction prob-ability decreases with the increasing of sparsity level Whenthe sparsity level is between 40 and 45 the reconstructionprobability for the proposed method with a 2 is the highestnext is a 3 When the sparsity level is greater than 50 thereconstruction probability for the proposed method with a

3 is the highest next is a 2 Because the large step size leadsto overestimation the reconstruction probability drops rap-idly at K 30 when a 4 And when a 32 the recon-struction probability is not as good as a 3 Based on theabove analysis the reconstruction probability for a 3 ishigher than the other values for the larger sparsity levelHowever the reconstruction probability becomes poor when

agt 3us we select the value of a between 2 and 3 as shownin Figure 2(b) From Figure 2(b) the reconstruction proba-bility for the proposed method with a 25 is the highestvalue us we select a 25 for the experiments

At the next experiment we set the a as constant and thenumber of selected atomics num as variable e result can beseen in Figures 3 and 4 From Figure 3 the initial step size is 5and we can see that as the sparsity level increases the recon-struction probability of proposed method for num 10 is thelowest compared with the other valuesWhen the sparsity level isbetween 45 and 50 the reconstruction probability for the pro-posed method with num 30 is the highest next is num 20When the sparsity level is greater than 55 the reconstructionprobability of proposed method for num 20 is the highestMoreover from Figure 4 when initial step size is 10 we can seethat the reconstruction probability of proposed method fornum 20 is the lowest compared with the other values Fur-thermore the reconstruction probability of proposedmethod fornum 40 is the highest for all the sparsity levels ComparingFigures 3 and 4 we can conclude that when num is four times theinitial step size the reconstruction quality is the best

Based on the above analysis we set the number of se-lected atomics as four times the initial step size and theparameter a as 25 in the flowing experiments e exper-iment conditions are as follows the CPU is Intelreg Coretrade i5-8300H at 230GHz and the size of RAM is 8GB

52 One-Dimensional Signal Reconstruction In this sectionwe use one-dimensional signal as source signal to test thereconstruction performance of different methods (SAMPalgorithm MAOMP algorithm and our proposed method)e source signal is a Gaussian signal with length N 256and sparsity level K 40 And the stop iteration parameterε 10minus 6e initial step size of all methods is set as 5 and 10As known in [19] in MAOMP algorithm when the pa-rameter δs is 02 and β is 06 the result of signal recon-struction is the best erefore we select the parameter δs as02 and β as 06 in these experiments In our proposed al-gorithm the parameter a 25 and num are four times theinitial step size Figure 5 shows the reconstruction proba-bility of signals under different measurement values When

(ka)minus

k

15 2 25 3 35 4 45 5 55 61k

0020406081

121416182

Figure 1 e value of (ka)minus k increases with k where a 2

4 Journal of Electrical and Computer Engineering

the measurement value increases the reconstructionprobability becomes higher Our proposed method is sig-nificantly better than the other two algorithms When themeasurement value is 70 the algorithm we proposed canreconstruct the signal but the reconstruction probability ofthe other two algorithms is still 0 Moreover whatever themeasurement value is our proposed algorithm has higherreconstruction probability than the other two algorithmsis means that the proposed method has higher recon-struction probability under different measurement values

Next when comparing the reconstruction probability ofthe algorithm under different sparsity levels the fixedmeasurement value M 128 and the other experimentalconditions are the same as the previous experiment Figure 6shows the reconstruction probability with different sparsitylevels

It can be seen from Figure 6 that the reconstructionprobability decreases with the increasing of sparsity levelWhen 45leKle 50 the SAMP with s 5 begins to declinebut other algorithms still maintain almost 100 recon-struction probability When 50leKle 70 all algorithm re-construction probability values begin to decline Inparticular in K 65 the reconstruction probability ofSAMP and MAOMP in which the initial step size is 10dropped to 0 However the reconstruction probability of ourproposed method is higher than that of the other two al-gorithms And when the sparsity level is 70 the proposedmethod with s 10 still can successfully reconstruct thesignal with probability 3754 but the reconstructionprobability of the other two algorithms has dropped to 0is shows that the proposed method has higher recon-struction probability under different sparsity levels By

InputSensing matrix Amtimesn

Observation vector ymtimes1

Constant parameter a

Initial step size s0e number of atoms selected each time numTolerance used to exit loop ε

Initialize parameter1113954x 0 initialize signal approximationk 1 loop indexL0 1 initial sparsity estimatedone 0 while loop flagS0 empty empty preliminary index setC0 empty empty candidate index setF0 empty empty support index set

While (simdone)(1) Compute the projective set

S0 max(|ATrkminus1| num)

(2) Merge to update the candidate index setCk Fkminus1cup Sk

(3) Get the estimate signal value and residual error by least squares algorithmx

Ck A+

Cky (AT

CkACk

)minus 1ATCk

y

rk y minus ACkx

Ck

(4) prune to obtain current support index setF max(|1113954xCk

| Lk)

(5) update signal final estimate by least squares algorithm and compute residual errorx

k (ATFAF)minus 1AT

Fy

residual error rc y minus AFx

k

(6) Check the iteration conditionIf rc2 le ε

done 1 quit iterationelse if rc2le rkminus12k k + 1sk+1 1113964sk times (ka)minus k

1113965

Lk+1 Lk + sk+1elseFk F

rk rc

endEndOutput1113954x xk(s-sparse approximation of signal x)

ALGORITHM 2 Proposed algorithm

Journal of Electrical and Computer Engineering 5

comparing the one-dimensional signal with different mea-surement values and different sparsity levels it can be seenthat our proposed method has obvious advantage in one-dimensional signal reconstruction

53 Two-Dimensional Image Reconstruction In this sectionwe use a grayscale image of size 256 times 256 called Lena assource signal to test the reconstruction performance ofdifferent methods (SAMP algorithm MAOMP algo-rithm and our proposed method) e wavelet basis is

used as the sparse basis to sparse the image e initialstep sizes of the three algorithms are 1 5 and 10 re-spectively e stop iteration parameter ε 10minus 6 the β ofthe MAOMP algorithm is 06 and δs takes 02 In ourproposed method parameter a 25 and num are fourtimes the initial step size Figure 7 shows the two-di-mensional signal Lena that is reconstructed by ourproposed method SAMP method and MAOMP methodwith sampling rate 06 We use the peak signal-to-noiseratio (PSNR) to represent the quality of image recon-struction and it can be expressed as

20 25 30 35 40 45 50 55 60 65 700

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

a = 1a = 2a = 3

a = 32a = 4

(a)

Sparsity level K40 45 50 55 60 65

0102030405060708090

100

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

a = 28a = 29a = 3

a = 21a = 22a = 23a = 24

a = 25a = 26a = 27

(b)

Figure 2 e effect of different a on reconstruction probability under different sparsity levels

45 50 55 60 65 7020

30

40

50

60

70

80

90

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

num = 20num = 30num = 40

num = 50num = 60

Figure 4 e effect of num on reconstruction probability underdifferent sparsity levels with initial step size of 10

40 45 50 55 60 650

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

num = 10num = 15num = 20

num = 25num = 30

Figure 3 e effect of num on reconstruction probability underdifferent sparsity levels with initial step size of 5

6 Journal of Electrical and Computer Engineering

MSE 1

M times N1113944

Mminus1

i01113944

Nminus1

j0|1113954x(i j) minus x(i j)|

2

PSNR 10 times log102n minus 1( )2

MSE1113888 1113889

(14)

where M N 256 1113954x(i j) represents the reconstructedvalue of the corresponding position of the test imageand x(i j) represents the original value of the corre-sponding position of the test picture MSE representsmean squared error MAX1113954x represents the maximumvalue of the image e size of the image is 256 because28 256 so that the sampling point is 8 bits and the

value of n is 8 e larger the PSNR value the better theimage quality

Figures 7 to 9 show the original signal and the recon-structed signals by SAMP MAOMP and the proposedmethod with initial step sizes of 1 5 and 10 respectively Itcan be seen from Figures 7 and 8 that the PSNR of SAMPand MAOMP decreases when the initial step size becomeslarger e SAMP cannot reconstruct signal when the initialstep size is 5 and 10 e PSNR is also lower for MAOMPalgorithm when the initial step size is 10 is is because alarge initial step size causes the sparsity value to be over-estimated and affects accuracy However in Figure 9 ourproposedmethod uses (11) to adjust the step size to make thestep size gradually decreaseis can prevent overestimation

70 75 80 85 90 95 100 105 110 115 1200

102030405060708090

100

No of measurements

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the measurementsK(M = 40 N = 256)(Gaussian)

SAMP s = 5SAMP s = 10MAOMP s = 5

MAOMP s = 10Proposed s = 5Proposed s = 10

Figure 5 e reconstruction probability of signals under different measurement values

45 50 55 60 65 700

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

SAMP s = 5SAMP s = 10MAOMP s = 5

MAOMP s = 10Proposed s = 5Proposed s = 10

Figure 6 e reconstruction probability of the algorithm with different sparsity levels

Journal of Electrical and Computer Engineering 7

Original image

(a)

SAMP s = 1 PSNR = 282053

(b)

SAMP s = 5 PSNR = 72404

(c)

SAMP s = 10 PSNR = 72404

(d)

Figure 7 e PSNR value with different initial step sizes in SAMP (a) e original image (b) SAMP when initial step size is 1 (c) SAMPwhen initial step size is 5 (d) SAMP when initial step size is 10

Original image

(a)

MAOMP s = 1 PSNR = 288606

(b)

Figure 8 Continued

8 Journal of Electrical and Computer Engineering

MAOMP s = 5 PSNR = 163367

(c)

MAOMP s = 10 PSNR = 116794

(d)

Figure 8 e PSNR value with different initial step sizes in MAOMP (a) e original image (b) MAOMP when initial step size is 1 (c)MAOMP when initial step size is 5 (d) MAOMP when initial step size is 10

Original image

(a)

Proposed s = 1 PSNR = 291934

(b)

Proposed s = 5 PSNR = 292986

(c)

Proposed s = 10 PSNR = 298454

(d)

Figure 9 e PSNR value with different initial step sizes in the proposed method (a) e original image (b) the proposed method wheninitial step size is 1 (c) the proposed method when initial step size is 5 (d) the proposed method when initial step size is 10

Journal of Electrical and Computer Engineering 9

for larger initial step size and reduce error erefore theproposed method has better performance in terms of errorand stability than SAMP and MAOMP algorithm for thetwo-dimensional image based on analysis of Figures 7 to 9

And can be seen from Figures 7 to 9 when the initial stepsize is 1 SAMP and MAOMP have better reconstructedresult erefore we select s 1 in the following experi-ments Figure 10 shows the PSNR value with differentsampling rates In this experiment we choose sampling rateof 03 04 05 06 07 and 08 and select the initial step sizeas 1 e test image is Lena with 256 times 256 size e otherexperimental conditions are the same as the previous ex-periment As can be seen from Figure 8 as the sampling rateincreases the PSNR value becomes larger Compared withthe other two algorithms the PSNR value of our proposedmethod is the highest is shows that the proposed methodhas smaller error in reconstructing images

Figure 11 shows the recovery times of the three algo-rithms with different sampling rates When the samplingrate is 03 the three algorithms have almost the same re-covery time With the sampling rate increasing recoverytime becomes longer Our proposed method still consumes

less time than the other methods for different sampling ratesWhen the sampling rate is 08 the proposed method runs1028 seconds shorter than SAMP and 446 seconds shorterthan MAOMP is proves that the proposed method hasbetter performance than the other two algorithms in termsof convergence speed Based on the above analysis theproposed method has smaller error better stability andfaster convergence speed than the SAMP and MAOMPalgorithm

6 Conclusions

In this paper we proposed a generalized sparsity adaptivematching pursuit algorithm with variable step size isalgorithm uses the idea of generalized atom selection tochoose more atoms at the beginning of the iteration and thesignal estimation solution is more accurate by backtrackingRegarding variable step size an idea of variable step size of anonlinear function is proposed is can make the step sizelarge at the beginning which is used to speed up the con-vergence of the algorithmen the step length is reduced to1 so the sparsity estimation value is more accurate therebyimproving the reconstruction accuracy and reducing therunning time of the algorithm

Simulation results demonstrate that our proposedmethod has a better reconstruction performance comparedwith SAMP algorithm and MAOMP algorithm For one-dimensional Gaussian signal among the different mea-surement values and different sparsity levels the recon-struction probability of our proposedmethod is the best andthe signal can be reconstructed at low observation or highsparsity For two-dimensional image our proposed methodhas better reconstruction quality which is measured byPSNR Compared to MAOMP and SAMP our proposedmethod has faster convergence speed Moreover as theinitial step size increases our proposed method can stillreconstruct images with high quality In a word our pro-posed method is better than similar algorithms in terms ofconvergence speed and reconstruction accuracy

Data Availability

All the data used for the simulations can be obtained fromthe corresponding author upon request

Conflicts of Interest

e authors declare that they have no conflicts of interest

Acknowledgments

is work was supported by the National Natural ScienceFoundation of China (61271115) and the Science andTechnology Innovation and Entrepreneurship Talent Cul-tivation Program of Jilin (20190104124)

References

[1] D L Donoho ldquoCompressed sensingrdquo IEEE Transactions onInformation 6eory vol 52 no 4 pp 1289ndash1306 2006

03 04 05 06 07 0820

22

24

26

28

30

32

34

Sampling rate

PSN

R (d

B)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 10 e PSNR value with different sampling rates

03 04 05 06 07 085

10

15

20

25

30

35

40

Sampling rate

Reco

very

tim

e (s)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 11 e recovery time with different sampling rates

10 Journal of Electrical and Computer Engineering

[2] G Yang S Yu H Dong et al ldquoDAGAN deep de-aliasinggenerative adversarial networks for fast compressed sensingMRI reconstructionrdquo IEEE Transactions on Medical Imagingvol 37 no 6 pp 1310ndash1321 2018

[3] L Yuanqi S Renjie W Yongli and B Long ldquoDesign of Real-time communication system for portable smart glasses basedon Raspberry PIrdquo Journal of Northeast Electric Power Uni-versity vol 39 no 4 pp 81ndash85 2019

[4] B Li F Liu C Zhou and Y Lv ldquoPhase error correction forapproximated observation-based compressed sensing radarimagingrdquo Sensors vol 17 no 31 pp 1ndash21 2017

[5] L Changyin S Renjie H Yanquan and Z Yandong ldquoDesignand implementation of high voltage transmission equipmentauxiliary management systemrdquo Journal of Northeast ElectricPower University vol 39 no 4 pp 86ndash90 2019

[6] S Osher and Y Li ldquoldquoCoordinate descent optimization forl1minimization with application to compressed sensingrdquo agreedy algorithmrdquo Inverse Problems and Imaging (IPI) vol 3no 3 pp 487ndash503 2017

[7] H Rauhut R Schneiderv and Stojanac ldquoLow rank tensorrecovery via iterative hard thresholdingrdquo Linear Algebra andIts Applications vol 523 pp 220ndash262 2017

[8] Z Dong and W Zhu ldquoHomotopy methods based on $l_0$-norm for compressed sensingrdquo IEEE Transactions on NeuralNetworks and Learning Systems vol 29 no 4 pp 1132ndash11462018

[9] J Wen J Weng C Tong C Ren and Z Zhou ldquoSparse signalrecovery with minimization of 1-norm minus 2-normrdquo IEEETransactions on Vehicular Technology vol 68 no 7pp 6847ndash6854 2019

[10] J Wen Z Zhou J Wang X Tang and Q Mo ldquoA sharpcondition for exact support recovery with orthogonalmatching pursuitrdquo IEEE Transactions on Signal Processingvol 65 no 6 pp 1370ndash1382 2017

[11] Y Liao X Zhou X Shen and G Hong ldquoA channel esti-mation method based on improved regularized orthogonalmatching pursuit for MIMO-OFDM systemsrdquo Acta Elec-tronica Sinica vol 45 no 12 pp 2848ndash2854 2017

[12] D Park ldquoImproved sufficient condition for performanceguarantee in generalized orthogonal matching pursuitrdquo IEEESignal Processing Letters vol 24 no 9 pp 1308ndash1312 2017

[13] F Huang J Tao Y Xiang P Liu L Dong and L WangldquoParallel compressive sampling matching pursuit algorithmfor compressed sensing signal reconstruction with OpenCLrdquoJournal of Systems Architecture vol 72 no 1 pp 51ndash60 2017

[14] P Goyal and B Singh ldquoSubspace pursuit for sparse signalreconstruction in wireless sensor networksrdquo Procedia Com-puter Science vol 125 pp 228ndash233 2018

[15] A M Huang G Gui Q Wan and A Mehbodniya ldquoA blockorthogonal matching pursuit algorithm based on sensingdictionaryrdquordquo International Journal of the Physical Sciencesvol 6 no 5 pp 992ndash999 2011

[16] H Li and J Wen ldquoA new analysis for support recovery withblock orthogonal matching pursuitrdquo IEEE Signal ProcessingLetters vol 26 no 2 pp 247ndash251 2018

[17] J Wen Z Zhou Z Liu M-J Lai and X Tang ldquoSharpsufficient conditions for stable recovery of block sparse signalsby block orthogonal matching pursuitrdquo Applied and Com-putational Harmonic Analysis vol 47 no 3 pp 948ndash9742019

[18] R Shoitan Z Nossair I I Ibrahim and A Tobal ldquoImprovingthe reconstruction efficiency of sparsity adaptive matchingpursuit based on the Wilkinson matrixrdquo Frontiers of

Information Technology amp Electronic Engineering vol 19no 4 pp 503ndash512 2018

[19] B Li Y Sun G Li et al ldquoGesture recognition based onmodified adaptive orthogonal matching pursuit algorithmrdquoCluster Computing vol 3 pp 1ndash10 2017

Journal of Electrical and Computer Engineering 11

Page 4: ImprovedGeneralizedSparsityAdaptiveMatchingPursuit ...downloads.hindawi.com/journals/jece/2020/2782149.pdfMinistry of Education (Northeast Electric Power University), Jilin, China

Sk max ATrkminus1

11138681113868111386811138681113868

11138681113868111386811138681113868 num1113874 1113875 (13)

where Sk is projective set num is the fixed number to selectatomics A is sensing matrix and rk is residual errorCompared with MAOMP our proposed method has thenumber of selected atomics num fixed at each iteration isreduces the algorithm complexity e more the correlationatomics are selected to extend the candidate atomic set thebetter the accuracy is However if the number of selectedatomics is larger it will contain some atomics with lowercorrelation and reduce algorithm accuracy How to choose asuitable num is discussed in the simulation

e proposed algorithm begins to select atoms using thegeneralized orthogonal matching pursuit method thenupdates the step size and estimates sparsity using the newvariable step method e detailed steps of the proposedmethod are shown in Algorithm 2

5 Results and Discussion

51 Parameters Selection e selection of parameter a andthe number of selected atomics num directly affect the al-gorithm performance In this section we use the sourcesignal as a Gaussian signal with length N 256 and mea-surement value M 128 And the stop iteration parameterε 10minus 6 erefore we firstly set the num as 25 and the a asvariable to search the optimal a e relationship betweenthe a and reconstruction probability is shown in Figure 2From Figure 2(a) we can see that the reconstruction prob-ability decreases with the increasing of sparsity level Whenthe sparsity level is between 40 and 45 the reconstructionprobability for the proposed method with a 2 is the highestnext is a 3 When the sparsity level is greater than 50 thereconstruction probability for the proposed method with a

3 is the highest next is a 2 Because the large step size leadsto overestimation the reconstruction probability drops rap-idly at K 30 when a 4 And when a 32 the recon-struction probability is not as good as a 3 Based on theabove analysis the reconstruction probability for a 3 ishigher than the other values for the larger sparsity levelHowever the reconstruction probability becomes poor when

agt 3us we select the value of a between 2 and 3 as shownin Figure 2(b) From Figure 2(b) the reconstruction proba-bility for the proposed method with a 25 is the highestvalue us we select a 25 for the experiments

At the next experiment we set the a as constant and thenumber of selected atomics num as variable e result can beseen in Figures 3 and 4 From Figure 3 the initial step size is 5and we can see that as the sparsity level increases the recon-struction probability of proposed method for num 10 is thelowest compared with the other valuesWhen the sparsity level isbetween 45 and 50 the reconstruction probability for the pro-posed method with num 30 is the highest next is num 20When the sparsity level is greater than 55 the reconstructionprobability of proposed method for num 20 is the highestMoreover from Figure 4 when initial step size is 10 we can seethat the reconstruction probability of proposed method fornum 20 is the lowest compared with the other values Fur-thermore the reconstruction probability of proposedmethod fornum 40 is the highest for all the sparsity levels ComparingFigures 3 and 4 we can conclude that when num is four times theinitial step size the reconstruction quality is the best

Based on the above analysis we set the number of se-lected atomics as four times the initial step size and theparameter a as 25 in the flowing experiments e exper-iment conditions are as follows the CPU is Intelreg Coretrade i5-8300H at 230GHz and the size of RAM is 8GB

52 One-Dimensional Signal Reconstruction In this sectionwe use one-dimensional signal as source signal to test thereconstruction performance of different methods (SAMPalgorithm MAOMP algorithm and our proposed method)e source signal is a Gaussian signal with length N 256and sparsity level K 40 And the stop iteration parameterε 10minus 6e initial step size of all methods is set as 5 and 10As known in [19] in MAOMP algorithm when the pa-rameter δs is 02 and β is 06 the result of signal recon-struction is the best erefore we select the parameter δs as02 and β as 06 in these experiments In our proposed al-gorithm the parameter a 25 and num are four times theinitial step size Figure 5 shows the reconstruction proba-bility of signals under different measurement values When

(ka)minus

k

15 2 25 3 35 4 45 5 55 61k

0020406081

121416182

Figure 1 e value of (ka)minus k increases with k where a 2

4 Journal of Electrical and Computer Engineering

the measurement value increases the reconstructionprobability becomes higher Our proposed method is sig-nificantly better than the other two algorithms When themeasurement value is 70 the algorithm we proposed canreconstruct the signal but the reconstruction probability ofthe other two algorithms is still 0 Moreover whatever themeasurement value is our proposed algorithm has higherreconstruction probability than the other two algorithmsis means that the proposed method has higher recon-struction probability under different measurement values

Next when comparing the reconstruction probability ofthe algorithm under different sparsity levels the fixedmeasurement value M 128 and the other experimentalconditions are the same as the previous experiment Figure 6shows the reconstruction probability with different sparsitylevels

It can be seen from Figure 6 that the reconstructionprobability decreases with the increasing of sparsity levelWhen 45leKle 50 the SAMP with s 5 begins to declinebut other algorithms still maintain almost 100 recon-struction probability When 50leKle 70 all algorithm re-construction probability values begin to decline Inparticular in K 65 the reconstruction probability ofSAMP and MAOMP in which the initial step size is 10dropped to 0 However the reconstruction probability of ourproposed method is higher than that of the other two al-gorithms And when the sparsity level is 70 the proposedmethod with s 10 still can successfully reconstruct thesignal with probability 3754 but the reconstructionprobability of the other two algorithms has dropped to 0is shows that the proposed method has higher recon-struction probability under different sparsity levels By

InputSensing matrix Amtimesn

Observation vector ymtimes1

Constant parameter a

Initial step size s0e number of atoms selected each time numTolerance used to exit loop ε

Initialize parameter1113954x 0 initialize signal approximationk 1 loop indexL0 1 initial sparsity estimatedone 0 while loop flagS0 empty empty preliminary index setC0 empty empty candidate index setF0 empty empty support index set

While (simdone)(1) Compute the projective set

S0 max(|ATrkminus1| num)

(2) Merge to update the candidate index setCk Fkminus1cup Sk

(3) Get the estimate signal value and residual error by least squares algorithmx

Ck A+

Cky (AT

CkACk

)minus 1ATCk

y

rk y minus ACkx

Ck

(4) prune to obtain current support index setF max(|1113954xCk

| Lk)

(5) update signal final estimate by least squares algorithm and compute residual errorx

k (ATFAF)minus 1AT

Fy

residual error rc y minus AFx

k

(6) Check the iteration conditionIf rc2 le ε

done 1 quit iterationelse if rc2le rkminus12k k + 1sk+1 1113964sk times (ka)minus k

1113965

Lk+1 Lk + sk+1elseFk F

rk rc

endEndOutput1113954x xk(s-sparse approximation of signal x)

ALGORITHM 2 Proposed algorithm

Journal of Electrical and Computer Engineering 5

comparing the one-dimensional signal with different mea-surement values and different sparsity levels it can be seenthat our proposed method has obvious advantage in one-dimensional signal reconstruction

53 Two-Dimensional Image Reconstruction In this sectionwe use a grayscale image of size 256 times 256 called Lena assource signal to test the reconstruction performance ofdifferent methods (SAMP algorithm MAOMP algo-rithm and our proposed method) e wavelet basis is

used as the sparse basis to sparse the image e initialstep sizes of the three algorithms are 1 5 and 10 re-spectively e stop iteration parameter ε 10minus 6 the β ofthe MAOMP algorithm is 06 and δs takes 02 In ourproposed method parameter a 25 and num are fourtimes the initial step size Figure 7 shows the two-di-mensional signal Lena that is reconstructed by ourproposed method SAMP method and MAOMP methodwith sampling rate 06 We use the peak signal-to-noiseratio (PSNR) to represent the quality of image recon-struction and it can be expressed as

20 25 30 35 40 45 50 55 60 65 700

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

a = 1a = 2a = 3

a = 32a = 4

(a)

Sparsity level K40 45 50 55 60 65

0102030405060708090

100

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

a = 28a = 29a = 3

a = 21a = 22a = 23a = 24

a = 25a = 26a = 27

(b)

Figure 2 e effect of different a on reconstruction probability under different sparsity levels

45 50 55 60 65 7020

30

40

50

60

70

80

90

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

num = 20num = 30num = 40

num = 50num = 60

Figure 4 e effect of num on reconstruction probability underdifferent sparsity levels with initial step size of 10

40 45 50 55 60 650

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

num = 10num = 15num = 20

num = 25num = 30

Figure 3 e effect of num on reconstruction probability underdifferent sparsity levels with initial step size of 5

6 Journal of Electrical and Computer Engineering

MSE 1

M times N1113944

Mminus1

i01113944

Nminus1

j0|1113954x(i j) minus x(i j)|

2

PSNR 10 times log102n minus 1( )2

MSE1113888 1113889

(14)

where M N 256 1113954x(i j) represents the reconstructedvalue of the corresponding position of the test imageand x(i j) represents the original value of the corre-sponding position of the test picture MSE representsmean squared error MAX1113954x represents the maximumvalue of the image e size of the image is 256 because28 256 so that the sampling point is 8 bits and the

value of n is 8 e larger the PSNR value the better theimage quality

Figures 7 to 9 show the original signal and the recon-structed signals by SAMP MAOMP and the proposedmethod with initial step sizes of 1 5 and 10 respectively Itcan be seen from Figures 7 and 8 that the PSNR of SAMPand MAOMP decreases when the initial step size becomeslarger e SAMP cannot reconstruct signal when the initialstep size is 5 and 10 e PSNR is also lower for MAOMPalgorithm when the initial step size is 10 is is because alarge initial step size causes the sparsity value to be over-estimated and affects accuracy However in Figure 9 ourproposedmethod uses (11) to adjust the step size to make thestep size gradually decreaseis can prevent overestimation

70 75 80 85 90 95 100 105 110 115 1200

102030405060708090

100

No of measurements

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the measurementsK(M = 40 N = 256)(Gaussian)

SAMP s = 5SAMP s = 10MAOMP s = 5

MAOMP s = 10Proposed s = 5Proposed s = 10

Figure 5 e reconstruction probability of signals under different measurement values

45 50 55 60 65 700

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

SAMP s = 5SAMP s = 10MAOMP s = 5

MAOMP s = 10Proposed s = 5Proposed s = 10

Figure 6 e reconstruction probability of the algorithm with different sparsity levels

Journal of Electrical and Computer Engineering 7

Original image

(a)

SAMP s = 1 PSNR = 282053

(b)

SAMP s = 5 PSNR = 72404

(c)

SAMP s = 10 PSNR = 72404

(d)

Figure 7 e PSNR value with different initial step sizes in SAMP (a) e original image (b) SAMP when initial step size is 1 (c) SAMPwhen initial step size is 5 (d) SAMP when initial step size is 10

Original image

(a)

MAOMP s = 1 PSNR = 288606

(b)

Figure 8 Continued

8 Journal of Electrical and Computer Engineering

MAOMP s = 5 PSNR = 163367

(c)

MAOMP s = 10 PSNR = 116794

(d)

Figure 8 e PSNR value with different initial step sizes in MAOMP (a) e original image (b) MAOMP when initial step size is 1 (c)MAOMP when initial step size is 5 (d) MAOMP when initial step size is 10

Original image

(a)

Proposed s = 1 PSNR = 291934

(b)

Proposed s = 5 PSNR = 292986

(c)

Proposed s = 10 PSNR = 298454

(d)

Figure 9 e PSNR value with different initial step sizes in the proposed method (a) e original image (b) the proposed method wheninitial step size is 1 (c) the proposed method when initial step size is 5 (d) the proposed method when initial step size is 10

Journal of Electrical and Computer Engineering 9

for larger initial step size and reduce error erefore theproposed method has better performance in terms of errorand stability than SAMP and MAOMP algorithm for thetwo-dimensional image based on analysis of Figures 7 to 9

And can be seen from Figures 7 to 9 when the initial stepsize is 1 SAMP and MAOMP have better reconstructedresult erefore we select s 1 in the following experi-ments Figure 10 shows the PSNR value with differentsampling rates In this experiment we choose sampling rateof 03 04 05 06 07 and 08 and select the initial step sizeas 1 e test image is Lena with 256 times 256 size e otherexperimental conditions are the same as the previous ex-periment As can be seen from Figure 8 as the sampling rateincreases the PSNR value becomes larger Compared withthe other two algorithms the PSNR value of our proposedmethod is the highest is shows that the proposed methodhas smaller error in reconstructing images

Figure 11 shows the recovery times of the three algo-rithms with different sampling rates When the samplingrate is 03 the three algorithms have almost the same re-covery time With the sampling rate increasing recoverytime becomes longer Our proposed method still consumes

less time than the other methods for different sampling ratesWhen the sampling rate is 08 the proposed method runs1028 seconds shorter than SAMP and 446 seconds shorterthan MAOMP is proves that the proposed method hasbetter performance than the other two algorithms in termsof convergence speed Based on the above analysis theproposed method has smaller error better stability andfaster convergence speed than the SAMP and MAOMPalgorithm

6 Conclusions

In this paper we proposed a generalized sparsity adaptivematching pursuit algorithm with variable step size isalgorithm uses the idea of generalized atom selection tochoose more atoms at the beginning of the iteration and thesignal estimation solution is more accurate by backtrackingRegarding variable step size an idea of variable step size of anonlinear function is proposed is can make the step sizelarge at the beginning which is used to speed up the con-vergence of the algorithmen the step length is reduced to1 so the sparsity estimation value is more accurate therebyimproving the reconstruction accuracy and reducing therunning time of the algorithm

Simulation results demonstrate that our proposedmethod has a better reconstruction performance comparedwith SAMP algorithm and MAOMP algorithm For one-dimensional Gaussian signal among the different mea-surement values and different sparsity levels the recon-struction probability of our proposedmethod is the best andthe signal can be reconstructed at low observation or highsparsity For two-dimensional image our proposed methodhas better reconstruction quality which is measured byPSNR Compared to MAOMP and SAMP our proposedmethod has faster convergence speed Moreover as theinitial step size increases our proposed method can stillreconstruct images with high quality In a word our pro-posed method is better than similar algorithms in terms ofconvergence speed and reconstruction accuracy

Data Availability

All the data used for the simulations can be obtained fromthe corresponding author upon request

Conflicts of Interest

e authors declare that they have no conflicts of interest

Acknowledgments

is work was supported by the National Natural ScienceFoundation of China (61271115) and the Science andTechnology Innovation and Entrepreneurship Talent Cul-tivation Program of Jilin (20190104124)

References

[1] D L Donoho ldquoCompressed sensingrdquo IEEE Transactions onInformation 6eory vol 52 no 4 pp 1289ndash1306 2006

03 04 05 06 07 0820

22

24

26

28

30

32

34

Sampling rate

PSN

R (d

B)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 10 e PSNR value with different sampling rates

03 04 05 06 07 085

10

15

20

25

30

35

40

Sampling rate

Reco

very

tim

e (s)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 11 e recovery time with different sampling rates

10 Journal of Electrical and Computer Engineering

[2] G Yang S Yu H Dong et al ldquoDAGAN deep de-aliasinggenerative adversarial networks for fast compressed sensingMRI reconstructionrdquo IEEE Transactions on Medical Imagingvol 37 no 6 pp 1310ndash1321 2018

[3] L Yuanqi S Renjie W Yongli and B Long ldquoDesign of Real-time communication system for portable smart glasses basedon Raspberry PIrdquo Journal of Northeast Electric Power Uni-versity vol 39 no 4 pp 81ndash85 2019

[4] B Li F Liu C Zhou and Y Lv ldquoPhase error correction forapproximated observation-based compressed sensing radarimagingrdquo Sensors vol 17 no 31 pp 1ndash21 2017

[5] L Changyin S Renjie H Yanquan and Z Yandong ldquoDesignand implementation of high voltage transmission equipmentauxiliary management systemrdquo Journal of Northeast ElectricPower University vol 39 no 4 pp 86ndash90 2019

[6] S Osher and Y Li ldquoldquoCoordinate descent optimization forl1minimization with application to compressed sensingrdquo agreedy algorithmrdquo Inverse Problems and Imaging (IPI) vol 3no 3 pp 487ndash503 2017

[7] H Rauhut R Schneiderv and Stojanac ldquoLow rank tensorrecovery via iterative hard thresholdingrdquo Linear Algebra andIts Applications vol 523 pp 220ndash262 2017

[8] Z Dong and W Zhu ldquoHomotopy methods based on $l_0$-norm for compressed sensingrdquo IEEE Transactions on NeuralNetworks and Learning Systems vol 29 no 4 pp 1132ndash11462018

[9] J Wen J Weng C Tong C Ren and Z Zhou ldquoSparse signalrecovery with minimization of 1-norm minus 2-normrdquo IEEETransactions on Vehicular Technology vol 68 no 7pp 6847ndash6854 2019

[10] J Wen Z Zhou J Wang X Tang and Q Mo ldquoA sharpcondition for exact support recovery with orthogonalmatching pursuitrdquo IEEE Transactions on Signal Processingvol 65 no 6 pp 1370ndash1382 2017

[11] Y Liao X Zhou X Shen and G Hong ldquoA channel esti-mation method based on improved regularized orthogonalmatching pursuit for MIMO-OFDM systemsrdquo Acta Elec-tronica Sinica vol 45 no 12 pp 2848ndash2854 2017

[12] D Park ldquoImproved sufficient condition for performanceguarantee in generalized orthogonal matching pursuitrdquo IEEESignal Processing Letters vol 24 no 9 pp 1308ndash1312 2017

[13] F Huang J Tao Y Xiang P Liu L Dong and L WangldquoParallel compressive sampling matching pursuit algorithmfor compressed sensing signal reconstruction with OpenCLrdquoJournal of Systems Architecture vol 72 no 1 pp 51ndash60 2017

[14] P Goyal and B Singh ldquoSubspace pursuit for sparse signalreconstruction in wireless sensor networksrdquo Procedia Com-puter Science vol 125 pp 228ndash233 2018

[15] A M Huang G Gui Q Wan and A Mehbodniya ldquoA blockorthogonal matching pursuit algorithm based on sensingdictionaryrdquordquo International Journal of the Physical Sciencesvol 6 no 5 pp 992ndash999 2011

[16] H Li and J Wen ldquoA new analysis for support recovery withblock orthogonal matching pursuitrdquo IEEE Signal ProcessingLetters vol 26 no 2 pp 247ndash251 2018

[17] J Wen Z Zhou Z Liu M-J Lai and X Tang ldquoSharpsufficient conditions for stable recovery of block sparse signalsby block orthogonal matching pursuitrdquo Applied and Com-putational Harmonic Analysis vol 47 no 3 pp 948ndash9742019

[18] R Shoitan Z Nossair I I Ibrahim and A Tobal ldquoImprovingthe reconstruction efficiency of sparsity adaptive matchingpursuit based on the Wilkinson matrixrdquo Frontiers of

Information Technology amp Electronic Engineering vol 19no 4 pp 503ndash512 2018

[19] B Li Y Sun G Li et al ldquoGesture recognition based onmodified adaptive orthogonal matching pursuit algorithmrdquoCluster Computing vol 3 pp 1ndash10 2017

Journal of Electrical and Computer Engineering 11

Page 5: ImprovedGeneralizedSparsityAdaptiveMatchingPursuit ...downloads.hindawi.com/journals/jece/2020/2782149.pdfMinistry of Education (Northeast Electric Power University), Jilin, China

the measurement value increases the reconstructionprobability becomes higher Our proposed method is sig-nificantly better than the other two algorithms When themeasurement value is 70 the algorithm we proposed canreconstruct the signal but the reconstruction probability ofthe other two algorithms is still 0 Moreover whatever themeasurement value is our proposed algorithm has higherreconstruction probability than the other two algorithmsis means that the proposed method has higher recon-struction probability under different measurement values

Next when comparing the reconstruction probability ofthe algorithm under different sparsity levels the fixedmeasurement value M 128 and the other experimentalconditions are the same as the previous experiment Figure 6shows the reconstruction probability with different sparsitylevels

It can be seen from Figure 6 that the reconstructionprobability decreases with the increasing of sparsity levelWhen 45leKle 50 the SAMP with s 5 begins to declinebut other algorithms still maintain almost 100 recon-struction probability When 50leKle 70 all algorithm re-construction probability values begin to decline Inparticular in K 65 the reconstruction probability ofSAMP and MAOMP in which the initial step size is 10dropped to 0 However the reconstruction probability of ourproposed method is higher than that of the other two al-gorithms And when the sparsity level is 70 the proposedmethod with s 10 still can successfully reconstruct thesignal with probability 3754 but the reconstructionprobability of the other two algorithms has dropped to 0is shows that the proposed method has higher recon-struction probability under different sparsity levels By

InputSensing matrix Amtimesn

Observation vector ymtimes1

Constant parameter a

Initial step size s0e number of atoms selected each time numTolerance used to exit loop ε

Initialize parameter1113954x 0 initialize signal approximationk 1 loop indexL0 1 initial sparsity estimatedone 0 while loop flagS0 empty empty preliminary index setC0 empty empty candidate index setF0 empty empty support index set

While (simdone)(1) Compute the projective set

S0 max(|ATrkminus1| num)

(2) Merge to update the candidate index setCk Fkminus1cup Sk

(3) Get the estimate signal value and residual error by least squares algorithmx

Ck A+

Cky (AT

CkACk

)minus 1ATCk

y

rk y minus ACkx

Ck

(4) prune to obtain current support index setF max(|1113954xCk

| Lk)

(5) update signal final estimate by least squares algorithm and compute residual errorx

k (ATFAF)minus 1AT

Fy

residual error rc y minus AFx

k

(6) Check the iteration conditionIf rc2 le ε

done 1 quit iterationelse if rc2le rkminus12k k + 1sk+1 1113964sk times (ka)minus k

1113965

Lk+1 Lk + sk+1elseFk F

rk rc

endEndOutput1113954x xk(s-sparse approximation of signal x)

ALGORITHM 2 Proposed algorithm

Journal of Electrical and Computer Engineering 5

comparing the one-dimensional signal with different mea-surement values and different sparsity levels it can be seenthat our proposed method has obvious advantage in one-dimensional signal reconstruction

53 Two-Dimensional Image Reconstruction In this sectionwe use a grayscale image of size 256 times 256 called Lena assource signal to test the reconstruction performance ofdifferent methods (SAMP algorithm MAOMP algo-rithm and our proposed method) e wavelet basis is

used as the sparse basis to sparse the image e initialstep sizes of the three algorithms are 1 5 and 10 re-spectively e stop iteration parameter ε 10minus 6 the β ofthe MAOMP algorithm is 06 and δs takes 02 In ourproposed method parameter a 25 and num are fourtimes the initial step size Figure 7 shows the two-di-mensional signal Lena that is reconstructed by ourproposed method SAMP method and MAOMP methodwith sampling rate 06 We use the peak signal-to-noiseratio (PSNR) to represent the quality of image recon-struction and it can be expressed as

20 25 30 35 40 45 50 55 60 65 700

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

a = 1a = 2a = 3

a = 32a = 4

(a)

Sparsity level K40 45 50 55 60 65

0102030405060708090

100

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

a = 28a = 29a = 3

a = 21a = 22a = 23a = 24

a = 25a = 26a = 27

(b)

Figure 2 e effect of different a on reconstruction probability under different sparsity levels

45 50 55 60 65 7020

30

40

50

60

70

80

90

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

num = 20num = 30num = 40

num = 50num = 60

Figure 4 e effect of num on reconstruction probability underdifferent sparsity levels with initial step size of 10

40 45 50 55 60 650

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

num = 10num = 15num = 20

num = 25num = 30

Figure 3 e effect of num on reconstruction probability underdifferent sparsity levels with initial step size of 5

6 Journal of Electrical and Computer Engineering

MSE 1

M times N1113944

Mminus1

i01113944

Nminus1

j0|1113954x(i j) minus x(i j)|

2

PSNR 10 times log102n minus 1( )2

MSE1113888 1113889

(14)

where M N 256 1113954x(i j) represents the reconstructedvalue of the corresponding position of the test imageand x(i j) represents the original value of the corre-sponding position of the test picture MSE representsmean squared error MAX1113954x represents the maximumvalue of the image e size of the image is 256 because28 256 so that the sampling point is 8 bits and the

value of n is 8 e larger the PSNR value the better theimage quality

Figures 7 to 9 show the original signal and the recon-structed signals by SAMP MAOMP and the proposedmethod with initial step sizes of 1 5 and 10 respectively Itcan be seen from Figures 7 and 8 that the PSNR of SAMPand MAOMP decreases when the initial step size becomeslarger e SAMP cannot reconstruct signal when the initialstep size is 5 and 10 e PSNR is also lower for MAOMPalgorithm when the initial step size is 10 is is because alarge initial step size causes the sparsity value to be over-estimated and affects accuracy However in Figure 9 ourproposedmethod uses (11) to adjust the step size to make thestep size gradually decreaseis can prevent overestimation

70 75 80 85 90 95 100 105 110 115 1200

102030405060708090

100

No of measurements

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the measurementsK(M = 40 N = 256)(Gaussian)

SAMP s = 5SAMP s = 10MAOMP s = 5

MAOMP s = 10Proposed s = 5Proposed s = 10

Figure 5 e reconstruction probability of signals under different measurement values

45 50 55 60 65 700

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

SAMP s = 5SAMP s = 10MAOMP s = 5

MAOMP s = 10Proposed s = 5Proposed s = 10

Figure 6 e reconstruction probability of the algorithm with different sparsity levels

Journal of Electrical and Computer Engineering 7

Original image

(a)

SAMP s = 1 PSNR = 282053

(b)

SAMP s = 5 PSNR = 72404

(c)

SAMP s = 10 PSNR = 72404

(d)

Figure 7 e PSNR value with different initial step sizes in SAMP (a) e original image (b) SAMP when initial step size is 1 (c) SAMPwhen initial step size is 5 (d) SAMP when initial step size is 10

Original image

(a)

MAOMP s = 1 PSNR = 288606

(b)

Figure 8 Continued

8 Journal of Electrical and Computer Engineering

MAOMP s = 5 PSNR = 163367

(c)

MAOMP s = 10 PSNR = 116794

(d)

Figure 8 e PSNR value with different initial step sizes in MAOMP (a) e original image (b) MAOMP when initial step size is 1 (c)MAOMP when initial step size is 5 (d) MAOMP when initial step size is 10

Original image

(a)

Proposed s = 1 PSNR = 291934

(b)

Proposed s = 5 PSNR = 292986

(c)

Proposed s = 10 PSNR = 298454

(d)

Figure 9 e PSNR value with different initial step sizes in the proposed method (a) e original image (b) the proposed method wheninitial step size is 1 (c) the proposed method when initial step size is 5 (d) the proposed method when initial step size is 10

Journal of Electrical and Computer Engineering 9

for larger initial step size and reduce error erefore theproposed method has better performance in terms of errorand stability than SAMP and MAOMP algorithm for thetwo-dimensional image based on analysis of Figures 7 to 9

And can be seen from Figures 7 to 9 when the initial stepsize is 1 SAMP and MAOMP have better reconstructedresult erefore we select s 1 in the following experi-ments Figure 10 shows the PSNR value with differentsampling rates In this experiment we choose sampling rateof 03 04 05 06 07 and 08 and select the initial step sizeas 1 e test image is Lena with 256 times 256 size e otherexperimental conditions are the same as the previous ex-periment As can be seen from Figure 8 as the sampling rateincreases the PSNR value becomes larger Compared withthe other two algorithms the PSNR value of our proposedmethod is the highest is shows that the proposed methodhas smaller error in reconstructing images

Figure 11 shows the recovery times of the three algo-rithms with different sampling rates When the samplingrate is 03 the three algorithms have almost the same re-covery time With the sampling rate increasing recoverytime becomes longer Our proposed method still consumes

less time than the other methods for different sampling ratesWhen the sampling rate is 08 the proposed method runs1028 seconds shorter than SAMP and 446 seconds shorterthan MAOMP is proves that the proposed method hasbetter performance than the other two algorithms in termsof convergence speed Based on the above analysis theproposed method has smaller error better stability andfaster convergence speed than the SAMP and MAOMPalgorithm

6 Conclusions

In this paper we proposed a generalized sparsity adaptivematching pursuit algorithm with variable step size isalgorithm uses the idea of generalized atom selection tochoose more atoms at the beginning of the iteration and thesignal estimation solution is more accurate by backtrackingRegarding variable step size an idea of variable step size of anonlinear function is proposed is can make the step sizelarge at the beginning which is used to speed up the con-vergence of the algorithmen the step length is reduced to1 so the sparsity estimation value is more accurate therebyimproving the reconstruction accuracy and reducing therunning time of the algorithm

Simulation results demonstrate that our proposedmethod has a better reconstruction performance comparedwith SAMP algorithm and MAOMP algorithm For one-dimensional Gaussian signal among the different mea-surement values and different sparsity levels the recon-struction probability of our proposedmethod is the best andthe signal can be reconstructed at low observation or highsparsity For two-dimensional image our proposed methodhas better reconstruction quality which is measured byPSNR Compared to MAOMP and SAMP our proposedmethod has faster convergence speed Moreover as theinitial step size increases our proposed method can stillreconstruct images with high quality In a word our pro-posed method is better than similar algorithms in terms ofconvergence speed and reconstruction accuracy

Data Availability

All the data used for the simulations can be obtained fromthe corresponding author upon request

Conflicts of Interest

e authors declare that they have no conflicts of interest

Acknowledgments

is work was supported by the National Natural ScienceFoundation of China (61271115) and the Science andTechnology Innovation and Entrepreneurship Talent Cul-tivation Program of Jilin (20190104124)

References

[1] D L Donoho ldquoCompressed sensingrdquo IEEE Transactions onInformation 6eory vol 52 no 4 pp 1289ndash1306 2006

03 04 05 06 07 0820

22

24

26

28

30

32

34

Sampling rate

PSN

R (d

B)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 10 e PSNR value with different sampling rates

03 04 05 06 07 085

10

15

20

25

30

35

40

Sampling rate

Reco

very

tim

e (s)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 11 e recovery time with different sampling rates

10 Journal of Electrical and Computer Engineering

[2] G Yang S Yu H Dong et al ldquoDAGAN deep de-aliasinggenerative adversarial networks for fast compressed sensingMRI reconstructionrdquo IEEE Transactions on Medical Imagingvol 37 no 6 pp 1310ndash1321 2018

[3] L Yuanqi S Renjie W Yongli and B Long ldquoDesign of Real-time communication system for portable smart glasses basedon Raspberry PIrdquo Journal of Northeast Electric Power Uni-versity vol 39 no 4 pp 81ndash85 2019

[4] B Li F Liu C Zhou and Y Lv ldquoPhase error correction forapproximated observation-based compressed sensing radarimagingrdquo Sensors vol 17 no 31 pp 1ndash21 2017

[5] L Changyin S Renjie H Yanquan and Z Yandong ldquoDesignand implementation of high voltage transmission equipmentauxiliary management systemrdquo Journal of Northeast ElectricPower University vol 39 no 4 pp 86ndash90 2019

[6] S Osher and Y Li ldquoldquoCoordinate descent optimization forl1minimization with application to compressed sensingrdquo agreedy algorithmrdquo Inverse Problems and Imaging (IPI) vol 3no 3 pp 487ndash503 2017

[7] H Rauhut R Schneiderv and Stojanac ldquoLow rank tensorrecovery via iterative hard thresholdingrdquo Linear Algebra andIts Applications vol 523 pp 220ndash262 2017

[8] Z Dong and W Zhu ldquoHomotopy methods based on $l_0$-norm for compressed sensingrdquo IEEE Transactions on NeuralNetworks and Learning Systems vol 29 no 4 pp 1132ndash11462018

[9] J Wen J Weng C Tong C Ren and Z Zhou ldquoSparse signalrecovery with minimization of 1-norm minus 2-normrdquo IEEETransactions on Vehicular Technology vol 68 no 7pp 6847ndash6854 2019

[10] J Wen Z Zhou J Wang X Tang and Q Mo ldquoA sharpcondition for exact support recovery with orthogonalmatching pursuitrdquo IEEE Transactions on Signal Processingvol 65 no 6 pp 1370ndash1382 2017

[11] Y Liao X Zhou X Shen and G Hong ldquoA channel esti-mation method based on improved regularized orthogonalmatching pursuit for MIMO-OFDM systemsrdquo Acta Elec-tronica Sinica vol 45 no 12 pp 2848ndash2854 2017

[12] D Park ldquoImproved sufficient condition for performanceguarantee in generalized orthogonal matching pursuitrdquo IEEESignal Processing Letters vol 24 no 9 pp 1308ndash1312 2017

[13] F Huang J Tao Y Xiang P Liu L Dong and L WangldquoParallel compressive sampling matching pursuit algorithmfor compressed sensing signal reconstruction with OpenCLrdquoJournal of Systems Architecture vol 72 no 1 pp 51ndash60 2017

[14] P Goyal and B Singh ldquoSubspace pursuit for sparse signalreconstruction in wireless sensor networksrdquo Procedia Com-puter Science vol 125 pp 228ndash233 2018

[15] A M Huang G Gui Q Wan and A Mehbodniya ldquoA blockorthogonal matching pursuit algorithm based on sensingdictionaryrdquordquo International Journal of the Physical Sciencesvol 6 no 5 pp 992ndash999 2011

[16] H Li and J Wen ldquoA new analysis for support recovery withblock orthogonal matching pursuitrdquo IEEE Signal ProcessingLetters vol 26 no 2 pp 247ndash251 2018

[17] J Wen Z Zhou Z Liu M-J Lai and X Tang ldquoSharpsufficient conditions for stable recovery of block sparse signalsby block orthogonal matching pursuitrdquo Applied and Com-putational Harmonic Analysis vol 47 no 3 pp 948ndash9742019

[18] R Shoitan Z Nossair I I Ibrahim and A Tobal ldquoImprovingthe reconstruction efficiency of sparsity adaptive matchingpursuit based on the Wilkinson matrixrdquo Frontiers of

Information Technology amp Electronic Engineering vol 19no 4 pp 503ndash512 2018

[19] B Li Y Sun G Li et al ldquoGesture recognition based onmodified adaptive orthogonal matching pursuit algorithmrdquoCluster Computing vol 3 pp 1ndash10 2017

Journal of Electrical and Computer Engineering 11

Page 6: ImprovedGeneralizedSparsityAdaptiveMatchingPursuit ...downloads.hindawi.com/journals/jece/2020/2782149.pdfMinistry of Education (Northeast Electric Power University), Jilin, China

comparing the one-dimensional signal with different mea-surement values and different sparsity levels it can be seenthat our proposed method has obvious advantage in one-dimensional signal reconstruction

53 Two-Dimensional Image Reconstruction In this sectionwe use a grayscale image of size 256 times 256 called Lena assource signal to test the reconstruction performance ofdifferent methods (SAMP algorithm MAOMP algo-rithm and our proposed method) e wavelet basis is

used as the sparse basis to sparse the image e initialstep sizes of the three algorithms are 1 5 and 10 re-spectively e stop iteration parameter ε 10minus 6 the β ofthe MAOMP algorithm is 06 and δs takes 02 In ourproposed method parameter a 25 and num are fourtimes the initial step size Figure 7 shows the two-di-mensional signal Lena that is reconstructed by ourproposed method SAMP method and MAOMP methodwith sampling rate 06 We use the peak signal-to-noiseratio (PSNR) to represent the quality of image recon-struction and it can be expressed as

20 25 30 35 40 45 50 55 60 65 700

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

a = 1a = 2a = 3

a = 32a = 4

(a)

Sparsity level K40 45 50 55 60 65

0102030405060708090

100

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

a = 28a = 29a = 3

a = 21a = 22a = 23a = 24

a = 25a = 26a = 27

(b)

Figure 2 e effect of different a on reconstruction probability under different sparsity levels

45 50 55 60 65 7020

30

40

50

60

70

80

90

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

num = 20num = 30num = 40

num = 50num = 60

Figure 4 e effect of num on reconstruction probability underdifferent sparsity levels with initial step size of 10

40 45 50 55 60 650

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

num = 10num = 15num = 20

num = 25num = 30

Figure 3 e effect of num on reconstruction probability underdifferent sparsity levels with initial step size of 5

6 Journal of Electrical and Computer Engineering

MSE 1

M times N1113944

Mminus1

i01113944

Nminus1

j0|1113954x(i j) minus x(i j)|

2

PSNR 10 times log102n minus 1( )2

MSE1113888 1113889

(14)

where M N 256 1113954x(i j) represents the reconstructedvalue of the corresponding position of the test imageand x(i j) represents the original value of the corre-sponding position of the test picture MSE representsmean squared error MAX1113954x represents the maximumvalue of the image e size of the image is 256 because28 256 so that the sampling point is 8 bits and the

value of n is 8 e larger the PSNR value the better theimage quality

Figures 7 to 9 show the original signal and the recon-structed signals by SAMP MAOMP and the proposedmethod with initial step sizes of 1 5 and 10 respectively Itcan be seen from Figures 7 and 8 that the PSNR of SAMPand MAOMP decreases when the initial step size becomeslarger e SAMP cannot reconstruct signal when the initialstep size is 5 and 10 e PSNR is also lower for MAOMPalgorithm when the initial step size is 10 is is because alarge initial step size causes the sparsity value to be over-estimated and affects accuracy However in Figure 9 ourproposedmethod uses (11) to adjust the step size to make thestep size gradually decreaseis can prevent overestimation

70 75 80 85 90 95 100 105 110 115 1200

102030405060708090

100

No of measurements

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the measurementsK(M = 40 N = 256)(Gaussian)

SAMP s = 5SAMP s = 10MAOMP s = 5

MAOMP s = 10Proposed s = 5Proposed s = 10

Figure 5 e reconstruction probability of signals under different measurement values

45 50 55 60 65 700

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

SAMP s = 5SAMP s = 10MAOMP s = 5

MAOMP s = 10Proposed s = 5Proposed s = 10

Figure 6 e reconstruction probability of the algorithm with different sparsity levels

Journal of Electrical and Computer Engineering 7

Original image

(a)

SAMP s = 1 PSNR = 282053

(b)

SAMP s = 5 PSNR = 72404

(c)

SAMP s = 10 PSNR = 72404

(d)

Figure 7 e PSNR value with different initial step sizes in SAMP (a) e original image (b) SAMP when initial step size is 1 (c) SAMPwhen initial step size is 5 (d) SAMP when initial step size is 10

Original image

(a)

MAOMP s = 1 PSNR = 288606

(b)

Figure 8 Continued

8 Journal of Electrical and Computer Engineering

MAOMP s = 5 PSNR = 163367

(c)

MAOMP s = 10 PSNR = 116794

(d)

Figure 8 e PSNR value with different initial step sizes in MAOMP (a) e original image (b) MAOMP when initial step size is 1 (c)MAOMP when initial step size is 5 (d) MAOMP when initial step size is 10

Original image

(a)

Proposed s = 1 PSNR = 291934

(b)

Proposed s = 5 PSNR = 292986

(c)

Proposed s = 10 PSNR = 298454

(d)

Figure 9 e PSNR value with different initial step sizes in the proposed method (a) e original image (b) the proposed method wheninitial step size is 1 (c) the proposed method when initial step size is 5 (d) the proposed method when initial step size is 10

Journal of Electrical and Computer Engineering 9

for larger initial step size and reduce error erefore theproposed method has better performance in terms of errorand stability than SAMP and MAOMP algorithm for thetwo-dimensional image based on analysis of Figures 7 to 9

And can be seen from Figures 7 to 9 when the initial stepsize is 1 SAMP and MAOMP have better reconstructedresult erefore we select s 1 in the following experi-ments Figure 10 shows the PSNR value with differentsampling rates In this experiment we choose sampling rateof 03 04 05 06 07 and 08 and select the initial step sizeas 1 e test image is Lena with 256 times 256 size e otherexperimental conditions are the same as the previous ex-periment As can be seen from Figure 8 as the sampling rateincreases the PSNR value becomes larger Compared withthe other two algorithms the PSNR value of our proposedmethod is the highest is shows that the proposed methodhas smaller error in reconstructing images

Figure 11 shows the recovery times of the three algo-rithms with different sampling rates When the samplingrate is 03 the three algorithms have almost the same re-covery time With the sampling rate increasing recoverytime becomes longer Our proposed method still consumes

less time than the other methods for different sampling ratesWhen the sampling rate is 08 the proposed method runs1028 seconds shorter than SAMP and 446 seconds shorterthan MAOMP is proves that the proposed method hasbetter performance than the other two algorithms in termsof convergence speed Based on the above analysis theproposed method has smaller error better stability andfaster convergence speed than the SAMP and MAOMPalgorithm

6 Conclusions

In this paper we proposed a generalized sparsity adaptivematching pursuit algorithm with variable step size isalgorithm uses the idea of generalized atom selection tochoose more atoms at the beginning of the iteration and thesignal estimation solution is more accurate by backtrackingRegarding variable step size an idea of variable step size of anonlinear function is proposed is can make the step sizelarge at the beginning which is used to speed up the con-vergence of the algorithmen the step length is reduced to1 so the sparsity estimation value is more accurate therebyimproving the reconstruction accuracy and reducing therunning time of the algorithm

Simulation results demonstrate that our proposedmethod has a better reconstruction performance comparedwith SAMP algorithm and MAOMP algorithm For one-dimensional Gaussian signal among the different mea-surement values and different sparsity levels the recon-struction probability of our proposedmethod is the best andthe signal can be reconstructed at low observation or highsparsity For two-dimensional image our proposed methodhas better reconstruction quality which is measured byPSNR Compared to MAOMP and SAMP our proposedmethod has faster convergence speed Moreover as theinitial step size increases our proposed method can stillreconstruct images with high quality In a word our pro-posed method is better than similar algorithms in terms ofconvergence speed and reconstruction accuracy

Data Availability

All the data used for the simulations can be obtained fromthe corresponding author upon request

Conflicts of Interest

e authors declare that they have no conflicts of interest

Acknowledgments

is work was supported by the National Natural ScienceFoundation of China (61271115) and the Science andTechnology Innovation and Entrepreneurship Talent Cul-tivation Program of Jilin (20190104124)

References

[1] D L Donoho ldquoCompressed sensingrdquo IEEE Transactions onInformation 6eory vol 52 no 4 pp 1289ndash1306 2006

03 04 05 06 07 0820

22

24

26

28

30

32

34

Sampling rate

PSN

R (d

B)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 10 e PSNR value with different sampling rates

03 04 05 06 07 085

10

15

20

25

30

35

40

Sampling rate

Reco

very

tim

e (s)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 11 e recovery time with different sampling rates

10 Journal of Electrical and Computer Engineering

[2] G Yang S Yu H Dong et al ldquoDAGAN deep de-aliasinggenerative adversarial networks for fast compressed sensingMRI reconstructionrdquo IEEE Transactions on Medical Imagingvol 37 no 6 pp 1310ndash1321 2018

[3] L Yuanqi S Renjie W Yongli and B Long ldquoDesign of Real-time communication system for portable smart glasses basedon Raspberry PIrdquo Journal of Northeast Electric Power Uni-versity vol 39 no 4 pp 81ndash85 2019

[4] B Li F Liu C Zhou and Y Lv ldquoPhase error correction forapproximated observation-based compressed sensing radarimagingrdquo Sensors vol 17 no 31 pp 1ndash21 2017

[5] L Changyin S Renjie H Yanquan and Z Yandong ldquoDesignand implementation of high voltage transmission equipmentauxiliary management systemrdquo Journal of Northeast ElectricPower University vol 39 no 4 pp 86ndash90 2019

[6] S Osher and Y Li ldquoldquoCoordinate descent optimization forl1minimization with application to compressed sensingrdquo agreedy algorithmrdquo Inverse Problems and Imaging (IPI) vol 3no 3 pp 487ndash503 2017

[7] H Rauhut R Schneiderv and Stojanac ldquoLow rank tensorrecovery via iterative hard thresholdingrdquo Linear Algebra andIts Applications vol 523 pp 220ndash262 2017

[8] Z Dong and W Zhu ldquoHomotopy methods based on $l_0$-norm for compressed sensingrdquo IEEE Transactions on NeuralNetworks and Learning Systems vol 29 no 4 pp 1132ndash11462018

[9] J Wen J Weng C Tong C Ren and Z Zhou ldquoSparse signalrecovery with minimization of 1-norm minus 2-normrdquo IEEETransactions on Vehicular Technology vol 68 no 7pp 6847ndash6854 2019

[10] J Wen Z Zhou J Wang X Tang and Q Mo ldquoA sharpcondition for exact support recovery with orthogonalmatching pursuitrdquo IEEE Transactions on Signal Processingvol 65 no 6 pp 1370ndash1382 2017

[11] Y Liao X Zhou X Shen and G Hong ldquoA channel esti-mation method based on improved regularized orthogonalmatching pursuit for MIMO-OFDM systemsrdquo Acta Elec-tronica Sinica vol 45 no 12 pp 2848ndash2854 2017

[12] D Park ldquoImproved sufficient condition for performanceguarantee in generalized orthogonal matching pursuitrdquo IEEESignal Processing Letters vol 24 no 9 pp 1308ndash1312 2017

[13] F Huang J Tao Y Xiang P Liu L Dong and L WangldquoParallel compressive sampling matching pursuit algorithmfor compressed sensing signal reconstruction with OpenCLrdquoJournal of Systems Architecture vol 72 no 1 pp 51ndash60 2017

[14] P Goyal and B Singh ldquoSubspace pursuit for sparse signalreconstruction in wireless sensor networksrdquo Procedia Com-puter Science vol 125 pp 228ndash233 2018

[15] A M Huang G Gui Q Wan and A Mehbodniya ldquoA blockorthogonal matching pursuit algorithm based on sensingdictionaryrdquordquo International Journal of the Physical Sciencesvol 6 no 5 pp 992ndash999 2011

[16] H Li and J Wen ldquoA new analysis for support recovery withblock orthogonal matching pursuitrdquo IEEE Signal ProcessingLetters vol 26 no 2 pp 247ndash251 2018

[17] J Wen Z Zhou Z Liu M-J Lai and X Tang ldquoSharpsufficient conditions for stable recovery of block sparse signalsby block orthogonal matching pursuitrdquo Applied and Com-putational Harmonic Analysis vol 47 no 3 pp 948ndash9742019

[18] R Shoitan Z Nossair I I Ibrahim and A Tobal ldquoImprovingthe reconstruction efficiency of sparsity adaptive matchingpursuit based on the Wilkinson matrixrdquo Frontiers of

Information Technology amp Electronic Engineering vol 19no 4 pp 503ndash512 2018

[19] B Li Y Sun G Li et al ldquoGesture recognition based onmodified adaptive orthogonal matching pursuit algorithmrdquoCluster Computing vol 3 pp 1ndash10 2017

Journal of Electrical and Computer Engineering 11

Page 7: ImprovedGeneralizedSparsityAdaptiveMatchingPursuit ...downloads.hindawi.com/journals/jece/2020/2782149.pdfMinistry of Education (Northeast Electric Power University), Jilin, China

MSE 1

M times N1113944

Mminus1

i01113944

Nminus1

j0|1113954x(i j) minus x(i j)|

2

PSNR 10 times log102n minus 1( )2

MSE1113888 1113889

(14)

where M N 256 1113954x(i j) represents the reconstructedvalue of the corresponding position of the test imageand x(i j) represents the original value of the corre-sponding position of the test picture MSE representsmean squared error MAX1113954x represents the maximumvalue of the image e size of the image is 256 because28 256 so that the sampling point is 8 bits and the

value of n is 8 e larger the PSNR value the better theimage quality

Figures 7 to 9 show the original signal and the recon-structed signals by SAMP MAOMP and the proposedmethod with initial step sizes of 1 5 and 10 respectively Itcan be seen from Figures 7 and 8 that the PSNR of SAMPand MAOMP decreases when the initial step size becomeslarger e SAMP cannot reconstruct signal when the initialstep size is 5 and 10 e PSNR is also lower for MAOMPalgorithm when the initial step size is 10 is is because alarge initial step size causes the sparsity value to be over-estimated and affects accuracy However in Figure 9 ourproposedmethod uses (11) to adjust the step size to make thestep size gradually decreaseis can prevent overestimation

70 75 80 85 90 95 100 105 110 115 1200

102030405060708090

100

No of measurements

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the measurementsK(M = 40 N = 256)(Gaussian)

SAMP s = 5SAMP s = 10MAOMP s = 5

MAOMP s = 10Proposed s = 5Proposed s = 10

Figure 5 e reconstruction probability of signals under different measurement values

45 50 55 60 65 700

102030405060708090

100

Sparsity level K

The p

roba

bilit

y of

exac

t rec

onstr

uctio

n

Prob of exact recovery vs the signalsparsity K(M = 128 N = 256)(Gaussian)

SAMP s = 5SAMP s = 10MAOMP s = 5

MAOMP s = 10Proposed s = 5Proposed s = 10

Figure 6 e reconstruction probability of the algorithm with different sparsity levels

Journal of Electrical and Computer Engineering 7

Original image

(a)

SAMP s = 1 PSNR = 282053

(b)

SAMP s = 5 PSNR = 72404

(c)

SAMP s = 10 PSNR = 72404

(d)

Figure 7 e PSNR value with different initial step sizes in SAMP (a) e original image (b) SAMP when initial step size is 1 (c) SAMPwhen initial step size is 5 (d) SAMP when initial step size is 10

Original image

(a)

MAOMP s = 1 PSNR = 288606

(b)

Figure 8 Continued

8 Journal of Electrical and Computer Engineering

MAOMP s = 5 PSNR = 163367

(c)

MAOMP s = 10 PSNR = 116794

(d)

Figure 8 e PSNR value with different initial step sizes in MAOMP (a) e original image (b) MAOMP when initial step size is 1 (c)MAOMP when initial step size is 5 (d) MAOMP when initial step size is 10

Original image

(a)

Proposed s = 1 PSNR = 291934

(b)

Proposed s = 5 PSNR = 292986

(c)

Proposed s = 10 PSNR = 298454

(d)

Figure 9 e PSNR value with different initial step sizes in the proposed method (a) e original image (b) the proposed method wheninitial step size is 1 (c) the proposed method when initial step size is 5 (d) the proposed method when initial step size is 10

Journal of Electrical and Computer Engineering 9

for larger initial step size and reduce error erefore theproposed method has better performance in terms of errorand stability than SAMP and MAOMP algorithm for thetwo-dimensional image based on analysis of Figures 7 to 9

And can be seen from Figures 7 to 9 when the initial stepsize is 1 SAMP and MAOMP have better reconstructedresult erefore we select s 1 in the following experi-ments Figure 10 shows the PSNR value with differentsampling rates In this experiment we choose sampling rateof 03 04 05 06 07 and 08 and select the initial step sizeas 1 e test image is Lena with 256 times 256 size e otherexperimental conditions are the same as the previous ex-periment As can be seen from Figure 8 as the sampling rateincreases the PSNR value becomes larger Compared withthe other two algorithms the PSNR value of our proposedmethod is the highest is shows that the proposed methodhas smaller error in reconstructing images

Figure 11 shows the recovery times of the three algo-rithms with different sampling rates When the samplingrate is 03 the three algorithms have almost the same re-covery time With the sampling rate increasing recoverytime becomes longer Our proposed method still consumes

less time than the other methods for different sampling ratesWhen the sampling rate is 08 the proposed method runs1028 seconds shorter than SAMP and 446 seconds shorterthan MAOMP is proves that the proposed method hasbetter performance than the other two algorithms in termsof convergence speed Based on the above analysis theproposed method has smaller error better stability andfaster convergence speed than the SAMP and MAOMPalgorithm

6 Conclusions

In this paper we proposed a generalized sparsity adaptivematching pursuit algorithm with variable step size isalgorithm uses the idea of generalized atom selection tochoose more atoms at the beginning of the iteration and thesignal estimation solution is more accurate by backtrackingRegarding variable step size an idea of variable step size of anonlinear function is proposed is can make the step sizelarge at the beginning which is used to speed up the con-vergence of the algorithmen the step length is reduced to1 so the sparsity estimation value is more accurate therebyimproving the reconstruction accuracy and reducing therunning time of the algorithm

Simulation results demonstrate that our proposedmethod has a better reconstruction performance comparedwith SAMP algorithm and MAOMP algorithm For one-dimensional Gaussian signal among the different mea-surement values and different sparsity levels the recon-struction probability of our proposedmethod is the best andthe signal can be reconstructed at low observation or highsparsity For two-dimensional image our proposed methodhas better reconstruction quality which is measured byPSNR Compared to MAOMP and SAMP our proposedmethod has faster convergence speed Moreover as theinitial step size increases our proposed method can stillreconstruct images with high quality In a word our pro-posed method is better than similar algorithms in terms ofconvergence speed and reconstruction accuracy

Data Availability

All the data used for the simulations can be obtained fromthe corresponding author upon request

Conflicts of Interest

e authors declare that they have no conflicts of interest

Acknowledgments

is work was supported by the National Natural ScienceFoundation of China (61271115) and the Science andTechnology Innovation and Entrepreneurship Talent Cul-tivation Program of Jilin (20190104124)

References

[1] D L Donoho ldquoCompressed sensingrdquo IEEE Transactions onInformation 6eory vol 52 no 4 pp 1289ndash1306 2006

03 04 05 06 07 0820

22

24

26

28

30

32

34

Sampling rate

PSN

R (d

B)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 10 e PSNR value with different sampling rates

03 04 05 06 07 085

10

15

20

25

30

35

40

Sampling rate

Reco

very

tim

e (s)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 11 e recovery time with different sampling rates

10 Journal of Electrical and Computer Engineering

[2] G Yang S Yu H Dong et al ldquoDAGAN deep de-aliasinggenerative adversarial networks for fast compressed sensingMRI reconstructionrdquo IEEE Transactions on Medical Imagingvol 37 no 6 pp 1310ndash1321 2018

[3] L Yuanqi S Renjie W Yongli and B Long ldquoDesign of Real-time communication system for portable smart glasses basedon Raspberry PIrdquo Journal of Northeast Electric Power Uni-versity vol 39 no 4 pp 81ndash85 2019

[4] B Li F Liu C Zhou and Y Lv ldquoPhase error correction forapproximated observation-based compressed sensing radarimagingrdquo Sensors vol 17 no 31 pp 1ndash21 2017

[5] L Changyin S Renjie H Yanquan and Z Yandong ldquoDesignand implementation of high voltage transmission equipmentauxiliary management systemrdquo Journal of Northeast ElectricPower University vol 39 no 4 pp 86ndash90 2019

[6] S Osher and Y Li ldquoldquoCoordinate descent optimization forl1minimization with application to compressed sensingrdquo agreedy algorithmrdquo Inverse Problems and Imaging (IPI) vol 3no 3 pp 487ndash503 2017

[7] H Rauhut R Schneiderv and Stojanac ldquoLow rank tensorrecovery via iterative hard thresholdingrdquo Linear Algebra andIts Applications vol 523 pp 220ndash262 2017

[8] Z Dong and W Zhu ldquoHomotopy methods based on $l_0$-norm for compressed sensingrdquo IEEE Transactions on NeuralNetworks and Learning Systems vol 29 no 4 pp 1132ndash11462018

[9] J Wen J Weng C Tong C Ren and Z Zhou ldquoSparse signalrecovery with minimization of 1-norm minus 2-normrdquo IEEETransactions on Vehicular Technology vol 68 no 7pp 6847ndash6854 2019

[10] J Wen Z Zhou J Wang X Tang and Q Mo ldquoA sharpcondition for exact support recovery with orthogonalmatching pursuitrdquo IEEE Transactions on Signal Processingvol 65 no 6 pp 1370ndash1382 2017

[11] Y Liao X Zhou X Shen and G Hong ldquoA channel esti-mation method based on improved regularized orthogonalmatching pursuit for MIMO-OFDM systemsrdquo Acta Elec-tronica Sinica vol 45 no 12 pp 2848ndash2854 2017

[12] D Park ldquoImproved sufficient condition for performanceguarantee in generalized orthogonal matching pursuitrdquo IEEESignal Processing Letters vol 24 no 9 pp 1308ndash1312 2017

[13] F Huang J Tao Y Xiang P Liu L Dong and L WangldquoParallel compressive sampling matching pursuit algorithmfor compressed sensing signal reconstruction with OpenCLrdquoJournal of Systems Architecture vol 72 no 1 pp 51ndash60 2017

[14] P Goyal and B Singh ldquoSubspace pursuit for sparse signalreconstruction in wireless sensor networksrdquo Procedia Com-puter Science vol 125 pp 228ndash233 2018

[15] A M Huang G Gui Q Wan and A Mehbodniya ldquoA blockorthogonal matching pursuit algorithm based on sensingdictionaryrdquordquo International Journal of the Physical Sciencesvol 6 no 5 pp 992ndash999 2011

[16] H Li and J Wen ldquoA new analysis for support recovery withblock orthogonal matching pursuitrdquo IEEE Signal ProcessingLetters vol 26 no 2 pp 247ndash251 2018

[17] J Wen Z Zhou Z Liu M-J Lai and X Tang ldquoSharpsufficient conditions for stable recovery of block sparse signalsby block orthogonal matching pursuitrdquo Applied and Com-putational Harmonic Analysis vol 47 no 3 pp 948ndash9742019

[18] R Shoitan Z Nossair I I Ibrahim and A Tobal ldquoImprovingthe reconstruction efficiency of sparsity adaptive matchingpursuit based on the Wilkinson matrixrdquo Frontiers of

Information Technology amp Electronic Engineering vol 19no 4 pp 503ndash512 2018

[19] B Li Y Sun G Li et al ldquoGesture recognition based onmodified adaptive orthogonal matching pursuit algorithmrdquoCluster Computing vol 3 pp 1ndash10 2017

Journal of Electrical and Computer Engineering 11

Page 8: ImprovedGeneralizedSparsityAdaptiveMatchingPursuit ...downloads.hindawi.com/journals/jece/2020/2782149.pdfMinistry of Education (Northeast Electric Power University), Jilin, China

Original image

(a)

SAMP s = 1 PSNR = 282053

(b)

SAMP s = 5 PSNR = 72404

(c)

SAMP s = 10 PSNR = 72404

(d)

Figure 7 e PSNR value with different initial step sizes in SAMP (a) e original image (b) SAMP when initial step size is 1 (c) SAMPwhen initial step size is 5 (d) SAMP when initial step size is 10

Original image

(a)

MAOMP s = 1 PSNR = 288606

(b)

Figure 8 Continued

8 Journal of Electrical and Computer Engineering

MAOMP s = 5 PSNR = 163367

(c)

MAOMP s = 10 PSNR = 116794

(d)

Figure 8 e PSNR value with different initial step sizes in MAOMP (a) e original image (b) MAOMP when initial step size is 1 (c)MAOMP when initial step size is 5 (d) MAOMP when initial step size is 10

Original image

(a)

Proposed s = 1 PSNR = 291934

(b)

Proposed s = 5 PSNR = 292986

(c)

Proposed s = 10 PSNR = 298454

(d)

Figure 9 e PSNR value with different initial step sizes in the proposed method (a) e original image (b) the proposed method wheninitial step size is 1 (c) the proposed method when initial step size is 5 (d) the proposed method when initial step size is 10

Journal of Electrical and Computer Engineering 9

for larger initial step size and reduce error erefore theproposed method has better performance in terms of errorand stability than SAMP and MAOMP algorithm for thetwo-dimensional image based on analysis of Figures 7 to 9

And can be seen from Figures 7 to 9 when the initial stepsize is 1 SAMP and MAOMP have better reconstructedresult erefore we select s 1 in the following experi-ments Figure 10 shows the PSNR value with differentsampling rates In this experiment we choose sampling rateof 03 04 05 06 07 and 08 and select the initial step sizeas 1 e test image is Lena with 256 times 256 size e otherexperimental conditions are the same as the previous ex-periment As can be seen from Figure 8 as the sampling rateincreases the PSNR value becomes larger Compared withthe other two algorithms the PSNR value of our proposedmethod is the highest is shows that the proposed methodhas smaller error in reconstructing images

Figure 11 shows the recovery times of the three algo-rithms with different sampling rates When the samplingrate is 03 the three algorithms have almost the same re-covery time With the sampling rate increasing recoverytime becomes longer Our proposed method still consumes

less time than the other methods for different sampling ratesWhen the sampling rate is 08 the proposed method runs1028 seconds shorter than SAMP and 446 seconds shorterthan MAOMP is proves that the proposed method hasbetter performance than the other two algorithms in termsof convergence speed Based on the above analysis theproposed method has smaller error better stability andfaster convergence speed than the SAMP and MAOMPalgorithm

6 Conclusions

In this paper we proposed a generalized sparsity adaptivematching pursuit algorithm with variable step size isalgorithm uses the idea of generalized atom selection tochoose more atoms at the beginning of the iteration and thesignal estimation solution is more accurate by backtrackingRegarding variable step size an idea of variable step size of anonlinear function is proposed is can make the step sizelarge at the beginning which is used to speed up the con-vergence of the algorithmen the step length is reduced to1 so the sparsity estimation value is more accurate therebyimproving the reconstruction accuracy and reducing therunning time of the algorithm

Simulation results demonstrate that our proposedmethod has a better reconstruction performance comparedwith SAMP algorithm and MAOMP algorithm For one-dimensional Gaussian signal among the different mea-surement values and different sparsity levels the recon-struction probability of our proposedmethod is the best andthe signal can be reconstructed at low observation or highsparsity For two-dimensional image our proposed methodhas better reconstruction quality which is measured byPSNR Compared to MAOMP and SAMP our proposedmethod has faster convergence speed Moreover as theinitial step size increases our proposed method can stillreconstruct images with high quality In a word our pro-posed method is better than similar algorithms in terms ofconvergence speed and reconstruction accuracy

Data Availability

All the data used for the simulations can be obtained fromthe corresponding author upon request

Conflicts of Interest

e authors declare that they have no conflicts of interest

Acknowledgments

is work was supported by the National Natural ScienceFoundation of China (61271115) and the Science andTechnology Innovation and Entrepreneurship Talent Cul-tivation Program of Jilin (20190104124)

References

[1] D L Donoho ldquoCompressed sensingrdquo IEEE Transactions onInformation 6eory vol 52 no 4 pp 1289ndash1306 2006

03 04 05 06 07 0820

22

24

26

28

30

32

34

Sampling rate

PSN

R (d

B)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 10 e PSNR value with different sampling rates

03 04 05 06 07 085

10

15

20

25

30

35

40

Sampling rate

Reco

very

tim

e (s)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 11 e recovery time with different sampling rates

10 Journal of Electrical and Computer Engineering

[2] G Yang S Yu H Dong et al ldquoDAGAN deep de-aliasinggenerative adversarial networks for fast compressed sensingMRI reconstructionrdquo IEEE Transactions on Medical Imagingvol 37 no 6 pp 1310ndash1321 2018

[3] L Yuanqi S Renjie W Yongli and B Long ldquoDesign of Real-time communication system for portable smart glasses basedon Raspberry PIrdquo Journal of Northeast Electric Power Uni-versity vol 39 no 4 pp 81ndash85 2019

[4] B Li F Liu C Zhou and Y Lv ldquoPhase error correction forapproximated observation-based compressed sensing radarimagingrdquo Sensors vol 17 no 31 pp 1ndash21 2017

[5] L Changyin S Renjie H Yanquan and Z Yandong ldquoDesignand implementation of high voltage transmission equipmentauxiliary management systemrdquo Journal of Northeast ElectricPower University vol 39 no 4 pp 86ndash90 2019

[6] S Osher and Y Li ldquoldquoCoordinate descent optimization forl1minimization with application to compressed sensingrdquo agreedy algorithmrdquo Inverse Problems and Imaging (IPI) vol 3no 3 pp 487ndash503 2017

[7] H Rauhut R Schneiderv and Stojanac ldquoLow rank tensorrecovery via iterative hard thresholdingrdquo Linear Algebra andIts Applications vol 523 pp 220ndash262 2017

[8] Z Dong and W Zhu ldquoHomotopy methods based on $l_0$-norm for compressed sensingrdquo IEEE Transactions on NeuralNetworks and Learning Systems vol 29 no 4 pp 1132ndash11462018

[9] J Wen J Weng C Tong C Ren and Z Zhou ldquoSparse signalrecovery with minimization of 1-norm minus 2-normrdquo IEEETransactions on Vehicular Technology vol 68 no 7pp 6847ndash6854 2019

[10] J Wen Z Zhou J Wang X Tang and Q Mo ldquoA sharpcondition for exact support recovery with orthogonalmatching pursuitrdquo IEEE Transactions on Signal Processingvol 65 no 6 pp 1370ndash1382 2017

[11] Y Liao X Zhou X Shen and G Hong ldquoA channel esti-mation method based on improved regularized orthogonalmatching pursuit for MIMO-OFDM systemsrdquo Acta Elec-tronica Sinica vol 45 no 12 pp 2848ndash2854 2017

[12] D Park ldquoImproved sufficient condition for performanceguarantee in generalized orthogonal matching pursuitrdquo IEEESignal Processing Letters vol 24 no 9 pp 1308ndash1312 2017

[13] F Huang J Tao Y Xiang P Liu L Dong and L WangldquoParallel compressive sampling matching pursuit algorithmfor compressed sensing signal reconstruction with OpenCLrdquoJournal of Systems Architecture vol 72 no 1 pp 51ndash60 2017

[14] P Goyal and B Singh ldquoSubspace pursuit for sparse signalreconstruction in wireless sensor networksrdquo Procedia Com-puter Science vol 125 pp 228ndash233 2018

[15] A M Huang G Gui Q Wan and A Mehbodniya ldquoA blockorthogonal matching pursuit algorithm based on sensingdictionaryrdquordquo International Journal of the Physical Sciencesvol 6 no 5 pp 992ndash999 2011

[16] H Li and J Wen ldquoA new analysis for support recovery withblock orthogonal matching pursuitrdquo IEEE Signal ProcessingLetters vol 26 no 2 pp 247ndash251 2018

[17] J Wen Z Zhou Z Liu M-J Lai and X Tang ldquoSharpsufficient conditions for stable recovery of block sparse signalsby block orthogonal matching pursuitrdquo Applied and Com-putational Harmonic Analysis vol 47 no 3 pp 948ndash9742019

[18] R Shoitan Z Nossair I I Ibrahim and A Tobal ldquoImprovingthe reconstruction efficiency of sparsity adaptive matchingpursuit based on the Wilkinson matrixrdquo Frontiers of

Information Technology amp Electronic Engineering vol 19no 4 pp 503ndash512 2018

[19] B Li Y Sun G Li et al ldquoGesture recognition based onmodified adaptive orthogonal matching pursuit algorithmrdquoCluster Computing vol 3 pp 1ndash10 2017

Journal of Electrical and Computer Engineering 11

Page 9: ImprovedGeneralizedSparsityAdaptiveMatchingPursuit ...downloads.hindawi.com/journals/jece/2020/2782149.pdfMinistry of Education (Northeast Electric Power University), Jilin, China

MAOMP s = 5 PSNR = 163367

(c)

MAOMP s = 10 PSNR = 116794

(d)

Figure 8 e PSNR value with different initial step sizes in MAOMP (a) e original image (b) MAOMP when initial step size is 1 (c)MAOMP when initial step size is 5 (d) MAOMP when initial step size is 10

Original image

(a)

Proposed s = 1 PSNR = 291934

(b)

Proposed s = 5 PSNR = 292986

(c)

Proposed s = 10 PSNR = 298454

(d)

Figure 9 e PSNR value with different initial step sizes in the proposed method (a) e original image (b) the proposed method wheninitial step size is 1 (c) the proposed method when initial step size is 5 (d) the proposed method when initial step size is 10

Journal of Electrical and Computer Engineering 9

for larger initial step size and reduce error erefore theproposed method has better performance in terms of errorand stability than SAMP and MAOMP algorithm for thetwo-dimensional image based on analysis of Figures 7 to 9

And can be seen from Figures 7 to 9 when the initial stepsize is 1 SAMP and MAOMP have better reconstructedresult erefore we select s 1 in the following experi-ments Figure 10 shows the PSNR value with differentsampling rates In this experiment we choose sampling rateof 03 04 05 06 07 and 08 and select the initial step sizeas 1 e test image is Lena with 256 times 256 size e otherexperimental conditions are the same as the previous ex-periment As can be seen from Figure 8 as the sampling rateincreases the PSNR value becomes larger Compared withthe other two algorithms the PSNR value of our proposedmethod is the highest is shows that the proposed methodhas smaller error in reconstructing images

Figure 11 shows the recovery times of the three algo-rithms with different sampling rates When the samplingrate is 03 the three algorithms have almost the same re-covery time With the sampling rate increasing recoverytime becomes longer Our proposed method still consumes

less time than the other methods for different sampling ratesWhen the sampling rate is 08 the proposed method runs1028 seconds shorter than SAMP and 446 seconds shorterthan MAOMP is proves that the proposed method hasbetter performance than the other two algorithms in termsof convergence speed Based on the above analysis theproposed method has smaller error better stability andfaster convergence speed than the SAMP and MAOMPalgorithm

6 Conclusions

In this paper we proposed a generalized sparsity adaptivematching pursuit algorithm with variable step size isalgorithm uses the idea of generalized atom selection tochoose more atoms at the beginning of the iteration and thesignal estimation solution is more accurate by backtrackingRegarding variable step size an idea of variable step size of anonlinear function is proposed is can make the step sizelarge at the beginning which is used to speed up the con-vergence of the algorithmen the step length is reduced to1 so the sparsity estimation value is more accurate therebyimproving the reconstruction accuracy and reducing therunning time of the algorithm

Simulation results demonstrate that our proposedmethod has a better reconstruction performance comparedwith SAMP algorithm and MAOMP algorithm For one-dimensional Gaussian signal among the different mea-surement values and different sparsity levels the recon-struction probability of our proposedmethod is the best andthe signal can be reconstructed at low observation or highsparsity For two-dimensional image our proposed methodhas better reconstruction quality which is measured byPSNR Compared to MAOMP and SAMP our proposedmethod has faster convergence speed Moreover as theinitial step size increases our proposed method can stillreconstruct images with high quality In a word our pro-posed method is better than similar algorithms in terms ofconvergence speed and reconstruction accuracy

Data Availability

All the data used for the simulations can be obtained fromthe corresponding author upon request

Conflicts of Interest

e authors declare that they have no conflicts of interest

Acknowledgments

is work was supported by the National Natural ScienceFoundation of China (61271115) and the Science andTechnology Innovation and Entrepreneurship Talent Cul-tivation Program of Jilin (20190104124)

References

[1] D L Donoho ldquoCompressed sensingrdquo IEEE Transactions onInformation 6eory vol 52 no 4 pp 1289ndash1306 2006

03 04 05 06 07 0820

22

24

26

28

30

32

34

Sampling rate

PSN

R (d

B)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 10 e PSNR value with different sampling rates

03 04 05 06 07 085

10

15

20

25

30

35

40

Sampling rate

Reco

very

tim

e (s)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 11 e recovery time with different sampling rates

10 Journal of Electrical and Computer Engineering

[2] G Yang S Yu H Dong et al ldquoDAGAN deep de-aliasinggenerative adversarial networks for fast compressed sensingMRI reconstructionrdquo IEEE Transactions on Medical Imagingvol 37 no 6 pp 1310ndash1321 2018

[3] L Yuanqi S Renjie W Yongli and B Long ldquoDesign of Real-time communication system for portable smart glasses basedon Raspberry PIrdquo Journal of Northeast Electric Power Uni-versity vol 39 no 4 pp 81ndash85 2019

[4] B Li F Liu C Zhou and Y Lv ldquoPhase error correction forapproximated observation-based compressed sensing radarimagingrdquo Sensors vol 17 no 31 pp 1ndash21 2017

[5] L Changyin S Renjie H Yanquan and Z Yandong ldquoDesignand implementation of high voltage transmission equipmentauxiliary management systemrdquo Journal of Northeast ElectricPower University vol 39 no 4 pp 86ndash90 2019

[6] S Osher and Y Li ldquoldquoCoordinate descent optimization forl1minimization with application to compressed sensingrdquo agreedy algorithmrdquo Inverse Problems and Imaging (IPI) vol 3no 3 pp 487ndash503 2017

[7] H Rauhut R Schneiderv and Stojanac ldquoLow rank tensorrecovery via iterative hard thresholdingrdquo Linear Algebra andIts Applications vol 523 pp 220ndash262 2017

[8] Z Dong and W Zhu ldquoHomotopy methods based on $l_0$-norm for compressed sensingrdquo IEEE Transactions on NeuralNetworks and Learning Systems vol 29 no 4 pp 1132ndash11462018

[9] J Wen J Weng C Tong C Ren and Z Zhou ldquoSparse signalrecovery with minimization of 1-norm minus 2-normrdquo IEEETransactions on Vehicular Technology vol 68 no 7pp 6847ndash6854 2019

[10] J Wen Z Zhou J Wang X Tang and Q Mo ldquoA sharpcondition for exact support recovery with orthogonalmatching pursuitrdquo IEEE Transactions on Signal Processingvol 65 no 6 pp 1370ndash1382 2017

[11] Y Liao X Zhou X Shen and G Hong ldquoA channel esti-mation method based on improved regularized orthogonalmatching pursuit for MIMO-OFDM systemsrdquo Acta Elec-tronica Sinica vol 45 no 12 pp 2848ndash2854 2017

[12] D Park ldquoImproved sufficient condition for performanceguarantee in generalized orthogonal matching pursuitrdquo IEEESignal Processing Letters vol 24 no 9 pp 1308ndash1312 2017

[13] F Huang J Tao Y Xiang P Liu L Dong and L WangldquoParallel compressive sampling matching pursuit algorithmfor compressed sensing signal reconstruction with OpenCLrdquoJournal of Systems Architecture vol 72 no 1 pp 51ndash60 2017

[14] P Goyal and B Singh ldquoSubspace pursuit for sparse signalreconstruction in wireless sensor networksrdquo Procedia Com-puter Science vol 125 pp 228ndash233 2018

[15] A M Huang G Gui Q Wan and A Mehbodniya ldquoA blockorthogonal matching pursuit algorithm based on sensingdictionaryrdquordquo International Journal of the Physical Sciencesvol 6 no 5 pp 992ndash999 2011

[16] H Li and J Wen ldquoA new analysis for support recovery withblock orthogonal matching pursuitrdquo IEEE Signal ProcessingLetters vol 26 no 2 pp 247ndash251 2018

[17] J Wen Z Zhou Z Liu M-J Lai and X Tang ldquoSharpsufficient conditions for stable recovery of block sparse signalsby block orthogonal matching pursuitrdquo Applied and Com-putational Harmonic Analysis vol 47 no 3 pp 948ndash9742019

[18] R Shoitan Z Nossair I I Ibrahim and A Tobal ldquoImprovingthe reconstruction efficiency of sparsity adaptive matchingpursuit based on the Wilkinson matrixrdquo Frontiers of

Information Technology amp Electronic Engineering vol 19no 4 pp 503ndash512 2018

[19] B Li Y Sun G Li et al ldquoGesture recognition based onmodified adaptive orthogonal matching pursuit algorithmrdquoCluster Computing vol 3 pp 1ndash10 2017

Journal of Electrical and Computer Engineering 11

Page 10: ImprovedGeneralizedSparsityAdaptiveMatchingPursuit ...downloads.hindawi.com/journals/jece/2020/2782149.pdfMinistry of Education (Northeast Electric Power University), Jilin, China

for larger initial step size and reduce error erefore theproposed method has better performance in terms of errorand stability than SAMP and MAOMP algorithm for thetwo-dimensional image based on analysis of Figures 7 to 9

And can be seen from Figures 7 to 9 when the initial stepsize is 1 SAMP and MAOMP have better reconstructedresult erefore we select s 1 in the following experi-ments Figure 10 shows the PSNR value with differentsampling rates In this experiment we choose sampling rateof 03 04 05 06 07 and 08 and select the initial step sizeas 1 e test image is Lena with 256 times 256 size e otherexperimental conditions are the same as the previous ex-periment As can be seen from Figure 8 as the sampling rateincreases the PSNR value becomes larger Compared withthe other two algorithms the PSNR value of our proposedmethod is the highest is shows that the proposed methodhas smaller error in reconstructing images

Figure 11 shows the recovery times of the three algo-rithms with different sampling rates When the samplingrate is 03 the three algorithms have almost the same re-covery time With the sampling rate increasing recoverytime becomes longer Our proposed method still consumes

less time than the other methods for different sampling ratesWhen the sampling rate is 08 the proposed method runs1028 seconds shorter than SAMP and 446 seconds shorterthan MAOMP is proves that the proposed method hasbetter performance than the other two algorithms in termsof convergence speed Based on the above analysis theproposed method has smaller error better stability andfaster convergence speed than the SAMP and MAOMPalgorithm

6 Conclusions

In this paper we proposed a generalized sparsity adaptivematching pursuit algorithm with variable step size isalgorithm uses the idea of generalized atom selection tochoose more atoms at the beginning of the iteration and thesignal estimation solution is more accurate by backtrackingRegarding variable step size an idea of variable step size of anonlinear function is proposed is can make the step sizelarge at the beginning which is used to speed up the con-vergence of the algorithmen the step length is reduced to1 so the sparsity estimation value is more accurate therebyimproving the reconstruction accuracy and reducing therunning time of the algorithm

Simulation results demonstrate that our proposedmethod has a better reconstruction performance comparedwith SAMP algorithm and MAOMP algorithm For one-dimensional Gaussian signal among the different mea-surement values and different sparsity levels the recon-struction probability of our proposedmethod is the best andthe signal can be reconstructed at low observation or highsparsity For two-dimensional image our proposed methodhas better reconstruction quality which is measured byPSNR Compared to MAOMP and SAMP our proposedmethod has faster convergence speed Moreover as theinitial step size increases our proposed method can stillreconstruct images with high quality In a word our pro-posed method is better than similar algorithms in terms ofconvergence speed and reconstruction accuracy

Data Availability

All the data used for the simulations can be obtained fromthe corresponding author upon request

Conflicts of Interest

e authors declare that they have no conflicts of interest

Acknowledgments

is work was supported by the National Natural ScienceFoundation of China (61271115) and the Science andTechnology Innovation and Entrepreneurship Talent Cul-tivation Program of Jilin (20190104124)

References

[1] D L Donoho ldquoCompressed sensingrdquo IEEE Transactions onInformation 6eory vol 52 no 4 pp 1289ndash1306 2006

03 04 05 06 07 0820

22

24

26

28

30

32

34

Sampling rate

PSN

R (d

B)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 10 e PSNR value with different sampling rates

03 04 05 06 07 085

10

15

20

25

30

35

40

Sampling rate

Reco

very

tim

e (s)

Proposed s = 1MAOMP s = 1SAMP s = 1

Figure 11 e recovery time with different sampling rates

10 Journal of Electrical and Computer Engineering

[2] G Yang S Yu H Dong et al ldquoDAGAN deep de-aliasinggenerative adversarial networks for fast compressed sensingMRI reconstructionrdquo IEEE Transactions on Medical Imagingvol 37 no 6 pp 1310ndash1321 2018

[3] L Yuanqi S Renjie W Yongli and B Long ldquoDesign of Real-time communication system for portable smart glasses basedon Raspberry PIrdquo Journal of Northeast Electric Power Uni-versity vol 39 no 4 pp 81ndash85 2019

[4] B Li F Liu C Zhou and Y Lv ldquoPhase error correction forapproximated observation-based compressed sensing radarimagingrdquo Sensors vol 17 no 31 pp 1ndash21 2017

[5] L Changyin S Renjie H Yanquan and Z Yandong ldquoDesignand implementation of high voltage transmission equipmentauxiliary management systemrdquo Journal of Northeast ElectricPower University vol 39 no 4 pp 86ndash90 2019

[6] S Osher and Y Li ldquoldquoCoordinate descent optimization forl1minimization with application to compressed sensingrdquo agreedy algorithmrdquo Inverse Problems and Imaging (IPI) vol 3no 3 pp 487ndash503 2017

[7] H Rauhut R Schneiderv and Stojanac ldquoLow rank tensorrecovery via iterative hard thresholdingrdquo Linear Algebra andIts Applications vol 523 pp 220ndash262 2017

[8] Z Dong and W Zhu ldquoHomotopy methods based on $l_0$-norm for compressed sensingrdquo IEEE Transactions on NeuralNetworks and Learning Systems vol 29 no 4 pp 1132ndash11462018

[9] J Wen J Weng C Tong C Ren and Z Zhou ldquoSparse signalrecovery with minimization of 1-norm minus 2-normrdquo IEEETransactions on Vehicular Technology vol 68 no 7pp 6847ndash6854 2019

[10] J Wen Z Zhou J Wang X Tang and Q Mo ldquoA sharpcondition for exact support recovery with orthogonalmatching pursuitrdquo IEEE Transactions on Signal Processingvol 65 no 6 pp 1370ndash1382 2017

[11] Y Liao X Zhou X Shen and G Hong ldquoA channel esti-mation method based on improved regularized orthogonalmatching pursuit for MIMO-OFDM systemsrdquo Acta Elec-tronica Sinica vol 45 no 12 pp 2848ndash2854 2017

[12] D Park ldquoImproved sufficient condition for performanceguarantee in generalized orthogonal matching pursuitrdquo IEEESignal Processing Letters vol 24 no 9 pp 1308ndash1312 2017

[13] F Huang J Tao Y Xiang P Liu L Dong and L WangldquoParallel compressive sampling matching pursuit algorithmfor compressed sensing signal reconstruction with OpenCLrdquoJournal of Systems Architecture vol 72 no 1 pp 51ndash60 2017

[14] P Goyal and B Singh ldquoSubspace pursuit for sparse signalreconstruction in wireless sensor networksrdquo Procedia Com-puter Science vol 125 pp 228ndash233 2018

[15] A M Huang G Gui Q Wan and A Mehbodniya ldquoA blockorthogonal matching pursuit algorithm based on sensingdictionaryrdquordquo International Journal of the Physical Sciencesvol 6 no 5 pp 992ndash999 2011

[16] H Li and J Wen ldquoA new analysis for support recovery withblock orthogonal matching pursuitrdquo IEEE Signal ProcessingLetters vol 26 no 2 pp 247ndash251 2018

[17] J Wen Z Zhou Z Liu M-J Lai and X Tang ldquoSharpsufficient conditions for stable recovery of block sparse signalsby block orthogonal matching pursuitrdquo Applied and Com-putational Harmonic Analysis vol 47 no 3 pp 948ndash9742019

[18] R Shoitan Z Nossair I I Ibrahim and A Tobal ldquoImprovingthe reconstruction efficiency of sparsity adaptive matchingpursuit based on the Wilkinson matrixrdquo Frontiers of

Information Technology amp Electronic Engineering vol 19no 4 pp 503ndash512 2018

[19] B Li Y Sun G Li et al ldquoGesture recognition based onmodified adaptive orthogonal matching pursuit algorithmrdquoCluster Computing vol 3 pp 1ndash10 2017

Journal of Electrical and Computer Engineering 11

Page 11: ImprovedGeneralizedSparsityAdaptiveMatchingPursuit ...downloads.hindawi.com/journals/jece/2020/2782149.pdfMinistry of Education (Northeast Electric Power University), Jilin, China

[2] G Yang S Yu H Dong et al ldquoDAGAN deep de-aliasinggenerative adversarial networks for fast compressed sensingMRI reconstructionrdquo IEEE Transactions on Medical Imagingvol 37 no 6 pp 1310ndash1321 2018

[3] L Yuanqi S Renjie W Yongli and B Long ldquoDesign of Real-time communication system for portable smart glasses basedon Raspberry PIrdquo Journal of Northeast Electric Power Uni-versity vol 39 no 4 pp 81ndash85 2019

[4] B Li F Liu C Zhou and Y Lv ldquoPhase error correction forapproximated observation-based compressed sensing radarimagingrdquo Sensors vol 17 no 31 pp 1ndash21 2017

[5] L Changyin S Renjie H Yanquan and Z Yandong ldquoDesignand implementation of high voltage transmission equipmentauxiliary management systemrdquo Journal of Northeast ElectricPower University vol 39 no 4 pp 86ndash90 2019

[6] S Osher and Y Li ldquoldquoCoordinate descent optimization forl1minimization with application to compressed sensingrdquo agreedy algorithmrdquo Inverse Problems and Imaging (IPI) vol 3no 3 pp 487ndash503 2017

[7] H Rauhut R Schneiderv and Stojanac ldquoLow rank tensorrecovery via iterative hard thresholdingrdquo Linear Algebra andIts Applications vol 523 pp 220ndash262 2017

[8] Z Dong and W Zhu ldquoHomotopy methods based on $l_0$-norm for compressed sensingrdquo IEEE Transactions on NeuralNetworks and Learning Systems vol 29 no 4 pp 1132ndash11462018

[9] J Wen J Weng C Tong C Ren and Z Zhou ldquoSparse signalrecovery with minimization of 1-norm minus 2-normrdquo IEEETransactions on Vehicular Technology vol 68 no 7pp 6847ndash6854 2019

[10] J Wen Z Zhou J Wang X Tang and Q Mo ldquoA sharpcondition for exact support recovery with orthogonalmatching pursuitrdquo IEEE Transactions on Signal Processingvol 65 no 6 pp 1370ndash1382 2017

[11] Y Liao X Zhou X Shen and G Hong ldquoA channel esti-mation method based on improved regularized orthogonalmatching pursuit for MIMO-OFDM systemsrdquo Acta Elec-tronica Sinica vol 45 no 12 pp 2848ndash2854 2017

[12] D Park ldquoImproved sufficient condition for performanceguarantee in generalized orthogonal matching pursuitrdquo IEEESignal Processing Letters vol 24 no 9 pp 1308ndash1312 2017

[13] F Huang J Tao Y Xiang P Liu L Dong and L WangldquoParallel compressive sampling matching pursuit algorithmfor compressed sensing signal reconstruction with OpenCLrdquoJournal of Systems Architecture vol 72 no 1 pp 51ndash60 2017

[14] P Goyal and B Singh ldquoSubspace pursuit for sparse signalreconstruction in wireless sensor networksrdquo Procedia Com-puter Science vol 125 pp 228ndash233 2018

[15] A M Huang G Gui Q Wan and A Mehbodniya ldquoA blockorthogonal matching pursuit algorithm based on sensingdictionaryrdquordquo International Journal of the Physical Sciencesvol 6 no 5 pp 992ndash999 2011

[16] H Li and J Wen ldquoA new analysis for support recovery withblock orthogonal matching pursuitrdquo IEEE Signal ProcessingLetters vol 26 no 2 pp 247ndash251 2018

[17] J Wen Z Zhou Z Liu M-J Lai and X Tang ldquoSharpsufficient conditions for stable recovery of block sparse signalsby block orthogonal matching pursuitrdquo Applied and Com-putational Harmonic Analysis vol 47 no 3 pp 948ndash9742019

[18] R Shoitan Z Nossair I I Ibrahim and A Tobal ldquoImprovingthe reconstruction efficiency of sparsity adaptive matchingpursuit based on the Wilkinson matrixrdquo Frontiers of

Information Technology amp Electronic Engineering vol 19no 4 pp 503ndash512 2018

[19] B Li Y Sun G Li et al ldquoGesture recognition based onmodified adaptive orthogonal matching pursuit algorithmrdquoCluster Computing vol 3 pp 1ndash10 2017

Journal of Electrical and Computer Engineering 11