
Progress In Electromagnetics Research B, Vol. 88, 73–95, 2020

    A Beamformer Design Based on Fibonacci Branch Search

    Tianbao Dong*, Haichuan Zhang, and Fangling Zeng

Abstract—An approach towards beamforming for a uniform linear array (ULA) based on a novel optimization algorithm, designated as Fibonacci branch search (FBS), is presented in this paper. The proposed FBS search strategy was inspired by the Fibonacci sequence principle and uses a fundamental branch structure and interactive searching rules to obtain the global optimal solution in the search space. The structure of FBS is established by two types of multidimensional points on the basis of the shortening fraction formed by the Fibonacci sequence, and in this mode, interactive global searching and local optimization rules are implemented alternately to reach the global optimum while avoiding stagnation in local optima. At the same time, a rigorous mathematical proof of the accessibility and convergence of FBS towards the global optimum is presented to further verify the validity of our theory and support our claim. Taking advantage of the global search ability and high convergence rate of this technique, a robust adaptive beamformer is also constructed here by FBS as a real-time implementation to improve the beamforming performance by preventing the loss of the optimal trajectory. The performance of FBS is compared with five typical heuristic optimization algorithms, and the reported simulation results demonstrate the superiority of the proposed FBS algorithm in locating the optimal solution with higher precision and reveal a further improvement in adaptive beamforming performance.

    1. INTRODUCTION

Since the conception of adaptive arrays came into usage in aerospace and military applications via the employment of electronically steered beamformers, the utilized technique of adaptive beamforming (ABF) has drawn significant attention in various fields [1–4]. ABF possesses the potential ability to optimize the radiation pattern in real time, which obtains a larger output signal-to-interference-plus-noise ratio (SINR) by steering the main lobe of radiation toward a desired signal while placing respective nulls toward several interference signals [5].

Classical adaptive beamforming methods used to extract the excitation weights are based on two main criteria: maximum signal-to-noise ratio (MSINR) and minimum mean square error (MMSE) [6]. Minimum variance distortionless response (MVDR) is one of the typical adaptive beamforming approaches on the basis of the MSINR criterion. The design of this beamformer involves minimizing the output power subject to a unit gain constraint on the desired signal [7–9]. Although the MVDR beamformer is capable of suppressing the interference and improving system reliability, the weights computed by MVDR are not able to form deep nulls towards the interference source in various interference scenarios on account of the characteristics of this technique. Closed-form (CF) designs or classic algorithms exist to easily find the optimal solution for an actually convex problem, such as the algorithm known as the sample matrix inversion (SMI) technique; however, the weight calculation of this solving process takes into account the interference correlation matrix, which makes this beamforming process very difficult, time consuming, and sometimes, in ABF applications, unmanageable. Besides, the closed-form design for the convex ABF model problem always needs to compute the matrix inverse

Received 31 March 2020, Accepted 1 July 2020, Scheduled 3 September 2020. * Corresponding author: Tianbao Dong (dtb [email protected]). The authors are with the School of Electronic Countermeasures, National University of Defense Technology, China.


process, which introduces high computational complexity, especially for a large number of array elements, and is not suitable for certain real-time beamforming situations [10–12]. In addition, the classic closed-form design algorithms have hardware component limitations: it is not possible to change the hardware of the filter for signal processing or to change the design of the antenna based on the increased number of components, but this would be necessary. Another criterion for computing the array weights is to minimize the MMSE. One of the most widely used MMSE-based adaptive algorithms is the least mean square (LMS) approach, and this method needs a training sequence of the signal of interest (SOI) to adaptively adjust the complex weights and minimize the difference between the array output and the desired signal for forming the optimum array pattern. Consequently, the inherent shortcomings of the above mentioned derivative-based and gradient-based ABF methods have compelled many researchers to explore meta-heuristic (MH) methods for overcoming these difficulties.

The main advantage of evolutionary heuristic algorithms over the classical derived approaches in antenna systems is that they have no requirements for extra iterative derivations or computationally extensive routines in the objective functions of the ABF model, and some of the evolutionary algorithms have been dedicated to beamformer implementations for their ability to search for the global optimum. Approaches such as genetic algorithms (GA), particle swarm optimization (PSO), and other modified techniques are a set of optimization algorithms that have been suggested in the past decades to solve a variety of ABF problems [13, 14]. Many studies have shown that the excitation weights extracted by these optimization techniques defined by specific criteria can be used to place a maximum beam and a null in an array pattern at specified locations, and they also require relatively lower mathematical complexity than derivative-based or iterative-based ABF methods. However, there still exist weaknesses and limitations in the application of ABF with these techniques. Most iterative evolutionary methods are highly dependent on starting points in the case of a large number of solution variables [15]. Yet the weights of ABF are regularly associated with a large number of array elements, and the excitation of array elements is complex, i.e., having both amplitude and phase; hence the size of the ABF solution space cannot be very small, and in this case, these evolutionary algorithms are not really applicable to beamforming. Besides, the classic optimization methods are prone to get trapped in local minima and not reach the global optimum when solving complex multimodal optimization problems of array weight extraction, resulting in suboptimum beamforming performance [16, 17]. In addition, most MH algorithms are population-based optimization techniques which require long execution times to converge, specifically when solving large-scale complex ABF engineering problems, and the complexity required for implementing the algorithms would also result in huge cost and consumption of hardware resources.

In consideration of the above-mentioned studies, we propose a novel interactive-based random iterative search strategy, called FBS in this paper, to deal with complicated optimization problems of ABF. The motivation for considering FBS for such a robust adaptive beamformer is that we expect to be able to form deep nulls towards the interference source in various interference scenarios without the requirement of a priori information, such as the training sequence of the signal of interest, and to avoid complex mathematical processing, such as the inverse calculation of the interference correlation matrix required by the derivative-based approaches. With respect to traditional meta-heuristic algorithms applied to beamformers, it is anticipated that the FBS-based beamformer can obtain near-optimal nulling-level performance, featuring a high global optimization resolution capability, prevent the loss of the optimal search trajectory, and possess a low computational load that improves the convergence performance with the design flexibility of the algorithm in the framework of ABF.

The concept of the proposed FBS is defined from two aspects. The first one is the generation principle of the Fibonacci branch architecture. The basis for building a branch structure in FBS is to establish search points and the shortening fraction based on the Fibonacci sequence to generate a set of optimization elements composed of two types of search points. The optimization endpoints search for the optimal solution according to the branch's growth path pattern. The second view of the FBS concept is the construction of the interactive iteration rules applied to the calculation of the optimization elements. The iterative rules are composed of global searching and local optimization, which are the two phases necessary to update the optimization elements. Global tentative points and local searching points are formulated in the two interaction processes, and the points with the best fitness converge towards the global optimum in the searching space. At the same time, computer memory can be fully used


to record the optimizing process during the interaction optimization course. Global randomness is one of the important characteristics of FBS, and this mechanism is implemented on the points so that they do not easily fall into a local optimum when they are unable to find a better solution. The novelty of this paper lies in the fact that we design the Fibonacci branch optimization structure and propose the novel global searching and local optimizing interactive iterative technique. In addition, the Fibonacci branch search algorithm proposed here has been applied to antenna array beamforming in several cases and compared with other evolutionary optimization-based techniques on several test functions and the robust ABF.

The remainder of this paper is organized as follows. The adaptive beamforming model incorporating a metaheuristic algorithm is described in Section 2. Section 3 presents the proposed FBS optimization algorithm and the accessibility and convergence proof of FBS. The validation of the proposed FBS via benchmark functions and the simulation results are reported in Section 4. Section 5 gives the conclusion.

2. ADAPTIVE BEAMFORMING MODEL INCORPORATION WITH METAHEURISTIC ALGORITHM

Consider a uniform linear antenna array of M omnidirectional array elements employed in an adaptive beamforming receiver; one desired signal and Q uncorrelated interferences which impinge on the array at the kth snapshot can be expressed by [18]

x(k) = s(k) a(\theta_d) + \sum_{i=1}^{Q} i_i(k) a(\theta_i) + n(k)    (1)

where s(k) and i_i(k) are the desired global navigation satellite systems (GNSS) signal and the ith interference, and n(k) denotes the complex vector of sensor noise. a(θd) and a(θi) represent the M × 1 steering vectors of s(k) and i_i(k), as given by

a(\theta_d) = \left[ 1, \; e^{-j 2\pi \frac{d_c}{\lambda} \cos\theta_d}, \; \cdots, \; e^{-j 2\pi \frac{d_c}{\lambda} (M-1) \cos\theta_d} \right]^T

a(\theta_i) = \left[ 1, \; e^{-j 2\pi \frac{d_c}{\lambda} \cos\theta_i}, \; \cdots, \; e^{-j 2\pi \frac{d_c}{\lambda} (M-1) \cos\theta_i} \right]^T    (2)

where θd and θi denote the directions of the desired signal and the ith interference; d_c = λ/2 is the inter-element spacing; λ is the wavelength of the GNSS carrier; T is the transpose operation.

    The array beamformer output can be written as

y(k) = w^H x(k)    (3)

where w is the complex beamforming weight vector of the antenna array, and H stands for the Hermitian transpose.
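For concreteness, the following Python sketch (our illustration, not code from the paper) builds the steering vectors of Eq. (2), generates snapshots according to Eq. (1), and forms the beamformer output of Eq. (3). The angle convention (cos θ, broadside at 90°), the constant-modulus desired-signal model, and all numerical values are assumptions made only for this example.

```python
import numpy as np

def steering_vector(theta_deg, M, dc_over_lambda=0.5):
    """M x 1 ULA steering vector a(theta) of Eq. (2), with d_c = lambda/2 by default."""
    m = np.arange(M)
    return np.exp(-1j * 2 * np.pi * dc_over_lambda * m * np.cos(np.deg2rad(theta_deg)))

def snapshots(M, K, theta_d, thetas_i, snr_db, inr_db, rng):
    """Simulate K snapshots x(k) of Eq. (1): desired signal + interferences + unit-power noise."""
    s = 10 ** (snr_db / 20) * np.exp(1j * 2 * np.pi * rng.random(K))   # constant-modulus desired signal
    x = np.outer(steering_vector(theta_d, M), s)
    for th in thetas_i:                                                # uncorrelated Gaussian interferences
        i_k = 10 ** (inr_db / 20) * (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
        x += np.outer(steering_vector(th, M), i_k)
    n = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    return x + n

rng = np.random.default_rng(0)
x = snapshots(M=6, K=1000, theta_d=90.0, thetas_i=[45.0], snr_db=-10, inr_db=5, rng=rng)
w = np.ones(6, dtype=complex) / 6      # placeholder weights; FBS later optimizes these
y = w.conj() @ x                       # beamformer output of Eq. (3): y(k) = w^H x(k)
```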

The schematic structure of a linear adaptive antenna array processor on the front end of the receiver is shown in Figure 1. The solid incident arrow represents the GNSS signal, and the dotted arrow represents the interference.

In the above model, the adaptive beamformer studied in this paper aims to calculate a complex weight vector that achieves a deep nulling level towards the interference to maximize the output SINR, whereas the nulling level is usually low for multiple interference sources. The non-globally-optimal weight vector is one of the main disadvantages of MVDR beamforming technology. Therefore, in this section, the proposed FBS algorithm incorporated into the beamforming model will extract the optimal array excitation weights to achieve improved performance of the technique.

Altering the radiation pattern of an antenna array by adjusting the weights is an inherently multi-objective problem, since multiple sets of candidate weight vectors w with amplitude and phase must be found to make deep nulls toward the interference directions and steer the radiation beam toward the desired user to achieve the maximum SINR. The optimization method for the designed ABF aims at finding the near-global minimum of a suitable mathematical function called the fitness function; therefore, the best weight vector is determined according to the fitness value obtained from the objective function defined based on


    Figure 1. The schematic of the linear antenna array processor.

the SINR. Therefore, a typical fitness function constructed from the perspective of the SINR for calculating the complex excitation weights using FBS can be appropriately expressed in the following form:

\text{Fitness Function}(w) = \frac{P_d}{\sum_{i=1}^{Q} P_i + P_N}    (4)

    where

P_d = \frac{1}{2} E\left[ \left| w^T x_d \right|^2 \right]    (5)

P_i = \frac{1}{2} E\left[ \left| w^T x_i \right|^2 \right]    (6)

are, respectively, the power of the desired signal and the power corresponding to the ith interference, and P_N is the noise power. x_d and x_i denote the desired signal and interference components of the received signal in Equation (1), and E is the expectation operator.

    Then, the design objective function can be properly stated in the following form

\text{Fitness Function}(w) = \frac{w^T R_d w}{w^T \sum_{i=1}^{Q} R_i w + \sigma^2_{noise} w^T w}    (7)

where R_i and R_d are the covariance matrices of the ith interference and the desired signal. The noise variance is calculated from the value of the signal-to-noise ratio (SNR) in dB as follows:

\sigma^2_{noise} = 10^{-SNR/10}    (8)

By maximizing Eq. (7), the optimal excitation weight corresponding to the minimum level of the interference sources but with the desired user gain of the beamformer can be achieved by the proposed FBS algorithm, and the optimum performance of the weight corresponding to the maximum SINR can be evaluated based on the best fitness value. The next section of the paper provides a brief description of the implementation steps for extracting the weights using the Fibonacci branch search method.
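As a minimal sketch of how the fitness of Eq. (7) can be evaluated for a candidate weight vector, the Python function below uses Hermitian quadratic forms (w^H R w) so that the powers are real-valued for complex weights, and assumes that the covariance matrices R_d and R_i are available (e.g., rank-one matrices built from the steering vectors). The names and the rank-one construction are our assumptions, not the paper's code.

```python
import numpy as np

def sinr_fitness(w, R_d, R_i_list, snr_db):
    """SINR-style fitness of Eqs. (7)-(8): desired power over interference-plus-noise power."""
    sigma2_noise = 10.0 ** (-snr_db / 10.0)                  # Eq. (8)
    p_d = np.real(w.conj() @ R_d @ w)                         # desired-signal term
    p_i = sum(np.real(w.conj() @ R @ w) for R in R_i_list)    # interference terms
    p_n = sigma2_noise * np.real(w.conj() @ w)                # noise term
    return p_d / (p_i + p_n)

# Example with rank-one covariances built from steering vectors a_d, a_i (assumed available):
# R_d = np.outer(a_d, a_d.conj()); R_i_list = [np.outer(a_i, a_i.conj())]
# fitness = sinr_fitness(w, R_d, R_i_list, snr_db=-10)
```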


    3. PROPOSED FBS ALGORITHM

Since the inception of the Fibonacci optimization strategy, the effectiveness of the algorithm for solving a set of nonlinear benchmark functions has been proven in one-dimensional space; however, this algorithm is seldom applied to search optimization problems in multidimensional space owing to its own properties and structure, and very few of its variants have been implemented for beamforming applications in the open literature. In this section, the conventional Fibonacci sequence method is briefly introduced, and the inspired FBS is explained in principle to provide an in-depth insight into this technique.

    3.1. The Standard Principle of Fibonacci Sequence Method

The famous Fibonacci sequence was first applied to optimization by Etminaniesfahani et al. [19], and the recurrence formula of the general sequence term is given by [20]

F_1 = F_2 = 1, \quad F_j = F_{j-1} + F_{j-2}, \; j \ge 3    (9)

where F_j represents the jth general term of the Fibonacci sequence.

The Fibonacci sequence optimization method makes the tentative optimization points in the defined interval converge to the optimal solution by compressing the search interval proportionally based on the Fibonacci sequence terms, and it has been perceived as one of the most effective strategies to solve one-dimensional unimodal problems [21]. Let us investigate below how the optimization method using the Fibonacci sequence works for a unimodal continuous function on an interval for a minimization problem. Suppose a unimodal function f(x) defined on the interval [A, B]. Initially, the technique starts with a choice of two feasible points x_1 and x̃_1, x_1 < x̃_1, in the given range for the first iteration. Then, it is necessary to reduce the initial range to a sufficiently small region including the minimum of f(x) through an iteration process, since the interval can be narrowed down provided that the function values are known at two different points in the range. The implementation of the classical Fibonacci serial optimization algorithm can be found in [22], and we will not go into further details here.

Let x_p and x̃_p denote the different randomly selected new points over the range [A_p, B_p] chosen for shortening the length of the interval containing the optimal point at the pth iteration, x_p < x̃_p. Hence, for each p = 1, 2, ..., N, where N represents the maximum number of iterations, the Fibonacci algorithm can be executed through successive interval reductions; a minimal code sketch is given below.
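The one-dimensional sketch below illustrates this interval-reduction procedure; the index bookkeeping follows one common convention and is not claimed to match [22] exactly.

```python
def fibonacci_search(f, a, b, n=20):
    """Classical Fibonacci interval reduction for a unimodal f on [a, b].

    Two interior points are kept; after each comparison the bracket shrinks by a
    Fibonacci ratio, so n evaluations reduce the interval roughly by 1/F_n.
    """
    F = [1, 1]
    while len(F) < n + 1:
        F.append(F[-1] + F[-2])
    x1 = a + F[n - 2] / F[n] * (b - a)
    x2 = a + F[n - 1] / F[n] * (b - a)
    f1, f2 = f(x1), f(x2)
    for p in range(n - 2):
        if f1 > f2:                       # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + F[n - 2 - p] / F[n - 1 - p] * (b - a)
            f2 = f(x2)
        else:                             # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + F[n - 3 - p] / F[n - 1 - p] * (b - a)
            f1 = f(x1)
    return 0.5 * (a + b)

# Example: the minimum of (x - 1.5)^2 on [0, 4] is recovered to within the final bracket width.
print(fibonacci_search(lambda x: (x - 1.5) ** 2, 0.0, 4.0))
```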

    3.2. Fibonacci Branch Structure and FBS Optimization Algorithm

The standard Fibonacci strategy cannot efficiently solve multi-variate problems or reliably perform the optimum fitness evaluation of multimodal functions [23]. In contrast to the classic heuristic optimization algorithms, the FBS algorithm proposed in this paper overcomes these defects while avoiding a loss of the optimal search trajectories by using search elements with a dendritic branch structure and interactive searching optimization rules.

The basic structure of FBS expanded to the multi-dimensional space D is illustrated in Figure 2, where X_A, X_B, and X_C are vectors in the D-dimensional Euclidean space. X_A and X_B represent the endpoints of the search element satisfying the optimization rule, and X_C denotes the segmentation point which can be determined from the searching rule. A certain proportion of the vectors can be constructed as follows

\frac{\|X_C - X_A\|}{\|X_B - X_A\|} = \frac{\|X_B - X_C\|}{\|X_C - X_A\|} = \frac{F_p}{F_{p+1}}    (10)

    Figure 2. Basic structure of the proposed FBS.


where F_p is the pth Fibonacci number.

Considering that the multimodal function with multiple variables f(X) is to be minimized in the search space, the fitness function values calculated at the endpoints of the structure should satisfy

f(X_A) < f(X_B)    (11)

    Then, the coordinate computing formula of segmentation point XC can be written as

X_C = X_A + \frac{F_p}{F_{p+1}} \left( X_B - X_A \right)    (12)

The FBS optimization algorithm introduced in this section is based on a framework that is built around the concept of endpoints and segmentation points in the basic structure. Although a similar relevant algorithm theory has been studied in [24], it did not elaborate on the steps and principles of the algorithm. In this section, the principle of FBS is well explained, and the details of the implementation are fully described, including the regularized implementation procedure, and it is successfully applied to the adaptive beamforming field.

Combining with the basic structure, the process of searching for the global optimum solution, which can also be regarded as establishing a search element in FBS, is divided into two stages, the local optimization process and the global searching process, which correspond to the two interactive rules. Let G denote the point set of the objective function searched in the current processing phase, and set |G|_num = F_p, p = 1, 2, ..., N, where |·|_num represents the total number of points in a set, and N is the depth of the Fibonacci branch. The fitness values of the endpoints X_A and X_B are initialized using the two corresponding interactive optimization rules; then the segmentation points X_C can be obtained from Equation (12). By comparing the fitness values of all the points in the structure, we obtain the result that the best fitness value corresponds to the closest optimal solution. For the next optimization phase, the optimal point with the best fitness value is placed in the forefront position of the set, and the points corresponding to suboptimal fitness are arranged below the optimal point in order from best to worst. Through the operations above, the point set G can be updated in every optimizing phase to attain the aim of growing the Fibonacci branch and performing global optimization in the search space simultaneously.

The two interactive searching rules of FBS in the optimization stage are summarized as follows.

Rule One: Let us consider the endpoints X_A and X_B in the structure, which are defined by

\{X_A\} = G_p = \{ X_q \,|\, q = [1, F_p] \}    (13)

\{X_B\} = \left\{ X \,\middle|\, X \in \prod_{f=1}^{D} \left[ X^f_{lb}, X^f_{ub} \right]_U \right\}    (14)

where G_p is the search point set in the pth iteration; X_q are the points in set G_p, and these search points were randomly selected in the whole search space with a random search mechanism. q is the sequence number lying in the interval between 1 and the pth Fibonacci number, and it represents the Fibonacci layer used for searching for the global optima. X_A takes all the points from G_p of the pth iteration. The other endpoints X_B take random points in the search space, and the number of X_B is equal to F_p. D is the dimension of the points, and X^f_{lb} and X^f_{ub} are the lower and upper bounds of the search points. Given that ∀X ∈ {X_B}, the component x of vector X is a random variable that satisfies a uniform distribution over the interval [X_lb, X_ub]_U, where the character U stands for the uniform distribution of the variable, and the probability distribution of the component can be written as

P(x) = U(X_{lb}, X_{ub}) = \frac{1}{X_{ub} - X_{lb}}    (15)

Using the given endpoints X_A and X_B, we can determine the segmentation points X_S1 in the first global search stage by Equation (12).

Rule Two: Suppose that X_best is the optimal solution corresponding to the best fitness value of the search space in the current iteration, containing the endpoints and the segmentation points generated from Rule One, as given by

X_{best} = \text{BEST}(G_p)    (16)


Figure 3. The process of building the Fibonacci branch for the global optimization. (a) The first global searching stage. (b) The second local optimization stage.

where BEST(·) means the best solution of the set at the pth iteration.

Then, we set the endpoint X_A = X_best and have that

f(X_A) = \min \{ f(X_q), \; q = [1, F_p] \}    (17)

X_B = \{ X_q \,|\, X_q \in G_p \wedge X_q \neq X_A \}    (18)

Using the calculation formula of the segmentation point, the segmentation points X_S2 of the second local optimization stage can be determined according to the endpoints defined in Eqs. (17) and (18), and the segmentation search points are computed by Equation (12).

From the two interactive searching rules mentioned above, new points including the endpoints X_A, X_B and the segmentation points X_S1, X_S2 are generated in the two optimization stages, and the total number of points is 3F_p. Evaluating the cost function at the new points determines their fitness, and all these points are sorted from the best to the worst based on their fitness values. The population size of the search points is chosen as the Fibonacci series; thus, the top F_{p+1} best points are saved, and the remaining 3F_p − F_{p+1} points are dropped. After this procedure, the sets of the search space in the current pth iteration are renewed from the saved points, i.e., the saved points form a new set G_{p+1} in the search space for the next iteration.
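The following Python sketch condenses one such optimization phase: Rule One pairs the current points with uniformly random endpoints and forms segmentation points via Eq. (12); Rule Two pairs the current best point with the remaining points. The function names are ours, boundary handling is omitted, and pooling the existing points together with the newly generated ones (which keeps the best solution found so far) is an assumption of this sketch rather than a statement from the paper.

```python
import numpy as np

def fbs_phase(f, G, lb, ub, Fp, Fp_next, rng):
    """One FBS phase on a point set G of shape (Fp, D), sorted best-first; f is minimized."""
    ratio = Fp / Fp_next                                  # shortening fraction of Eq. (10)

    # Rule One: endpoints X_A from G, endpoints X_B uniform over the search space (Eq. (14)),
    # segmentation points X_S1 from Eq. (12).
    X_A = G
    X_B = rng.uniform(lb, ub, size=G.shape)
    X_S1 = X_A + ratio * (X_B - X_A)

    # Rule Two: the best point so far is one endpoint, the remaining points of G the other,
    # giving segmentation points X_S2 (again Eq. (12)).
    X_best = G[0]
    X_S2 = X_best + ratio * (G[1:] - X_best)

    # Pool the candidates, rank by fitness, keep the top F_{p+1} points for the next layer.
    pool = np.vstack([X_A, X_B, X_S1, X_S2])
    order = np.argsort([f(x) for x in pool])
    return pool[order[:Fp_next]]
```

Successive calls with (F_p, F_{p+1}) taken from the Fibonacci sequence grow the branch layer by layer, as described above.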

The two stages of building the Fibonacci branch for the global optimization in space are shown in Figure 3.

As can be seen from Figure 3, the depth of the Fibonacci branch layer illustrated in the figure is initialized as expected, and the number of points in every branch layer remains in the sequence of Fibonacci numbers. The white dashed circles in the figure represent the search point set of the previous iteration, and the black solid circles denote the endpoints X_A in the current iteration. The global random endpoints X_B are represented by grey solid circles. Figure 3(a) shows the first global searching stage of the global optimization process, and the segmentation points X_S1, which are represented by solid white-line circles, are constructed on the basis of the global random points and X_A. As shown in Figure 3(b), the second local optimization stage combines the remaining endpoints of the best adaptation points X_A and X_B in the search space of the current iteration, and a new segmentation point X_S2 can be obtained through the iteration rules. The fitness values of X_A, X_B, X_S1, and X_S2 are evaluated, and the best F_{p+1} solutions found with optimum objective function evaluations are saved.

    Figure 4 shows the flowchart of the general procedures for the specific implementation of FBS.

    3.3. Implementation Flowchart of Fibonacci Branch Search to Adaptive Beamforming

In this subsection, in light of the results described in detail previously, an optimization scheme for the ABF problem combined with the novel FBS is presented to enhance the maximum power for the target signal and generate deep nulls for interferences. The basic idea of the design of such an adaptive beamformer is to utilize the global searching and local convergence capability of the novel efficient search algorithm to reduce the local minimum problem of the solution to the weight vector for obtaining the maximum SINR.


    Figure 4. Flowchart of Fibonacci branch search optimization method.

The general procedures for the implementation of the Fibonacci branch search method with application to adaptive beamforming are presented in Figure 5, in which the key steps are briefly described below; a short code sketch of the weight encoding follows the list.

(a) Choose the depth R of the Fibonacci branch to determine the population F_R of the top branch layer, and set the maximum number of iterations of the optimization process.

(b) Initialize the population of the first branch layer F_j, and determine the dimension of the weight vectors acting as search points in space according to the element number of the ULA. Also, define the amplitude search space of the weights within [0, 1], and limit the range of the weight phases to [−π, π].

(c) Assign values to the amplitude and phase of the weight elements inside the search space to construct the initial population of the weight vector set G_w; the weight elements in the vector set G_w constructed by the amplitude and phase can be expressed as follows:

w_{jd} = \text{rand}[0, 1] \cdot e^{j \cdot \text{rand}[-\pi, \pi]}    (19)

where the generated weight w_{jd} represents the dth dimension of the jth individual in the population, d ∈ [1, M] (M being the dimension of the search space), j ∈ [1, F_j] (F_j being the population of the vectors), and rand[·] denotes random value generation in the given range.


Figure 5. Three-dimensional and contour plots of the Langermann function. (a) Three-dimensional plot of the Langermann function. (b) Contour plot of the Langermann function.

(d) Take random amplitude and phase values in the search space to generate the F_j population of the weight vectors w_B which act as the global search points.

(e) Set the vector elements of G_w and w_B as the endpoints in Equation (12), and compute the first set of weights w_S1 according to iterative Rule One.

(f) Calculate the fitness with the objective function of Eq. (7) using G_w, w_B, and w_S1, then evaluate the values to find the best weight vector w_best with the maximum fitness value among all the vectors in the space.

(g) Generate the second set of weight vectors based on the best weight vector w_best and the other weights from the weight space set using Equation (12) in iterative Rule Two.

(h) Select the top F_{j+1} best weight vectors depending on the maximum fitness value in the optimization process; these best weights are selected to compose the new population of the set G_w.

(i) Check the termination criterion |G_w| = F_R. If the termination criterion is not satisfied, then increment j, and go to step (d). Otherwise, stop.

(j) If the maximum number of iterations is not reached, repeat the algorithm from step (d); otherwise report the output results and terminate.
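As referenced above, the brief sketch below illustrates the weight encoding of steps (b)-(c) and the objective of step (f): each search point stacks M amplitudes in [0, 1] and M phases in [−π, π], is decoded into a complex weight vector via Eq. (19), and is scored by the fitness of Eq. (7) (negated, since the FBS sketch of Section 3.2 minimizes). The helper names and the assumption that R_d, the interference covariances, and σ²_noise are precomputed are ours.

```python
import numpy as np

def decode_weights(p):
    """Map a real search point p = [amp_1..amp_M, phase_1..phase_M] to complex weights (Eq. (19))."""
    M = p.size // 2
    return p[:M] * np.exp(1j * p[M:])

def make_abf_objective(R_d, R_i_list, sigma2_noise):
    """Return a minimization objective equal to minus the SINR fitness of Eq. (7)."""
    def objective(p):
        w = decode_weights(p)
        num = np.real(w.conj() @ R_d @ w)
        den = sum(np.real(w.conj() @ R @ w) for R in R_i_list) + sigma2_noise * np.real(w.conj() @ w)
        return -num / den
    return objective

# Search-space bounds of step (b) for an M-element ULA: amplitudes in [0, 1], phases in [-pi, pi].
M = 6
lb = np.concatenate([np.zeros(M), -np.pi * np.ones(M)])
ub = np.concatenate([np.ones(M),   np.pi * np.ones(M)])
# These bounds and this objective can then be passed to the fbs_phase sketch of Section 3.2.
```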

3.4. Accessibility and Convergence Proof of FBS towards the Global Optimum for Multimodal Functions

In this section, the accessibility and convergence of the proposed FBS algorithm are analyzed and investigated based on the global randomness of the Fibonacci branch structure. Rigorous mathematical proofs are presented to clarify that the proposed FBS algorithm can reach the global optimum and to ensure that FBS converges towards the global optimum.

    3.4.1. Accessibility Investigation of FBS Algorithm

Mathematical proof one: The solution set in the domain of the objective function can be reached via the accessible set in the search space by FBS.

According to the principle of the interactive searching rules in the Fibonacci branch search algorithm, after sufficiently many iterations n < +∞, the endpoints X_B generated by Rule One in the basic structure of the Fibonacci branch obey the uniform distribution over the space defined by the upper and lower bounds, i.e., ∀X ∈ X_B, X = (X_d)_{D×1}, where D is the dimension of the vector, and its probability distribution is P(X_d) = U(X_min, X_max); therefore, X_d satisfies the following relation in the domain of the target function

\underbrace{\int \cdots \int}_{D} \frac{1}{X_{max} - X_{min}} \, dX_d = 1    (20)

From the above proof process, ∀X ∈ X_B obeys X ∈ B, namely, the optimization set B is the accessible set of X_B under the FBS algorithm.

Mathematical proof two: The global optima of the objective function in the search space are accessible for the FBS algorithm.

Suppose that the global optimum X* in the search space lies in the definition domain B of the objective function; then, according to mathematical proof one, the solution X* is within the accessible set of B. Then, consider that the probability of a uniformly distributed random search point reaching it is P, and make the hypothesis that the local optimal solution obtained by FBS is X_L. Therefore, after sufficiently many iterations n < +∞, the probability of FBS falling into a local optimal solution is P* ≤ \prod_{i=1}^{n}(1 − P) on the basis of proof one; then we can obtain that

\lim_{n \to +\infty} P^* \le \lim_{n \to +\infty} \prod_{i=1}^{n} (1 - P) = 0    (21)

From this point of view, the global optima of the objective function in the search space are accessible for the FBS algorithm.

Consequently, it can be concluded from the above two mathematical proofs that the global optima in the domain of the objective function are reachable by the accessible solutions of FBS in the search space.

    3.4.2. Convergence Analysis of FBS Algorithm

Suppose that sufficiently many iterations n < +∞ are implemented by FBS to search for the global optima of the objective function, and that the current search element set G of the basic structure obtained by FBS can form the increasingly optimized search point sequence {x_T} = {x_t | x_best ∈ S_t}, t = 1, 2, ..., n, where x_best is the optimal solution in the current iteration set. Then, we establish the probability P(t) = P(|X* − x_t| < ς), the chance that the FBS algorithm has converged to the global optimal solution, where ς represents a fixed variable with a small value. According to the generation rules of the Fibonacci branch, P(t) = P_X(t) + P_ς(t), where P_X(t) is the probability of generating uniformly distributed random points in the definition domain under Rule One, and P_ς(t) is the probability of generating uniformly distributed random search points in a region defined by the radius parameter ς under Rule Two. Therefore, after t iterations of FBS, the probability of the search points not falling into the region within distance ς around the global optimum X* is P̃(t) = P(|X* − x_t| ≥ ς); then we have lim_{t→+∞} P̃(t) = 0, and hence the following results can be obtained

P(n < +\infty) > P(t) = 1 - \tilde{P}(t), \quad P(n < +\infty) > 0    (22)

Let t → +∞, and we can get

P(n < +\infty) = 1    (23)

So FBS converging to the global optimal solution with probability 100% can be demonstrated and proven.

From the above proofs of Sections 3.4.1 and 3.4.2, we can draw the comprehensive conclusion that the proposed FBS algorithm is effective, convergent, and able to reach the global optimal solution of the objective function.

    4. SIMULATION RESULTS

    4.1. Verification of the Proposed FBS

In order to validate and analyze the efficiency and effectiveness of the proposed FBS, the algorithm is verified from the following aspects in simulation experiments:


(1) The accessibility to the global optimum of the proposed search algorithm for a multimodal function with numerous local optima is revealed by the location history of the search points towards the optimal point.

(2) The convergence of the FBS is proved and discussed via the gradient of the presented iteration curves, which manifest the convergence speed and the average fitness of the chosen benchmark functions.

(3) The optimization precision of the solution and other relevant optimization assessment aspects of the proposed algorithm are tested on eight representative standard benchmark functions, and the results are compared with typical heuristic algorithms.

The details of the parameter settings for every heuristic algorithm used in the experiments are given in Table 1.

Table 1. Reference parameters of the algorithms used in this study.

Particle Swarm Optimization (PSO): Population size Np = 20; Cognitive ratio c1 = 2; Social coefficient c2 = 2; Inertia weight 0.4 ∼ 0.9

Genetic Algorithm (GA): Population size Np = 20; Mutation probability Pm = 0.05; Crossover probability Pc = 0.7; Rate of chromosome elite Pe = 0.2

Comprehensive Learning PSO (CLPSO): Learning probability 0.05 ∼ 0.5; Population size Nc = 20; Cognitive ratio c1 = 2; Social coefficient c2 = 2; Inertia weight 0.4 ∼ 0.9

Differential Evolution (DE): Population size Np = 20; Scaling factor F = 0.6; Crossover rate CR = 0.8

Artificial Bee Colony (ABC): Colony size Cs = 20; Onlooker bees percentage Lp = 50%; Scout bees Sb = 1

Fibonacci Branch Search (FBS): Nested branch depth 2; Total branch depth 6; Search space [Min, Max]

All the experimental tests are implemented on an Intel(R) Core(TM) i7-7700HQ CPU @ 2.8 GHz with 2.8 GB RAM, and all the meta-heuristic algorithms are coded and carried out in Matlab R2017b under Windows 10 Professional.

    4.1.1. The Location History of the Search Points in FBS for Langermann Function

In this section, the global optimization ability of the proposed FBS is demonstrated by employing the location history of the search points during the optimization process for locating the global optimum solution rather than being trapped in a local optimum of the benchmark example, with results compared against the metaheuristic PSO algorithm. The benchmark function chosen in this section is the Langermann function, with several known local optimal points and one global optimum solution point, which is taken from [25, 26] and is summarized in Table 2. As can be found in Table 2, two typical extreme points exist in the function. Extreme point 1 shown in the table is the global optimum solution, and extreme point 2 is the local suboptimal solution. The three-dimensional Langermann function and its contour plots are illustrated in Figure 5.

    Table 2. Langermann benchmark function.

Extreme point           | Extreme point 1              | Extreme point 2
Evaluation in [25]      | global optimum solution      | local suboptimal solution
Langermann function     | f_L(2.003, 1.006) = −5.1612  | f_L(7, 9) = −3

The performance of the proposed FBS in terms of the movement trajectory of the search points scattering around the best solutions and converging towards the optimal point in the search space for the Langermann function is illustrated in Figure 6(b). This figure shows that the FBS model is able to visualize the position history of the search points in three-dimensional and contour trajectory plots over different iterations. For the verification of the results, we compare our algorithm to PSO in the same manner


Figure 6. Location history of the search points for the Langermann function over the course of different iterations (panels from left to right: 50, 200, 500, and 1000 iterations). (a) Visualization results of the point location history in contour plots for PSO. (b) Visualization results of the point location history in contour plots for FBS.

and provide the results in Figure 6(a). The initial positions of the search points in both FBS and PSO are set at extreme point 2.

As the results exhibited in Figure 6 show, the search points tend to explore the promising regions of the search space and eventually cluster around the global optimum in the multimodal Langermann landscape. From the results depicted in Figure 6(a), we can see that as the number of iterations increases, the points of the PSO algorithm gradually cluster around extreme point 2 and proceed toward the local optimum, and almost no particles enter the region near the global optimum extreme point 1, providing further evidence that PSO inherently suffers from local optima entrapment and stagnation in the search space. Under the same conditions, it can be seen from the trajectories and the 3D version of the search points shown in Figure 6(b) that, although the Langermann function is non-symmetric and multimodal with different levels of peaks, and finding its global optimum is challenging due to the many local minima in the search space, FBS remarkably extricates itself from the initial local optimum at extreme point 2 and jumps out of the trapped solution in a local optimum point assisted by global random searching. It is evident from the location history of the search points during the process of converging toward the global optimum that the points grow towards the optimal point from the area of initialization, tending to scatter around the extreme points gradually and moving towards the best solutions in the search space in both 2D and 3D spaces over the course of iterations. More than half of the agents have already approached the global optimum valley after the first 50 iterations and begin converging on the optimum. As the iteration count increases, more agents aggregate at the extreme points and scatter around them, being attracted especially intensively to the global optimum target region. Eventually, the search points find the global optimum and converge toward it. This can be discussed and reasoned according to the global randomness concept introduced by the endpoints X_B which are generated in Rule One of FBS. Furthermore, the convergence of FBS is guaranteed by the local exploitation optimization ability emphasized in the other endpoints X_A of the proposed algorithm. Since the global random points tend to move from a less fit region to a fitter region by global searching in space, the best solution is saved and moved to the next iteration. Consequently, these behaviors and abilities assist the FBS in not becoming trapped in local optima and in converging towards the target point quickly over the iterations of optimization.

The above simulations and discussions demonstrate the effectiveness of the FBS algorithm in finding the global optimum in the search space; the convergence performance and the rate at which the proposed algorithm obtains the global optima on a set of mathematical functions will be investigated in the next sections.

    4.1.2. Convergence Performance of the Multimodal Function

To confirm the convergence behaviour of the proposed algorithm, in this subsection we provide the convergence curves that exhibit the objective fitness value of the typical benchmark functions obtained by the best solutions found so far in each iteration. The set of complex mathematical benchmark functions to be tested is listed in Table 3. These functions have many local optima, which makes them highly suitable for benchmarking the performance of metaheuristic algorithms in terms of optimization and convergence exploration. The illustrated results are compared against those of the PSO, GA, CLPSO, DE, and ABC metaheuristic algorithms on the same set of multi-dimensional numerical benchmark functions. The properties and formulas of these functions are presented below.

    Table 3. The details of multimodal benchmark functions (D: dimensions).

No. | Function | Formulation | D | Search Range | Global Optimum

F1 | Griewank | \sum_{i=1}^{D} \frac{x_i^2}{4000} - \prod_{i=1}^{D} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1 | 10 | [−600, 600] | 0

F2 | Rastrigin | \sum_{i=1}^{D} \left( x_i^2 - 10\cos(2\pi x_i) + 10 \right) | 10 | [−5.12, 5.12] | 0

F3 | Michalewicz2 | -\sum_{i=1}^{D} \sin(x_i) \sin\left(\frac{i x_i^2}{\pi}\right)^{20} | 10 | [0, π] | −1.8013

F4 | Rosenbrock | \sum_{i=1}^{D} 100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2 | 10 | [−2.048, 2.048] | 0

F5 | Ackley | -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{D} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{D} \cos(2\pi x_i)\right) + 20 + e | 10 | [−32, 32] | 0

F6 | Schwefel | 418.9829 \times D - \sum_{i=1}^{D} x_i \sin\left(|x_i|^{1/2}\right) | 10 | [−500, 500] | 0

F7 | Weierstrass | \sum_{i=1}^{D} \left( \sum_{k=0}^{k\max} \left[ a^k \cos\left(2\pi b^k (x_i + 0.5)\right) \right] \right) - D \sum_{k=0}^{k\max} \left[ a^k \cos\left(2\pi b^k \cdot 0.5\right) \right], \quad a = 0.5, \; b = 3, \; k\max = 20 | 10 | [−0.5, 0.5] | 0

F8 | Salomon | -\cos\left(\sqrt{\sum_{i=1}^{D} x_i^2}\right) + 0.1\sqrt{\sum_{i=1}^{D} x_i^2} + 1 | 10 | [−100, 100] | 0
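For reference, several of the benchmark functions in Table 3 can be coded directly from their formulas; the Python snippets below (our illustration, not code from the paper) implement F1, F2, and F5 and check their known optima at the origin.

```python
import numpy as np

def griewank(x):
    """F1: Griewank, global minimum 0 at x = 0."""
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def rastrigin(x):
    """F2: Rastrigin, global minimum 0 at x = 0."""
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def ackley(x):
    """F5: Ackley, global minimum 0 at x = 0."""
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

x0 = np.zeros(10)
print(griewank(x0), rastrigin(x0), ackley(x0))   # all evaluate to (numerically) 0
```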

Figure 7 presents the convergence characteristics in terms of the best fitness value of the median run of each algorithm for the test functions. Comparing the results and the convergence graphs among these six algorithms, we observe that the proposed algorithm has a good global search ability and converges fast. FBS achieves better results on all multimodal groups than the compared algorithms, and it clearly surpasses all other algorithms on functions 1, 2, 5, and 6, especially significantly improving the results on functions 1 and 2. The other algorithms show poor performance on the complex problems since they miss the global optimum basin when approaching the optimal fitness. The Schwefel function is a good example, as it traps all other algorithms in local optima, while the FBS successfully avoids falling into the deep local optimum which is far from the global optimum. On the complex multimodal functions with randomly distributed local and global optima, FBS performs the best. It should also be noted that the FBS algorithm in the graphs exhibits superiority regarding the convergence speed over the other five algorithms: it converges to a global optimum solution with fewer fitness evaluations, terminates after no more than 5000 iterations on functions 1, 3, and 4, and always converges faster than the others on the remaining function problems.


Figure 7. Convergence behavior of the FBS and other optimization algorithms on 10-D benchmark functions F1–F8. (a) F1: Griewank; (b) F2: Rastrigin; (c) F3: Michalewicz2; (d) F4: Rosenbrock; (e) F5: Ackley; (f) F6: Schwefel; (g) F7: Weierstrass; (h) F8: Salomon.

These figures also show that FBS not only improves the accuracy of the approximated optimum obtained initially, but also desirably enhances the convergence speed over the course of iterations, which makes it converge faster than the other algorithms. The global random property and the space-region shortening fraction guarantee a satisfactory convergence speed. Other algorithms could not converge as fast as the FBS, since they have a large potential search space. The proposed FBS combines a global searching method and a local optimization strategy together to yield a balanced performance that achieves better fitness and faster convergence. Besides, the convergence speed is a crucial parameter of real-time applications like adaptive beamforming systems; thus, the FBS is highly suitable and affordable for ABF.


    4.1.3. Minimization Result of the Tested Benchmark Functions

In this subsection, experiments are conducted on the suite of multimodal functions listed in Table 3 to evaluate six optimization algorithms including the proposed FBS. All the test functions are minimized, and the relevant information on the standard benchmark functions can be found in [26, 27]. The number of iterations for conducting the experiments is 400. For the selected benchmarking problems F1–F8, the dimension of these functions is set to 10. Every algorithm runs 1000 times independently to reduce the statistical error and achieve reliable results [28].
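A small helper of the kind sketched below can produce the statistics reported in Table 4 from the per-run best fitness values; the acceptance tolerance used to declare a run successful is an assumption here, since the paper does not state the threshold explicitly.

```python
import numpy as np

def summarize_runs(best_values, optimum, tol=1e-2):
    """Mean, standard deviation, and success rate (SR) over independent runs;
    a run counts as successful if its best fitness lies within `tol` of the known optimum."""
    v = np.asarray(best_values, dtype=float)
    sr = 100.0 * np.mean(np.abs(v - optimum) <= tol)
    return v.mean(), v.std(), sr

# e.g. over 1000 independent runs on Griewank (known optimum 0):
# mean, std, sr = summarize_runs(per_run_best, optimum=0.0)
```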

The statistical results, considering the average value and the standard deviation of the function fitness value as well as the success rate (SR) in reaching an acceptable solution, are summarized in Table 4. For the results shown in Table 4, the smaller the mean value is, the better the performance of

Table 4. The comparative and statistical results for benchmarking function problems F1–F8 (SR: success rate).

Method | F1. Griewank (Mean / Std / SR / Time) | F2. Rastrigin (Mean / Std / SR / Time)
ABC   | 4.73E−03 / 6.51E−03 / 52% / 88.20 s  | 1.43E−02 / 4.59E−03 / 30% / 85.46 s
DE    | 7.38E−06 / 6.51E−07 / 100% / 86.51 s | 2.55E−03 / 7.18E−04 / 55% / 80.03 s
GA    | 6.81E−01 / 7.49E−02 / 20% / 79.36 s  | 5.29E−01 / 1.21E−02 / 11% / 73.49 s
PSO   | 5.27E−03 / 3.45E−03 / 49% / 82.32 s  | 7.61E−03 / 8.29E−02 / 38% / 77.25 s
FBS   | 2.54E−09 / 1.47E−08 / 100% / 68.12 s | 9.53E−05 / 2.17E−06 / 86% / 63.70 s
CLPSO | 6.01E−05 / 4.32E−06 / 100% / 74.94 s | 6.18E−04 / 7.18E−03 / 74% / 69.25 s

Method | F3. Michalewicz2 (Mean / Std / SR / Time) | F4. Rosenbrock (Mean / Std / SR / Time)
ABC   | −1.6325E+00 / 1.56E−03 / 32% / 81.20 s  | 3.67E+01 / 1.90E+00 / 0% / 80.30 s
DE    | −1.7428E+00 / 8.36E−07 / 90% / 78.53 s  | 8.37E−01 / 3.27E−01 / 8% / 76.46 s
GA    | −1.6715E+00 / 4.83E−04 / 44% / 69.37 s  | 9.42E+01 / 2.15E−02 / 0% / 68.04 s
PSO   | −1.7002E+00 / 3.95E−03 / 84% / 72.02 s  | 2.69E+00 / 1.90E+00 / 6% / 71.84 s
FBS   | −1.8013E+00 / 7.48E−06 / 100% / 58.92 s | 2.98E−02 / 5.61E−04 / 29% / 60.43 s
CLPSO | −1.7332E+00 / 6.28E−05 / 90% / 66.21 s  | 4.39E−02 / 4.03E−01 / 12% / 65.44 s

Method | F5. Ackley (Mean / Std / SR / Time) | F6. Schwefel (Mean / Std / SR / Time)
ABC   | 8.87E−03 / 2.98E−02 / 47% / 84.74 s  | 3.85E−02 / 3.53E−03 / 18% / 79.03 s
DE    | 9.27E−07 / 9.69E−07 / 100% / 79.53 s | 1.53E−03 / 6.25E−06 / 68% / 74.36 s
GA    | 1.46E−02 / 5.32E−03 / 29% / 76.92 s  | 1.85E−01 / 3.74E−03 / 11% / 70.37 s
PSO   | 6.09E−05 / 5.49E−04 / 92% / 78.27 s  | 5.87E−03 / 3.68E−04 / 66% / 73.09 s
FBS   | 4.34E−07 / 5.74E−08 / 100% / 67.94 s | 2.53E−04 / 6.25E−06 / 74% / 65.57 s
CLPSO | 5.74E−06 / 5.84E−08 / 100% / 72.82 s | 6.53E−03 / 7.43E−03 / 59% / 69.93 s

Method | F7. Weierstrass (Mean / Std / SR / Time) | F8. Salomon (Mean / Std / SR / Time)
ABC   | 6.13E−05 / 9.09E−06 / 100% / 93.85 s | 3.85E−02 / 1.12E−03 / 19% / 99.72 s
DE    | 1.08E−08 / 4.17E−09 / 100% / 87.32 s | 6.43E−03 / 6.82E−04 / 47% / 93.73 s
GA    | 4.28E−03 / 9.64E−06 / 82% / 79.43 s  | 2.67E−02 / 6.63E−02 / 29% / 87.94 s
PSO   | 5.27E−06 / 3.64E−06 / 100% / 84.38 s | 5.74E−04 / 2.63E−03 / 68% / 89.61 s
FBS   | 6.77E−09 / 7.54E−11 / 100% / 73.87 s | 7.74E−05 / 9.48E−05 / 85% / 79.36 s
CLPSO | 9.65E−09 / 8.53E−10 / 100% / 75.38 s | 6.24E−05 / 9.48E−04 / 92% / 83.49 s


the algorithm is. The lower the standard deviation value is, the stronger the stability of the algorithm is. As seen, for most benchmark data sets, the average value and standard deviation calculated by the FBS are both smaller than those of the other algorithms, and the proposed algorithm surpasses all other algorithms on functions 1, 2, 3, 5, 6, and 7, especially significantly improving the results on functions 1 and 3. When the other algorithms find their own best fitness on these functions, the proposed FBS could still search for a better fitness closest to the optimal value. The CLPSO achieves results similar to the FBS on function 7, and they are both much better than the other variants on this problem. The DE also performs well on multimodal problems; it performs similarly to the FBS on functions 1, 3, 5, and 7. However, the FBS performs better on the much more complex problems where the other algorithms miss the global optimum basin.

As a result, in terms of the performance in global search ability and optimization stability for the benchmarking functions, the proposed FBS outperforms all other heuristic algorithms on the tested functions. In addition, this table illustrates that FBS, in comparison with the others, displays a higher percentage and accuracy in reaching the acceptable solutions on these test functions. For the mean reliability of test functions F1, F3, F5, and F7, FBS exhibits the highest reliability with a 100% success rate and the smallest average errors. This superior performance is due to the FBS's interactive updating rule. With the new updating rule and global randomness, different dimensions may learn from different examples based on the historically optimal search experience, and the FBS explores a larger search space than the others. Because of this, the FBS performs comparably to or better than many meta-heuristic algorithms on most of the multimodal problems experimented with in this section.

    4.2. The Performance of the FBS Optimization Result in Adaptive Beamforming

To demonstrate the benefits of the FBS optimization with application to adaptive beamforming, in this part of the section several groups of simulation experiments are conducted using Matlab R2017b. The desired GNSS signal is in the form of QPSK modulation with 0° incident azimuth angle, and the power of the GNSS signal is extremely weak on the ground and far less than the noise; therefore, the selected SNR should not be too large in our simulation scenes. All the impinging signals are arranged to come from a zenith of 45°. The simulation environment is an additive white Gaussian noise (AWGN) channel. We have taken the structure size of the search element in the weight vector to be five; the number of search points per element in the first set of 10 layers of the Fibonacci branch increases according to the corresponding Fibonacci sequence, and the sum of the element nodes in all the layers of the search element is 19. The maximum number of iterations in FBS is 400. The performance of the FBS-based ABF is evaluated from the following two simulation metric aspects: the beampattern performance of the adaptive antenna array and the steady-state output signal-to-interference-plus-noise ratio (SINR) of the beamformer system. The performance criterion of the beampattern considered in Section 4.2 is the nulling level towards the interference, and the criterion used to compare the SINR results is the numerical value.

Four groups of simulation cases corresponding to the defined measurement metric aspects, conducted in terms of various experimental criteria, are considered in this study. The effectiveness of the proposed FBS-based beamformer is investigated through the power patterns formed by the extracted weights in terms of different input SNRs in the first case. The second case considers different numbers of array elements for the system output SINR using FBS with suitable weight vectors. The performance measurement of the output SINR versus the number of interference sources is studied in the third case. The last case concerns the impact of different INRs of the input interference on the output SINR in front of a ULA system.

The proposed FBS-based beamforming method is compared with the following five conventional heuristics-based beamforming algorithms: 1) the differential evolution (DE) based beamforming method of [29], 2) the particle swarm optimization (PSO) based beamforming method of [30], 3) the comprehensive learning particle swarm optimization (CLPSO) based beamforming method of [31], 4) the artificial bee colony (ABC) based beamforming method of [32], and 5) the genetic algorithm (GA) based beamforming method of [33]. Besides, in order to evaluate the FBS-based beamformer comprehensively and accurately from all aspects, the performance of the classic closed-form method [12] for ABF is presented and compared with the other algorithms separately in the corresponding tables. A total of 300 repetitions (independent trials) are implemented and then averaged to obtain each figure of the results.


Figure 8. Comparison of the beampattern performance synthesized by the heuristic algorithms with different SNRs. (a) SNR = −30 dB; (b) SNR = −10 dB; (c) SNR = 10 dB.

    Table 5. Average nulling levels of the FBS and the closed-form method.

Method      | SNR = −30 dB | SNR = −10 dB | SNR = 10 dB
FBS         | −93.02 dB    | −112.95 dB   | −129.37 dB
closed-form | −98.45 dB    | −121.67 dB   | −143.29 dB

    4.2.1. Input SNR Evaluation of the Beampattern

    We first examine the beampattern performance synthesized by the FBS and compare it with the other algorithms as a function of the input SNR. A uniform linear array with 6 omnidirectional antenna elements is considered in the simulation. To investigate the effect of different input SNR levels, we consider three settings, SNR = −30 dB, SNR = −10 dB, and SNR = 10 dB, to demonstrate the validity of our approach. Figure 8 shows the beampatterns synthesized by the weight vectors determined by the optimization algorithms under the different SNRs. It can be seen from the figures that the weight vectors found by FBS synthesize a regular beampattern with deeper nulling (nulling levels exceeding −70 dB) towards the interference compared with the other algorithms. The proposed FBS-based adaptive beamformer suppresses the jammers in all cases while maintaining the beampattern gain in the direction of the desired signal. The other algorithms are able to achieve interference-nulling performance at the higher SNR level, i.e., SNR = 10 dB. As the SNR decreases, the compared beamformers, especially the ABC-based and GA-based beamformers, suffer from degradation of their metaheuristic-based beampatterns, and the nulls no longer align precisely with the interference sources. This demonstrates that the proposed algorithm is more stable and finds better solutions with greater precision in the ABF application. To examine the nulling depths of the proposed FBS more thoroughly, we also compare the results with the traditional classic closed-form method; the separately averaged nulling levels of the FBS and the closed-form method are listed in Table 5. We can see from the table that the closed-form method performs the best among the compared methods in each case, with sidelobe levels of 46.79, 52.96, and 71.95 dB for the classic closed-form-based beamformer; however, obtaining that solution requires either that the objective function to be optimized admit less computationally extensive and more computationally stable routines, or that specific characteristics of a statistical decision-making problem be available. The superiority of FBS in computational complexity relative to the closed-form method is therefore evident. The detailed experimental comparison and the computational-complexity advantages of the FBS algorithm are described in Section 4.3.
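
    The nulling-level criterion used in Figure 8 and Table 5 is simply the normalized power pattern evaluated at the interference direction. A hedged NumPy sketch of that measurement follows; it is our own illustration, and the weight vector w_fbs and the 40° interference azimuth in the usage comments are hypothetical placeholders.

        import numpy as np

        def pattern_db(w, angles_deg, spacing=0.5):
            # Normalized power pattern |w^H a(theta)|^2 in dB over a grid of azimuths
            # for a half-wavelength-spaced ULA whose size is inferred from len(w).
            k = np.arange(len(w))
            A = np.exp(1j * 2 * np.pi * spacing *
                       np.outer(np.sin(np.deg2rad(angles_deg)), k))  # one steering vector per row
            p = np.abs(A @ np.conj(w)) ** 2
            return 10 * np.log10(p / p.max())

        # Hypothetical usage, with w_fbs a weight vector returned by the optimizer:
        # angles = np.linspace(-90, 90, 721)
        # pat = pattern_db(w_fbs, angles)
        # null_level_db = pat[np.argmin(np.abs(angles - 40.0))]  # depth at an assumed jammer azimuth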


    Figure 9. SINR performance versus SNR of the heuristic algorithms with varied array elements. (a) Six elements; (b) ten elements; (c) fourteen elements.

    4.2.2. Array Elements Assessment of Output SINR

    In this section, the output SINR performance is evaluated for three different numbers of array elements as the input SNR increases; the SNR is varied from −25 dB to 5 dB (centered at −10 dB) in 5 dB steps. An interference with 5 dB INR is considered in the experiment. The linear arrays in the different scenarios are composed of 6, 10, and 14 elements, respectively. The output SINR results of the heuristic-based beamformers and the closed-form method are illustrated in Figure 9. From the graphs, it can be noted that the closed-form method achieves the highest SINR among all the algorithms. The proposed algorithm achieves better performance than the other optimization algorithms for all the array-element scenarios and attains near-optimal closed-form performance over the entire range of input SNR values. CLPSO yields the second-best SINR values, but FBS yields the best SINR values consistently in all cases. Moreover, we can also observe that the performance gap in reaching the optimum weight vectors between FBS and the other algorithms widens as the number of array elements increases, which is consistent with our previous analysis, owing to the increasing search dimension of the weight-vector solution with the number of elements. In view of this, the global search capability of the proposed FBS algorithm improves the output SINR of the adaptive beamformer system more than the compared metaheuristic-based beamformers.
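
    The steady-state output SINR plotted in Figure 9 follows the usual narrowband definition: the desired-signal output power divided by the interference-plus-noise output power. A minimal sketch of this metric is given below; it is our own formulation, and the symbols sigma_s2, a_s, and R_in are assumptions standing for the desired-signal power, its steering vector, and the interference-plus-noise covariance.

        import numpy as np

        def output_sinr_db(w, a_s, sigma_s2, R_in):
            # Output SINR of a narrowband beamformer with weight vector w:
            # sigma_s^2 |w^H a_s|^2 / (w^H R_in w), returned in dB.
            signal_power = sigma_s2 * np.abs(np.vdot(w, a_s)) ** 2
            in_power = np.real(np.vdot(w, R_in @ w))
            return 10 * np.log10(signal_power / in_power)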

    4.2.3. Investigation of INR on the Output SINR

    A ULA consisting of 6 monochromatic isotropic elements is considered in this scenario with different INRs. Three INR values of the interference, 5 dB, 20 dB, and 35 dB, are set in different simulation scenarios. Figure 10 displays the SINR performance of the proposed and comparison algorithms versus the SNR under the different power levels of the interference sources. From the results depicted in Figure 10, we can see that, in general, all the optimization algorithms achieve near-satisfactory, close to closed-form, SINR performance for INR = 5 dB, i.e., the lowest interference power. The proposed algorithm achieves improved SINR compared with the other algorithms in all the simulations, even in the most severe interference situation with INR = 35 dB. As the INR of the interference increases, the SINR performance of all the algorithms degrades, and the proposed algorithm shows an evident advantage over the others. Therefore, the proposed algorithm delivers more robust results and suitable precision in adaptive beamforming at high interference power levels.
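
    Under the ideal model assumed in these scenarios, the interference-plus-noise covariance is a unit-power noise term plus one rank-one term per interferer scaled by its INR. The following NumPy sketch is our own construction under those assumptions; the angle and INR in the usage comment are placeholders, not values taken from the paper's figures.

        import numpy as np

        def interference_plus_noise_cov(angles_deg, inrs_db, n_elements=6, spacing=0.5):
            # Ideal covariance: identity for unit-power noise plus a rank-one term
            # per interferer, each scaled by its INR relative to the noise floor.
            k = np.arange(n_elements)
            R = np.eye(n_elements, dtype=complex)
            for ang, inr in zip(angles_deg, inrs_db):
                a = np.exp(1j * 2 * np.pi * spacing * k * np.sin(np.deg2rad(ang)))
                R += 10 ** (inr / 10) * np.outer(a, a.conj())
            return R

        # Hypothetical usage: one interferer at an assumed 40 deg azimuth with INR = 35 dB.
        # R_in = interference_plus_noise_cov([40.0], [35.0])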


    Figure 10. SINR performance versus SNR of the heuristic algorithms with different INRs. (a) INR = 5 dB; (b) INR = 20 dB; (c) INR = 35 dB.

    4.2.4. Examination of the Impact of the Number of Interferences on Output SINR

    The simulations in this case are conducted to validate the effect of the number of interferences on the output SINR performance. The beamformer is equipped with 6 array elements, and the INR is fixed to 10 dB for the different numbers of received interference signals. Table 6 lists the different numbers of interferences and the corresponding incident angles. The SINR performance of the proposed FBS and the other optimization algorithms for different numbers of interference signals is shown in Figure 11. As can be seen from Figure 11, the closed-form method performs the best among all the compared methods because of its analytical solving characteristics. Regarding the comparison between the proposed FBS and the other optimization methods, when there is one interference at the receiver, all the optimization algorithms achieve close-to-optimal SINR performance with respect to the requirement of maximizing the SINR; FBS demonstrates the best improvement, followed by DE, while ABC shows the lowest SINR. As the number of interferences increases from one to six, the performance advantage of FBS becomes more evident. The increase in the number of interference sources increases the difficulty of the optimization problem. The failure of ABC, GA, and PSO to achieve a sufficiently high SINR clearly illustrates their limitations; therefore, the proposed FBS is more versatile and robust than the other optimization methods in the ABF application.

    Table 6. Different numbers of interferences for three scenarios.

    Scenario One                    Scenario Two                    Scenario Three
    Interference   Incident Angle   Interference   Incident Angle   Interference   Incident Angle
    1              40               1              40               1              40
                                    2              −20              2              −20
                                    3              −45              3              −45
                                                                    4              −30
                                                                    5              35
                                                                    6              60

    4.3. Time Complexity of the Significant ABF Model to Achieve Optimum Performance

    In this part, the experimental results comparing the FBS-based beamformer with the classic closed-form method are presented in terms of the time complexity of finding the global optimum in the significant ABF model. The percentage improvement of the proposed FBS in terms of nulling level is also illustrated in this part.


    Figure 11. SINR performance versus SNR of the heuristic algorithms with different numbers of interferences. (a) One interference; (b) three interferences; (c) six interferences.

    The methods are applied to optimize a ULA consisting of isotropic elements. Different cases with 14, 18, and 22 array elements are simulated to further validate the proposed approach for real-world applications with larger arrays. The number of iterations for conducting the experiments is 200. Under the same computer hardware configuration as in Section 4.1, the average CPU time consumption (in seconds) at the key points is measured with the built-in Matlab Profiler, which determines the proportions of the computational complexity.
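
    The paper's CPU-time figures come from the built-in Matlab Profiler; in a re-implementation, a comparable averaged measurement could be approximated with a plain process-time counter. The sketch below is only our own rough analogue of that measurement, not the authors' instrumentation; run_beamformer is a hypothetical callable standing for one full optimization run.

        import time

        def average_cpu_time(run_beamformer, n_runs=10):
            # Average process CPU time (seconds) over repeated optimization runs,
            # a rough stand-in for the Profiler-based measurement described above.
            total = 0.0
            for _ in range(n_runs):
                t0 = time.process_time()
                run_beamformer()
                total += time.process_time() - t0
            return total / n_runs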

    The optimization results, in terms of the CPU time and the time consumption corresponding to the computational load of the significant ABF model, for the proposed FBS and the closed-form comparison method under different numbers of array elements are listed in Table 7, Table 8, and Table 9.

    Table 7. Comparison results of the algorithms with 12 array elements.

    Algorithm      CPU time (s)   Percentage Improved
    FBS            17.26          171.26%
    Closed Form    46.82          Initial Comparison

    Table 8. Comparison results of the algorithms with 16 array elements.

    Algorithm      CPU time (s)   Percentage Improved
    FBS            20.68          233.70%
    Closed Form    69.01          Initial Comparison

    Table 9. Comparison results of the algorithms with 20 array elements.

    Algorithm      CPU time (s)   Percentage Improved
    FBS            32.74          278.53%
    Closed Form    123.93         Initial Comparison

    From the above tables, it can be observed that the proposed FBS, which achieves a near-optimal nulling level, possesses a huge advantage in computational complexity compared with the closed-form method in all cases. FBS requires far fewer evaluations and less CPU time than the closed-form method, consistent with the computational costs, and FBS presents a more evident advantage over that algorithm in the large-array cases.


    The CPU time cost of FBS in each case is 17.26 s, 20.68 s, and 32.74 s, and the improvements of FBS over the closed form are 171.26%, 233.70%, and 278.53%. This indicates that FBS requires much fewer function evaluations, whereas the closed-form approach needs to compute a matrix inverse, which introduces high computational complexity, especially for large numbers of array elements, as shown in Table 8 and Table 9, since more array elements result in a larger dimensionality of the weight-vector solution and therefore a greater search time and number of computations to optimize the beamformer. Although FBS achieves a slightly shallower nulling level than the closed-form solving algorithm, the percentage improvement in CPU time can compensate and provide acceptable performance for the proposed FBS in certain real-time beamforming situations. Overall, the FBS approach is quite competitive in computational complexity and global search optimization compared with the other methods.
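
    For clarity, the quoted percentage improvements follow directly from the tabulated CPU times as 100 × (t_closed-form − t_FBS) / t_FBS; a quick check against the values in Tables 7 to 9 reproduces all three figures.

        # Percentage improvement of FBS over the closed-form method, from Tables 7-9.
        for t_fbs, t_cf in [(17.26, 46.82), (20.68, 69.01), (32.74, 123.93)]:
            print(f"{100 * (t_cf - t_fbs) / t_fbs:.2f}%")   # 171.26%, 233.70%, 278.53%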

    5. CONCLUSIONS

    In this paper, we present a novel heuristic algorithm, called Fibonacci branch search, for achieving improved performance in adaptive beamforming. Interactive global and local searching rules are proposed to reduce the probability of falling into local optima, and the global randomness characteristic and space-region shortening fraction guarantee the convergence velocity of the global optimization process. In addition, we devise a specific implementation architecture based on FBS for the adaptive beamformer, and the amplitudes and phases of the weight vector acting as the solution are acquired in the search space by FBS. The beamforming results synthesized by these vectors are compared with conventional metaheuristic-based beamforming algorithms.

    The simulation experiments in Section 4 demonstrate that the proposed FBS achieves increased SINR over the suboptimally performing algorithms, referred to as DE and CLPSO, in the different cases. In addition, significant improvements over PSO, GA, and ABC are achieved in the separate scenarios. With respect to computational complexity, FBS avoids the inverse calculation of the interference correlation matrix required by the classic closed-form solving algorithm. Therefore, in terms of both academic and practical implications, the global optimization ability and fast convergence of the FBS-based beamformer would serve a significant purpose for real-time spatial-filtering electronic countermeasures and for antenna designs with large numbers of array elements in the military confrontation field. However, the limitation of FBS caused by the uncertainty of frequently changing outliers is a major issue that needs careful consideration, and the technique may have unacceptable nulling performance in dynamic and varied interference scenarios. As future work, FBS will be explored for application to more complicated time-varying situations in the ABF field.

    ACKNOWLEDGMENT

    This research was funded by the National Natural Science Foundation of China (Grant No. 61272333) and the National Key Laboratory of Science and Technology Fund (Grant No. 9140C130502140C13068).

    REFERENCES

    1. Huang, X., L. Bai, I. Vinogradov, and E. Peers, "Adaptive beamforming for array signal processing in aeroacoustic measurements," Journal of the Acoustical Society of America, Vol. 131, No. 3, 2152–2161, 2012.

    2. Daneshmand, S., N. Sokhandan, M. Zaeriamirani, and G. Lachapelle, "Precise calibration of a GNSS antenna array for adaptive beamforming applications," Sensors, Vol. 14, No. 6, 9669, 2014.

    3. Leight, J. and B. Toland, "Photonic beamforming technologies for advanced military and commercial SATCOM antennas," Aerospace Conference, 1999.

    4. Zhao, H., B. Lian, and J. Feng, "Space-time adaptive processing for GPS anti-jamming receiver," Physics Procedia, Vol. 33, No. 1, 1060–1067, 2012.


    5. Synnevag, J. F., A. Austeng, and S. Holm, "Adaptive beamforming applied to medical ultrasound imaging," IEEE Trans. Ultrason. Ferroelectr. Freq. Control, Vol. 54, No. 8, 1606–1613, 2007.

    6. Khamy, S. E. E. and A. M. Gaballa, "Adaptive arrays for MC-CDMA using the MSINR guided multimodulus algorithm," 2008 National Radio Science Conference, 2008.

    7. Chen, H. W. and J. W. Zhao, "Wideband MVDR beamforming for acoustic vector sensor linear array," IEE Proceedings — Radar, Sonar and Navigation, Vol. 151, No. 3, 158–162, 2004.

    8. Mu, P., L. Dan, and Q. Yin, "A robust MVDR beamforming based on covariance matrix reconstruction," International Conference on Graphic & Image Processing, 2011.

    9. Sinha, P., A. D. George, and K. Kim, "Parallel algorithms for robust broadband MVDR beamforming," Journal of Computational Acoustics, Vol. 10, No. 1, 69–96, 2002.

    10. Shahbazpanahi, S., A. B. Gershman, and Z.-Q. Luo, "Robust adaptive beamforming using worst-case SINR optimization: A new diagonal loading-type solution for general-rank signal models," Proceedings, IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003.

    11. Mozaffarzadeh, M., A. Mahloojifar, M. Orooji, K. Kratkiewicz, S. Adabi, and M. Nasiriavanaki, "Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm," Journal of Biomedical Optics, Vol. 23, No. 2, 026002, 2018.

    12. Shahab, S. N., A. R. Zainun, H. A. Ali, M. Hojabri, and H. Nurul, "MVDR algorithm based linear antenna array performance assessment for adaptive beamforming application," Journal of Engineering Science and Technology, Vol. 12, No. 5, 1366–1385, 2017.

    13. Wei, C. and Y. Lu, "Adaptive beamforming for arbitrary array by particle swarm optimization," IEEE International Conference on Computational Electromagnetics, 2015.

    14. Vitale, M., G. Vesentini, N. N. Ahmad, and L. Hanzo, "Genetic algorithm assisted adaptive beamforming," Proceedings IEEE 56th Vehicular Technology Conference, 2002.

    15. He, L. and S. Huang, "Modified firefly algorithm based multilevel thresholding for color image segmentation," Neurocomputing, Vol. 240, 152–174, 2017.

    16. Sun, K., S. Mou, J. Qiu, T. Wang, and H. Gao, "Adaptive fuzzy control for non-triangular structural stochastic switched nonlinear systems with full state constraints," IEEE Transactions on Fuzzy Systems, Vol. 27, No. 8, 1587–1601, 2018.

    17. Qiu, J., K. Sun, T. Wang, and H. Gao, "Observer-based fuzzy adaptive event-triggered control for pure-feedback nonlinear systems with prescribed performance," IEEE Transactions on Fuzzy Systems, Vol. 27, No. 11, 2152–2162, 2019.

    18. Liao, B. and S. C. Chan, "Adaptive beamforming for uniform linear arrays with unknown mutual coupling," IEEE Antennas and Wireless Propagation Letters, Vol. 11, 464–467, 2012.

    19. Etminaniesfahani, A., A. Ghanbarzadeh, and Z. Marashi, "Fibonacci indicator algorithm: A novel tool for complex optimization problems," Engineering Applications of Artificial Intelligence, Vol. 74, 1–9, 2018.

    20. Subasi, M., N. Yildirim, and B. Yildiz, "An improvement on Fibonacci search method in optimization theory," Applied Mathematics and Computation, Vol. 147, 893–901, 2004.

    21. Yildiz, B. and E. Karaduman, "On Fibonacci search method with k-Lucas numbers," Applied Mathematics and Computation, Vol. 143, 523–531, 2003.

    22. Omolehin, J. O., M. A. Ibiejugba, A. E. Onachi, and D. J. Evans, "A Fibonacci search technique for a class of multivariable functions and ODEs," International Journal of Computer Mathematics, Vol. 82, 1505–1524, 2005.

    23. Ramaprabha, R., M. Balaji, and B. L. Mathur, "Maximum power point tracking of partially shaded solar PV system using modified Fibonacci search method with fuzzy controller," International Journal of Electrical Power & Energy Systems, Vol. 43, 754–765, 2012.

    24. Wang, X., D. J. Lyu, Y. Dong, et al., "Cutting parameters multi-scheme optimization based on Fibonacci tree optimization algorithm," Control and Decision, Vol. 33, 1373–1381, 2018.

    25. Kaid Omar, O., F. Debbat, and A. Boudghene Stambouli, "Null steering beamformer using hybrid algorithm based on Honey Bees Mating Optimisation and Tabu Search in adaptive antenna array," Progress In Electromagnetics Research C, Vol. 32, 65–80, 2012.


    26. Ng, C. K. and D. Li, "Test problem generator for unconstrained global optimization," Computers & Operations Research, Vol. 51, No. 51, 338–349, 2014.

    27. Yang, Y. and Y. Shang, "A new filled function method for unconstrained global optimization," Mathematical Problems in Engineering, Vol. 8, No. 1, 501–512, 2010.

    28. Saeed, S., H. C. Ong, and S. Sathasivam, "Self-adaptive single objective hybrid algorithm for unconstrained and constrained test functions: An application of optimization algorithm," Arabian Journal for Science and Engineering, Vol. 44, No. 4, 3497–3513, 2019.

    29. Mallipeddi, R., J. P. Lie, P. N. Suganthan, S. G. Razul, and C. M. S. See, "A differential evolution approach for robust adaptive beamforming based on joint estimation of look direction and array geometry," Progress In Electromagnetics Research, Vol. 119, 381–394, 2011.

    30. Banerjee, S. and V. V. Dwivedi, "Performance analysis of adaptive beamforming using particle swarm optimization," 11th International Conference on Industrial and Information Systems (ICIIS), 242–246, IEEE, 2016.

    31. Ismaiel, A. M., E. Elsaidy, and Y. Albagory, "Performance improvement of high altitude platform using concentric circular antenna array based on particle swarm optimization," AEU-International Journal of Electronics and Communications, Vol. 91, 85–90, 2018.

    32. Ruchi, R., A. Nandi, and B. Basu, "Design of beam forming network for time-modulated linear array with artificial bees colony algorithm," International Journal of Numerical Modelling: Electronic Networks, Devices and Fields, Vol. 28, No. 5, 508–521, 2015.

    33. Yeo, B. K. and Y. Lu, "Adaptive array digital beamforming using complex-coded particle swarm optimization-genetic algorithm," Microwave Conference, 2006.