UNIVERSITY OF TRENTO
DIPARTIMENTO DI INGEGNERIA E SCIENZA DELL'INFORMAZIONE
38123 Povo - Trento (Italy), Via Sommarive 14
http://www.disi.unitn.it

EVOLUTIONARY OPTIMIZATION AS APPLIED TO INVERSE SCATTERING PROBLEMS

P. Rocca, M. Benedetti, M. Donelli, D. Franceschini, and A. Massa

December 2009

Technical Report # DISI-11-019

TOPICAL REVIEW: Evolutionary optimization as applied to inverse scattering problems



Evolutionary Optimization as Applied to Inverse Scattering Problems

P. Rocca, M. Benedetti, M. Donelli, D. Franceschini, and A. Massa
Department of Information Engineering and Computer Science, ELEDIA Research Group, University of Trento, Via Sommarive 14, 38050 Trento - Italy, Tel. +39 0461 882057, Fax +39 0461 882093
E-mail: andrea.massa@ing.unitn.it

Abstract. This paper is aimed at presenting an overview of Evolutionary Algorithms (EAs) as applied to the solution of inverse scattering problems. The focus of this work is on the use of different population-based optimization algorithms for the reconstruction of unknown objects embedded in an inaccessible region when illuminated by a set of microwaves. Starting from a general description of the structure of EAs, the classical stochastic operators responsible for the evolution process are described. The extension to hybrid implementations integrated with local search techniques, and the exploitation of domain knowledge, either obtained a-priori or collected during the optimization process, are also presented. Some theoretical discussions concerned with convergence issues and a sensitivity analysis on the parameters influencing the stochastic process are reported as well. Successively, a review of how various researchers have applied or customized different evolutionary approaches to inverse scattering problems is carried out, ranging from the shape reconstruction of perfectly conducting objects to the detection of the dielectric properties of unknown scatterers, up to applications in subsurface and biomedical imaging. Finally, open problems and envisaged developments are discussed.

Key Words - Evolutionary algorithms, Inverse scattering.

Classification Numbers (MSC) - 45Q05, 78A46, 78M50, 78M99

1. Introduction

Optimization techniques are generally classified into deterministic and stochastic methods. For example, the greedy, the steepest descent, and the tree search algorithms

[111][119] belong to the former class. Although effective in terms of convergence speed, these methods generally require some domain knowledge since, in the case of non-linear and multi-minima functionals, the initial trial solution must lie in the so-called attraction basin of the global solution to avoid the convergence being trapped into local minima of the functional (i.e., wrong solutions of the problem at hand). On the contrary, stochastic algorithms [59][144][136] are global search approaches potentially able to find the global optimum of the functional whatever the initial point/s of the search.

The goal of optimization is the knowledge of the global solution. The solution is fully described when its descriptors (i.e., its descriptive features), which quantify the information content of the solution itself, are defined. This can be mathematically done by determining the problem unknowns (i.e., the coded representation of the solution descriptors) through the optimization of a suitable cost function. It should be observed that the number of unknowns is different in each problem and proportional to the information content of the solution.

Since, on the one hand, the descriptors differ (e.g., discrete/continuous variables) and, on the other, the number of unknowns to be determined can vary among optimization problems, the choice of a proper optimization algorithm is a key issue and a general rule for this choice does not exist. From a practical point of view, the main features required of an optimization algorithm are the ability to deal with complex functionals or cost functions, simplicity of use, a limited number of control parameters, good convergence properties, and the exploitation of the parallelism offered by modern PC clusters.

In this sense, Evolutionary Algorithms (EAs) seem to be good candidates. They have been applied to a huge variety of problems in different and very heterogeneous fields, ranging from engineering to economics, up to business and natural science. For example, in biomedical and natural science, several researches are concerned with the use of evolutionary algorithms for the prediction of protein structures [84] and the design of drugs [91]. In the framework of engineering, they have been applied to the design of aircrafts [19], the synthesis of electromagnetic systems [161], and of microwave devices [160]. As far as combinatorial optimization is concerned, routing [135], assignment [45], and scheduling [102] problems have been dealt with as well. Although the number of works published in economics and business is limited, evolutionary algorithms have demonstrated to work effectively, as shown in [74] and [70].

This work is aimed at discussing the use of evolutionary algorithms on a class of problems in electromagnetic engineering, namely the inverse scattering problems.

The first population-based algorithm applied to this topic was the Genetic Algorithm (GA) [39]. Several versions of GAs have been implemented and effectively used in electromagnetic inversion [100][24][174][79]. In order to cope with the drawbacks of GAs, different kinds of evolutionary algorithms have been successively developed. Among them, let us recall the differential evolution (DE) algorithm [130][97] and the particle swarm optimizer (PSO) [49][81]. More recently, the ant colony optimizer (ACO) has also been applied [118]. Besides bare techniques, a non-negligible number of hybrid approaches has been implemented to improve the convergence rate of global optimizers.

The paper is organized as follows. In Section 2, an introduction on the genesis of nature-inspired optimization algorithms and some motivations on their use and efficiency when dealing with real-world problems are given. A general description of the structure of EAs is presented in Sect. 3, while different implementations are detailed in Sect. 4. Section 5 is devoted to some theoretical discussions on the convergence properties as well as on the EAs' sensitivity to the values of the control parameters. The inverse scattering problem is briefly formulated in Sect. 6 and an overview of the application of EAs is provided. Some conclusions on the role of EAs in inverse scattering are drawn in Sect. 7, whereas open problems and possible future developments are discussed in Sect. 8.

2. The Origin of EAs - Adaptation in Artificial Systems

In the early 1970s [75], Holland showed that nature-inspired evolutionary algorithms can be adopted as suitable learning or searching procedures for the solution of artificial problems. The first example of an algorithm modelling natural systems was the so-called Genetic Algorithm. The algorithm was based on the concepts of natural selection and genetic pressure. Its implementation was inspired by the studies of Darwin and Mendel on the higher possibility for an individual or agent that better fits the surrounding environment to generate offspring and preserve its genetic features throughout successive generations. The success obtained by this optimization approach was immediate and it received a wide and rapid diffusion. Different versions of the original binary GA were developed for the optimization of functionals [47] and they have been applied to real-world applications. The first textbook on GAs and related applications was published by Goldberg [69] in 1989.

As compared to previous optimization algorithms, the GA showed many interesting features. More specifically, (a) the GA requires neither the analytical knowledge

nor the differentiation of the functions to be optimized, since the fitness values alone are enough to drive the evolutionary process; (b) the algorithm tends to move towards the most attractive region of the solution space by means of an almost blind search technique, since the operators are applied in a probabilistic way instead of following deterministic rules; (c) sampling the search space not at a single point but at several locations at each iteration, together with the way the operators recombine the information coded in the population of solutions, fosters the global search capability of the optimization.

Besides the explicit parallelism guaranteed by its multiple-agent nature, as for other population-based stochastic algorithms successively developed, the GA is also related to the concept of schemata (i.e., the building blocks coding each trial solution) and to implicit parallelism. In [75] and [69], it has been shown that the effective number of schemata [69] processed by the GA at each iteration of the evolutionary process is greater than the number of individuals P of the population. Such a property guarantees that, also in a serial implementation, multiple characteristics (i.e., the corresponding schemata) of the solution are processed in parallel. A well-known result is Holland's inequality, stating a lower bound of the order of P^3/(ε√l) on the number of schemata processed in a population of P = ξ2^l strings, ξ and l being a small integer [17] and the string length in binary digits, respectively. This result has been generalized in [11] for a population of P = 2^(βl) individuals by proving that the schemata bound is a monotonically decreasing function of β and that, when β > 4/3, the expected number of processed schemata is a constant and its lower bound is proportional to P^((2−log₂3)/β)/√(log₂P).

After the diffusion of GA-based algorithms, the development of artificial systems based on the concepts of swarm intelligence has been more recently considered [16] and new implementations of innovative metaheuristics exploiting the cooperation paradigm, instead of the competitive one of the GAs, have been proposed. In this framework, the Particle Swarm Optimizer [87] and the Ant Colony Optimizer [54] have been successfully applied to an increasing number of problems and applications. These algorithms artificially model the social interaction and cooperation of swarms of bees or colonies of ants. Accordingly, the activity of each agent is guided not only by the work in progress (namely, its fitness to the environment) but also by taking into account the information coming from the interactions with other agents or present in the local environment (i.e., the stigmergy ‡).

‡ The concept of stigmergy, whose meaning and implications will be better specified in the following, is widespread and quite important in artificial intelligence. As a matter of fact, it is related to the self-organization process [16].

In the framework of optimization, it means that the surrounding architecture (i.e., the environment/solution space) provides a sufficient amount of information and constraints to control the low-level actions (i.e., those of the single agents) such that the general activity of the entire swarm/colony seems to be governed by a global plan. The notion was first introduced by Grassé in 1959 with regard to the termites' behavior [71] and an interesting review on the subject can be found in [155].

3. Evolutionary Algorithms - General Framework

EAs are iterative procedures where a pool of P solutions, F = {f^(p); p = 1, ..., P}, evolves to find the solution of the problem at hand through the optimization of a suitable function Φ(f^(p)), or functions Φ_t(f^(p)), t = 1, ..., T (multi-objective optimization [43]), aimed at measuring the goodness of the trial solution under given constraints. The cost function is the unique link between the optimization problem and the physical one, and great attention should be paid to the definition of Φ in order to obtain representative and reliable solutions at the end of the optimization process. Moreover, the complexity of the cost function, as well as the computational burden of its minimization, strongly influences the choice of one class of optimization algorithms rather than another.

As far as the design of an EA-based optimization technique is concerned, the key points to be addressed are:

• the representation of the solution, f = {f_n; n = 1, ..., N}, coding the set of N parameters f_n, n = 1, ..., N, to be optimized;
• the design of the evolutionary operators, L, for generating the succession (ideally infinite) of trial solutions, f_k, k = 0, ..., ∞, k being the iteration index;
• the evolution procedure, namely the criteria and guidelines used to generate new solutions by means of the evolutionary operators.

At the initialization of the iterative process, the initial set of solutions F_0 = {f^(p)_0; p = 1, ..., P} is usually randomly generated within the search space

    f_n = r f_n^max + (1 − r) f_n^min    (1)

starting from the knowledge of the upper f_n^max and lower f_n^min bounds of the parameter f_n that limit the admissible search space Ω (i.e., f ∈ Ω with f_n ∈ [f_n^min, f_n^max]). Moreover, r ∈ [0, 1] is a uniformly-distributed random variable. Otherwise, the initial population can be defined on the basis of some a-priori information on the problem at hand and its solutions. In such a case, the solutions are statistically generated around a reference trial solution f̄ = {f̄_n; n = 1, ..., N} by considering either a uniform distribution

    f_n = f̄_n (2r − 1)/2    (2)

or a normal distribution

    G(f_n) = [1/(√(2π) ς_n)] exp{−[(f_n − f̄_n)/(√2 ς_n)]²}    (3)

ς_n being a real index controlling the statistical distribution of the parameter values.

Successively (k ≥ 1), a sequence of trial solutions is generated by applying the operators L in a stochastic fashion and according to the adopted evolutionary procedure. The pool of solutions at the (k + 1)-th iteration, F_{k+1}, is given by

    f^(p)_{k+1} = f^(p)_k + s^(p)_{k+1},  p = 1, ..., P    (4)

where s^(p)_{k+1} is defined on the basis of the solutions F_k at the previous iteration through the application of the evolutionary operators

    s^(p)_{k+1} = L{F_k}.    (5)

Accordingly, a trial solution at iteration k + 1 turns out to be expressed as

    f^(p)_{k+1} = f^(p)_0 + Σ_{j=0}^{k} L{F_j}    (6)

where

    L{F_k} = L{f^(p)_k; p = 1, ..., P}
           = L{f^(p)_{k−1} + s^(p)_k; p = 1, ..., P}
           = L{f^(p)_{k−1} + L{F_{k−1}}; p = 1, ..., P}.    (7)

The structure of an EA is then fully described by detailing the following architecture levels, namely the Basic level and the Control level.

3.1. Basic Level

The basic level is responsible for the generation of the succession of trial solutions and it is concerned with the coding of the solutions and the design of the evolutionary operators.

The coding of the problem unknowns, f_n, n = 1, ..., N, through a set of symbols belonging to an alphabet A is a key point of EAs, since it forces the choice of the evolutionary operators as well as the granularity of the optimization and the accuracy of the final solution. The most popular coding strategies, widely used in several practical applications, are the binary coding, A = {0, 1}, and the real coding, A ≡ R. The easy implementation of the former on personal computers and the fact that many problems deal with continuous real-valued variables have contributed to the proliferation of EAs with these coding strategies, as well as to the design of customized real/binary evolutionary operators. On the other hand, the use of a discrete alphabet of S symbols, A = {a_1, ..., a_S}, is common in combinatorial optimization.

Generally speaking, and whatever the alphabet, a coding law Γ(·) is used to map the set of parameters, f = {f_1, ..., f_N}, from the input space (called phenotype space) to its coded representation, c = {c_1, ..., c_M}, in the work space (called genotype space): c = Γ(f), M ≥ N. Although the terms phenotype and genotype come from genetics and were first introduced by Holland [75] in dealing with artificial adaptive systems, their meaning is more general and is not limited to the framework of genetic-based optimization algorithms. As regards the coding function, it can be defined between equal-dimensional spaces (i.e., M = N) or towards a higher dimensionality (i.e., M > N). Once a new set of coded solutions is determined in the genotype space by using the evolutionary operators, a decoding law is applied to map the updated coded parameters into a new trial solution within the phenotype space: f = Γ^{−1}(c).

Concerning the evolutionary operators, they are usually inspired by natural paradigms. Representative examples are those modeled on the concepts of natural selection (GAs and DE), cooperation and stigmergy taken from the intelligence of swarms (e.g., PSO and ACO), and distribution of knowledge [e.g., the Memetic Algorithm (MA)].

3.2. Control Level

The control level is the architectural structure devoted to exploiting the building blocks of the basic level in sampling the solution space to find the global optimum. At this level, the issues related to the setup of the control parameters, the definition of the termination conditions, and the introduction of the problem constraints [e.g., h_i(f^(p)_k) = 0, i = 1, ..., I, or g_j(f^(p)_k) ≤ 0, j = 1, ..., J] or boundary conditions (e.g., f_n ∈ [f_n^min, f_n^max] such that f ∈ Ω) on the solutions are properly addressed. More specifically, the control parameters define the number of agents, or dimension of the population/swarm of trial solutions, P_k, used at each iteration, and the probabilities of applying the evolutionary operators L. As regards the convergence criteria, simpler termination conditions are based on heuristic assumptions and user-defined thresholds on the value of the function to optimize or on a maximum number of iterations, K [69][151][87][54].
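The general framework above, namely the random initialization (1), the iterative update (4)-(5), and a maximum-iteration termination criterion, can be sketched in a few lines of code. The following is a minimal illustrative sketch, not an implementation from the literature: in particular, the function evolve_step is a hypothetical placeholder standing in for the actual evolutionary operators L (selection, crossover, mutation, etc.), here reduced to a bound-respecting random perturbation.

```python
import random

def initialize(P, f_min, f_max):
    # Eq. (1): f_n = r*f_n^max + (1 - r)*f_n^min, r uniform in [0, 1]
    return [[lo + random.random() * (hi - lo)
             for lo, hi in zip(f_min, f_max)]
            for _ in range(P)]

def evolve_step(population, f_min, f_max, sigma=0.1):
    # Placeholder for s^(p)_{k+1} = L{F_k}: a simple random perturbation,
    # clipped to the admissible search space Omega.
    new_pop = []
    for f in population:
        child = [min(max(x + random.gauss(0.0, sigma * (hi - lo)), lo), hi)
                 for x, lo, hi in zip(f, f_min, f_max)]
        new_pop.append(child)
    return new_pop

def run_ea(cost, f_min, f_max, P=20, K=100, seed=0):
    random.seed(seed)
    population = initialize(P, f_min, f_max)
    best = min(population, key=cost)          # Phi_k^opt bookkeeping
    for _ in range(K):                        # terminate after K iterations
        population = evolve_step(population, f_min, f_max)
        candidate = min(population, key=cost)
        if cost(candidate) < cost(best):
            best = candidate
    return best

# Example: minimize the sphere function on [-5, 5]^2
sphere = lambda f: sum(x * x for x in f)
best = run_ea(sphere, [-5.0, -5.0], [5.0, 5.0])
print(best)
```

Any concrete EA of the following sections is obtained by replacing the placeholder perturbation with its own operators L.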

More sophisticated choices take into account the stationariness of the optimal cost function value, Φ^opt_k = min_{p=1,...,P}[Φ(f^(p)_k)], within a fixed window of K_window iterations,

    |K_window Φ^opt_{k−1} − Σ_{i=1}^{K_window} Φ^opt_i| / Φ^opt_k ≤ η    (8)

η being a numerical threshold. Furthermore, conditions quantifying the diversity of the solutions of the population are also used [5].

The boundary conditions are usually related to the physical admissibility of the solution and derive from the a-priori information on the actual solution. Such information allows one to reduce the dimension of the search space and is of fundamental importance for the (fast) convergence towards the global optimum.

3.3. Single vs. Multiple Objective Optimization

In single-objective problems (SOPs), the optimization is aimed at looking for the minimum (or maximum) of a scalar function Φ(f): Ω ⊆ R^N → R subject to some constraints. The solution minimizing the objective function is called the global minimum, f^opt = arg min_f [Φ(f)]. The sufficient condition for a point of the solution space, f ∈ Ω, to be the global minimum on Ω is that

    Φ(f^opt) ≤ Φ(f), ∀f ∈ Ω.    (9)

Dual considerations hold true for maximization problems.

Differently, several problems are mathematically described in terms of a vectorial cost function, Φ(f): Ω ⊆ R^N → R^T,

    Φ(f) = [Φ_1(f), Φ_2(f), ..., Φ_T(f)]    (10)

where each scalar function Φ_t(f) models a different objective or performance criterion, usually conflicting with the others. This is the case of multi-objective problems (MOPs) or vector optimization problems [43][142][143][175] dealing with multi-criteria optimization. Unlike in SOPs, the meaning of optimum changes into the best trade-off solution among the whole set of performance criteria. The notion of Pareto optimality [113] is generally adopted to properly model this concept. A solution f is Pareto-optimal on Ω if no other solution exists that dominates it. A solution f^(a) (strictly) dominates f^(b) if and only if

    Φ_t(f^(a)) ≤ Φ_t(f^(b)), t = 1, ..., T    (11)

and

    ∃t ∈ [1, T]: Φ_t(f^(a)) < Φ_t(f^(b)).    (12)

As a consequence, (a) Pareto-optimal solutions cannot reduce their performance on a criterion Φ_{t′} without increasing their effectiveness in fitting at least another criterion Φ_{t″}; (b) the solution of a MOP is not unique, but all Pareto-optimal solutions are suitable solutions; (c) the non-strictly-dominated solutions on Ω generate the so-called Pareto front. Since no general rules exist for the choice of the best solution in MOPs, the global optimum is chosen either according to the user requirements or by reformulating the MOP into an equivalent SOP whose scalar cost function is the linear combination of the MOP objective functions

    Φ(f) = Σ_{t=1}^{T} w_t Φ_t(f)    (13)

w_t, t = 1, ..., T, being real user-defined coefficients. As far as solution algorithms for MOPs are concerned, although many mathematical programming procedures have been designed for the retrieval of the solutions of the Pareto front [92], EAs seem to be very suitable for MOPs because of their intrinsic/implicit parallelism, which allows them to simultaneously manage a set of different solutions [43] and to find multiple solutions at each iteration. Moreover, EAs can easily address optimization problems whose Pareto fronts are either discontinuous or concave, while the searching capabilities of other optimizers turn out to be more dependent on the nature of the Pareto front.

4. Evolutionary Algorithms - Implementations

In this section, a brief overview of the EAs usually (to the best of the authors' knowledge) applied to the solution of inverse scattering problems is reported. The section is subdivided into three main parts. The first one is devoted to describing genetic-based optimization algorithms. Standard implementations of GAs and DE are presented, pointing out the main differences and common features. Unlike GAs and DE, whose underlying architecture models a competitive and hierarchical framework aimed at promoting the reproduction/evolution of the fittest individuals, decentralized optimization procedures based on the intelligence of swarms, namely the particle swarm optimizer and the ant colony optimizer, are considered in the second part. Finally, some state-of-the-art hybrid algorithms are briefly summarized.
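Before moving to the single algorithms, the multi-objective notions of Sect. 3.3 can be made concrete. The following minimal sketch implements the dominance test (11)-(12), a brute-force extraction of the non-dominated set (Pareto front), and the weighted-sum scalarization (13); the function names are illustrative, not taken from the literature.

```python
def dominates(phi_a, phi_b):
    # Eqs. (11)-(12): a dominates b iff a is no worse on every criterion
    # and strictly better on at least one (minimization assumed).
    no_worse = all(a <= b for a, b in zip(phi_a, phi_b))
    strictly_better = any(a < b for a, b in zip(phi_a, phi_b))
    return no_worse and strictly_better

def pareto_front(costs):
    # Non-dominated subset of a list of cost vectors (the Pareto front).
    return [c for c in costs
            if not any(dominates(other, c) for other in costs if other != c)]

def weighted_sum(phi, weights):
    # Eq. (13): scalar cost as a linear combination of the T objectives.
    return sum(w * p for w, p in zip(weights, phi))

costs = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
print(pareto_front(costs))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

The brute-force front extraction costs O(P²T) comparisons; dedicated multi-objective EAs use faster non-dominated sorting, but the dominance relation itself is exactly the one above.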

4.1. Genetic-based Optimization

4.1.1. Genetic Algorithms - GAs are EAs modeled on the concepts of natural selection and genetic pressure to perform an effective sampling of the solution space. The GA basic principles were first introduced by Holland in 1975 [75] and extended to functional optimization by De Jong [47], with an immediate diffusion to real-world problems because of their effectiveness in dealing with complex functions [69][77][61] when compared to standard deterministic procedures.

The solutions at the k-th iteration belonging to the phenotype space, f^(p)_k, p = 1, ..., P, are called individuals, while their corresponding versions in the genotype space are denoted as chromosomes, c^(p)_k, p = 1, ..., P. At each iteration or generation, the set F_k = {f^(p)_k; p = 1, ..., P} of P agents or individuals composes a population of trial solutions named parents. The set of genetic operators L_GA is applied to F_k to generate a new population F_{k+1}. More specifically, the selection, S, the crossover, C, and the mutation, M, act on the parents to determine the individuals of the new population, called children or offspring.

In their basic version, the GAs follow the workflow in Fig. 1. After the initialization of the population F_0 at the first iteration (k = 0), new populations, F_k, k ≥ 1, are iteratively generated by applying the genetic operators L_GA = {S, C, M} as follows. At each iteration, a mating pool is chosen by applying the selection procedure to F_k

    F_k(S) = S{F_k}.    (14)

Standard implementations consider the roulette-wheel selection or the tournament selection [69]. The selection procedure takes into account the knowledge of how well the current individuals fit the problem at hand. Mathematically, such knowledge is acquired by computing the cost function values of the current population, Φ^(p)_k = Φ(f^(p)_k), p = 1, ..., P. The fittest individuals have a higher probability of being chosen as parents for generating new individuals and for reproducing their chromosomes [75]. Dealing with minimization problems, and according to a fitness-proportional selection mechanism, the probability of a parent being chosen for the mating pool is equal to

    λ^(p)_k(S) = (1/Φ^(p)_k) / Σ_{i=1}^{P} (1/Φ^(i)_k).    (15)

A new population is then generated by applying crossover and mutation according to the values of the probabilistic coefficients defined at the control level

    F_{k+1} = F_k(C) ∪ F_k(M)    (16)

where F_k(C) and F_k(M) indicate the sets of new individuals obtained by crossover

    F_k(C) = C{F_k(S)}    (17)

and mutation

    F_k(M) = M{F_k(S)},    (18)

respectively. The genetic operators are iteratively applied to the mating pool until the population is completed; the parents to whom neither crossover nor mutation is applied are directly reproduced in the next population. To enhance the convergence behavior of GAs, another operator known as elitism is often used. The elitist strategy is applied whenever the condition Φ^opt_{k+1} > Φ^opt_k (in minimization problems) holds true, and it consists in inserting the best individual of the k-th iteration in place of the worst solution of the successive iteration.

To further improve the convergence as well as the global capability of GAs, besides the commonly-used genetic operators, the GAs have also been modified by using enhanced techniques like dominance and diploidy, sharing, or knowledge-based operators [69].

(A) Binary GAs

Genetic Algorithms were first implemented to work with binary or discrete unknowns. The problem unknowns are coded, if not already binary, into strings of l = Σ_{n=1}^{N} Q_n bits, Q_n being the number of bits used to quantize the range of existence of the n-th unknown parameter. Both uniform quantization [90]

    f_n = f_n^min + [(f_n^max − f_n^min)/(2^{Q_n} − 1)] Σ_{i=0}^{Q_n−1} a_i(n) 2^i    (19)

and non-uniform quantization [76]

    f_n = Σ_{i=1}^{Q_n} a_i(n) 2^{1−i} χ_n    (20)

can be used, where a_i(n), i = 1, ..., Q_n, is the set of bits (or alleles) composing the coded parameter, c_n = {a_1(n), ..., a_{Q_n}(n)}, and χ_n = f_n^max/2 is the largest quantization level.

The genetic operators L_GA act on the chromosomes c^(p), p = 1, ..., P, as follows. In the selection phase, a pair of parents c^(p1)_k and c^(p2)_k is chosen. The recombination of the strings of genes is then performed through crossover with probability p_C. Considering the single-point crossover, an integer value i ∈ [1 : l] is randomly chosen and two children are generated whose chromosomes turn out to be equal to c^(p1)_{k+1} = {a^(p1)_{1,k}, ..., a^(p1)_{i,k}, a^(p2)_{i+1,k}, ..., a^(p2)_{l,k}} and c^(p2)_{k+1} = {a^(p2)_{1,k}, ..., a^(p2)_{i,k}, a^(p1)_{i+1,k}, ..., a^(p1)_{l,k}}. Each child contains parts of the genetic structure of both parents. Moreover, an individual is mutated with probability p_M by randomly flipping the value from 1 to 0, or vice versa, of some alleles of the corresponding chromosome, p_MB being the bit mutation probability. Obviously, more complex implementations of the crossover (e.g., two-point crossover, uniform crossover [161]) and of the mutation (e.g., interchange [90]) exist as well.

Since the binary GA (BGA) works with a finite-dimension parameter space, it turns out to be better suited to problems where the unknowns can assume only a finite number of values. Concerning real (continuous) variables, the unknown parameters need to be quantized, with an unavoidable quantization error. This error can be reduced by increasing the gene length l at the cost of a decrease in the convergence speed and an increase in the memory requirements. Moreover, the GA operators acting on a binary-coded representation of the solution do not ensure that the chromosomes of the next generation are admissible solutions. Furthermore, if acceptable solutions have to belong to some domains of the solution space (e.g., when constraints are imposed), monitoring this property under the action of the genetic operators can be laborious and time-consuming (a decoding should be performed), and the convergence may therefore be slowed.
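The binary-GA machinery described above, namely the uniform decoding (19), the roulette-wheel selection (15), the single-point crossover, and the bit mutation, can be sketched as follows. This is a simplified illustrative sketch, not the implementation of any of the referenced works; the function names are illustrative.

```python
import random

def decode(bits, f_min, f_max):
    # Eq. (19), uniform quantization: Q_n bits mapped onto [f_min, f_max].
    q = len(bits)
    value = sum(b * (2 ** i) for i, b in enumerate(bits))
    return f_min + (f_max - f_min) / (2 ** q - 1) * value

def roulette_select(population, costs):
    # Eq. (15): selection probability proportional to 1/Phi (minimization).
    weights = [1.0 / c for c in costs]
    return random.choices(population, weights=weights, k=2)

def single_point_crossover(c1, c2):
    # Swap the tails of the two chromosomes after a random cut point i.
    i = random.randint(1, len(c1) - 1)
    return c1[:i] + c2[i:], c2[:i] + c1[i:]

def mutate(chromosome, p_mb):
    # Flip each allele with bit-mutation probability p_MB.
    return [1 - b if random.random() < p_mb else b for b in chromosome]

random.seed(1)
parents = [[0, 0, 0, 0], [1, 1, 1, 1]]
costs = [4.0, 1.0]                      # the second parent fits better
p1, p2 = roulette_select(parents, costs)
child1, child2 = single_point_crossover(p1, p2)
child1 = mutate(child1, p_mb=0.05)
print(decode([1, 1, 1, 1], 0.0, 3.0))  # all-ones decodes to the upper bound
```

Note that single-point crossover only exchanges tails, so the multiset of alleles of the two children equals that of the two parents; diversity is injected by the mutation alone.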

(B) Real-coded GAs

Binary encoding is not the only way to represent a parameter when applying GAs. In the presence of real parameters, it is more logical to use a floating-point representation [89]. For the real-coded GA (RGA), a gene is represented by the value of the unknown itself

    c^(p)_k = f^(p)_k.    (21)

As a consequence, new genetic operators are designed, although the peculiarities of the original operators should be maintained. Mutation and crossover must remain, respectively, a means to explore the parameter space by randomizing selected solutions and a way to (randomly) mix the good characteristics of the chromosomes.

In RGAs, the crossover is defined as the arithmetical linear combination of two chromosomes. Once two parents f^(p1)_k and f^(p2)_k are selected, the resulting offspring are given by

    f^(p1)_{n,k+1} = r f^(p1)_{n,k} + (1 − r) f^(p2)_{n,k}
    f^(p2)_{n,k+1} = (1 − r) f^(p1)_{n,k} + r f^(p2)_{n,k},  n = 1, ..., N.    (22)

The RGA mutation consists in adding a random value s_n ∈ [f_n^min − f^(p)_{n,k}, f_n^max − f^(p)_{n,k}] to a randomly-selected p-th chromosome

    f^(p)_{n,k+1} = f^(p)_{n,k} + s_n.    (23)

Whenever the new trial solutions are not physical and do not belong to the solution space (f ∉ Ω), they are modified by exploiting the a-priori knowledge of the boundaries of the solution space as follows

    f^(p)_{n,k+1} = f_n^min if f^(p)_{n,k} < f_n^min
    f^(p)_{n,k+1} = f_n^max if f^(p)_{n,k} > f_n^max.    (24)

The RGA gained increasing popularity because it is easy to implement, computationally efficient when dealing with a small number of real-valued unknowns, and suitable for fine-tuning the selective pressure [4].

(C) Hybrid-coded GAs

In some problems, the a-priori knowledge on the solution allows a parametrization of a subset of the unknowns through a small set of discrete descriptors

    f_d = H{d_j; j = 1, ..., J},  d ∈ [1, L]    (25)

where {d_j; j = 1, ..., J} is the sub-set of discrete equivalent parameters, with J < L < N. In such a case, a suitable encoding procedure must be defined in order to provide a one-to-one mapping between the phenotype space and the genotype space while, at the same time, exploiting the features of the unknown parameter set. A hybrid-coded GA (HGA) is described in [22] to deal with microwave imaging problems. A set of integer-valued equivalent parameters, {d_j; j = 1, ..., J}, is binary coded and a floating-point representation is used for the remaining real unknowns, {f_n; n = L + 1, ..., N}. As far as the genetic operators are concerned, they have been properly modified to maintain the structure of the hybrid chromosomes. During mutation, if the gene to be perturbed is binary-coded, it is changed from 0 to 1 or vice versa as in the BGA. Otherwise, the mutation (23) defined for the RGA is considered.

Concerning the efficiency and effectiveness of HGA-based strategies, the exploitation of some a-priori information to define a suitable parametrization function H, together with the choice of a reduced set, instead of the whole number, of representative parameters, is of fundamental importance to reduce the dimension of the search space. Moreover, the parametrization method can be profitably adopted to prevent the generation of solutions that are not physically admissible.

4.1.2. Differential Evolution - Unlike the GAs, the Differential Evolution algorithm was originally proposed by Storn and Price [150] for global optimization over continuous spaces. It was mainly aimed at simplifying the evolution process of the GAs as well as at enhancing the convergence rate [151][121]. The iterative evolution of the DE is similar to that of the GAs. Each current population is replaced by better individuals obtained by applying the DE operators, L_DE, still based on genetics but now executed in a different sequence: the mutation, M, the crossover, C, and the selection, S.

The DE iteratively evolves as shown in Fig. 2. During the mutation process, an intermediate solution is generated in correspondence with each individual f^(p)_k as follows

kas follows

t(p)k+1 = f (p1)

k+ ε

(f (p2)

k− f (p3)

k

), p = 1, ..., P (26)where p, p1, p2, p3 ∈ [1, P ] (p 6= p1 6= p2 6= p3) are the indexes of dierent individualsrandomly hosen in Fk. The agents f (p1)

k, f (p2)

k, and f (p3)

kare alled donor ve torsor se ondary parents, and 0 < ε ≤ 2 is a real and onstant value that ontrols theampli ation of the dierential variation (f (p2)

k− f (p3)

k

). The rossover is then appliedbetween the intermediate solution, t(p)k+1, alled mutant ve tor and the primary parent,f (p)

k, a ording to the following strategy

u(p)k+1 =

t(p)k+1 if (r < pC)

f (p)

kotherwise

. (27)
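As a minimal illustration, one DE/rand/1/bin generation — the mutation (26), a per-gene binomial crossover implementing (27), and the greedy selection described next — can be sketched in Python. The sphere cost function and all parameter values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def de_generation(F, cost, eps=0.6, p_C=0.7):
    """One DE/rand/1/bin generation: mutation (26), crossover (27), greedy selection."""
    P, N = F.shape
    F_next = F.copy()
    for p in range(P):
        # pick three distinct donors (secondary parents), all different from p
        p1, p2, p3 = rng.choice([i for i in range(P) if i != p], size=3, replace=False)
        t = F[p1] + eps * (F[p2] - F[p3])        # mutant vector, Eq. (26)
        mask = rng.random(N) < p_C               # per-gene draw of r, the usual 'bin' reading of (27)
        mask[rng.integers(N)] = True             # ensure at least one gene comes from the mutant
        u = np.where(mask, t, F[p])
        if cost(u) <= cost(F[p]):                # greedy selection: child replaces parent only if no worse
            F_next[p] = u
    return F_next

# toy run on the sphere function (an assumed benchmark, not from the paper)
cost = lambda f: float(np.sum(f**2))
F = rng.uniform(-5, 5, size=(20, 4))             # P = 20 agents, N = 4 unknowns
for _ in range(100):
    F = de_generation(F, cost)
best = min(cost(f) for f in F)
```

Because the greedy selection never accepts a worse trial vector, the best cost in the population is non-increasing over the generations, matching the monotonic decrease of Φ_k^{opt} noted below.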

Finally, the selection takes place and f_{k+1}^{(p)} is chosen according to a greedy criterion by comparing Φ(f_k^{(p)}) with Φ(u_{k+1}^{(p)}). In a minimization problem, f_{k+1}^{(p)} = u_{k+1}^{(p)} when Φ(u_{k+1}^{(p)}) ≤ Φ(f_k^{(p)}), while f_{k+1}^{(p)} = f_k^{(p)} otherwise.

As regards the control parameters of the DE, they are the crossover probability p_C and the amplification coefficient ε, to be carefully chosen to avoid a premature convergence to sub-optimal solutions or a slow convergence rate [151].

As compared to GAs, the main differences are (a) the order of execution of the genetic operators (Fig. 2) and (b) the competition between parents and children during the selection phase, which is absent in GAs since the offspring are all accepted while the parents are all discarded. Unlike in GAs, the fittest parents have a higher probability of generating children with better fitness. Moreover, the risk that the average fitness of the population gets worse is greater in GAs, since crossover and mutation are performed after selection. Furthermore, since the secondary parents are chosen from the population with equal probability (and not through a fitness-proportional selection), the DE usually increases its global searching capabilities. Finally, the cost function of the best individual, Φ_k^{opt}, k = 1, ..., K, monotonically decreases in the DE because of the DE implementation of the selection mechanism, without the need of a particular elitist strategy.

Besides the basic version of the DE, many different versions of the algorithm exist [1][121][33]. To identify them, the notation DE/x/y/z is generally adopted [151]. More in detail,

• x indicates whether the solution to be mutated is randomly chosen (x = rand) or set to the best individual within the population (x = best);
• y is the number of difference vectors used in the differential variation;
• z indicates the crossover scheme.

According to such a notation, the version of the DE presented above is identified as DE/rand/1/bin. For completeness, let us notice that the mutation operator of the version DE/best/2/bin is defined as

t_{k+1}^{(p)} = f_k^{opt} + \varepsilon \left( f_k^{(p_1)} + f_k^{(p_2)} - f_k^{(p_3)} - f_k^{(p_4)} \right). \qquad (28)

4.2. Optimization by Swarm Intelligence

4.2.1. Particle Swarm Optimizer - The particle swarm optimizer is a robust stochastic search procedure, suitable for the optimization of continuous unknowns, inspired by the

social behavior of insect swarms, schools of fishes, and flocks of birds. In the PSO, an agent b_k^{(p)}, called particle, is characterized by a position f_k^{(p)} in the solution space and a velocity v_k^{(p)} that models the capability of the p-th particle to fly from the current position to a successive position f_{k+1}^{(p)}. The whole set of particles b_k^{(p)}, p = 1, ..., P, constitutes the swarm F_k. In its classical implementation [85], the particle update equations are

f_{k+1}^{(p)} = f_k^{(p)} + v_{k+1}^{(p)} \qquad (29)

and

v_{n,k+1}^{(p)} = \omega v_{n,k}^{(p)} + C_1 r_1 \left( p_{n,k}^{(p)} - f_{n,k}^{(p)} \right) + C_2 r_2 \left( g_{n,k} - f_{n,k}^{(p)} \right) \qquad (30)

whose physical interpretation, derived from Newton's laws, has been given in [107]. In (30), ω, C_1, and C_2 are control parameters known as the inertial weight and the cognitive and social acceleration terms, respectively [87]. Moreover, r_1 and r_2 are two random variables having uniform distribution in [0, 1]. With reference to a minimization problem, the values

p_k^{(p)} = \arg\min_{i=1,...,k} \left[ \Phi\left( f_i^{(p)} \right) \right] \quad \text{and} \quad g_k = \arg\min_{i=1,...,k; \, p=1,...,P} \left[ \Phi\left( f_i^{(p)} \right) \right]

are the so-called personal and global best solutions, namely the best positions found by the p-th particle and by the whole swarm until iteration k, respectively.

As far as the iterative optimization is concerned (Fig. 3), starting from guess values of f_0^{(p)} and v_0^{(p)}, p = 1, ..., P, the positions and velocities of the particles are updated according to Eqs. (29) and (30).

The main advantages of the PSO, if compared to either the GAs or the DE, can be summarized as follows:

• the simplicity of the algorithm implementation and the use of a single operator (i.e., the velocity update) instead of three genetic operators (i.e., the crossover, the mutation, and the selection);
• the easy manipulation of the calibration parameters [138] (i.e., the swarm size, the inertial weight, and the acceleration coefficients) which control the velocity update operator. Even if the number of control parameters (i.e., the population size, the crossover rate, the mutation rate) is similar, it is certainly easier to set the PSO indexes than to evaluate the optimal setting among the various operators and the several options of implementation;
• the ability to prevent stagnation by controlling the inertial weight and the acceleration coefficients so as to sample new regions of the solution space. In standard GAs and DE, stagnation occurs when the trial solutions assume the same genetic code, close to that of the fittest individual. In such a case, the crossover does not contribute to the evolution and only a lucky mutation could locate a new individual in another interesting region of the solution space;
• a smaller number of agents, which turns into a reduced computational cost of the overall optimization and enables a reasonable compromise between computational burden and efficiency of the iterative process.

Although the PSO is intrinsically an optimizer for continuous spaces, a binary version of the algorithm exists [86], as well. In order to deal with discrete spaces [86], the concepts of trajectory, position, and velocity have been properly redefined in terms of changes of probabilities. More specifically, each dimension of the solution space is normalized to assume values between 0 and 1. In such a space, the velocity is constrained to the same range of variation and its value gives the probability threshold for having a binary allele with zero or one value. The new allele is then computed as follows

f_{n,k+1}^{(p)} = \begin{cases} 1 & \text{if } r < \Lambda\left( v_{n,k+1}^{(p)} \right) \\ 0 & \text{otherwise} \end{cases} \qquad (31)

by defining a suitable transformation function Λ(v_{n,k+1}^{(p)}), usually consisting of a sigmoid

\Lambda\left( v_{n,k+1}^{(p)} \right) = \frac{1}{1 + \exp\left( -v_{n,k+1}^{(p)} \right)}. \qquad (32)

4.2.2. Ant Colony Optimizer - The ACO is a population-based global optimization algorithm inspired by the foraging behavior of ant colonies looking for food sources [52]. The ants move in the space surrounding the nest looking for the best (shortest) path between the food sources and the nest. Like the PSO, the ACO is based on the concepts of swarm intelligence and cooperation, but it also exploits the paradigms of stigmergy and self-organization [64]. In this sense, the activity of each agent f_k^{(p)}, or ant, in the colony F_k is guided not only by the work in progress (the goal of the optimization), but also by the information available in the local environment. To modify the local environment, each ant deposits a chemical substance called pheromone while moving within the solution space along a path. The amount of pheromone on a path quantifies its degree of optimality, but it decays with time (evaporation mechanism). These

mechanisms allow one, on the one hand, to avoid poor food sources and, on the other, to efficiently sample the whole solution space.

The first implementation of the ACO [52] was originally developed for discrete optimization problems and it was applied to solve complex combinatorial problems [53][95]. In its basic version (Fig. 4), concerned with the search for a path within a discrete space (e.g., in the Traveling Salesman Problem [53]), each ant codes a vector f_k^{(p)} representative of a set of discrete symbols or locations, f_k^{(p)} = {a_1, ..., a_N}. Let ψ_k^{ij} be the amount of pheromone on the edge between the locations a_i and a_j, i, j ∈ [1, N], with i ≠ j. Every vector f_k^{(p)}, p = 1, ..., P, is randomly initialized at the first iteration (k = 0) and a uniform level of pheromone is assigned to each path within the search space, ψ_0^{ij} = const. Successively, the pheromone level of each edge of the path covered by the p-th ant is updated according to

\psi_{k+1}^{ij} = \psi_k^{ij} + \sum_{p=1}^{P} \delta\left( \psi_k^{ij}, f_k^{(p)} \right) \frac{H}{\Phi\left( f_k^{(p)} \right)}, \quad \forall \psi_k^{ij} \qquad (33)

where δ(ψ_k^{ij}, f_k^{(p)}) = 1 when ψ_k^{ij} ∈ f_k^{(p)} (i.e., the path crossed by the solution f_k^{(p)} contains the branch individuated by ψ_k^{ij}) and δ(ψ_k^{ij}, f_k^{(p)}) = 0 otherwise. Moreover, H is a real positive constant. The evaporation procedure then takes place to reduce the amount of pheromone on each path of the graph

\psi_{k+1}^{ij} = (1 - \rho)\, \psi_{k+1}^{ij}, \quad \forall \psi_k^{ij} \qquad (34)

ρ ∈ (0, 1] being a parameter aimed at controlling the evaporation rate. At each iteration, the probability of moving towards a new position a_j within the graph, leaving the position a_i, is given by

p_{\psi}\left( \psi_{k+1}^{ij} \right) = \frac{\psi_{k+1}^{ij}}{\sum_j \Xi\left( \psi_{k+1}^{ij} \right)} \qquad (35)

where Ξ(ψ_{k+1}^{ij}) = ψ_{k+1}^{ij} if ψ_{k+1}^{ij} corresponds to a physically-admissible path and Ξ(ψ_{k+1}^{ij}) = 0 otherwise.

The ACO has also been extended to the optimization in continuous spaces [12][148][149]. The Continuous ACO (CACO) considers a solution archive where Y > P solutions are stored and used to generate new solutions. The value Y depends upon the problem complexity, and each solution of the archive is identified by its fitness to the problem at hand, Φ_k^{(p)} = Φ(f_k^{(p)}), p = 1, ..., Y. At each iteration, a new set of P solutions is probabilistically generated and added to the solution archive. The Y + P

solutions are then ranked from the best (p = 1) to the worst (p = Y) according to the corresponding fitness. Successively, the worst P solutions are removed from the archive.

The new solutions are obtained by sampling a suitable probability density function, Θ_k. Usually, the probability density function is a weighted sum of N-dimensional Gaussian functions

\Theta_k(f_n) = \sum_{p=1}^{Y} w_k^{(p)} \frac{1}{\varsigma_{n,k}^{(p)} \sqrt{2\pi}} \exp\left[ -\frac{\left( f_n - \bar{f}_n^{(p)} \right)^2}{2 \left( \varsigma_{n,k}^{(p)} \right)^2} \right], \quad n = 1, ..., N \qquad (36)

where the mean value \bar{f}_n^{(p)} and the standard deviation \varsigma_{n,k}^{(p)} are given by \bar{f}_n^{(p)} = f_{n,k}^{(p)}, n = 1, ..., N, and \varsigma_{n,k}^{(p)} = \rho \sum_{i=1}^{Y} \frac{\left| f_{n,k}^{(i)} - f_{n,k}^{(p)} \right|}{Y-1}. Moreover, the weights of the Gaussian functions are defined as

w_k^{(p)} = \frac{1}{\epsilon Y \sqrt{2\pi}} \exp\left[ -\frac{(p-1)^2}{2 (\epsilon Y)^2} \right], \quad p = 1, ..., Y \qquad (37)

ǫ being a parameter modeling a kind of convergence pressure mechanism. When ǫ is small, the best-ranked solutions are preferred, while when it is large, the probability becomes more uniform.

4.3. Hybrid Optimization

Evolutionary algorithms are known as robust optimization techniques able to effectively explore wide parameter spaces. However, EAs generally require a high number of cost function evaluations to converge, thus offering reduced performance in terms of computational efficiency when compared to deterministic optimization techniques. When the evaluation of the cost function is computationally fast, EAs are still very good candidates for a successful solution of the problem at hand, especially when local minima are present. Otherwise, when the evaluation of the cost function is cumbersome, different approaches have been proposed to make EA-based procedures more competitive while maintaining their positive features. On the one hand, suitable encodings (as shown in Sect. 4.1.1) allow a reduction of the dimension of the solution space. On the other hand, to save computational resources and to increase the convergence rate, an effective strategy is hybridization [14]. As a matter of fact, gradient-based minimization techniques [119] usually converge very fast and yield good results when dealing with convex functionals. However, they can be trapped in local minima in highly nonlinear problems. In order to exploit complementary advantages, EA-based procedures and deterministic methods can be coupled according to the following strategies:

• the EA/CG-based approach;
• the Memetic Algorithm (MA).

4.3.1. EA/CG-based approach - The simplest and most general way to realize a hybridized version of an EA is to consider a two-stage optimization. Firstly, the optimization is performed with an EA and subsequently the algorithm switches to a deterministic procedure, or vice versa.

In [166], a micro-GA (µGA) has been coupled with a deterministic method by proposing a communication criterion for stopping the stochastic algorithm and invoking the deterministic optimizer. Moreover, a hybrid optimization method combining the GA and the Levenberg-Marquardt algorithm (LMA) has been proposed in [169]. The LMA is used to localize a minimum and the minimization procedure switches to the GA in order to climb out of local minima. Another hybrid procedure, based on an RGA strategy, has been presented in [24], where the global search approach is used to locate the attraction basin of the global optimum while the CG approach is used to reach the global optimum within the same attraction basin. If the convergence threshold is not reached during the deterministic minimization, the RGA restarts with a population whose individuals are randomly generated around the current optimal solution.

The main drawback of these approaches is the need to evaluate the quality of a minimum and/or the closeness of the trial solution to the attraction basin of the global minimum. This requires either an accurate knowledge of the cost function, generally not available, or a heuristic definition of the degree of accuracy of each trial solution. To overcome these drawbacks, a more sophisticated approach has been presented in [97], which considers a closer coupling between the stochastic and the deterministic optimizers. The coupling is obtained by means of a step-by-step optimization (SbS-GA) where only the best individual of each population undergoes a deterministic optimization for a fixed and limited number of intermediate iterations or until a stationarity condition holds true. Successively, standard genetic operators are executed on the whole population of solutions.
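A minimal sketch of the two-stage idea can be given in Python: a stochastic global stage hands its best individual to a deterministic finite-difference gradient descent. The toy convex cost, the stage structure, and all names and parameter values are illustrative assumptions, not the specific schemes of [166], [169], [24], or [97]:

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(f):
    # toy convex functional with minimum at f = (1, 1, 1); an assumption, not from the paper
    return float(np.sum((f - 1.0) ** 2))

def global_stage(P=30, N=3, iters=50):
    """Crude stochastic global search: keep the best of random perturbations."""
    F = rng.uniform(-4, 4, size=(P, N))
    best = min(F, key=cost)
    for _ in range(iters):
        F = best + rng.normal(0.0, 0.5, size=(P, N))  # explore around the incumbent
        cand = min(F, key=cost)
        if cost(cand) < cost(best):
            best = cand
    return best

def local_stage(f, step=0.1, iters=200, h=1e-6):
    """Deterministic refinement: central finite-difference gradient descent."""
    f = f.copy()
    for _ in range(iters):
        g = np.array([(cost(f + h * e) - cost(f - h * e)) / (2 * h)
                      for e in np.eye(f.size)])
        f -= step * g
    return f

f_star = local_stage(global_stage())   # stage 1: stochastic search; stage 2: local descent
```

The switch here is a fixed iteration budget; the cited works replace it with communication or convergence criteria, and [97] interleaves the two stages at every generation rather than running them once each.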

4.3.2. Memetic Algorithm - Like the step-by-step hybridization, Memetic Algorithms have been introduced to define a closer coupling between the stochastic and the deterministic optimizers for enhancing the computational efficiency of EAs. Unlike the SbS-GA, a stronger coupling between the stochastic approach and the deterministic technique is obtained by introducing a genetic operator which performs a gradient-like minimization (e.g., the hill-climbing operator [158], the G-Bit improvement [69]).

The MA is a hierarchical algorithm based on the concept of the meme [46]. A meme is a unit of information transmitted when people exchange ideas. Each idea is a trial solution f_k^{(p)} composed of a set of memes. Since ideas are processed before being propagated, each individual can be assumed to be a local minimum/maximum of the cost function Φ. From an algorithmic point of view [109][103], the processing of an idea is simulated by means of a deterministic procedure, and its propagation and/or evolution with a stochastic GA-based, or more generally EA-based, technique according to the flowchart shown in Fig. 5. In more detail, after the initialization (k = 0) each individual f_k^{(p)}, p = 1, ..., P, is considered as the initial point for a local optimization procedure in order to obtain a population of local optima, F_{k+1} = { f_{k+1}^{(p)}; p = 1, ..., P }. Afterwards, an iterative loop is performed where global and local search algorithms are iteratively applied to the whole population until a convergence criterion is satisfied. Furthermore, in order to assure a fast convergence and to preserve the characteristics of the best individual, the elitist strategy is generally adopted.

As compared to other EAs, the MAs exhibit some interesting features. Since the population is only composed of local optima, the individuals move from one minimum to another. Therefore, a limited number of iterations is usually required to converge, even with a small population. On the contrary, although very effective in terms of convergence rate, the main drawbacks of MAs are the unacceptable computational load when the number of unknowns is large and the fact that they need a local minimization, where the knowledge of the gradient of the functional is often a must.

5. Evolutionary Algorithms - Theoretical Background

5.1. Convergence Analysis and Control-Parameter Setting

In this section, a theoretical analysis pointing out some interesting issues related to the convergence behavior of EAs is discussed and properly referenced. Some hints on the influence of the various operators on the algorithm behavior as well as some indications

on the values of the control parameters are given.

The study mainly focuses on GAs and PSO as benchmark algorithms based on different evolutionary concepts: the survival of the fittest and the exploitation of swarm intelligence, respectively.

5.1.1. Genetic Algorithms - Several theoretical analyses of the GA behavior are based on the concept of schemata, originally introduced by Holland [75] to identify any partial string pattern, among those available in the search space, that can be processed by the GA. In [75], a classical binary GA with reproduction, roulette-wheel selection, single-point crossover, and mutation was considered to point out the law for either the growth or the decay, during the optimization process, of some string patterns. In order to illustrate the Schemata Theorem, let us consider the following example.

With reference to a population with l = 5 bit chromosomes, the schema ∗0∗11 is characterized by fixed alleles (i.e., the second, the fourth, and the fifth) and some don't care positions (i.e., the first and the third). All possible schemata within a population are expressed in terms of a three-letter alphabet A+ = {0, 1, ∗}. The total number of admissible schemata is equal to (2+1)^l, while the number of schemata within a population of P individuals can range from 2^l up to P × 2^l since each allele of a chromosome can assume the actual value 0/1 or the don't care symbol. Each schema is identified by two quantities: the order, o(·), and the length, δ(·). The schema order is equal to the number of fixed alleles within the schema. The length of a schema is the distance between the first and the last position with fixed alleles. For example, the two schemata s^(t) = 01∗∗1 and s^(h) = ∗∗01∗ have order and length equal to o(s^(t)) = 3, o(s^(h)) = 2 and δ(s^(t)) = 4, δ(s^(h)) = 1, respectively.

The effects of the genetic operators L_GA on the survival of a schema during the evolution of the population have been carefully analyzed in [75] and [69]. Summarizing, the number m(s, k) of occurrences of a schema s within the population at the k-th iteration increases/decreases proportionally to

m(s, k) \cong m(s, k-1) \frac{\Phi_{av}(s)}{\frac{1}{P}\sum_{p=1}^{P}\Phi_{k-1}^{(p)}} \left[ 1 - \frac{\delta(s)}{l-1} p_C - o(s)\, p_M \right] \qquad (38)

where Φ_av(s) is the average fitness of the individuals of the population containing the schema s and the condition p_M ≪ 1 is assumed.

It is worth pointing out that, when crossover and mutation are not used and the individuals directly reproduce throughout the generations only on the basis of the proportional selection, the effect of replication leads to an exponential growth/decay of schemata having an average fitness above/below the average fitness of the whole population. By supposing

\Phi_{av}(s) = (1 + \iota) \frac{1}{P}\sum_{p=1}^{P}\Phi_{k-1}^{(p)} \qquad (39)

and the percentage ι constant during the optimization process, it turns out that

m(s, k) \cong m(s, 0)\,(1 + \iota)^k \qquad (40)

m(s, 0) being the occurrence of the schema s within the initial population. Equation (40) points out the exponential effect of the genetic pressure on the convergence of the GA [44]. On the other hand, a schema survives crossover and mutation when the following condition holds true

\frac{\delta(s)}{l-1} p_C + o(s)\, p_M < 1 \qquad (41)

where (1 - \frac{\delta(s)}{l-1} p_C) and (1 - p_M)^{o(s)} \cong 1 - o(s)\, p_M (since p_M ≪ 1) are the crossover and mutation survival probabilities, respectively. To satisfy (41), the values usually adopted for the crossover and mutation probabilities are p_C ∈ [0.5, 0.9] and p_M ∈ [0.001, 0.1].

The results of such an analysis define the so-called Schemata Theorem or Fundamental Theorem of GAs, whose main outcome is that short, low-order, above-average schemata receive exponentially increasing trials in subsequent generations. In [69], Goldberg also formulated the Building Blocks Hypothesis by stating that the GA solution converges to the portion of the search space coded by the building blocks composed of high-fit, short, and low-order schemata, which have a low probability of being disrupted by crossover and mutation.

Further studies have been successively carried out to give some indications on the convergence of the GAs to the optimal solution Φ^best. In [139], a probabilistic analysis of the convergence of a canonical GA is presented. The algorithm is described through a Markov chain model and the analysis is aimed at assessing the convergence condition on the sequence of trial solutions

\lim_{k \to \infty} \Pr\left\{ \Phi_k^{opt} = \Phi^{best} \right\} = 1. \qquad (42)

By considering a proportional selection mechanism and without elitism, it has been demonstrated that the canonical GA never converges to the global optimum. As a matter of fact, it has been proved that there is a non-null probability that, whatever the initial

distribution of the population F_0, the algorithm finds a solution with fitness value Φ_k^{opt} < Φ^{best} as k → ∞. In this sense, it turns out that the Schemata Theorem [75] does not imply the convergence to the global optimum in static optimization problems. However, it has also been shown [139] that elitism can assure global convergence, since the transition time between any two states/solutions of the solution space is finite and the global solution can be found at least once in an unlimited run of the algorithm.

The theoretical analysis carried out by Qi and Palmieri has considered, first separately and then in a unified framework, the effects of the proportional selection, the mutation [122], and the crossover [123] under the assumption of an infinite population size (i.e., F = { f^{(p)}; p = 1, ..., P }; P → ∞) over continuous spaces (i.e., f ∈ R). In this sense, the whole solution space is sampled by the agents and, thanks to this hypothesis, the distribution of the population can be modeled with a sequence of continuous probability density functions Θ(F_k), k = 1, ..., ∞, instead of discrete distributions. As far as the genetic operators are concerned, it has been shown in [122] that the selection tends to concentrate the individuals around the fittest solution (i.e., the global optimum) according to a genetic pressure proportional to the value of both the density function Θ(F_k) and the fitness function Φ(F_k). Such a mechanism also justifies the effectiveness of GAs in dealing with multimodal functionals characterized by multiple global optima. On the opposite side, the mutation spreads the distribution obtained after selection proportionally to the convolution between the mutation density and the distribution of the population [122].

Because of the infinite dimension of the population, the selection operator by itself guarantees the convergence to the global solution without the need of mutation. However, the use of mutation is mandatory in real optimization problems, when finite populations are used, since it enables the exploration of new regions of the solution space.

As regards crossover (either single-point, multi-point, or uniform [48]), the analysis in [123] shows that it is able to find new solutions in a smarter way as compared to mutation, thanks to a good trade-off between exploration and exploitation capabilities. As a matter of fact, the crossover is able to diversify the population. Its iterated application reduces the correlations among the solution parameters while keeping the marginal distribution of each unknown unaltered and equal to that of the initial population (i.e., the epistasis theorem [123]).

Still concerned with the convergence issue, a Markov chain analysis based on the Schemata Theorem has been sketched in [57][153], where an infinite number of iterations is considered. Moreover, a convergence analysis with an infinite population size is also discussed in [159]. Furthermore, the effects of crossover have been thoroughly analyzed in [162].

Under genetic drift conditions§ and in the case of a simple GA, whether mutation is applied or not, the results in [110], numerically assessed in [3] through computer simulations, indicate that the mean convergence time grows proportionally to the population size. Concerning the mutation, an optimal value has been identified that allows all the solutions to be explored with the same probability. Other empirical results about genetic drift for different versions of GAs can be found in [67][78].

Many other studies on the GA convergence and properties can be found in the state-of-the-art literature and are currently under development. The interested reader is referred to the specialized literature for a more complete discussion of these issues.

5.1.2. Differential Evolution - Since the main objective of the DE is to improve the convergence rate of GAs, the main theoretical efforts have been addressed towards the optimal choice of the parameters controlling the evolution.
This fact is confirmed by the several works on this topic published in the reference literature (see [63][94][171][168][121][133] and references therein).

Since the basic idea of DE is to adapt the search step inherently along the evolutionary process, so as to have a suitable trade-off between exploitation and exploration, and since the scale of the perturbation vectors is roughly proportional to the extent of the population diversity, the control parameters should allow large perturbations at the beginning of the evolution process, when the parental individuals are far away from each other. When the evolutionary process proceeds to its final stage, the population must be forced into a small region around the attraction basin of the global optimum through small perturbations. As a result, an adaptive search step benefits the evolution algorithm by performing a global search with a large perturbation step at the beginning of the optimization and by refining the population with a small search step at the end.

In such a framework, although [151] states that the strategy parameters of the DE are not difficult to choose, there are no general rules for choosing the DE control coefficients. Moreover, even though there are only three parameters to set, the application of DE to several test functions, as in [63], showed that finding the global optimum is very sensitive to the choice of the control variables: P (population size), ε (amplification factor), and p_C (crossover probability). Notwithstanding, the following rules of thumb have been given in [63]:

• a population size between P_min = 3 × N and P_max = 8 × N;
• a good initial choice for the amplification factor is ε = 0.6, to be increased if one suspects that this setting causes the trial solutions to be trapped in a local optimum. As a matter of fact, a larger ε increases the probability of escaping a local optimum, although for ε > 1 the convergence rate decreases, since it is more difficult to reach the global solution when the perturbation is longer than the distance between two individuals;
• a large p_C often speeds up convergence but, above a certain threshold value, the population may converge prematurely and stagnate. A good choice, whatever the cost function at hand, seems to be a value between 0.3 and 0.9.

Besides careful analyses of the sensitivity of the DE optimization to the values of the control parameters, innovative operators have also been introduced by exploiting geometrical relationships to further speed up the convergence (e.g., the trigonometric mutation [60]).

§ The random drift of the gene frequency is caused by the probabilistic generation of successive populations. It models the emergence of genes with particular values.

5.1.3. Particle Swarm Optimization - In [42], Clerc and Kennedy examined in detail the behavior of the PSO and defined some conditions on the PSO parameters to avoid a divergent search. With reference to a simplified one-dimensional (i.e., N = 1) and deterministic (C_1 r_1 = C_1 and C_2 r_2 = C_2) model, described by the following updating equations

v_{k+1} = v_k + \varphi \left( t - f_k \right)
f_{k+1} = f_k + v_{k+1} \qquad (43)

where \varphi = C_1 + C_2 and t = \frac{C_1 p + C_2 g}{C_1 + C_2} is the attractor combining the cognitive and the social terms, and by supposing the personal best and global best positions fixed (i.e., p_k = p and g_k = g), it has been shown that, when φ ≥ 4, the particles diverge as a function of k, while, when 0 < φ < 4, the trajectories oscillate around the position t [112] with cyclic or quasi-cyclic behavior depending on φ. These conclusions have been drawn from the analysis of (43) re-arranged in matrix form as F_{k+1} = M F_k, where F_k = [v_k, z_k]^T, z_k = (t - f_k), and the dynamic matrix is given by

M = \begin{bmatrix} 1 & \varphi \\ -1 & 1 - \varphi \end{bmatrix}. \qquad (44)
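The φ = 4 threshold can be checked numerically from the eigenvalues of the dynamic matrix; this short sketch reads the garbled entries of (44) as φ, which is the substitution consistent with (43):

```python
import numpy as np

def eig_mags(phi):
    # dynamic matrix of the simplified deterministic PSO model, Eq. (44)
    M = np.array([[1.0, phi], [-1.0, 1.0 - phi]])
    return np.abs(np.linalg.eigvals(M))

osc = eig_mags(2.5)   # 0 < phi < 4: complex eigenvalue pair on the unit circle
div = eig_mags(4.5)   # phi > 4: one real eigenvalue beyond the unit circle
```

Since det M = 1 and tr M = 2 − φ, for 0 < φ < 4 the eigenvalues are a complex-conjugate pair of unit magnitude (sustained oscillation around t), while for φ > 4 they are real with one magnitude exceeding unity (divergence).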

vk+1 = χ [vk + C1r1 (p− fk) + C2r2 (g − fk)]

fk+1 = fk + vk+1 (45)where χ = 2

|2−ǫ−√ǫ2−4ǫ| = 0.7298 with ϕ = 2C1 = 2C2 = 4.1 guarantees the stability ofthe optimization pro ess.Other variants of the PSO exist and a areful analysis about the onvergen e takinginto a ount the randomness of the algorithm has been reported in [120.Con erning the optimal hoi e of the ontrol oe ients, it is still worthwhile topoint out that sin e higher values of ω produ e relatively straight parti le traje tories,resulting in a good global sear h hara teristi , while small values of ω en ourage alo al sear hing, some resear hers have gained advantage from a de rease [56[147 or arandom variation of ω during the iterations [58. As regards to the oe ients C1 and

C2, they are usually set to 2.0 as re ommended by some papers in the PSO literature[85[87[146 and found through experimentation in several optimization elds [15.5.1.4. Ant Colony Optimization - A rst proof on the onvergen e of an ACO-basedalgorithm, named graph-based ant system (GBAS), was reported in [72[73 where ithas been shown that the global solution an be found at least on e throughout theoptimization pro ess. Although reliable, su h a proof does not hold true whatever theproblem and it is limited to the GBAS implementation whi h usually diers from theACO version used in inverse s attering.

In [104], similarities between the pheromone update mechanism and the stochastic gradient descent have been pointed out to show that a class of ACO converges to a local optimum with probability equal to 1. Along the same line of reasoning of [72], Stutzle and Dorigo proved in [152][54] that, for a class of ACO-based algorithms with a lower bound ψ_min on the pheromone level, the success expectancy of the optimization is Pr{Φ_k^{opt} = Φ^{best}} ≥ 1 − η, with η as small as desired and close to zero for an unlimited iterative process (i.e., Pr{Φ_k^{opt} = Φ^{best}} → 1 when k → ∞). Moreover, the pheromone deposited on the optimal path becomes higher than that left on the others after only a few iterations. A similar convergence proof has also been yielded in [172] by exploiting a simulated annealing concept and introducing an adaptive pheromone deposition function.

Besides the theoretical works on the convergence issues, some efforts have also been devoted to the application of the ACO to optimization problems not suited to the structure of the algorithm itself. To describe this behavior, Blum and Dorigo [13] used the term deception, previously introduced by Goldberg [68] to identify problems unfit for the GA concepts. The arising conclusions highlighted that in some cases the ACO not only reaches a sub-optimal (local) solution (i.e., first-order deception), but also that the performance of the algorithm can get worse over time (i.e., second-order deception). For further indications on this issue, the interested reader is referred to the exhaustive survey on the ACO theory available in [55].

6. Evolutionary Algorithms - Applications to Inverse Scattering

In this section, the application of EAs to inverse scattering problems is analyzed. For notation simplicity and without loss of generality, the inverse scattering problem is formulated in two dimensions and TM illuminations are considered in order to deal with a scalar system of equations. The extension to the vectorial 3D problem is straightforward and it does not modify the meaning and aim of the following discussion.

Since EA-based approaches have been applied to retrieve both dielectric and dissipative scatterers as well as perfectly conducting (PEC) objects, the mathematical description of both problems as well as the analytical expression of the arising cost functions used during the optimization will be summarized.
The theoretical reasons for the effectiveness of EAs in dealing with the ill-posedness and non-linearity of these problems will be discussed, as well.

The last part of this section provides a representative overview, to the best of the authors' knowledge, of the solution of inverse scattering problems through EAs. Although a fair comparison among various algorithms and different implementations is impracticable, due to (a) the customization of each EA to the scattering scenario at hand, (b) the different metrics adopted to define the cost function to be optimized, and (c) the different strategies at the control level (e.g., different stopping criteria), a summary of the performance of some EAs is reported in Tabs. I and II. More specifically, Table I concerns the EA-based approaches for qualitative imaging (i.e., the retrieval of the object's support and shape) of both conducting and dielectric scatterers. The values of the key computational indexes (where available) are given and compared: the number of unknowns, N, the number of trial solutions at each iteration, P, the number of iterations needed to achieve convergence, Kend, and the corresponding total computational time, Ttot. Analogously, the performances of quantitative imaging (i.e., the retrieval of the dielectric properties within the investigation domain) techniques based on EAs are summarized in Tab. II.

6.1. Inverse Scattering of Dielectric Objects

Let us consider a region, called the investigation domain Di, characterized by a relative permittivity ǫ(r) and a conductivity σ(r). Such a region is probed by a set of V transverse-magnetic (TM) plane waves with electric field ζv(r) = ζv(r) ẑ (v = 1, ..., V), r = (x, y), and the scattered field, ξv(r) = ξv(r) ẑ, is collected at M(v), v = 1, ..., V, measurement points rm(v), m(v) = 1, ..., M(v), distributed in the observation domain Do.

In order to describe the investigation domain Di electromagnetically, let us introduce the contrast function

τ(r) = [ǫ(r) − 1] − j σ(r)/(ω ǫ0),  r ∈ Di  (46)

where ω is the working angular frequency and the time dependence e^{jωt} is assumed. Under the hypothesis of a linear, isotropic, and non-magnetic propagation medium, the scattered field ξv(r) is the solution of the following Helmholtz equation (see [10])

∇²ξv(r) − κ²(r) ξv(r) = −j ω µ0 Jv(r)  (47)

where κ(r) = ω √(µ0 ǫ0 [τ(r) + 1]) is the wavenumber. Moreover, Jv(r) is the equivalent current density defined within Di and radiating in free space

Jv(r) = τ(r) Ev(r)  (48)

Ev being the electric field in the presence of the scatterer (i.e., the total field). By imposing that ξv(r) satisfies the Sommerfeld radiation condition

lim_{r→+∞} √r [∂ξv(r)/∂r − j κ(r) ξv(r)] = 0,  (49)

the solution of (47) in a two-dimensional scenario is given by the following Lippmann-Schwinger integral equations [35]

ξv(rm(v)) = (2π/λ)² ∫_{Di} τ(r′) Ev(r′) G2D(rm(v)/r′) dr′,  rm(v) ∈ Do,  (50)

ζv(r) = Ev(r) − (2π/λ)² ∫_{Di} τ(r′) Ev(r′) G2D(r/r′) dr′,  r ∈ Di,  (51)

where λ is the background wavelength. Moreover, G2D(r/r′) is the two-dimensional free-space Green's function given by

G2D(r/r′) = −(j/4) H0^(2)((2π/λ) ‖r − r′‖),  (52)

H0^(2) being the second-kind zeroth-order Hankel function.

Inverse scattering techniques are aimed at reconstructing the object function τ(r) in Di starting from the knowledge of ξv(rm(v)), rm(v) ∈ Do, and ζv(r). Unfortunately, the arising problem is non-linear and ill-posed [10]. Moreover, a closed-form solution of the integral equations (50) and (51) does not generally exist. Consequently, the inverse scattering problem has to be reformulated and effective inversion methodologies have to be employed.

Since analytical solutions are rarely available, a numerical solution is looked for. For instance, equations (50) and (51) are discretized according to the point-matching version of the Method of Moments [137]. The investigation domain Di is partitioned into N square sub-domains Dn centered at rn, n = 1, ..., N. In each sub-domain, a pulse basis function is defined

Bn(r) = 1 if r ∈ Dn; Bn(r) = 0 if r ∉ Dn,  (53)

and the contrast function turns out to be expressed as follows

τ(r) = Σ_{n=1}^{N} τn Bn(r),  r, rn ∈ Di  (54)

where τn = τ(rn), n = 1, ..., N. By assuming the incident field ζv and the total field Ev constant inside each sub-domain Dn, the discrete form of the Lippmann-Schwinger equations is given by

ξv(rm(v)) = Σ_{n=1}^{N} τn Evn(rn) G2D(rm(v)/rn),  rm(v) ∈ Do,  (55)

ζvn(rn) = Evn(rn) − Σ_{p=1}^{N} τp Evp(rp) G2D(rn/rp),  rn ∈ Di,  (56)

where G2D(rm/rn) is the discretized form of the two-dimensional Green's operator.

In order to cope with the ill-posedness, the inverse scattering problem is usually recast as an optimization one by defining a suitable cost function, proportional to the mismatch between the measured fields and their numerically evaluated counterparts, to be minimized. Additional regularization or penalty terms can also be added to the cost function in order to enhance the reliability of the inversion process. The cost functional is a function of the trial solution f = {τn, Evn; n = 1, ..., N} and it can be expressed in matrix form as follows [10]

Φ(f) = α [Σ_{v=1}^{V} ‖ξv − Gv_EXT τ̃ Ẽv‖²] / [Σ_{v=1}^{V} ‖ξv‖²] + β [Σ_{v=1}^{V} ‖ζv − Ẽv + Gv_INT τ̃ Ẽv‖²] / [Σ_{v=1}^{V} ‖ζv‖²]  (57)

where Gv_EXT and Gv_INT are the M × N external Green's matrix and the N × N internal Green's matrix, respectively. Moreover, α and β are two user-defined regularization parameters. Furthermore, ζv is the N × 1 incident field array, the M × 1 entries of ξv are given by the measured scattered field samples, and Ẽv is the N × 1 array of the estimated total electric field.

The actual solution f_opt is looked for as the trial array that minimizes the cost function (57)

f_opt = arg min_{k=1,...,K} [Φ(f_k)]  (58)

where f_k = {τ_k, Ẽv_k}

is the trial solution at the k-th iteration of the optimization procedure.

6.2. Inverse Scattering of Perfect Electric Conductors

When dealing with PECs, characterized by a conductivity σ → ∞, Equation (50) modifies as follows

ξv(rm(v)) = −(ω µ0 / 4) ∮_γ Jvs(r′) G2D(rm(v)/r′) dr′,  rm(v) ∈ Do,  (59)

Jvs(r) being the surface current density defined only on the boundary γ of the unknown scatterer. Since the following condition holds true on the surface of the PEC

ξv(r) + ζv(r) = 0,  r ∈ γ,  (60)

the scattering equation is given by

ζv(r) = (ω µ0 / 4) ∮_γ Jvs(r′) G2D(r/r′) dr′,  r ∈ γ,  (61)

and the unknown current Jvs(r), descriptive of the scatterer (i.e., of the contour γ), is computed through the inversion of the linear system (61) starting from the knowledge of ζv(r). As for the inversion of dielectric scatterers, the reconstruction process is recast as the minimization of the following cost function

Φ(γ̃) = [Σ_{v=1}^{V} ‖ξv − Gv_EXT Jvs(γ̃)‖²] / [Σ_{v=1}^{V} ‖ξv‖²]  (62)

where γ̃ is the parametrized representation of the scatterer contour γ.

Because of the non-linearity of the scattering problem and the presence of local minima (i.e., false solutions of the inverse scattering problem) in the cost function (57) or (62), the quality and reliability of the final solution mostly depend on the effectiveness of the search strategy.

6.3. EA-based Approaches for Inverse Scattering

The first multiple-agent evolutionary techniques applied to solve microwave inverse scattering problems were the genetic algorithms. Chiu and Liu in [39] applied the BGA to the 2D inversion of a PEC cylinder illuminated by an incident TM-polarized plane wave. The 2D surface-reconstruction problem was reformulated into a mono-dimensional one by describing the contour of the cylinder as a function of the polar angle θ

γ(θ) = Σ_{m=0}^{M/2} Am cos(mθ) + Σ_{m=1}^{M/2} Bm sin(mθ)  (63)

with θ ∈ [0, 2π], where the unknowns to be determined

f = {A0, A1, ..., A_{M/2}, B1, B2, ..., B_{M/2}}  (64)

are the real coefficients of the Fourier series expansion. The number of unknown parameters was set to N = M + 1 = 9 and various experiments considering strings of length l = 8 × N and l = 10 × N were performed to validate the EA-based inversion method. As far as the GA parameters are concerned, a population of P = 300 individuals was chosen with pC = 0.8 and pM = 0.04. Successively, the sensitivity of the reconstruction to the GA parameters was analyzed in [40] by the same authors under TE illuminations. The outcome was that, for this kind of problem, a suitable choice of the GA parameters is: a population dimension in the range P ∈ [300, 600], a chromosome length of l ∈ [8, 16] × N bits, and crossover and mutation probabilities in the ranges 0.7 < pC < 0.9 and 5 × 10⁻⁴ < pM < 5 × 10⁻², respectively. Such a GA-based inversion procedure was extended in [41] to image lossy or imperfectly conducting cylinders. More specifically, the GA was used to also retrieve the conductivity of the unknown scatterer by coding both the Fourier coefficients of the shape and the value of the conductivity of the object.

Following the guidelines in [39] for defining the inversion problem, an approach based on a micro-GA has been presented in [79] to enhance the reconstruction and convergence performance of standard GAs. A number of N = M + 1 = 5 unknowns was considered to describe the contour of the scatterer through (63), and a population of P = 5 individuals with l = 12 × N chromosomes was used. The main advantages of the µGA are that it employs a small population, thus reducing the overall computational burden, and that it assures a fast convergence to sub-optimal solutions while maintaining superior search ability.

Takenaka and co-workers [154] proposed a volume-reconstruction approach to estimate the widths and locations of parallel strips in 1D and 2D problems without any a-priori information on the number of strips. The investigation domain Di was discretized into N = 20, 36 cells and either an empty (ai = 0) or occupied (ai = 1) state was assigned to each cell: if ai = 1, the i-th cell is occupied by a metallic strip, ai = 0 otherwise. It is worth pointing out that the original problem was reformulated as the definition of a binary map to allow a straightforward use of the BGA. The population dimension was set to P = 50 with a fixed crossover probability pC = 0.8 and a variable mutation probability in the range pM ∈ [0.01, 0.5].
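Such a binary-map formulation can be prototyped in a few lines. The sketch below is a minimal elitist BGA with P = 50, pC = 0.8, and a mutation probability swept over [0.01, 0.5] (here linearly decreased, one of several possible schedules); the fitness is a stand-in (Hamming distance to a reference map), not the scattered-field mismatch actually minimized in [154]:

```python
import random

random.seed(0)

N = 20                # number of cells in the investigation domain
P = 50                # population dimension
PC = 0.8              # crossover probability
TARGET = [random.randint(0, 1) for _ in range(N)]   # stand-in for the true strip map

def fitness(a):
    # Placeholder cost: Hamming distance to the reference map. A real inversion
    # would evaluate the mismatch between measured and simulated scattered fields.
    return sum(x != t for x, t in zip(a, TARGET))

def tournament(pop):
    a, b = random.sample(pop, 2)
    return min(a, b, key=fitness)

def crossover(a, b):
    if random.random() < PC:
        cut = random.randrange(1, N)
        return a[:cut] + b[cut:]
    return a[:]

def mutate(a, pm):
    return [1 - x if random.random() < pm else x for x in a]

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(P)]
for k in range(200):
    pm = 0.5 - (0.5 - 0.01) * k / 199     # mutation probability decreased over [0.01, 0.5]
    elite = min(pop, key=fitness)         # elitism: the best map survives unchanged
    pop = [elite] + [mutate(crossover(tournament(pop), tournament(pop)), pm)
                     for _ in range(P - 1)]

best = min(pop, key=fitness)
```

With the placeholder cost replaced by the field mismatch of (55)-(56), the same loop applies unchanged; only the fitness evaluation differs.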
The approach was extended in [100] to retrieve the locations and two-dimensional cross sections of conducting cylinders. To improve the convergence rate for a 2D discretization of the scenario under test, a customized crossover, called rectangular block crossover, was developed to efficiently deal with a binary reconstruction map. Moreover, a larger population with P = 200 individuals was considered because of the wider solution space (N = 225). In [173], a similar BGA-based approach was tested against the Ipswich experimental data set to reconstruct metallic objects. The investigation domain Di was discretized into N = 400 cells and the GA parameters were set to P = 100, pC = 0.8, and pM = 0.2.

To avoid the quantization error related to the discretization of the real coefficients in (63), Qing and co-workers proposed in [124][125][127] a strategy based on a RGA. For comparative purposes, some benchmark examples previously addressed in [39] with the BGA were considered. An experimental validation of the method has been presented in [128], as well.

In an alternative fashion, the contour of the conducting cylinders has been approximated in [129] by means of local shape functions mathematically expressed in terms of closed cubic B-splines

γ(θ) = Σ_{m=0}^{M−1} ρm((M/2π) θ − m),  θ ∈ [0, 2π]  (65)

where each segment ρm is a linear combination of four cubic polynomials Qi(t), i = 0, ..., 3, as follows

ρm(t) = p_{m−1} Q0(t) + pm Q1(t) + p_{m+1} Q2(t) + p_{m+2} Q3(t)

p_{m−1}, ..., p_{m+2} being the control points. In this case, the parameters to be optimized are the set of control points

f = {p0, ..., p_{M−1}}.  (66)

Dealing with these problems, the RGA was used with probability coefficients equal to pC = 1.0 and pM = 0.1. More recently, the representation of PEC contours using cubic splines has been extended to deal with three-dimensional (3D) electrically large conducting patches [141].

The previous references and the obtained results point out that GA-based techniques work effectively in retrieving strong scatterers in free space through the minimization of the mismatch between the measured and the reconstructed scattered field. In addition, the robustness of the GA in such a framework has been proved, since convergence to the global optimum has been obtained with high probability despite a rough initialization of the iterative process. More recently, an innovative strategy based on Genetic Programming (GP) [88] has been presented in [163]. A new geometry-encoding scheme was introduced and a tree-shaped chromosome was used to describe the shapes of the cylinders as the union and subtraction of convex polygons.

In [65], the binary GA was also applied both to the detection of circular conducting cylinders buried in a homogeneous dielectric medium and to the dielectric profile retrieval of layered media. A similar approach has been considered in [101] to reconstruct the electrical parameters (ǫi, σi, µi, i = 1, ..., M) of a multilayered radome of finite size ∆. In this case, the chromosome was the binary representation of the following unknown array

f = {ǫ1, σ1, µ1, d1, ..., ǫM, σM, µM, dM}  (67)
pC = 1.0 and pM = 0.1. More re ently, the representation of PEC ontours using ubi splines has been extended to deal with three-dimensional (3D) ele tri ally large ondu ting pat hes [141.Previous referen es and the obtained results point out that theGA-based te hniqueshave demonstrated to work ee tively in retrieving strong s atterers in free-spa ethrough the minimization of the mismat h between the measured and the re onstru teds attered eld. In addition, the robustness of the GA in su h a framework has beenproved sin e the onverge to the global optimum has been obtained with high probabilitydespite a rough initialization of the iterative pro ess. More re ently, an innovativestrategy based on Geneti Programming (GP ) [88 has been presented in [163. A newgeometry-en oding s heme was introdu ed and a tree-shaped hromosome was used todes ribe the shapes of the ylinders as the union and subtra tion of onvex polygons.In [65, the binary GA was also applied to both the dete tion of ir ular ondu ting ylinders buried in an homogeneous diele tri medium and the diele tri prole retrievalof layered media. A similar approa h has been onsidered in [101 to re onstru t theele tri al parameters (ǫi, σi, µi, i = 1, ...,M) of a multilayered radome of nite size ∆.In this ase, the hromosome was the binary representation of the following unknownarrayf = ǫ1, σ1, µ1, d1, ..., ǫM , σM , µM , dM (67)

Evolutionary Optimization as Applied to Inverse S attering Problems 35being ∑Nn=1 dn = ∆. To in rease the a ura y of the re onstru tion and improve the onvergen e rate, an adaptive hromosome stru ture was hosen in order to iterativelyadjust the existen e range of ea h parameter.An inverse s attering te hnique for the dete tion of perfe tly ondu ting ylindri alobje ts buried in a half-spa e have been des ribed in [38. An improved steady-state

GA (SS − GA) ‖ was used to redu e the omputational burden and a non-uniformprobability distribution was introdu ed to ontrol the generation of new individualsthrough rossover and mutation. The shapes of the buried obje ts have been represented onsidering both Fourier series (63) and Cubi -splines (65) representations. Ea hunknown was oded with l = 20 bits string and a population of P = 100 individualswas hosen. Moreover, the following setup was hosen: pC = 0.05 and pM = 0.5. If ompared to standard GA, the values of the ontrol parameters, turn out dierent inmagnitude. This is due to the steady-state GA implementation sin e only a portion ofnew individuals is generated through rossover and mutation, while the whole populationis updated in standard GA. In [38, it has also been veried that even for an initialguess far away from the optimal solution the omputational ost to rea h the globalsolution is mu h less in SS − GA than for simple GAs. Moreover, a further redu tionof the omputational time was obtained running the optimization pro ess in parallel ona multipro essor luster system.In the framework of subsurfa e imaging, a GA-based approa h for the retrieval ofthe dimension and lo ation of a 3D buried obje t has been presented in [93. A parallelbinary GA pro edure has been onsidered to speed up the tness evaluation omputedthrough the FDTD (Finite-Dieren e Time-Domain) method. More in detail, the ostfun tion has been dened as the dieren e between the measured and the al ulated s11parameters on a frequen y band from νL up to νH at the port of the probing antennaΦ = 1 −

√∑νH

ν=νL

[smeas11 (ν) − scalc11 (ν)

]2√∑νH

ν=νL[smeas11 (ν)]2

. (68)As far as the GA is on erned, a population of P = 50 individuals and pC = 0.5 andpM = 0.2 were hosen.Besides shape re onstru tion problems, the re onstru tion of the diele tri properties of unknown obje ts has been fa ed in [23 by means of a quantitative (pixel‖ In steady-state GAs, only a portion of the population is updated and a suitable repla ement strategyis onsidered.

36 A. Massa et al.based) mi rowave imaging te hnique. A binary GA with Q = 256 quantization levelshas been used to des ribe the real-valued unknowns. The inverse s attering problemhas been solved in the framework of the Born approximation to redu e both the non-linearity of the des riptive s attering equations as well as the total number of unknowns.The hromosome to be optimized wasf = τ1, τ2, ..., τn, ..., τN (69)where N = 900. The simulations were arried out with a population of P = 100individuals and rossover and mutation probability oe ients were set to pC = 0.7and pM = 4 × 10−4, respe tively. Moreover, a sensitivity analysis was performedvarying the ontrol parameters in the following range: P ∈ [40, 200], 0.6 < pC < 0.8, and 4 × 10−4 < pM < 10−3. It has been proved that for small-sized populationsthe quantitative errors in rease as for either low value of pC or high pM . The sameoptimization pro edure has been validated in [115 against experimental data a quiredwhen onsidering highly- ontrasted bodies. Sin e, the Born approximation annot beapplied, the unknown ve tor was omposed of both the ontrast fun tion τ and the totaleld Ev in the investigation domainf = τ1, τ2, ..., τN , Ev

1 , Ev2 , ..., E

vN ; v = 1, ..., V . (70)Be ause of the ontinuous nature of the parameters to be optimized, a real- oded

GA has been proposed in [24 and signi antly superior performan es with respe t tothe BGA have been attained [25. The potentialities of the RGA have been furtherpointed out and the methodology extended to hybrid- oded hromosomes in order todeal with both nondestru tive testing and evaluation (NDT −NDE) problems [26 andbiomedi al imaging [29. More spe i ally, the unknowns were expressed by means ofthe following ve torf = x0, y0, L, W, θ, E

v1 , E

v2 , ..., E

vM ; v = 1, ..., V (71) oding through binary strings the values of the bary enter (x0, y0), the length (L), thewidth (W ), and the orientation (θ) of either a ra k in NDT − NDE problems or apathology in ase of biomedi al imaging. Dierently, a oating-point representationhas been used to ode the unknown eld values. Reliable values of the probability oe ients for the RGA turned out to be pC = 0.7 and pM = 0.4.In NDT − NDE problems, the appli ation of GAs starts with a BGA proposedin [2 to identify the qualitative nature (i.e., length, width, orientation, and bary enter)

Evolutionary Optimization as Applied to Inverse S attering Problems 37of a ra k on the surfa e of an obje t. In order to ta kle more omplex diagnosisproblems, an innovative des ription of the ra k based on a suitable parameter sele tion(71) as well as a more ee tive exploitation of the a-priori information has been onsidered in [116[30[7 to redu e the number of problem unknowns and enable ane ient use of HGAs. Although ee tive, these approa hes onsidered s attering ongurations hara terized by the presen e of only a single defe t. To over omesu h a limitation, two enhan ed GA-based optimization te hniques able to deal withmultiple defe ts in a diele tri host medium have been proposed in [8. Both methodsadopted a multi ra k variable-length hybrid oding. The former strategy was based ona hierar hi al implementation, whi h onsiders a set of parallel sub-pro esses, ea h onelooking for a solution with a xed number of ra ks. The other deals with a singleoptimization pro ess aimed at retrieving the best re onstru tion among dierent ra k-length solutions. Be ause of the use of an ad-ho operator to orre tly re ombine thedis rete (binary) and ontinuous part of the hromosome, the ontrol probabilities werekept onstant to pC = 0.7 and pM = 0.4 for ea h portion of the hromosome stru ture.Unlike [8, also the re onstru tion of the diele tri properties of the defe ts has beenaddressed in [9.Similar on epts have been exploited to deal with biomedi al imaging problems asdis ussed in [29. The hromosome stru ture was still hosen as a two-part variable-length string. In su h a ase, the variable-length stru ture was used be ause of thevariable number of dis retization sub-domains o upied by the pathology where theunknown eld has to be omputed.A parallel SS − GA integrated with a FDTD approa h has been presented in[167[140 for early an er dete tion. 
Parallel omputing was onsidered due to the large omputational burden of the FDTD-based approa h aused by the ne dis retizationof the investigation domain (N = 600 × 600 ells).As regards to the eorts devoted to in rease the omputational e ien y ofGA-based inversions, ad-ho versions or spe i operators have been designed.Representative examples of a wide literature are the use of nonuniform probabilitydensities in BGAs [37 and a paraboli rossover operator for RGAs [18.Despite the su ess of GAs-based approa hes in several area of ele tromagneti and inverse s attering, more re ently other EAs have shown better performan e. Dueto its faster onvergen e with respe t to GAs, the DE was alternatively used to fa eele tromagneti inversion and it has been rstly applied to image ir ular- ylindri al

38 A. Massa et al. ondu tors and tunnels [105[106. Later, the DE was used to solve ben hmark problems[130 and its performan es were ompared to the solutions from the RGA in [127. Apopulation of P = 5×N individuals was onsidered. Moreover, the rossover probabilityand the mutation intensity were set to pC = 0.9 and ǫ = 0.7, respe tively. Without apriori information on the number of ylinders within the investigation region, the DEstrategy with individuals in groups (GDES) has been proposed [131. The key ideaof the GDES is to organize the population into dierent groups. The individuals ofthe same group ode the same number of ylinders and have the same hromosomelength. Su essively, an innovative DE-based algorithm was proposed by Qing [132.In the dynami DE strategy (DDES), a larger (virtual) population has been hosen tospeed up the onvergen e. The new individuals generated at iteration k + 1 ompeteduring the same iteration with their parents. As a onsequen e, the mating operationturns out being more sensitive to the fast hanges of the population with a onsequentenhan ement of the onvergen e rate.Be ause of the strong dependen e of the DE performan e on both its ontrolparameters and the ost fun tion to be optimized, an extensive alibration of thepopulation size, P , the rossover probability, pC , and the mutation intensity, ǫ, has been arried out in [133 spe i ally for imaging problems on erned with PEC ylinders infree spa e. More re ently, a omparative study on the e ien y of DE and PSO whenapplied to the shape re onstru tion of PEC s atterers has been reported in [134.In [97, an approa h for the dete tion of 2D buried inhomogeneities has beendesigned by ombining two DE-based optimization te hniques. 
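A two-stage DE of this kind can be sketched as follows, using the conventional DE/best/1/bin and DE/rand/1/bin operators. The cost function below is a toy multimodal surrogate (not the scattering functional of [97]), and the switch after a fixed number of iterations is a simplification of the actual stage-transition criterion:

```python
import math
import random

random.seed(3)

D, P = 6, 30          # unknowns per trial solution, population size

def cost(x):
    # Toy multimodal surrogate with global minimum 0 at the origin; a real
    # application would evaluate the field mismatch for the trial inhomogeneity.
    return sum(xi * xi - math.cos(5.0 * xi) + 1.0 for xi in x)

def de_step(pop, fits, strategy, pc, eps):
    """One DE generation; strategy is 'best' (DE/best/1/bin) or 'rand' (DE/rand/1/bin)."""
    best = pop[fits.index(min(fits))]
    out_pop, out_fits = [], []
    for i, x in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        base = best if strategy == "best" else a
        donor = [base[d] + eps * (b[d] - c[d]) for d in range(D)]   # mutation
        jr = random.randrange(D)                                    # force one donor gene
        trial = [donor[d] if (random.random() < pc or d == jr) else x[d]
                 for d in range(D)]
        ft = cost(trial)
        if ft <= fits[i]:          # greedy one-to-one selection
            out_pop.append(trial); out_fits.append(ft)
        else:
            out_pop.append(x); out_fits.append(fits[i])
    return out_pop, out_fits

pop = [[random.uniform(-2.0, 2.0) for _ in range(D)] for _ in range(P)]
fits = [cost(x) for x in pop]
f0 = min(fits)
for k in range(150):
    if k < 50:                     # greedy stage: home in on an attraction basin
        pop, fits = de_step(pop, fits, "best", pc=0.8, eps=0.6)
    else:                          # exploratory stage: escape local minima
        pop, fits = de_step(pop, fits, "rand", pc=1.0, eps=0.6)
```

The greedy one-to-one selection makes the best cost non-increasing across generations, so switching operators never loses the incumbent solution.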
More specifically, the DE/1/best/bin version (pC = 0.8 and ǫ = 0.6) was used to rapidly locate the attraction basin of a minimum, and successively the algorithm switched to the DE/1/rand/bin version (pC = 1.0 and ǫ = 0.6) to avoid the trial solution being trapped in a local minimum.

DE has also been applied to the 3D detection of unexploded ordnance (UXO) [34] and of lossy spherical objects buried in the subsoil [6]. In this latter case, a modified DE algorithm was considered in which multiple populations evolve in parallel, analogously to [131].

More recently, EAs inspired by the foraging behavior of swarms have proved to outperform previous EAs in dealing with a set of imaging problems, also related to high-dimensional continuous spaces.

A standard PSO algorithm has been used in [145] to reconstruct 1D permittivity and conductivity profiles in lossy and inhomogeneous media. Because of its ability in exploring the parameter space and avoiding wrong solutions thanks to the cooperative behavior of the agents, the PSO has also been profitably considered for the reconstruction of 2D dielectric scatterers [31][49][50][62]. The enhanced convergence rate of the PSO with respect to the RGA has been assessed both in [49] and [50]. Moreover, the calibration of the PSO control parameters carried out in [49] has shown that the most suitable setup (in terms of decrease of the cost function in a fixed number of iterations) is ω = 0.4 and C1 = C2 = 2.0, in accordance with the outcomes of other published papers (see Section 5.1.3). Moreover, the ratio N/P ≈ 5.5 has been deduced as a good rule of thumb for the size P of the swarm in this class of optimization problems.

The retrieval of 3D lossy dielectric objects has been addressed in [80][81]. Due to the limited set of independent scattering data and the dimensionality of the problem at hand, an adaptive multiresolution technique was integrated into the swarm evolution to reduce the search space and make the PSO-based minimization more efficient [51].

In order to deal with high-dimensional spaces, an alternative approach based on a µPSO, employing a reduced swarm in analogy with the µGA, has also been presented in [83]. Otherwise, to avoid the premature convergence of the standard version of the PSO, modified social structures have been envisaged [82]. More specifically, besides the standard version of the PSO (i.e., that presented in this work), whose structure is known as the gbest topology since the information is instantaneously communicated to the whole swarm, other topologies are considered in [82] in order to limit the communication between the particles [99] and prevent premature convergence.

The PSO has also been successfully applied to industrial and biomedical imaging problems. For example, a swarm-based reconstruction algorithm for the detection and characterization of multiple inclusions in concrete structures has been presented in [156]. Moreover, a PSO-based technique for early cancer detection has been discussed in [170]. In the framework of swarm-based approaches, only preliminary results are available (e.g., [118]) concerning the application of the ACO to inverse scattering, even though an interesting hybridization with the linear sampling method (LSM) has recently been studied in [20] to inspect 3D homogeneous dielectric scatterers.

As far as hybrid algorithms are concerned, a µGA and a RGA coupled with a local search method have been presented in [166] and [126], respectively, for imaging perfectly conducting cylinders. In [174], the GA was combined with a tabu mechanism to avoid a (repetitive) sampling of poor regions of the search space. Following the same guidelines outlined in [163], the authors combined a GA-inspired optimization with a local search method [164], reducing the number of cost function evaluations needed to reconstruct bowtie-shaped conducting cylinders from 1.17 × 10⁵ [163] down to 5421. A hybrid GA has also been proposed in [66] to retrieve the dielectric profile of a layered medium.

In the framework of quantitative imaging problems, the permittivity reconstruction of 2D objects with large size and high contrast has been carried out in [114] by combining a Levenberg-Marquardt algorithm with the GA. To speed up the convergence, a Polak-Ribière conjugate gradient (CG) has been merged into a global optimization loop performed with a RGA in [24][116][117]. Furthermore, a parallel implementation of such a hybrid technique has been detailed in [98].

As regards memetic algorithms, they have been used to detect cylindrical inhomogeneities starting from phaseless data obtained by synthetic as well as experimental measurements [28]. Moreover, a MA-based approach has been shown to work effectively for the electromagnetic reconstruction of buried objects [27][32], as well. Because of the heavy computational burden, MA applications are usually limited to low-dimensionality problems.

7. Summary and Conclusions

In this paper, a review of Evolutionary Algorithms as applied to inverse scattering problems has been reported. After an introduction on the genesis of EAs (Sect. 2), the most representative and widespread evolution-based techniques have been described in a common framework and by means of a uniform notation (Sect. 3), to point out the main similarities and differences among the various implementations detailed in Sect. 4. Some theoretical hints concerning the convergence properties and the parameter selection have also been discussed (Sect. 5). Section 6 has been devoted to presenting state-of-the-art applications of EAs to electromagnetic imaging problems. Such a critical discussion has pointed out that the success of EAs in dealing with nonlinear ill-posed inverse scattering problems mainly relies on a suitable set of answers to the following key issues:

• a suitable representation of the unknowns and a proper choice of the EA are mandatory, starting from a careful analysis of the problem at hand and of its numerical description (i.e., the cost function). According to the No Free Lunch theorem [165], the optimum algorithm does not exist, since the average performance of any pair of algorithms across all possible problems is identical;

• physical constraints need to be taken into account to enhance the effectiveness of EAs by reducing the area of the solution space to be sampled during the optimization;

• the a-priori knowledge on the scenario under test needs to be profitably incorporated into both the solution representation and the evolutionary operators, to guide the search process and increase its convergence rate;

• great care must be exercised in defining the cost function, since it represents the only link between the physical problem and its numerical counterpart. Failing such a definition prevents the actual solution from being reached at the convergence of the EA to its global optimum;

• the calibration of the evolutionary procedures needs to be carefully performed to fully exploit the EA potential. On the other hand, it should be stressed that no single-test-case calibration is necessary; rather, the tuning of the control parameters must be carried out on a class of problems (e.g., imaging of dielectric objects) to avoid overfitting and confer generalization features on the EA;

• the feasibility and reliability of an EA must be assessed first on a benchmark of test functions and then in comparison with other deterministic and stochastic microwave imaging techniques.

8. Open Problems and New Research Developments

As far as the application of EA-based microwave imaging techniques to inverse scattering problems is concerned, it should first be pointed out that the development of evolutionary techniques has received a great boost in the last twenty years, not only due to the continuous enhancement of the computational capabilities of modern personal computers, but also because of their flexibility and their features, which are usually very suitable for facing the ill-posedness and nonlinearity of the arising optimization problem.
As a matter offa t,• EAs are multiple-agent optimizers;• EAs are global and hill- limbing algorithms thanks to their sto hasti nature;• EAs allow the straightforward introdu tion of a-priori information or onstraints;

42 A. Massa et al.• EAs are able to deal with oating-point and/or dis rete and/or binary unknownssimultaneously;• EAs are intrinsi ally parallel algorithms;• EAs are easily integrated with lo al optimizers.However, some other drawba ks limit their ee tiveness besides typi al negative issuesof inverse s attering problems. For example,• the omputational burden (espe ially when moving towards 3D s enarios);• the low onvergen e rate when lose to the global solution although in its attra tionbasin;• the dependen e on the parametrization of the problem unknowns;• the sensitivity to the alibration parameters.As regards to the omputational issues, some re eipts to limit these drawba ks onsist in:(a) redu ing the number of problem unknowns by re urring to a suitable parametrizationof the s atterer under test [26[30 or onsidering a multi-resolution strategy [50[51 or amulti-stage re onstru tion [108; (b) hybridizing the EA with a deterministi optimizer[114[24[98; ( ) omputing at ea h iteration the se ondary unknowns (i.e., the elddistribution within the investigation domain) by means of fast forward solvers (see [36and the referen e therein); (d) exploiting the expli it parallelism of EAs through aparallel implementation [98.With referen e to the EA parallelization (d), whi h has been left out in themain body of the paper, it is well known that one of the most attra tive featuresof nature-inspired optimization te hniques is their parallelism that allows an ee tivesampling of the solution spa e. Besides the impli it parallelism still exploited in serialimplementations, the parallelism of an EA is also guaranteed by its multiple-agentnature. As a matter of fa t, a number of sample points equal to the population dimensionis pro essed at ea h iteration to ee tively look for the global optimum. 
In order to fully exploit this characteristic, a parallel implementation of the iterative procedure would enable (i) a parallel and simultaneous search from multiple points in the solution space; (ii) a more efficient search, even when no parallel hardware is available; (iii) a higher efficiency than sequential implementations; and (iv) a speedup due to the use of multiple CPUs.
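As an illustration of point (d), the population-level parallelism can be sketched in a few lines: since the P trial solutions of an iteration are evaluated independently, their fitness computations can be dispatched to separate processes. The sketch below is a minimal, hypothetical Python example; the sphere-like fitness() is a placeholder standing in for a real inverse-scattering cost function (the data mismatch between measured and simulated scattered fields), which is not specified at this level in the text.

```python
import numpy as np
from multiprocessing import Pool

def fitness(agent):
    # Placeholder cost (sphere function); in an inverse-scattering code
    # this would run a forward solver and compare simulated and
    # measured scattered-field data.
    return float(np.sum(np.asarray(agent) ** 2))

def evaluate_population(population, workers=2):
    # The P agents are independent, so one iteration's fitness
    # evaluations can run in parallel on separate processes.
    with Pool(processes=workers) as pool:
        return pool.map(fitness, population)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    swarm = rng.uniform(-1.0, 1.0, size=(8, 5))  # P = 8 agents, 5 unknowns
    print(min(evaluate_population(list(swarm))))
```

The same structure applies to any of the EAs discussed here; only the forward-solver call inside fitness() changes, while the attainable speedup still depends on the overhead of process creation and data exchange.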

Despite these envisaged advances, it should also be pointed out that the use of a parallelized (bare/hybrid) EA is not different from other parallel methodologies, and its efficiency largely depends upon the system architecture, the parallel execution overhead, the number of new agents created at each iteration, the population structure, and the parallel granularity (i.e., the CPU time of the steps being executed in parallel). These advances can be reached only if:
• a structured population [21] is taken into account to obtain not only a faster algorithm, but also a superior numerical optimization able to profitably exploit the multi-agent nature of the EA;
• some agents perform a different local search (decentralized local optimization) in order to improve the convergence rate of the iterative process;
• the evolution process explicitly keeps memory of the population evolution in order to reduce/avoid the runtime of the cost function evaluation for similar/equal individuals;
• the evolutionary operators are applied in parallel.

As regards the enhancement of the convergence rate through the reduction of the extension of the solution space to be sampled during the optimization, the number of iterations Kend evidently reduces in correspondence with an increase/efficient exploitation of the a-priori information. Indeed, additional information on the location of the attraction basin of the global solution usually helps the evolutionary procedure in locating the actual solution, as well as the EA designer in defining the optimal parametrization of the problem unknowns.

Another way to save computational resources when applying EAs to inverse scattering problems is to use a succession of inversion procedures, each one concerned with a number of unknowns smaller than or equal to the information content of the scattering data, in order to simplify the cost function to be optimized.
The reduction of the complexity of the cost function can be achieved in different ways according to some recently developed strategies. In such a framework, it is worthwhile to mention multi-resolution methods [50], [51], devoted to performing an iterative synthetic zoom over the region where the scatterer is supposed to be located, and multi-stage reconstructions [108], [20], where each inversion is aimed at identifying different characteristics of the unknown scatterer until its complete description/knowledge.
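One of the memory-based savings mentioned earlier, avoiding repeated cost-function evaluations for equal (or nearly equal) individuals, can be sketched with a small evaluation cache. The Python sketch below is hypothetical: the quantization step decimals and the stand-in expensive_cost() are illustrative assumptions, not part of the original formulation.

```python
from functools import lru_cache

def make_cached_fitness(raw_fitness, decimals=6):
    """Wrap a costly fitness function so that equal (or, after
    rounding, nearly equal) individuals are evaluated only once."""
    @lru_cache(maxsize=None)
    def _cached(key):
        return raw_fitness(key)

    def fitness(agent):
        # Quantize the floating-point unknowns so that individuals
        # differing by negligible amounts share one cache entry.
        key = tuple(round(x, decimals) for x in agent)
        return _cached(key)

    fitness.cache_info = _cached.cache_info
    return fitness

# Illustrative cost (stand-in for a forward-solver call).
def expensive_cost(agent):
    return sum(x * x for x in agent)

cached = make_cached_fitness(expensive_cost)
cached([0.5, 0.25])
cached([0.5, 0.25])          # second call is a cache hit
print(cached.cache_info())
```

In a real inversion the cache key would be the quantized contrast/shape unknowns, and the saved work is the full forward-solver run hidden inside the cost function.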

Nevertheless, although further theoretical and numerical developments are still required to consider EA-based inversion techniques mature, reliable, and efficient inversion methodologies, the expected impact of such approaches justifies a significant research effort to develop EA-based tools dedicated to specific applications in the framework of microwave imaging and NDT/NDE.

Acknowledgements

The authors thank very much B. Symes and Inverse Problems for the kind invitation to write this review paper, and Ch. Benson and K. Watt for their cooperation and assistance. The authors are indebted to D. Lesselier, and they would like to acknowledge useful discussions on inverse scattering problems and optimization techniques with T. Takenaka, Ch. Pichot, A. Abubakar, O. Dorn, M. El-Shenawee, K. Belkebir, M. Saillard, I. Akduman, P. Meaney, J. Ch. Bolomey, R. Zoughi, M. Pastorino, and T. Isernia. Finally, A. Massa would like to express his everlasting gratitude to E. Vico.

References
[1] M. M. Ali and A. Torn, "Population set based global optimization algorithms: Some modifications and numerical studies," Computers and Operations Research, vol. 31, pp. 1703-1725, 2004.
[2] A. A. Arkadan, T. Sareen, and S. Subramaniam, "Genetic algorithms for nondestructive testing in crack identification," IEEE Trans. Magn., vol. 30, pp. 4320-4322, 1994.
[3] H. Asoh and H. Muhlenbein, "On the mean convergence time of evolutionary algorithms without selection," in Parallel Problem Solving from Nature III, vol. 866, pp. 98-107, 1994.
[4] T. Back, "Selective pressure in evolutionary algorithms: A characterization of selection mechanisms," in Proc. 1st IEEE Conf. on Evolutionary Computation. Piscataway, NJ: IEEE Press, 1994, pp. 57-62.
[5] A. L. Barker and W. N. Martin, "Dynamics of a distance-based population diversity measure," in Proc. 2000 IEEE Congress on Evolutionary Computation, 16-19 Jul. 2000, vol. 2, pp. 1002-1009.
[6] A. Bréard, G. Perrusson, and D. Lesselier, "Hybrid differential evolution and retrieval of buried spheres in subsoil," IEEE Geosci. Remote Sens. Lett., vol. 5, pp. 788-792, 2008.
[7] M. Benedetti, M. Donelli, G. Franceschini, M. Pastorino, and A. Massa, "Effective exploitation of the a-priori information through a microwave imaging procedure based on the SMW for NDE/NDT applications," IEEE Trans. Geosci. Remote Sens., vol. 43, pp. 2584-2592, 2005.
[8] M. Benedetti, M. Donelli, and A. Massa, "Multicrack detection in two-dimensional structures by means of GA-based strategies," IEEE Trans. Antennas Propag., vol. 55, pp. 205-215, 2007.
[9] M. Benedetti, G. Franceschini, R. Azaro, and A. Massa, "A numerical assessment of the reconstruction effectiveness of the integrated GA-based multicrack strategy," IEEE Antennas Wireless Propag. Lett., vol. 6, pp. 271-274, 2007.
[10] M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging. Bristol, U.K.: IOP Press, 1998.
[11] A. Bertoni and M. Dorigo, "Implicit parallelism in genetic algorithms," Artif. Intell., vol. 61, pp. 307-314, 1993.
[12] B. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," in Proc. AISB Workshop Evol. Comput., vol. 993 of Lecture Notes in Computer Science, pp. 25-39, 1995.
[13] C. Blum and M. Dorigo, "Search bias in ant colony optimization: on the role of competition-balanced systems," IEEE Trans. Evol. Comput., vol. 9, pp. 159-174, 2005.
[14] C. Blum, M. J. Blesa Aguilera, A. Roli, and M. Sampels, Hybrid Metaheuristics: An Emerging Approach to Optimization. Berlin Heidelberg: Springer, 2008.
[15] D. W. Boeringer and D. H. Werner, "Particle swarm optimization versus genetic algorithms for phased array synthesis," IEEE Trans. Antennas Propag., vol. 52, pp. 771-779, 2004.
[16] E. Bonabeau, M. Dorigo, and G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems. New York, NY: Oxford University Press, 1999.
[17] L. B. Booker, D. E. Goldberg, and J. H. Holland, "Classifier systems and genetic algorithms," Artif. Intell., vol. 40, pp. 235-282, 1989.
[18] E. Bort, G. Franceschini, A. Massa, and P. Rocca, "Improving the effectiveness of GA-based approaches to microwave imaging through an innovative parabolic crossover," IEEE Antennas Wireless Propag. Lett., vol. 4, pp. 138-142, 2005.
[19] M. F. Bramlette and E. E. Bouchard, "Genetic algorithms in parametric design of aircraft," in Handbook of Genetic Algorithms. New York: Van Nostrand Reinhold, 1991, ch. 10, pp. 109-123.
[20] M. Brignone, G. Bozza, A. Randazzo, M. Piana, and M. Pastorino, "A hybrid approach to 3D microwave imaging by using linear sampling and ACO," IEEE Trans. Antennas Propag., vol. 56, pp. 3224-3232, 2008.
[21] E. Cantu-Paz, "A summary of research on parallel genetic algorithms," Tech. Rep. 95007, IlliGAL Rep., 1995.
[22] S. Caorsi, A. Massa, M. Pastorino, and F. Righini, "Crack detection in lossy two-dimensional structures by means of a microwave imaging approach," Int. Journ. Applied Electromagnetics Mechanics, vol. 11, no. 4, pp. 233-244, 2000.
[23] S. Caorsi and M. Pastorino, "Two-dimensional microwave imaging approach based on a genetic algorithm," IEEE Trans. Antennas Propag., vol. 48, pp. 370-373, 2000.
[24] S. Caorsi, A. Massa, and M. Pastorino, "A computational technique based on a real-coded genetic algorithm for microwave imaging purposes," IEEE Trans. Geosci. Remote Sens., vol. 38, pp. 1697-1708, 2000.
[25] S. Caorsi, A. Costa, and M. Pastorino, "Microwave imaging within the second-order Born approximation: stochastic optimization by a genetic algorithm," IEEE Trans. Antennas Propag., vol. 49, pp. 22-31, 2001.
[26] S. Caorsi, A. Massa, and M. Pastorino, "A crack identification microwave procedure based on a genetic algorithm for nondestructive testing," IEEE Trans. Antennas Propag., vol. 49, pp. 1812-1820, 2001.
[27] S. Caorsi, A. Massa, M. Pastorino, M. Raffetto, and A. Randazzo, "Detection of buried inhomogeneous elliptic cylinders by a memetic algorithm," IEEE Trans. Antennas Propag., vol. 51, pp. 2878-2884, 2003.
[28] S. Caorsi, A. Massa, M. Pastorino, and A. Randazzo, "Electromagnetic detection of dielectric scatterers using phaseless synthetic and real data and the memetic algorithm," IEEE Trans. Geosci. Remote Sens., vol. 41, pp. 2745-2753, 2003.
[29] S. Caorsi, A. Massa, M. Pastorino, and A. Rosani, "Microwave medical imaging: Potentialities and limitations of a stochastic optimization technique," IEEE Trans. Microwave Theory Tech., vol. 52, pp. 1909-1916, 2004.
[30] S. Caorsi, A. Massa, M. Pastorino, and M. Donelli, "Improved microwave imaging procedure for nondestructive evaluations of two-dimensional structures," IEEE Trans. Antennas Propag., vol. 52, pp. 1386-1397, 2004.
[31] S. Caorsi, M. Donelli, A. Lommi, and A. Massa, "Location and imaging of two-dimensional scatterers by using a particle swarm algorithm," J. Electromagn. Waves Appl., vol. 18, pp. 481-494, 2004.
[32] S. Caorsi, A. Massa, M. Pastorino, M. Raffetto, and A. Randazzo, "Microwave imaging of cylindrical inhomogeneities based on an analytical forward solver and multiple illuminations," in Proc. 2004 IEEE Int. Workshop on Imaging Systems and Techniques, Stresa, Italy, 2004, pp. 100-105.
[33] U. K. Chakraborty, Advances in Differential Evolution. Berlin Heidelberg: Springer, 2008.
[34] X. Chen, K. O'Neill, B. E. Barrowes, T. M. Grzegorczyk, and J. A. Kong, "Application of a spheroidal-mode approach and a differential evolution algorithm for inversion of magneto-quasistatic data in UXO discrimination," Inverse Problems, vol. 20, pp. 27-40, 2004.
[35] W. C. Chew, Waves and Fields in Inhomogeneous Media. New York, NY: IEEE Press, 1995.
[36] W. C. Chew, J.-M. Jin, E. Michielssen, and J. Song, Fast and Efficient Algorithms in Computational Electromagnetics. Boston, MA: Artech House, 2001.
[37] W. Chien and C.-C. Chiu, "Using NU-SSGA to reduce the searching time in inverse problem of a buried metallic object," IEEE Trans. Antennas Propag., vol. 53, pp. 3128-3134, 2003.
[38] W. Chien and C.-C. Chiu, "Using NU-SSGA to reduce the searching time in inverse problem of a buried metallic object," IEEE Trans. Antennas Propag., vol. 53, pp. 3128-3134, 2005.
[39] C.-C. Chiu and P.-T. Liu, "Image reconstruction of a perfectly conducting cylinder by the genetic algorithm," IEE Proc. Microw. Antennas Propag., vol. 143, pp. 249-253, 1996.
[40] C. C. Chiu and P. T. Liu, "Electromagnetic transverse electric-wave inverse scattering of a conductor by the genetic algorithm," Int. J. Imaging Syst. Technol., vol. 9, pp. 388-394, 1998.
[41] C.-C. Chiu and W.-T. Chen, "Electromagnetic imaging for an imperfectly conducting cylinder by the genetic algorithm," IEEE Trans. Microwave Theory Tech., vol. 48, pp. 1901-1905, 2000.
[42] M. Clerc and J. Kennedy, "The particle swarm - Explosion, stability, and convergence in a multidimensional complex space," IEEE Trans. Evol. Comput., vol. 6, pp. 58-73, 2002.
[43] C. Coello, D. A. Van Veldhuizen, and G. B. Lamont, Evolutionary Algorithms for Solving Multi-Objective Problems (2nd Ed.). New York: Springer, 2007.
[44] A. Collaro, G. Franceschetti, F. Palmieri, and M. S. Ferreiro, "Phase unwrapping by means of genetic algorithms," Journ. Opt. Soc. America A, vol. 15, pp. 407-418, 1998.
[45] D. Costa and A. Hertz, "Ants can colour graphs," J. Operat. Res. Soc., vol. 48, pp. 295-305, 1997.
[46] R. Dawkins, The Selfish Gene. New York: Oxford Univ. Press, 1976.
[47] K. A. De Jong, "An analysis of the behavior of a class of genetic adaptive systems," Ph.D. dissertation, Univ. Michigan, 1975.
[48] K. A. De Jong and W. M. Spears, "A formal analysis of the role of multi-point crossover in genetic algorithms," Annals of Mathematics and Artificial Intelligence, vol. 5, pp. 1-26, 1992.
[49] M. Donelli and A. Massa, "Computational approach based on a particle swarm optimizer for microwave imaging of two-dimensional dielectric scatterers," IEEE Trans. Microw. Theory Tech., vol. 53, pp. 1761-1776, 2005.
[50] M. Donelli, G. Franceschini, A. Martini, and A. Massa, "An integrated multiscaling strategy based on a particle swarm algorithm for inverse scattering problems," IEEE Trans. Geosci. Remote Sens., vol. 44, pp. 298-312, 2006.
[51] M. Donelli, D. Franceschini, P. Rocca, and A. Massa, "Three-dimensional microwave imaging problems solved through an efficient multiscaling particle swarm optimization," IEEE Trans. Geosci. Remote Sens., vol. 47, pp. 1467-1481, 2009.
[52] M. Dorigo, V. Maniezzo, and A. Colorni, "Ant system: optimization by a colony of cooperating agents," IEEE Trans. Syst. Man Cybern. B, vol. 26, pp. 29-41, 1996.
[53] M. Dorigo and L. M. Gambardella, "Ant colony system: A cooperative learning approach to the traveling salesman problem," IEEE Trans. Evol. Comput., vol. 1, pp. 53-66, 1997.
[54] M. Dorigo and T. Stutzle, Ant Colony Optimization. Cambridge, MA: MIT Press, 2004.
[55] M. Dorigo and C. Blum, "Ant colony optimization theory: A survey," Theoretical Computer Science, vol. 344, pp. 243-278, 2005.
[56] R. C. Eberhart and Y. Shi, "Comparing inertia weights and constriction factors in particle swarm optimization," in Proc. Congr. Evolutionary Computation, Piscataway, NJ, 2000, pp. 84-88.
[57] A. E. Eiben, E. H. L. Aarts, and K. M. Van Hee, "Global convergence of genetic algorithms: A Markov chain analysis," in Parallel Problem Solving from Nature, H.-P. Schwefel and R. Manner, Eds. Berlin Heidelberg: Springer, 1991, pp. 4-12.
[58] A. I. El-Gallad, M. El-Hawary, A. A. Sallam, and A. I. Kalas, "Swarm intelligence for hybrid cost dispatch problem," in Proc. Can. Electrical and Computer Engineering Conf., vol. 2, 2001, pp. 753-757.
[59] Y. Ermoliev and R. J. B. Wets, Numerical Techniques for Stochastic Optimization. New York: Springer, 1988.
[60] H.-Y. Fan and J. Lampinen, "A trigonometric mutation operation to differential evolution," J. Global Optimization, vol. 27, pp. 105-129, 2003.
[61] S. Forrest, "Genetic algorithms: Principles of natural selection applied to computation," Science, vol. 261, pp. 872-878, 1993.
[62] G. Franceschini, M. Donelli, R. Azaro, and A. Massa, "Inversion of phaseless total field data using a two-step strategy based on the iterative multiscaling approach," IEEE Trans. Geosci. Remote Sens., vol. 44, pp. 3527-3539, 2006.
[63] R. Gamperle, S. D. Muller, and P. Koumoutsakos, "A parameter study for differential evolution," in Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation, A. Grmela and N. Mastorakis, Eds. WSEAS Press, 2002, pp. 293-298.
[64] S. Garnier, J. Gautrais, and G. Theraulaz, "The biological principles of swarm intelligence," Swarm Intelligence, vol. 1, pp. 3-31, 2007.
[65] D. B. Ge, Y. B. Yan, S. Y. Shi, and X. C. Nie, "Genetic algorithm applied to inverse scattering: Two examples," in Proc. Asia Pacific Microwave Conf., pp. 689-692, 1997.
[66] S. Genovesi, E. Salerno, A. Monorchio, and G. Manara, "A permittivity range profile reconstruction technique for multilayered structures by a line process-augmented genetic algorithm," in Proc. 2005 IEEE AP-S Int. Symp., Washington DC, USA, 20-25 Jun. 2005, pp. 216-219.
[67] D. E. Goldberg and P. Segrest, "Finite Markov chain analysis of genetic algorithms," in Proc. 2nd Int. Conf. Genetic Algorithms, pp. 1-8, 1987.
[68] D. E. Goldberg, "Simple genetic algorithms and the minimal deceptive problem," in L. Davis (Ed.), Genetic Algorithms and Simulated Annealing, Pitman, London, UK, 1987, pp. 74-88.
[69] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. Boston, MA: Addison-Wesley, 1989.
[70] S. Goonatilake and P. Treleaven, Intelligent Systems for Finance and Business. Chichester: Wiley, 1995.
[71] P. P. Grassé, "La reconstruction du nid et les coordinations inter-individuelles chez Bellicositermes Natalensis et Cubitermes sp. La théorie de la stigmergie: essai d'interprétation du comportement des termites constructeurs," Insectes Sociaux, vol. 6, pp. 41-81, 1959.
[72] W. J. Gutjahr, "A graph-based ant system and its convergence," Future Generat. Comput. Syst., vol. 16, pp. 873-888, 2000.
[73] W. J. Gutjahr, "ACO algorithms with guaranteed convergence to the optimal solution," Informat. Process. Lett., vol. 82, pp. 145-153, 2002.
[74] P. G. Harrald, "Evolutionary algorithms and economic models: A view," in Proc. 5th Annu. Conf. on Evolutionary Programming, Cambridge, MA: MIT Press, 1996, pp. 3-7.
[75] J. H. Holland, Adaptation in Natural and Artificial Systems. Ann Arbor, MI: Univ. Michigan Press, 1975.
[76] R. L. Haupt, "An introduction to genetic algorithms for electromagnetics," IEEE Antennas Propag. Mag., vol. 37, pp. 7-15, 1995.
[77] J. H. Holland, "Genetic algorithms," Sci. Amer., vol. 267, pp. 66-72, 1992.
[78] J. Horn, "Finite Markov chain analysis of genetic algorithms with niching," in Proc. 5th Int. Conf. Genetic Algorithms, pp. 110-117, 1993.
[79] T. Huang and A. S. Mohan, "Microwave imaging of perfect electrically conducting cylinder by micro-genetic algorithm," in Proc. 2004 IEEE AP-S Int. Symp., Monterey, CA, USA, 2004, pp. 221-224.
[80] T. Huang and A. S. Mohan, "Application of particle swarm optimization for microwave imaging of lossy dielectric objects," in Proc. 2005 IEEE AP-S Int. Symp., Washington DC, USA, 20-25 June 2005, pp. 852-855.
[81] T. Huang and A. S. Mohan, "A hybrid boundary condition for robust particle swarm optimization," IEEE Antennas Wireless Propag. Lett., vol. 4, pp. 112-117, 2005.
[82] T. Huang and A. S. Mohan, "Significance of neighborhood topologies for the reconstruction of microwave images using particle swarm optimization," in Proc. APMC-2005 Asia-Pacific Microwave Conference, 4-7 Dec. 2005, Suzhou, China, vol. 1.
[83] T. Huang and A. S. Mohan, "A microparticle swarm optimizer for the reconstruction of microwave images," IEEE Trans. Antennas Propag., vol. 55, pp. 568-576, 2007.
[84] C. Kehyayan, N. Mansour, and H. Khachfe, "Evolutionary algorithm for protein structure prediction," in Proc. ICACTE '08 Int. Conf. Advanced Computer Theory and Engineering, 25-27 Sept. 2008, Cairo, Egypt, pp. 925-929.
[85] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. 1995 IEEE Intern. Conf. Neural Networks, Piscataway, NJ (USA), 1995, pp. 1942-1948.
[86] J. Kennedy and R. C. Eberhart, "A discrete binary version of the particle swarm algorithm," in Proc. World Multiconf. Syst., Cybern. Informatics, Piscataway, NJ (USA), 1997, pp. 4104-4109.
[87] J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence. San Francisco, CA: Morgan Kaufmann, 2001.
[88] J. R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection. Cambridge, MA: The MIT Press, 1992.
[89] C. Z. Janikow and Z. Michalewicz, "An experimental comparison of binary and floating point representations in genetic algorithms," in Proc. 4th Int. Conf. Genetic Algorithms, San Diego, CA, USA, Jul. 1991, pp. 31-36.
[90] J. M. Johnson and Y. Rahmat-Samii, "Genetic algorithms in engineering electromagnetics," IEEE Antennas Propag. Mag., vol. 39, pp. 7-21, 1997.
[91] R. L. Johnston, Applications of Evolutionary Computation in Chemistry. Berlin Heidelberg: Springer, 2004.
[92] A. Lewandowski and V. L. Volkovich, Multiobjective Problems of Mathematical Programming. Eds. Springer 1991, Proc. Int. Conf. Multiobjective Problems of Mathematical Programming, Yalta, USSR, Oct. 26-Nov. 2, 1988.
[93] F. Li, X. Chen, and K. Huang, "Microwave imaging a buried object by the GA and using the S11 parameter," Progress In Electromagnetics Research, vol. 85, pp. 289-302, 2008.
[94] J. Liu and J. Lampinen, "On setting the control parameter of the differential evolution method," in Proc. 8th Int. Conf. Soft Computing (MENDEL 2002), Brno, Czech Republic, 2002, pp. 11-18.
[95] V. Maniezzo, "Exact and approximate nondeterministic tree-search procedures for the quadratic assignment problem," INFORMS J. Comput., vol. 11, pp. 358-369, 1999.
[96] R. Marklein, K. Balasubramanian, A. Qing, and K. J. Langenberg, "Linear and nonlinear iterative scalar inversion of multi-frequency multi-bistatic experimental electromagnetic scattering data," Inverse Problems, vol. 17, pp. 1597-1610, 2001.
[97] A. Massa, M. Pastorino, and A. Randazzo, "Reconstruction of two-dimensional buried objects by a differential evolution method," Inverse Problems, vol. 20, pp. 135-150, 2004.
[98] A. Massa, D. Franceschini, G. Franceschini, M. Pastorino, M. Raffetto, and M. Donelli, "Parallel GA-based approach for microwave imaging applications," IEEE Trans. Antennas Propag., vol. 53, pp. 3118-3127, 2005.
[99] R. Mendes, J. Kennedy, and J. Neves, "Watch thy neighbor or how the swarm can learn from its environment," in Proc. IEEE Swarm Intell. Symp., pp. 88-94, 2003.
[100] Z. Q. Meng, T. Takenaka, and T. Tanaka, "Image reconstruction of two-dimensional impenetrable objects using genetic algorithm," J. Electromagn. Waves Appl., vol. 13, pp. 95-118, 1999.
[101] Z.-Q. Meng, T. Takenaka, and S. He, "A genetic algorithm with an adaptive chromosome structure for reconstruction of radome parameters using a Gaussian beam," Microw. Opt. Technol. Lett., vol. 25, pp. 323-327, 2000.
[102] D. Merkle and M. Middendorf, "Ant colony optimization with global pheromone evaluation for scheduling a single machine," Appl. Intell., vol. 18, pp. 105-111, 2003.
[103] P. Merz and B. Freisleben, "Fitness landscape analysis and memetic algorithms for the quadratic assignment problem," IEEE Trans. Evol. Comput., vol. 4, pp. 337-352, 2000.
[104] N. Meuleau and M. Dorigo, "Ant colony optimization and stochastic gradient descent," Artif. Life, vol. 8, pp. 103-121, 2002.
[105] K. A. Michalski, "Electromagnetic imaging of circular-cylindrical conductors and tunnels using a differential evolution algorithm," Microw. Opt. Technol. Lett., vol. 27, pp. 330-334, 2000.
[106] K. A. Michalski, "Electromagnetic imaging of elliptical-cylindrical conductors and tunnels using a differential evolution algorithm," Microw. Opt. Technol. Lett., vol. 28, pp. 164-169, 2001.
[107] S. M. Mikki and A. A. Kishk, "Physical theory for particle swarm optimization," Progr. Electromagn. Wave Res., vol. 75, pp. 171-207, 2007.
[108] A. F. Morabito, I. Catapano, M. D'Urso, L. Crocco, and T. Isernia, "A stepwise approach for quantitative 3D imaging: Rationale and experimental results," in Proc. 2009 IEEE AP-S Int. Symp., Charleston, SC, 2009, 112.1.
[109] P. Moscato, "On evolution, search, optimization, GAs and martial arts: Optimization towards memetic algorithms," California Inst. Technol., Pasadena, CA, Tech. Rep. Caltech Concurrent Comput. Prog. Rep. 826, 1989.
[110] T. Niwa and M. Tanaka, "On the mean convergence time for simple genetic algorithms," in Proc. 1995 IEEE Int. Conf. Evolutionary Computation, vol. 1, pp. 373-377.
[111] J. Nocedal and S. J. Wright, Numerical Optimization. New York: Springer, 1999.
[112] E. Ozcan and C. K. Mohan, "Particle swarm optimization: Surfing the waves," in Proc. 1999 Congr. Evolutionary Computation, Washington, DC, July 1999, pp. 1939-1944.
[113] V. Pareto, Cours d'Economie Politique. Lausanne: F. Rouge, 1896.
[114] C. S. Park and B. S. Jeong, "Reconstruction of a high contrast and large object by using the hybrid algorithm combining a Levenberg-Marquardt algorithm and a genetic algorithm," IEEE Trans. Magn., vol. 35, pp. 1582-1585, 1999.
[115] M. Pastorino, A. Massa, and S. Caorsi, "A microwave inverse scattering technique for image reconstruction based on a genetic algorithm," IEEE Trans. Instrum. Meas., vol. 49, pp. 573-578, 2000.
[116] M. Pastorino, S. Caorsi, and A. Massa, "A global optimization technique for microwave nondestructive evaluation," IEEE Trans. Instrum. Meas., vol. 51, pp. 666-673, 2002.
[117] M. Pastorino, S. Caorsi, A. Massa, and A. Randazzo, "Reconstruction algorithms for electromagnetic imaging," IEEE Trans. Instrum. Meas., vol. 53, pp. 692-699, 2004.
[118] M. Pastorino, "Stochastic optimization methods applied to microwave imaging: A review," IEEE Trans. Antennas Propag., vol. 55, pp. 538-548, 2007.
[119] E. Polak, Computational Methods in Optimization. New York: Academic Press, 1971.
[120] R. Poli, J. Kennedy, and T. Blackwell, "Particle swarm optimization - An overview," Swarm Intelligence, vol. 1, pp. 33-57, 2007.
[121] K. V. Price, R. M. Storn, and J. A. Lampinen, Differential Evolution: A Practical Approach to Global Optimization. Berlin Heidelberg: Springer, 2005.
[122] X. Qi and F. Palmieri, "Theoretical analysis of evolutionary algorithms with an infinite population size in continuous space - Part I: Basic properties of selection and mutation," IEEE Trans. Neural Networks, vol. 5, pp. 102-119, 1994.
[123] X. Qi and F. Palmieri, "Theoretical analysis of evolutionary algorithms with an infinite population size in continuous space - Part II: Analysis of the diversification role of crossover," IEEE Trans. Neural Networks, vol. 5, pp. 120-129, 1994.
[124] A. Qing, C. K. Lee, and L. Jen, "Microwave imaging of parallel perfectly conducting cylinders using real-coded genetic algorithm," J. Electromagn. Waves Appl., vol. 13, pp. 1121-1143, 1999.
[125] A. Qing and C. K. Lee, "Microwave imaging of a perfectly conducting cylinder using a real-coded genetic algorithm," IEE Proc. Microw. Antennas Propag., vol. 146, pp. 421-425, 1999.
[126] A. Qing and C. K. Lee, "Microwave imaging of parallel perfectly conducting cylinders using real-coded genetic algorithm coupled with Newton-Kantorovitch method," J. Electromagn. Waves Appl., vol. 14, pp. 799-800, 2000.
[127] A. Qing, C. K. Lee, and L. Jen, "Electromagnetic inverse scattering of two-dimensional perfectly conducting objects by real-coded genetic algorithm," IEEE Trans. Geosci. Remote Sens., vol. 39, pp. 665-676, 2001.
[128] A. Qing, "An experimental study on electromagnetic inverse scattering of a perfectly conducting cylinder by using the real-coded genetic algorithm," Microw. Opt. Technol. Lett., vol. 30, pp. 315-320, 2001.
[129] A. Qing, "Electromagnetic imaging of two-dimensional perfectly conducting cylinders with transverse electric scattered field," IEEE Trans. Antennas Propag., vol. 50, pp. 1786-1794, 2002.
[130] A. Qing, "Electromagnetic inverse scattering of multiple two-dimensional perfectly conducting objects by the differential evolution strategy," IEEE Trans. Antennas Propag., vol. 51, pp. 1251-1262, 2003.
[131] A. Qing, "Electromagnetic inverse scattering of multiple perfectly conducting cylinders by differential evolution strategy with individuals in groups (GDES)," IEEE Trans. Antennas Propag., vol. 52, pp. 1223-1229, 2004.
[132] A. Qing, "Dynamic differential evolution strategy and applications in electromagnetic inverse scattering problems," IEEE Trans. Geosci. Remote Sens., vol. 44, pp. 116-125, 2006.
[133] A. Qing, "A parametric study on differential evolution based on benchmark electromagnetic inverse scattering problem," in Proc. 2007 IEEE Congress Evol. Comput. (CEC 2007), Tokyo, Japan, pp. 1904-1909.
[134] I. T. Rekanos, "Shape reconstruction of a perfectly conducting scatterer using differential evolution and particle swarm optimization," IEEE Trans. Geosci. Remote Sens., vol. 46, pp. 1967-1974, 2008.
[135] M. Reimann, K. Doerner, and R. F. Hartl, "D-ants: Savings based ants divide and conquer the vehicle routing problem," Computers & Operations Research, vol. 31, no. 4, pp. 563-591, 2004.
[136] M. Reimann, Progress in Complex Systems Optimization. Berlin Heidelberg: Springer, 2007.
[137] J. H. Richmond, "Scattering by a dielectric cylinder of arbitrary cross-section shape," IEEE Trans. Antennas Propag., vol. 13, pp. 334-341, 1965.
[138] J. Robinson and Y. Rahmat-Samii, "Particle swarm optimization in electromagnetics," IEEE Trans. Antennas Propag., vol. 52, pp. 397-407, 2004.
[139] G. Rudolph, "Convergence analysis of canonical genetic algorithms," IEEE Trans. Neural Networks, vol. 5, pp. 96-101, 1994.
[140] A. Sabouni, M. Xu, S. Noghanian, P. Thulasiraman, and S. Pistorius, "Efficient microwave breast imaging technique using parallel finite difference time domain and parallel genetic algorithms," in Proc. 2007 IEEE AP-S Int. Symp., Albuquerque, NM, 2007, pp. 2176-2179.
[141] A. Saeedfar and K. Barkeshli, "Shape reconstruction of three-dimensional conducting curved plates using physical optics, NURBS modeling, and genetic algorithm," IEEE Trans. Antennas Propag., vol. 54, pp. 2497-2507, 2006.
[142] J. D. Schaffer, "Multiple objective optimization with vector evaluated genetic algorithms," Ph.D. dissertation, Vanderbilt Univ., Nashville, TN, 1984.
[143] J. D. Schaffer, "Multiple objective optimization with vector evaluated genetic algorithms," in Proc. 1st Int. Conf. Genetic Algorithms, Hillsdale, NJ, Lawrence Erlbaum, 1985, pp. 93-100.
[144] J. J. Schneider and S. Kirkpatrick, Stochastic Optimization. Berlin Heidelberg: Springer, 2006.
[145] A. Semnani and M. Kamyab, "An enhanced method for inverse scattering problems using Fourier series expansion in conjunction with FDTD and PSO," Progress In Electromagnetics Research, vol. 76, pp. 45-64, 2007.
[146] Y. Shi and R. C. Eberhart, "A modified particle swarm optimizer," in Proc. 1998 IEEE Inter. Conf. Evol. Comput., IEEE Press, Piscataway, NJ, pp. 69-73.
[147] Y. Shi and R. C. Eberhart, "Empirical study of particle swarm optimization," in Proc. Congr. Evolutionary Computation, Washington, DC, 1999, pp. 1945-1950.
[148] K. Socha, "ACO for continuous and mixed-variable optimization," in Proc. 4th Int. Workshop Ant Colony Optim. Swarm Intell. (ANTS'2004), Brussels, Belgium, Sep. 5-8, 2004.
[149] K. Socha and C. Blum, "An ant colony optimization algorithm for continuous optimization: application to feed-forward neural network training," Neural Comput. & Applic., vol. 16, pp. 235-247, 2007.
[150] R. Storn and K. Price, "Differential evolution - A simple and efficient adaptive scheme for global optimization over continuous spaces," International Comput. Science Inst., Berkeley, CA, Tech. Rep. TR-95-012, 1995.
[151] R. Storn and K. Price, "Differential evolution - A simple and efficient heuristic for global optimization over continuous spaces," J. Global Optim., vol. 11, pp. 341-359, 1997.
[152] T. Stutzle and M. Dorigo, "A short convergence proof for a class of ant colony optimization algorithms," IEEE Trans. Evol. Comput., vol. 6, pp. 358-365, 2002.
[153] J. Suzuki, "A Markov chain analysis on a genetic algorithm," in Proc. 5th Inter. Conf. Genetic Algorithms, University of Illinois at Urbana-Champaign, pp. 146-153, 1993.
[154] T. Takenaka, Z. Q. Meng, T. Tanaka, and W. C. Chew, "Local shape function combined with genetic algorithm applied to inverse scattering for strips," Microw. Opt. Technol. Lett., vol. 16, pp. 337-341, 1997.
[155] G. Theraulaz and E. Bonabeau, "A brief history of stigmergy," Artificial Life, vol. 5, pp. 97-116, 1999.
[156] X. L. Travassos, D. A. G. Vieira, N. Ida, C. Vollaire, and A. Nicolas, "Inverse algorithms for the GPR assessment of concrete structures," IEEE Trans. Magn., vol. 44, pp. 994-997, 2008.
[157] I. C. Trelea, "The particle swarm optimization algorithm: convergence analysis and parameter selection," Information Processing Letters, vol. 85, pp. 317-325, 2003.
[158] G. N. Vanderplaats, Numerical Optimization Techniques for Engineering Design: With Applications. New York: McGraw-Hill, 1984.
[159] M. D. Vose and G. E. Liepins, "Punctuated equilibria in genetic search," Complex Systems, vol. 5, pp. 31-44, 1991.
[160] W. Wang, Y. Lu, J. Fu, and Y. Z. Xiong, "Particle swarm optimization and finite-element based approach for microwave filter design," IEEE Trans. Magn., vol. 41, pp. 1800-1803, 2005.
[161] D. S. Weile and E. Michielssen, "Genetic algorithm optimization applied to electromagnetics: A review," IEEE Trans. Antennas Propag., vol. 45, pp. 343-353, 1997.
[162] W. Weishui and X. Chen, "Convergence theorem of genetic algorithm," in Proc. 1996 IEEE Int. Conf. Systems, Man, and Cybernetics, Beijing, China, pp. 1676-1681.
[163] R. A. Wildman and D. S. Weile, "Geometry reconstruction of conducting cylinders using genetic programming," IEEE Trans. Antennas Propag., vol. 55, pp. 629-636, 2007.
[164] R. A. Wildman and D. S. Weile, "Greedy search and a hybrid local optimization/genetic algorithm for tree-based inverse scattering," Microw. Opt. Technol. Lett., vol. 50, pp. 822-825, 2008.
[165] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Trans. Evol. Comput., vol. 1, pp. 67-82, 1997.
[166] F. Xiao and H. Yabe, "Microwave imaging of perfectly conducting cylinders from real data by micro genetic algorithm coupled with deterministic method," IEICE Trans. Electron., vol. E81-C, pp. 1784-1792, 1998.
[167] M. Xu, A. Sabouni, P. Thulasiraman, S. Noghanian, and S. Pistorius, "Image reconstruction using microwave tomography for breast cancer detection on distributed memory machine," in Proc. 2007 Int. Conf. Parallel Processing (ICPP 2007), Xi'an, China, September 10-14, 2007, p. 36.
[168] F. Xue, A. C. Sanderson, and R. J. Graves, "Multi-objective differential evolution - Algorithm, convergence analysis, and applications," in Proc. 2005 IEEE Congress on Evolutionary Computation, 2-4 Sep. 2005, Edinburgh, UK, vol. 1, pp. 743-750.
[169] S.-Y. Yang, H.-K. Choi, and J.-W. Ra, "Reconstruction of a large and high-contrast penetrable object by using the genetic and Levenberg-Marquardt algorithms," Microw. Opt. Tech. Lett., vol. 16, pp. 17-21, 1997.
[170] S. H. Zainud-Deen, W. M. Hassen, E. M. Ali, K. H. Awadalla, and H. A. Sharshar, "Breast cancer detection using a hybrid finite difference frequency domain and particle swarm optimization techniques," Progress In Electromagnetics Research B, vol. 3, pp. 35-46, 2008.
[171] D. Zaharie, "Critical values for the control parameters of differential evolution algorithm," in Proc. 8th Int. Conf. Soft Computing (MENDEL 2002), Brno, Czech Republic, 2002, pp. 62-67.
[172] B. Zhao and S. Li, "A convergence proof for ant colony algorithm," in Proc. 6th World Congress Intelligent Control and Automation, June 21-23, Dalian, China, vol. 1, pp. 3072-3075, 2006.
[173] Y. Zhou and H. Ling, "Electromagnetic inversion of Ipswich objects with the use of the genetic algorithm," Microw. Opt. Technol. Lett., vol. 33, pp. 457-459, 2002.
[174] Y. Zhou, J. Li, and H. Ling, "Shape inversion of metallic cavities using hybrid genetic algorithm combined with tabu list," Electron. Lett., vol. 39, pp. 280-281, 2003.
[175] E. Zitzler, K. Deb, L. Thiele, C. Coello, and D. Corne, "Evolutionary multi-criterion optimization," in Proc. 1st Int. Conf. (EMO'01), Springer, Berlin, 2001, vol. 1993.

[Figure: flowchart. INITIALIZATION of the population F0 = {f0(p); p = 1, ..., P}, k = 0; FITNESS EVALUATION Φ(fk(p)), p = 1, ..., P; Reproduction Cycle with SELECTION, CROSSOVER (probability pCR), and MUTATION (probability pMT); ELITISM; REPLACE POPULATION; k = k + 1 and loop until convergence.]

Figure 1. Genetic Algorithm - Flowchart.
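For concreteness, the reproduction cycle of Figure 1 can be sketched in code. This is a minimal illustrative implementation, not the method of any reviewed paper: the choice of binary tournament selection, arithmetic crossover, clamped Gaussian mutation, and a fixed iteration budget are assumptions made here for brevity.

```python
import random

def genetic_algorithm(fitness, n_vars, pop_size=20, p_cr=0.7, p_mt=0.05, k_max=100):
    """Minimal real-coded GA following Figure 1: initialization, fitness
    evaluation, selection/crossover (p_cr)/mutation (p_mt), and elitism."""
    # INITIALIZATION: P trial solutions with genes in [0, 1]
    pop = [[random.random() for _ in range(n_vars)] for _ in range(pop_size)]
    for k in range(k_max):
        # FITNESS EVALUATION (minimization)
        scores = [fitness(ind) for ind in pop]
        elite = min(pop, key=fitness)  # ELITISM: best individual survives

        def select():  # SELECTION: binary tournament (an assumption)
            a, b = random.sample(range(pop_size), 2)
            return pop[a] if scores[a] < scores[b] else pop[b]

        children = []
        while len(children) < pop_size - 1:
            p1, p2 = select(), select()
            if random.random() < p_cr:  # CROSSOVER with probability pCR
                w = random.random()
                child = [w * x + (1 - w) * y for x, y in zip(p1, p2)]
            else:
                child = list(p1)
            # MUTATION with probability pMT per gene, clamped to [0, 1]
            child = [min(1.0, max(0.0, g + random.gauss(0, 0.1)))
                     if random.random() < p_mt else g for g in child]
            children.append(child)
        pop = [elite] + children  # REPLACE POPULATION
    return min(pop, key=fitness)
```

Running it on a hypothetical sphere-like cost function (standing in for a data-mismatch functional) shows the elitist best-so-far decreasing monotonically across generations.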

[Figure: flowchart. INITIALIZATION of the population F0 = {f0(p); p = 1, ..., P}, k = 0; FITNESS EVALUATION Φ(fk(p)); Reproduction Cycle with MUTATION (mutant vectors tk+1(p)), CROSSOVER (trial vectors uk+1(p)), SELECTION with FITNESS EVALUATION; k = k + 1 and loop until convergence.]

Figure 2. Differential Evolution - Flowchart.
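The mutation/crossover/selection sequence of Figure 2 can likewise be sketched. This is an illustrative DE/rand/1/bin variant, assumed here for concreteness; the control parameters F and CR and the box constraints are placeholders, not values from the reviewed works.

```python
import random

def differential_evolution(fitness, n_vars, pop_size=20, F=0.5, CR=0.9, k_max=100):
    """Minimal DE loop following Figure 2: for each parent, build a mutant
    t = a + F*(b - c), cross it binomially with the parent to obtain a
    trial u, and keep the fitter of parent and trial (greedy selection)."""
    pop = [[random.uniform(0, 1) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(k_max):
        new_pop = []
        for p, parent in enumerate(pop):
            # MUTATION: three distinct donors, none equal to the parent
            a, b, c = random.sample([i for i in range(pop_size) if i != p], 3)
            mutant = [pop[a][j] + F * (pop[b][j] - pop[c][j]) for j in range(n_vars)]
            # CROSSOVER: binomial, with at least one gene from the mutant
            j_rand = random.randrange(n_vars)
            trial = [mutant[j] if (random.random() < CR or j == j_rand) else parent[j]
                     for j in range(n_vars)]
            # SELECTION: one-to-one greedy replacement
            new_pop.append(trial if fitness(trial) <= fitness(parent) else parent)
        pop = new_pop
    return min(pop, key=fitness)
```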

[Figure: flowchart. INITIALIZATION of positions F0 = {f0(p); p = 1, ..., P} and velocities v0(p), k = 0; FITNESS EVALUATION Φ(fk(p)); Swarm Update with UPDATE COGNITIVE-SOCIAL TERMS (pk+1(p), gk+1), VELOCITY UPDATE (vk+1(p)), and POSITION UPDATE; k = k + 1 and loop until convergence.]

Figure 3. Particle Swarm Optimizer - Flowchart.
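The swarm-update loop of Figure 3 reduces to the standard velocity and position equations, sketched below. The inertia weight w and acceleration coefficients c1, c2 are illustrative values, not a recommendation from the reviewed literature.

```python
import random

def particle_swarm(fitness, n_vars, n_particles=20, w=0.7, c1=1.5, c2=1.5, k_max=100):
    """Minimal PSO loop following Figure 3: velocity update from inertia,
    cognitive (personal best p) and social (global best g) terms, then
    position update and refresh of the best-so-far memories."""
    pos = [[random.uniform(-1, 1) for _ in range(n_vars)] for _ in range(n_particles)]
    vel = [[0.0] * n_vars for _ in range(n_particles)]
    pbest = [list(x) for x in pos]          # personal bests p^(p)
    gbest = min(pbest, key=fitness)         # global best g
    for _ in range(k_max):
        for i in range(n_particles):
            for j in range(n_vars):
                r1, r2 = random.random(), random.random()
                # VELOCITY UPDATE: inertia + cognitive + social terms
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (gbest[j] - pos[i][j]))
                pos[i][j] += vel[i][j]      # POSITION UPDATE
            # UPDATE COGNITIVE-SOCIAL TERMS
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = list(pos[i])
        gbest = min(pbest, key=fitness)
    return gbest
```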

[Figure: flowchart. INITIALIZATION of the pheromone levels ψij (∀ij), k = 0; FITNESS EVALUATION Φ(fk(p)); Colony Update with PHEROMONE EVAPORATION, PHEROMONE DEPOSITION (ψij, k+1, ∀ij), and MOVE ANTS; k = k + 1 and loop until convergence.]

Figure 4. Ant Colony Optimizer - Flowchart.
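In the same spirit, the colony update of Figure 4 can be sketched on a toy discrete problem. The problem itself (picking one option per step from a cost matrix) and the deposition rule 1/(1 + cost) are assumptions introduced here purely to make the evaporate/deposit/move cycle concrete.

```python
import random

def ant_colony(costs, n_ants=10, rho=0.5, k_max=50):
    """Minimal ACO sketch following Figure 4. `costs[i][j]` is the
    (hypothetical) cost of choosing option j at step i; ants sample
    options proportionally to the pheromone psi_ij."""
    n_steps, n_opts = len(costs), len(costs[0])
    psi = [[1.0] * n_opts for _ in range(n_steps)]  # uniform psi_ij init
    best_path, best_cost = None, float("inf")
    for _ in range(k_max):
        # MOVE ANTS: each ant samples one option per step
        paths = []
        for _ in range(n_ants):
            path = [random.choices(range(n_opts), weights=psi[i])[0]
                    for i in range(n_steps)]
            paths.append((path, sum(costs[i][j] for i, j in enumerate(path))))
        # PHEROMONE EVAPORATION: psi_ij <- (1 - rho) * psi_ij
        for i in range(n_steps):
            for j in range(n_opts):
                psi[i][j] *= (1.0 - rho)
        # PHEROMONE DEPOSITION: cheaper paths deposit more
        for path, cost in paths:
            for i, j in enumerate(path):
                psi[i][j] += 1.0 / (1.0 + cost)
        k_best = min(paths, key=lambda pc: pc[1])
        if k_best[1] < best_cost:
            best_path, best_cost = k_best
    return best_path, best_cost
```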

[Figure: flowchart. INITIALIZATION of the population F0 = {f0(p); p = 1, ..., P}, k = 0; LOCAL SEARCH and FITNESS EVALUATION Φ(f0(p)); Reproduction Cycle with GLOBAL SEARCH, LOCAL SEARCH, FITNESS EVALUATION Φ(fk+1(p)), and ELITISM; k = k + 1 and loop until convergence.]

Figure 5. Memetic Algorithm - Flowchart.
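The global-plus-local structure of Figure 5 can be sketched as follows. Both the local searcher (stochastic hill climbing) and the global operator (arithmetic recombination of random pairs) are assumptions chosen for brevity; memetic implementations in the reviewed literature typically use gradient-based local steps instead.

```python
import random

def local_search(fitness, x, step=0.05, n_tries=20):
    """LOCAL SEARCH block: simple stochastic hill climbing (an assumption)."""
    best, best_f = list(x), fitness(x)
    for _ in range(n_tries):
        cand = [g + random.gauss(0, step) for g in best]
        f = fitness(cand)
        if f < best_f:
            best, best_f = cand, f
    return best

def memetic_algorithm(fitness, n_vars, pop_size=10, k_max=30):
    """Minimal memetic loop following Figure 5: local refinement of the
    initial population, then repeated global search, local search on the
    offspring, and elitist replacement."""
    pop = [[random.uniform(-1, 1) for _ in range(n_vars)] for _ in range(pop_size)]
    pop = [local_search(fitness, ind) for ind in pop]
    for _ in range(k_max):
        elite = min(pop, key=fitness)  # ELITISM
        # GLOBAL SEARCH: recombine random pairs (arithmetic crossover)
        children = []
        for _ in range(pop_size - 1):
            p1, p2 = random.sample(pop, 2)
            w = random.random()
            children.append([w * a + (1 - w) * b for a, b in zip(p1, p2)])
        # LOCAL SEARCH on each child, then replacement with the elite kept
        pop = [elite] + [local_search(fitness, c) for c in children]
    return min(pop, key=fitness)
```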

Algorithm    | Object     | Geometry | N   | P/N  | Kend | Time [sec]  | PC              | Reference
BGA          | PEC        | 2D       | 9   | ∼33  | −    | ∼1.8 × 10^3 | Sun Sparc 20    | [39]
BGA          | PEC        | 2D       | 36  | ∼1.4 | 173  | 330         | Sun Sparc 20    | [100]
BGA          | PEC*       | 2D       | 400 | 0.25 | −    | −           | −               | [173]
BGA          | PEC        | 3D       | 6   | −    | 30   | ∼6.7 × 10^4 | Pentium 4       | [141]
RGA          | PEC        | 2D       | 8   | 32   | 35   | ∼1.6 × 10^4 | IBM P-133       | [127]
RGA          | PEC*       | 2D       | 6   | 50   | 40   | ∼8.6 × 10^4 | MIPS R10K       | [96]
µGA          | PEC*       | 2D       | 10  | −    | −    | ∼1.7 × 10^4 | Sun Sparc 20    | [166]
µGA          | PEC        | 2D       | 5   | 1    | 1000 | ∼3.5 × 10^3 | Pentium 4       | [79]
GA/CG        | PEC*       | 2D       | 5   | 40   | 220  | −           | −               | [174]
GA/CG + tabu | PEC*       | 2D       | 5   | 40   | 75   | −           | −               | [174]
DE           | PEC        | 2D       | 5   | 5    | 40   | −           | −               | [106]
DE           | PEC*       | 2D       | 6   | ∼7   | −    | −           | HP OmniBook XE3 | [130]
DE           | PEC        | 2D       | 16  | 10   | ∼160 | ∼4 × 10^3   | HP OmniBook XE3 | [130]
DE (GDES)    | PEC        | 2D       | 16  | ∼18  | 23   | ∼1.2 × 10^3 | Pentium 4       | [131]
PSO          | PEC        | 2D       | 10  | 30   | 200  | −           | −               | [134]
BGA          | conductor  | 2D       | 10  | 30   | −    | −           | −               | [41]
RGA          | dielectric | 2D       | 6   | 50   | 20   | ∼8.6 × 10^4 | MIPS R10K       | [96]
DE           | dielectric | 2D       | 5   | 5    | 30   | −           | −               | [106]

Table I. Computational indexes of EAs applied to the shape reconstruction of metallic and homogeneous dielectric objects. N is the number of unknown parameters, P is the number of trial solutions for each iteration, and Kend is the number of iterations at convergence. The asterisk (*) denotes inversion of experimental data.

Algorithm | Object      | Geometry | N    | P/N         | Kend  | Time [sec]  | PC         | Reference
BGA       | dielectric  | 2D       | 900  | −           | 2000  | 270         | Pentium II | [23]
BGA       | dielectric  | 2D       | 400  | 0.15        | 8000  | −           | −          | [25]
RGA       | dielectric  | 2D       | 810  | 0.2         | 10000 | ∼10^4       | Pentium    | [24]
RGA       | dielectric  | 2D       | 500  | −           | 8000  | ∼1.2 × 10^3 | −          | [50]
GA/CG     | dielectric  | 2D       | 810  | 0.2         | 3400  | ∼2 × 10^4   | Pentium    | [24]
PSO       | dielectric  | 2D       | 500  | 0.05        | 8000  | ∼7.6 × 10^2 | −          | [50]
PSO†      | dielectric* | 2D       | 2664 | 7.5 × 10^-3 | −     | 1.75 × Kend | −          | [50]
micro-PSO | dielectric  | 2D       | 125  | 0.04        | 500   | −           | −          | [83]
PSO†      | dielectric  | 3D       | 3250 | ∼0.04       | 5100  | ∼2.1 × 10^4 | −          | [51]

Table II. Computational indexes of EAs applied to the reconstruction of the dielectric distribution. N is the number of unknown parameters, P is the number of trial solutions for each iteration, and Kend is the number of iterations at convergence. The asterisk (*) denotes inversion of experimental data. The symbol † indicates the use of a multi-resolution strategy.