

IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 12, NO. 2, APRIL 2008 203

Real-Valued Compact Genetic Algorithms for Embedded Microcontroller Optimization

Ernesto Mininno, Francesco Cupertino, and David Naso

Abstract—Recent research on compact genetic algorithms (cGAs) has proposed a number of evolutionary search methods with reduced memory requirements. In cGAs, the evolution of populations is emulated by processing a probability vector with specific update rules. This paper considers the implementation of cGAs in microcontroller-based control platforms. In particular, to overcome some problems related to the binary encoding schemes adopted in most cGAs, this paper also proposes a new variant based on a real-valued solution coding. The presented variant achieves final solutions of the same quality as those found by binary cGAs, with a significantly reduced computational cost. The potential of the proposed approach is assessed by means of an extensive comparative study, which includes numerical results on benchmark functions, simulated and experimental microcontroller design problems.

Index Terms—Compact genetic algorithms (cGAs), electric drives, embedded systems, online optimization.

I. INTRODUCTION

COMPACT genetic algorithms (cGAs) [10] are evolutionary algorithms that mimic the behavior of conventional GAs by evolving a probability vector (PV) that describes the hypothetic distribution of a population of solutions in the search space. A cGA iteratively processes the PV with updating mechanisms that mimic the typical selection and recombination operations performed in a standard GA (sGA) until a stopping criterion is met. Reference [10] showed that the cGA is almost equivalent to a sGA with binary tournament selection and uniform crossover on a number of test problems, and also suggested some mechanisms to alter the selection pressure in the cGA. The main strength of the cGA is the significant reduction of memory requirements, as it needs to store only the PV instead of an entire population of solutions. This feature makes cGAs particularly suitable for memory-constrained applications such as evolvable hardware [26] or complex combinatorial problems [2].

Since the cGA mimics the "order-one" behavior of sGAs, it may be ineffective for optimization problems with higher order building blocks. Recent research has investigated a number of ways to address this fundamental issue. More specifically, [2] applies the cGAs to the TSP problem, and proposes a variant that employs an efficient local search operator to enhance the speed of convergence and the quality of the solutions. Reference [1] discusses the analogies between cGAs and the (1+1)-ES, and extends a mathematical model of ES [14] to cGAs, obtaining useful analytical performance figures. Moreover, [1] introduces the concept of elitism, and proposes two new variants, with strong and weak elitism respectively, that significantly outperform both cGAs and the (1+1)-ES. Reference [26] suggests various cGAs combining elitism, mutation (exploration of the neighborhood of the current individual), and resampling (periodic reevaluation of the fitness of the best known individual in case of noisy or time-varying fitness). Reference [26] shows that some of the proposed variants are extremely effective for both static and dynamic optimization problems, and can be implemented in commonly available field programmable gate array (FPGA) boards.

Manuscript received May 16, 2006; revised October 10, 2006.
The authors are with the Dipartimento di Elettrotecnica ed Elettronica, Politecnico di Bari, 200 70125 Bari, Italy (e-mail: [email protected]; [email protected]; [email protected]).
Digital Object Identifier 10.1109/TEVC.2007.896689

This paper shares the same fundamental motivations as the work in [26]. In particular, extending our previous work on hardware-in-the-loop optimization [4], here we investigate the possibility of using a cGA as the main optimization engine for solving online closed-loop control design problems. The typical task of a GA in a control engineering application is to find the best values for a predefined set of free parameters defining either a process model or a control law. Updated surveys of this research area can be found in [4] and [27]. Both surveys outline that the number of hardware applications is extremely limited for various reasons. In such a context, the cGAs are particularly interesting not only for the mentioned reduction of memory (power and space [26]) requirements, but also because their general structure (the lack of a population, the absence of random selection and recombination) lends itself well to online implementations that overcome the typical problems related to the conventional population-after-population algorithmic scheme of sGA.

In our application, the design problem can be viewed as the search for the optimal setting of a number of real-valued coefficients that specify the input–output difference equation of the controller. While in our previous research [4] the main search engine was a sGA executed on a host computer that continuously downloaded new controllers to the microprocessor running the feedback control law, here we focus on a family of cGAs suitable to be directly executed on the floating-point microcontroller without the need of a host computer. This potential of cGAs appears particularly promising because embedded microcontroller-based platforms are increasingly widespread in various control and automation research areas, such as mechatronics, automotive, and rapid prototyping systems.

Implementing an effective cGA for the considered purpose is not trivial, because most microcontroller platforms are programmable with object-oriented software based on floating-point real-valued variables, whereas to date most literature about cGAs (including all the references mentioned above) concentrates on binary-encoding schemes. In a microcontroller implementation, the need to continuously perform binary-to-float coding and decoding operations may weaken, to a variable extent, the advantages related to reduced memory requirements offered by cGAs. To overcome this problem, this paper proposes a variant of a cGA that works directly with real-valued chromosomes. More specifically, for an optimization problem with n real-valued variables, the real-valued cGA uses as PV a 2 × n matrix describing the mean and the standard deviation of the distribution of each gene in the hypothetical population. New variants of the update rules are then introduced to evolve the PV in a way that mimics the binary-coded cGA, which in turn emulates the behavior of a sGA.

Fig. 1. Normalization of two truncated Gaussian curves (with standard deviations σ = 1 and σ = 10, respectively) so as to obtain a probability density function with a nonzero support only in the normalized interval [-1, 1].

The similarities between cGAs and evolution strategies (ES) have already been pointed out in recent literature [1]. The analogies become even more evident in our case, as the use of a probabilistic description of the population with real-valued mean and standard deviation immediately recalls the well-known scheme of the (1+1)-ES (e.g., [7] and [12]). Furthermore, the real-valued cGA also has some important relationships with other recently proposed evolutionary algorithms that incorporate probabilistic tools, such as continuous population-based incremental learning (PBIL) [21], [22], [29], or univariate marginal distribution algorithms (UMDAs) [23], [25], [28]. The differences and analogies with respect to these algorithms will be discussed after the description of the real-valued cGA in Section II.

The behavior of both binary and real-valued cGAs is assessed in three different ways. Section III considers a set of well-known benchmark functions and compares the two cGAs (with binary and real-valued coding schemes) with their standard population-based versions (i.e., binary and real-valued sGAs). Tests also include noise-corrupted fitness measurements, to better emulate the typical conditions in which microcontroller optimization is performed. Section IV focuses on the feedback control of a simulated second-order model of a DC motor drive. The aim of this case study is to provide an easily repeatable benchmark for comparative purposes. Section V describes the experimental application of the cGAs to design and optimize online the discrete anti-windup PI speed controller for a vector-controlled induction motor (IM) drive subject to a time-varying load, using a torque-controlled brushless generator mounted on the same shaft. Conclusive remarks are finally drawn in Section VI.

II. COMPACT GAs WITH REAL-VALUED CODING

The cGA proposed in this paper combines the nonpersistent elitist compact GA proposed in [1] with a real-valued chromosome representation. The real-valued coding has been widely used in evolutionary algorithms (including GAs [12]) to avoid the additional computational cost related to binary-to-float conversions. To use a real-valued coding in a cGA, a number of modifications are necessary. For brevity, we will hereafter refer to the nonpersistent elitist binary-coded cGA as bcGA, and to the real-valued variant as rcGA. In order to avoid repetitive descriptions of known algorithms, we adopt here the same notation used in [1], and focus only on the rcGA.

Let us consider a minimization problem in an n-dimensional hyper-rectangle (n is the number of parameters). Without loss of generality, let us assume that the parameters are normalized so that each search interval is [-1, 1]. The estimate of the single (binary) gene distribution in a cGA can be specified with a scalar in [0, 1] expressing the probability of finding a "0" or a "1" in the single gene. As the rcGA uses real-valued genes, the distribution of the single gene in the hypothetical population must be described by a probability density function (PDF) defined on the normalized interval [-1, 1]. A variety of different PDFs could be used for this purpose. We assume that the distribution of the i-th gene can be described with a Gaussian PDF with mean μ_i and standard deviation σ_i. More precisely, since the Gaussian PDF is defined on (-∞, +∞), we adopt a "Gaussian-shaped" PDF defined on [-1, 1] whose height is normalized so that its area is equal to one. Fig. 1 illustrates the effects of the normalization procedure on a Gaussian PDF with zero mean and standard deviations equal to 1 and 10, respectively. The height normalization has a generally negligible computational cost, as it is obtained by means of a simple numerical procedure that is widely adopted in pseudorandom number generation and, more generally, in practical implementations of stochastic algorithms (see, e.g., [3] and [15, Appendix D2]).

Fig. 2. Real-valued variant of the nonpersistent elitist bcGA.
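To make the height-normalization step concrete, the following Python sketch (an illustration, not the authors' implementation) computes the area of a Gaussian restricted to [-1, 1] via the error function, and draws a gene value from the resulting truncated PDF by simple rejection; the interval bounds follow the normalization described above, while the rejection-based sampler is an assumption made for brevity.

```python
import math
import random

def truncated_gaussian_area(mu, sigma, lo=-1.0, hi=1.0):
    """Area of the Gaussian N(mu, sigma^2) over [lo, hi];
    dividing the PDF by this value normalizes it to unit area."""
    z = lambda x: (x - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * (math.erf(z(hi)) - math.erf(z(lo)))

def sample_truncated_gaussian(mu, sigma, lo=-1.0, hi=1.0):
    """Draw one gene value from the height-normalized ("Gaussian-shaped")
    PDF with support [lo, hi] by rejection sampling."""
    while True:
        x = random.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

if __name__ == "__main__":
    # With a large standard deviation the truncated PDF is nearly uniform,
    # which is how the rcGA initializes its probability vector (cf. Fig. 1).
    print(truncated_gaussian_area(0.0, 10.0))    # small area -> strong renormalization
    print(sample_truncated_gaussian(0.0, 10.0))  # quasi-uniform sample in [-1, 1]
```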

Therefore, the PV becomes in our case a 2 × n matrix specifying the two parameters of the PDF of each single gene. Thus, we define

\[ PV^t = \begin{bmatrix} \mu_1^t & \mu_2^t & \cdots & \mu_n^t \\ \sigma_1^t & \sigma_2^t & \cdots & \sigma_n^t \end{bmatrix} \tag{1} \]

where [μ_1^t, ..., μ_n^t] is the vector of the mean values, [σ_1^t, ..., σ_n^t] is the vector of the standard deviations, and t is the iteration index. Fig. 2 describes the rcGA with the typical pseudocode used in related literature. The main differences with the bcGA regard the generation of the initial population and the update rule. The initial population is generated by setting μ_i^0 = 0 and σ_i^0 = λ, where λ is a large positive constant (e.g., λ = 10, see also Fig. 1). In this way, after height normalization we obtain a PDF that approximates sufficiently well the uniform distribution on [-1, 1] that is almost always used to generate initial individuals in real-valued GAs.

The update rule is designed so as to replicate the typical operations of a bcGA. As in [10], the idea is to mimic a steady-state binary tournament selection. This operator simply replaces the loser with a copy of the winner, and consequently also alters the compact, probabilistic description of the population. In particular, while the mean is simply moved in the direction of the winner with a step whose size depends on the virtual population size N, the standard deviation is subject to a less transparent effect, due to its quadratic structure. Fig. 2 reports the update rule for the standard deviation in a compact form, particularly convenient for algorithm implementation, which can be directly obtained with simple manipulations, as described in Appendix A.
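A compact way to see how the PV evolves is the following Python sketch of one rcGA iteration, based on the mean and variance updates derived in Appendix A ((A1) and (A4)). It is a simplified illustration: the sampling routine, the elitism bookkeeping, and the parameter names (e.g., the virtual population size N) are assumptions made for readability rather than the exact pseudocode of Fig. 2.

```python
import random

def sample_gene(mu, sigma, lo=-1.0, hi=1.0):
    """Sample one gene from the truncated Gaussian PDF on [lo, hi]."""
    while True:
        x = random.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

def rcga_iteration(mu, sigma, elite, elite_cost, cost, N=100):
    """One steady-state iteration: generate a challenger, run a binary
    tournament against the current elite, and update the PV (mu, sigma)."""
    trial = [sample_gene(m, s) for m, s in zip(mu, sigma)]
    trial_cost = cost(trial)
    if trial_cost < elite_cost:                   # minimization
        winner, loser = trial, elite
        elite, elite_cost = trial, trial_cost     # nonpersistent elitism would also
                                                  # age/reset the elite periodically
    else:
        winner, loser = elite, trial
    for i in range(len(mu)):
        new_mu = mu[i] + (winner[i] - loser[i]) / N               # mean update (A1)
        var = (sigma[i] ** 2 + mu[i] ** 2 - new_mu ** 2
               + (winner[i] ** 2 - loser[i] ** 2) / N)            # variance update (A4)
        mu[i] = new_mu
        sigma[i] = max(var, 1e-12) ** 0.5
    return elite, elite_cost

if __name__ == "__main__":
    n = 2
    mu, sigma = [0.0] * n, [10.0] * n             # quasi-uniform initial PV
    elite = [sample_gene(m, s) for m, s in zip(mu, sigma)]
    sphere = lambda x: sum(v * v for v in x)      # toy cost function
    elite_cost = sphere(elite)
    for _ in range(2000):
        elite, elite_cost = rcga_iteration(mu, sigma, elite, elite_cost, sphere)
    print(elite, elite_cost)
```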

As mentioned, this rcGA shares a number of similarities with other types of evolutionary algorithms. All cGAs belong to a larger class of methods known as estimation of distribution algorithms (EDAs), which have been extensively analyzed in recent literature [18]. In particular, among the EDAs that model the hypothetical population with PDFs defined on continuous intervals, the algorithms known as continuous PBIL (PBILc) and continuous UMDA (UMDAc) share some common aspects with our rcGA. The PBILc algorithms (see [20] and [21] and the included references) are extensions of a population-based algorithm originally devised for binary search spaces. One specific version of PBILc uses a set of Gaussian distributions to model the vector of the problem's variables [19]. During the iterations, this PBIL algorithm only changes the mean values, while the standard deviations are constant parameters that must be fixed a priori. At each iteration, the Gaussian PBILc: 1) randomly extracts a population of solutions using the current distribution for each variable; 2) sorts the solutions according to their fitness; and then 3) updates the means of the distributions with a simple linear law (a combination of the best, the second best, and the worst individual). The rcGA in Fig. 2 could be viewed as a particular form of Gaussian PBILc improved by nonpersistent elitism and an adaptation law for the standard deviations. It is also interesting to note that other recent research about extensions of PBILc confirms the advantages of standard deviation adaptation ([22] obtains promising results with a mechanism similar to the well-known "1/5 success rule"). Moreover, the literature also suggests some other interesting possibilities deserving further investigation (e.g., replacing the Gaussian PDF with histograms, or adapting the coefficient of the update rule for the mean vector [21]), although the extension or adaptation of these ideas to our online microcontroller optimization problem is not immediate, especially considering the significant limitations on the available computational time. Among the many EDAs available in the literature, the UMDAc [23], [25] also uses Gaussian PDFs as probabilistic models of the population distribution. There are many variants of this algorithm. The UMDAc with tournament selection proposed in [23] is among the most widely cited ones: it extracts couples of individuals using the current PDFs, performs tournaments, and uses the winners to estimate the parameters of the new PDFs. When a single couple is extracted per iteration, the general structure of this UMDAc may look quite similar to the rcGA in Fig. 2, but it should be noted that: 1) the PDF update laws of the UMDAc are quite different from those in Fig. 2, and 2) the UMDAc update laws explicitly need a population size greater than one to work properly. The main strengths of UMDAs lie in their amenability to theoretical analyses, and in their capability to learn the interdependencies among the problem variables (by adopting more complex multivariate PDFs). On the other hand, it should be evident that implementing such a population-based UMDA in our real-time environment could be even more prohibitive than in the case of a sGA, due to the additional costs related to the more complex update laws for multivariate PDFs (see remarks in [24]).

Finally, as already pointed out in [1], cGAs can be viewed as a particular type of (1+1)-ES. Since ESs use a Gaussian PDF to characterize the additive mutation, the similarities between our rcGA and the (1+1)-ES are even more apparent. The main difference between the two evolutionary algorithms can be seen in the meaning and actual use of the parameters μ_i and σ_i. In the (1+1)-ES, the generic gene μ_i represents the value of the i-th element of the best solution known at iteration t, rather than the center of the probabilistic function modeling the distribution of the fictitious population. Since ESs do not consider μ_i as affected by probabilistic uncertainty, using the same notation introduced in Fig. 2, the update law for μ_i in a (1+1)-ES simply copies the corresponding gene of the winner (i.e., the algorithm replaces the gene of the parent with the one of the best-known solution). On the contrary, it is evident in Fig. 2 that the rcGA makes a much smoother move towards the winner, which also depends on the virtual population size N. Similar analogies can be observed in the updates of the standard deviation σ_i. In ESs, the parameter σ_i indicates the standard deviation of the mutation, i.e., of the random additive perturbations applied to μ_i to generate a new solution. In ESs, σ_i is usually updated either with heuristic rules that take into account the amount of successful perturbations obtained over a predefined number of mutations (e.g., the "1/5 success rule" [12]) or with self-adaptation (i.e., it becomes a further gene of the chromosome evolved by the ES). The novel update rule proposed in Fig. 2 may be properly placed in the first class of methods, although it still appears significantly different from the typical ES parameter update rules (to the best of our knowledge, there has been no previous effort to develop a mutation update rule in ESs inspired by binary cGAs).

TABLE I
SUMMARY OF CONFIGURATION PARAMETERS OF THE FOUR GAs

III. NUMERICAL BENCHMARKS

Tuning and validation of the proposed rcGA are based on a comparison with three other standard algorithms selected from the literature. The set of reference standards includes the elitist bcGA proposed in [1] and two population-based GAs. The following subsections illustrate the configurations of the reference algorithms, the suite of benchmark functions, and the summary of numerical results, respectively.

A. Reference Algorithms for Comparison

Obtaining a significant comparison between GAs is always difficult, as the relative results are strongly influenced by the nontrivial configuration of the single algorithms. For this reason, the set of reference algorithms includes two GA codes available in the Genetic Algorithm and Direct Search Toolbox (Version 1.0.3) [17]. Essentially, the two standard algorithms differ in the encoding scheme and in the crossover operators. The configurations are summarized in Table I.

One fundamental difference between the population-based and the compact GAs lies in the sizes of the actual or fictitious populations used by the four algorithms. For all the GAs, this parameter is chosen so as to make all the GAs approximately converge using the same number of fitness calls (which is the stopping criterion for all our numerical and experimental investigations). We have performed a number of preliminary tests, obtaining results that agree with indications available in recent literature [1], [10]. In particular, the standard GAs require smaller population sizes than the cGAs (about 2 to 4 times smaller in the case of the nonpersistent elitist cGA, as also reported in [1]). Finally, the number of bits used for the parameter encoding in binary GAs was chosen so as to have a sufficient resolution to converge reasonably close to the known optima. The parameters defining the elitist mechanism in the two cGAs were chosen empirically.

B. Benchmark Functions

The comparison is based on five well-known benchmark functions selected from the GA literature for their different characteristics. For each function, we consider two cases: in the first one, fitness measurements are noise-free, while in the other one they are subject to additive Gaussian noise with zero mean and a fixed standard deviation (such a noise amplitude is chosen to reproduce the typical uncertainty affecting control performance measures). The number of free parameters is the same for each function and is kept small. The use of relatively small size problems is motivated by the interest in evaluating the performances of the algorithms in noisy search spaces having the same size as the control design problems tackled on the microcontroller board. The five benchmark functions are defined as follows.

1) Rastrigin’s Function:

\[ f(\mathbf{x}) = 10\,n + \sum_{i=1}^{n}\left[x_i^2 - 10\cos(2\pi x_i)\right] \tag{2} \]
This benchmark is highly multimodal due to the cosine modulation that produces regularly distributed local minima. The global optimum is the origin, with f = 0 [16].

2) Circle Function:

\[ f(\mathbf{x}) = \left(\sum_{i=1}^{n} x_i^2\right)^{1/4}\left[\sin^2\!\left(50\left(\sum_{i=1}^{n} x_i^2\right)^{1/10}\right) + 1\right] \tag{3} \]

This is a multimodal function described in [13]. It has many local optima (minima) that are located on concentric circles near the global optimum (the origin).

3) Schwefel’s Function:

\[ f(\mathbf{x}) = 418.9829\,n - \sum_{i=1}^{n} x_i \sin\!\left(\sqrt{|x_i|}\right) \tag{4} \]

This is a deceptive function in which the global minimum is geometrically distant, over the parameter space, from the next-best local minima. This may cause the search algorithm to move in the wrong direction. The global minimum is located at x_i = 420.9687 for all i, with f = 0.

4) Michalewicz Function:

\[ f(\mathbf{x}) = -\sum_{i=1}^{n} \sin(x_i)\left[\sin\!\left(\frac{i\,x_i^2}{\pi}\right)\right]^{2m} \tag{5} \]

This is a multimodal test function (n! local optima). The parameter m defines the "steepness" of the valleys or edges; a larger m leads to a more difficult search. The function values for points in the space outside the narrow valleys give very little information on the location of the global optimum. In our tests, we use m = 10; the corresponding global minimum is reported in [12].

5) Rosenbrock's Function (Also Known as De Jong's Second Function):

\[ f(\mathbf{x}) = \sum_{i=1}^{n-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + \left(1 - x_i\right)^2\right] \tag{6} \]

This is a standard test function in optimization theory. Due to a long narrow valley present in this function, gradient-based techniques may require a large number of iterations before the global minimum, located at x_i = 1 for all i (with f = 0), is found.
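For reference, the following Python sketch implements two of the benchmarks above (Rastrigin and Rosenbrock, in their standard textbook forms) together with a noisy-fitness wrapper; the noise standard deviation is a placeholder, since the exact amplitude used in the experiments is not reproduced here.

```python
import math
import random

def rastrigin(x):
    """Highly multimodal; global minimum f = 0 at the origin."""
    return 10.0 * len(x) + sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) for v in x)

def rosenbrock(x):
    """Long narrow valley; global minimum f = 0 at (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def noisy(f, noise_std=0.1):
    """Wrap a benchmark with additive zero-mean Gaussian measurement noise,
    emulating the uncertainty of hardware-in-the-loop cost measurements."""
    return lambda x: f(x) + random.gauss(0.0, noise_std)

if __name__ == "__main__":
    x = [0.05, -0.02, 0.01]
    print(rastrigin(x), noisy(rastrigin)(x))
    print(rosenbrock([1.0, 1.0, 1.0]))   # exactly 0 at the optimum
```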

C. Summary of Results: Quality of Final Solutions

Each algorithm is run 1000 times to obtain average performances. The results are summarized in Figs. 3–7. The figures draw the histograms of the distribution of the best solutions found in each of the 1000 replications. In particular, to give an immediately interpretable index of performance, the distribution of final solutions is viewed in terms of the normalized distance (with respect to the actual width of the search space) from the true optimum.

The analysis of the single benchmarks highlights a number of interesting results. For the Rastrigin function, all the algorithms are relatively unable to escape the local minima. Especially for the bGA and rcGA, the distribution of the final solutions tends to become looser as the noise level is increased, but the overall performance of the four algorithms can be considered approximately equivalent. Local minima have a significant influence on the final performance of the GAs also for the Schwefel benchmark. In this case, it is interesting to note that both cGAs outperform the population-based algorithms, being able to find the true optimum in about 40%–50% of the noise-free runs and about 20%–30% of the noisy runs, respectively. The Schwefel benchmark is particularly troublesome for the rGA, which only occasionally (or never, in the case of high noise) converges to the true optimum. In contrast, the results of the rcGA are particularly interesting, as it also slightly outperforms the bcGA in both the noise-free and noisy cases. Similar results are obtained on the Circle function, for which the performance of the rcGA in the noise-free case is particularly remarkable. The cGAs exhibit a relatively better ability to overcome multimodality also in the Michalewicz benchmark, although this time the bcGA is largely the best performing algorithm, being able to hold a remarkable result (over 60% of runs ending in the global optimum) also in the noisy case. Both the bGA and the rcGA are able to get closer to the optimum of Rosenbrock's function within the allowed 1000 fitness calls. The advantage of these algorithms becomes even more evident in the noisy case.


Fig. 3. Distribution of the final best solution found by the four compared GAs in each run for the Rastrigin function. The distance from the optimum is normalized with respect to the width of the search interval. For all the algorithms, only a few runs end sufficiently close to the true optimum, independently of the presence of noise. On the other hand, the bGA and rcGA seem to be slightly more sensitive to noise.

A joint analysis of the results on the five benchmarks suggests that, in low-dimensional optimization benchmarks, the two cGAs exhibit, in general, very satisfactory results, sometimes even more accurate and less sensitive to noise than those of the sGAs. On the other hand, it is also quite foreseeable that the relative performance of the algorithms observed in these low-dimensional cases may dramatically change for larger size problems involving higher order building blocks, as often remarked in the technical literature on cGAs. However, the analysis of the performance of the rcGA on larger size optimization problems is not within the scope of this paper.

Fig. 4. Distribution of the final best solution found by the four compared GAs in each run for the Schwefel function. The distance from the optimum is normalized with respect to the width of the search interval. The cGAs are evidently more effective than their population-based counterparts.

IV. MICROCONTROLLER DESIGN ON A SIMULATED PLANT

As previously mentioned, the cGAs have an algorithmic structure that lends itself readily to online applications in which fitness measures are performed uninterruptedly on a hardware process. Such a case is examined in the experimental investigations described in the remainder of this paper, which are all performed on a dSPACE DS1104 microcontroller board based on a 250 MHz Motorola PowerPC microprocessor. The board, specifically designed for measurement and control applications, is equipped with several A/D and D/A converters, and is directly programmable in the Matlab/Simulink object-oriented language. The following subsections describe two different benchmarks for binary and real-valued cGAs.

Fig. 5. Distribution of the final best solution found by the four compared GAs in each run for the Circle function. The distance from the optimum is normalized with respect to the width of the search interval. The rcGA yields in this case the most remarkable performance, with 400–800 runs ending in the true optimum regardless of the amount of noise.


Fig. 6. Distribution of the final best solution found by the four compared GAs in each run for the Michalewicz function. The distance from the optimum is normalized with respect to the width of the search interval. The bcGA is the best performing algorithm for this benchmark, in which both cGAs outperform their population-based equivalents.

A. Measuring Actual Computational Costs

Fig. 7. Distribution of the final best solution found by the four compared GAs in each run for Rosenbrock's function. The distance from the optimum is normalized with respect to the width of the search interval. The bGA and rcGA are the best performing algorithms for this benchmark.

Implementing a conventional population-based GA directly on a microcontroller is not simple, for various reasons. As mentioned, a sGA requires the memory necessary to store the current population and additional memory to perform selection, crossover, and mutation operations (and binary-to-real coding/decoding operations if present). Moreover, since the selection, crossover, and mutation operations are concentrated at the end of an iteration, the computational cost is not evenly distributed over time. Such an algorithmic scheme causes peaks of computational requirements that may overload the microcontroller at the end of every GA iteration. This, in turn, may generate control delays that are particularly undesirable in closed-loop feedback schemes. All the cGAs can be effectively used to overcome these limitations, and in particular, the rcGA proposed in this paper is even "lighter" in terms of actual computational requirements on a typical microcontroller. To assess the amount of benefits offered by the rcGA in this respect, we implemented the bcGA and the rcGA on the microcontroller, and measured the average time needed by each algorithm to perform all the numerical operations in one single iteration (from step 2 to step 4 of the pseudocode in Fig. 2), except for the computation of the fitness. For completeness, we mention that the fitness function for this benchmark is the Michalewicz function described above (with a progressively increased number of parameters), but it should be clear that the considered hardware configuration makes the result independent of the particular objective function. Unless explicitly specified, all the configuration parameters of the cGAs are the same as those used in the previous tests and summarized in Table I. Moreover, in the bcGA, each real-valued variable is converted into a 16-bit binary-coded string. Table II summarizes the results of these tests, showing the ratio of the iteration execution times of the rcGA and bcGA. As expected, the rcGA is about 3 to 5 times faster, depending on the number of parameters. The difference is particularly significant in our control applications: the rcGA permits the use of shorter sampling times for the same parametrized control law (or the implementation of more complex control laws at the same sampling time), or the use of a less powerful microcontroller for the same embedded control application.

Fig. 8. Block diagram of the controlled plant with the reference PI controller.

TABLE II
EXECUTION TIME OF ONE ITERATION OF cGAs ON THE DS1104 BOARD
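As a rough, host-side analogue of the timing measurement summarized in Table II (the dSPACE board measurements are obviously hardware specific), one can time the PV update alone, excluding any fitness call, as in the following Python sketch; the iteration structure mirrors the update rules of Appendix A and is an illustrative assumption.

```python
import time
import random

def time_pv_update(n_params, n_iters=10000, N=100):
    """Average time per rcGA probability-vector update (sampling plus
    mean/variance update), excluding any fitness evaluation."""
    mu = [0.0] * n_params
    sigma = [1.0] * n_params
    start = time.perf_counter()
    for _ in range(n_iters):
        winner = [random.gauss(m, s) for m, s in zip(mu, sigma)]
        loser = [random.gauss(m, s) for m, s in zip(mu, sigma)]
        for i in range(n_params):
            new_mu = mu[i] + (winner[i] - loser[i]) / N
            var = (sigma[i] ** 2 + mu[i] ** 2 - new_mu ** 2
                   + (winner[i] ** 2 - loser[i] ** 2) / N)
            mu[i] = new_mu
            sigma[i] = max(var, 1e-12) ** 0.5
    return (time.perf_counter() - start) / n_iters

if __name__ == "__main__":
    for n in (2, 4, 8, 16):
        print(n, time_pv_update(n))
```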

B. Online Design of a PI Controller for a Simulated Plant

To provide repeatable results on a case study commonly available in the technical literature [6], [11], this section focuses on the online optimization of the proportional and integral gains of a discrete-time anti-windup PI controller applied to a nonlinear plant. As described in Fig. 8, the controlled plant is a linear second-order transfer function with an input saturation when the control signal exceeds a prescribed range. Fig. 8 also shows the reference PI controller, which can be obtained using the classical, model-based "symmetrical optimum" design approach [11].

Fig. 9. Schema of the cGA-based optimization.

The overall scheme of the cGA-based design is illustrated in Fig. 9. The differential equations describing the controlled nonlinear plant are discretized with the Euler method and a sampling step equal to 3 ms, and the resulting difference equations, as well as the cGA and the PI controller, are run simultaneously by the microcontroller board, while an external PC is used for monitoring purposes. The speed reference is a square wave having amplitude equal to 0.5 p.u. and period equal to 0.6 s. The objective function (hereafter cost function) to be minimized by the cGAs is the integral of the absolute error (IAE) calculated by comparing the simulated DC motor speed response with the output of the reference model (in Fig. 8) to the same reference input. The parameters optimized by the cGAs are the two proportional and integral controller gains. The search is bounded in [0, 3.5] and [0, 10] for the proportional and integral gain, respectively. At each speed step, the current IAE value (that is, the cost function to be minimized) is passed to the cGA and the integrator is reset to zero. The cGA depicted in Fig. 9 is then executed every 0.3 s. The execution time of one full iteration (from step 2 to step 4 in Fig. 2, except for the cost function measurements) is 65 μs for the rcGA, while this time rises to 180 μs for the bcGA.
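The per-sample computation repeated while a candidate controller is being evaluated can be sketched in Python as follows; the clamping-based anti-windup scheme, the toy first-order plant and reference model, and the gains used in the demo are illustrative assumptions consistent with the description above, not the authors' exact code.

```python
class AntiWindupPI:
    """Discrete-time PI controller with output saturation and
    conditional integration (a simple anti-windup scheme)."""
    def __init__(self, kp, ki, ts, u_min, u_max):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def step(self, error):
        u = self.kp * error + self.integral
        u_sat = min(max(u, self.u_min), self.u_max)
        if u == u_sat:                        # integrate only when not saturated
            self.integral += self.ki * self.ts * error
        return u_sat

def evaluate_iae(pi, plant_step, model_step, reference, ts=0.003):
    """Accumulate the IAE between the plant speed and the reference-model
    speed over one evaluation window; this value is the cost passed to the cGA."""
    iae, y_plant = 0.0, 0.0
    for r in reference:                       # one speed-reference sample per step
        u = pi.step(r - y_plant)
        y_plant = plant_step(u)               # plant difference equation
        y_model = model_step(r)               # reference model fed by the same input
        iae += abs(y_model - y_plant) * ts
    return iae

def first_order(tau, ts):
    """Euler-discretized first-order lag: y[k+1] = y[k] + ts/tau * (u - y[k])."""
    state = {"y": 0.0}
    def step(u):
        state["y"] += ts / tau * (u - state["y"])
        return state["y"]
    return step

if __name__ == "__main__":
    ts = 0.003
    pi = AntiWindupPI(kp=1.0, ki=5.0, ts=ts, u_min=-1.0, u_max=1.0)
    plant = first_order(tau=0.05, ts=ts)      # toy plant, not the DC motor model
    model = first_order(tau=0.02, ts=ts)      # toy reference model
    ref = [0.5] * 100                         # 0.3 s constant speed step
    print(evaluate_iae(pi, plant, model, ref, ts))
```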

Fig. 10. Comparison of the simulated speed responses of the reference controller (model-based symmetrical optimum, continuous line) and the speed response of a PI controller obtained with the rcGA (dotted line) and with the bcGA (dashed line). The perfect overlap of the three responses is also emphasized on the right-hand side in an expanded view of the overshoot.

Fig. 11. Comparison of the average bcGA and rcGA evolution in tests a (linear, noise-free), b (linear, noisy), and c (nonlinear, noisy), respectively. To emphasize the differences in terms of computational costs, the horizontal axis reports the actual computation time used by each cGA.

The main configuration parameters of the cGAs are selected according to preliminary tuning experiments. The population size is set equal to 200 for both algorithms. The stopping criterion is 5000 cost function evaluations for all the tests. Moreover, the maximum length of inheritance is set equal to 100. Under these general settings, we execute three tests with slightly differing configurations. The first and simplest test (hereafter test a) considers a linear reference model (no actuator saturation) and a noise-free simulation. The second test (test b) is still performed with a linear reference model, but the output of the DC motor is artificially corrupted at each sample time with a zero-mean uniform random number whose amplitude is a fixed percentage of the rated motor speed. Moreover, at the end of the evaluation of each individual (i.e., every 0.3 s), a second zero-mean uniform random number, whose amplitude is a fixed percentage of the measured IAE, is added to the cost function, to further increase the amount of uncertainty on the simulated cost measurement. In particular, the two sources of noise simulate: 1) the inherently noisy measurement of the rotor speed, due to transducer inaccuracy and interferences with other devices, and 2) the imperfect repeatability of each experiment, which is a typical challenge of experimental hardware-in-the-loop optimization processes. Finally, in addition to the noise sources of test b, the third test (test c) considers an actuator with saturation outside a prescribed range, and a PI regulator with an anti-windup algorithm. This last test better approximates the real-time experiment on an actual electric drive. Each test is repeated 100 times and the average results are summarized in Table III.
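The two artificial noise sources used in tests b and c can be emulated as in the short Python sketch below; the ±5% amplitudes are placeholders, since the exact percentages are not reproduced here.

```python
import random

def noisy_speed(speed, rated_speed, pct=0.05):
    """Per-sample speed measurement corrupted by a zero-mean uniform
    disturbance proportional to the rated motor speed (noise source 1)."""
    return speed + random.uniform(-pct, pct) * rated_speed

def noisy_iae(iae, pct=0.05):
    """End-of-evaluation cost corrupted by a zero-mean uniform term
    proportional to the measured IAE (noise source 2)."""
    return iae + random.uniform(-pct, pct) * iae

if __name__ == "__main__":
    print(noisy_speed(0.5, rated_speed=1.0))
    print(noisy_iae(0.12))
```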

Table III highlights two interesting results. It can be immediately noted that the rcGA outperforms the bcGA in terms of both average cost function values and standard deviations. Moreover, considering the final values of the controller parameters, it can also be noted that the rcGA is able to converge to the gains of the reference controller also in the noise-corrupted and nonlinear cases. Fig. 10 compares the speed response of the reference control loop with the response of one PI obtained by the rcGA. The differences between the two controllers are indistinguishable, even in the expanded view of the transient responses [Fig. 10(b)].

TABLE III
AVERAGE RESULTS FOR SIMULATED PI DESIGN (NOTATION STD INDICATES STANDARD DEVIATION)

The differences between the two cGAs in terms of speed of convergence and computational requirements are illustrated in Fig. 11. It can be noted that the rcGA obtains a significantly lower average cost value in test a, while the average cost values for tests b and c are almost equivalent, although the rcGA is much faster in converging towards the reference values. Another important advantage of the rcGA over the bcGA is the execution speed. As evidenced by Fig. 11, both algorithms converge after about 2000 fitness calls, but the rcGA needs about 36% of the computational requirement of the bcGA. Fig. 11 clearly highlights that the rcGA has a more "turbulent" explorative search due to the combination of nonpersistent elitism with a real-valued approximation of the probabilistic gene distribution.

Fig. 12. Induction motor drive block diagram.

V. EMBEDDED MICROCONTROLLER OPTIMIZATION

This section extends the ideas and investigations illustrated in the previous sections to an experimental bench for the feedback control of a vector-controlled IM drive [4]. Fig. 12 illustrates the overall schema of the system [11]. The current controllers are not subject to optimization, and are the same for all the experiments. The plant is an IM loaded using a torque-controlled brushless generator mounted on the same shaft. The IM nameplate parameters include the rated voltage, current, power, speed, torque, rotor inertia, number of pole pairs, and torque constant.

In this case study, the microcontroller runs the cGAs, the control laws, and a supervision algorithm that interrupts the experimentation of badly performing solutions. The various modules have different sampling times. In particular, the controller and the stability supervisor run with a 200 μs sampling time, while the cGAs have a 0.75 s sampling time. Each run is stopped after about 6 min of search, but both algorithms converge in nearly 3 min. According to preliminary investigations for algorithm configuration, in this case study the population size and the maximum length of inheritance are set to the same values for both cGAs. We also remark that the execution time of a complete iteration of the vector control scheme (the four controllers in Fig. 12 and the necessary coordinate transformations) is about 100 μs, and one iteration of the cGA requires 65 μs for the rcGA and about 180 μs for the bcGA. Since both cGAs are executed only once every 3750 iterations of the controller difference equation, the increase of computational cost due to the introduction of the optimization algorithms is negligible, as it amounts to about 0.017% of the cycle time for the rcGA and 0.048% for the bcGA.

Performing experiments directly on hardware raises the fundamental issue of the risks related to unstable or badly performing individuals, generated due to the inherently random exploration of the GAs, especially during the early iterations. Testing such controllers is not necessary and often dangerous, since it may cause hardware damage. The use of linear controllers, combined with the availability of an accurate linear model of the plant, makes possible, at least in principle, the analysis of stability prior to the actual implementation of each solution. Obviously, due to the various unknown parameters and nonlinearities neglected in this process, the information on closed-loop poles does not yield a fully reliable prediction of each controller's stability. However, as foreseeable, controllers with unstable model-based closed-loop poles seldom provide acceptable performances on the real motor. This information can be profitably used to define appropriate bounds of the search space for the two gains, but the theoretical guarantees of closed-loop stability on the actual hardware remain a matter of accuracy of the available model. In order to overcome the need for an accurate model, and simultaneously guarantee a sufficient level of "safety" during the genetic search, in this paper we have adopted a heuristic strategy that effectively overcomes the limitations of model-based analysis. Instead of computing the cost function at the end of each evaluation interval, as usually done in online tuning approaches, we compute each index contributing to the total cost function online, i.e., updating its value at each time sample of the experiment. The online value of each cost term is constantly monitored, and whenever it exceeds a predefined threshold the current experiment is immediately stopped. This allows us to detect unstable (or highly unsatisfactory) solutions well before the involved signals reach potentially dangerous values. In case a monitored index exceeds the prescribed threshold, the value of the cost is multiplied by a penalty factor and assigned to the individual, and the algorithm proceeds with another experiment. In this way, the evolutionary search is never interrupted until the terminating condition occurs. On average, only 2%–3% of the experiments are prematurely interrupted due to bad performance, mostly in the first iterations.
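A minimal version of this online-monitoring strategy is sketched below in Python; the threshold, the penalty factor, and the cost-term interface are illustrative assumptions, not the values used on the experimental rig.

```python
def run_supervised_experiment(cost_terms, samples, threshold=10.0, penalty=5.0):
    """Evaluate one candidate controller sample by sample.

    cost_terms: callable mapping one measurement sample to its instantaneous
                cost contribution (e.g., tracking error plus smoothness term).
    samples:    iterable of measurement samples acquired during the experiment.
    The running cost is monitored at every sample; if it exceeds `threshold`,
    the experiment is aborted and the cost is penalized, so that unstable or
    badly performing controllers are discarded before signals become dangerous.
    """
    total = 0.0
    for sample in samples:
        total += cost_terms(sample)
        if total > threshold:                  # abort the experiment early
            return total * penalty, True       # penalized cost, aborted flag
    return total, False

if __name__ == "__main__":
    # Toy example: the "measurement" is just the instantaneous absolute error.
    good = [0.01] * 500
    unstable = [0.01 * 1.02 ** k for k in range(500)]   # diverging error
    print(run_supervised_experiment(abs, good))
    print(run_supervised_experiment(abs, unstable))
```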

Fig. 13. Evolution of the cost function and of the proportional gain over time in a sample run: the currently tested individual is in thin line, the current elitist in thick line. (a) Cost function, bcGA. (b) Cost function, rcGA. (c) Proportional gain, bcGA. (d) Proportional gain, rcGA. (Figures on the evolution of the integral gain are very similar, and are omitted for brevity.)

The design of the experiment for cost measurement also plays a fundamental role in the success of the evolutionary design. In this section, the speed reference signal is a square wave having amplitude equal to 1 p.u. and period equal to 1.5 s. After 0.5 s from every change of the reference signal, a step change of load torque (from 0% to 70% of the motor rated torque) is applied, in order to also evaluate the overall disturbance rejection. The profiles of both the motor speed and the control action (the current reference isqref) are acquired during the experiment, and used to compute the cost function, which essentially takes into account three criteria, i.e., the tracking performance, the disturbance rejection, and the smoothness of the control action. In mathematical terms, the optimization problem is formulated as the minimization of a cost function obtained as the sum of two terms

\[ J = \int_{0}^{T_{exp}} \left[ f_1(t) + f_2(t) \right] dt \tag{7} \]

where T_exp is the duration of the experiment for cost function measurement, and the two functions f_1 and f_2 are defined by the piecewise expressions (8) and (9), respectively.

TABLE IV
AVERAGE RESULTS FOR EXPERIMENTAL PI DESIGN (NOTATION STD INDICATES STANDARD DEVIATION)

Fig. 14. Comparison of controllers obtained with the rcGA (first row of figures) and the linear model-based design (second row). From left to right, the figure shows the overall speed response in one cost function evaluation experiment (a) and (c), and the corresponding control action (b) and (d).

Fig. 15. Comparison of controllers obtained with (a) the rcGA, (b) the bcGA, and (c) the linear model-based design: expanded view of the steady-state time interval of one cost function evaluation experiment when the load torque is applied. The expanded view of the symmetrical-optimum model-based design (c) also reports the closed-loop response obtained on the approximated model used for design purposes. It can be noted that, in spite of the model accuracy (the modeling error seems negligible prior to the application of the load), the final result is clearly less effective than the PI obtained with the cGAs.

The function f_1 takes into account the tracking performance in the first part of the experiment, and the disturbance rejection in the time interval in which the step load torque is applied. More specifically, the tracking error is considered only in the time intervals in which the current is lower than a predefined threshold (the tracking error due to controller saturation cannot be further reduced). The second function f_2 compares the control action filtered by a first-order linear filter with a fixed time constant (indicated in (9)) with the unfiltered action itself. As smoother control actions give lower values of the integral of f_2, this index is intended to penalize controllers with an excessively oscillatory control action, which may cause stresses for the IM, producing vibrations, acoustic noise, and extra losses. At the end of the experiment interval T_exp, the measured cost is passed to the cGA, and a new experiment is started.
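The exact piecewise expressions (8) and (9) are not reproduced above, so the following Python sketch only illustrates the structure described in the text: a tracking/disturbance-rejection term gated by a current threshold, and a smoothness term obtained by comparing the control action with its low-pass-filtered version. All names, thresholds, and the filter time constant are assumptions.

```python
def tracking_term(speed_ref, speed, current, current_limit=0.95):
    """f1-like term: absolute speed error, counted only while the current
    command is below a predefined threshold (saturation intervals ignored)."""
    return abs(speed_ref - speed) if abs(current) < current_limit else 0.0

def make_smoothness_term(ts, tau=0.01):
    """f2-like term: |filtered control action - raw control action|,
    with a first-order low-pass filter of time constant tau (Euler)."""
    state = {"uf": 0.0}
    def term(u):
        state["uf"] += ts / tau * (u - state["uf"])
        return abs(state["uf"] - u)
    return term

if __name__ == "__main__":
    ts = 0.0002                     # 200 microsecond control sampling step
    f2 = make_smoothness_term(ts)
    cost = 0.0
    for k in range(1000):
        u = 0.5 if (k // 50) % 2 == 0 else -0.5   # deliberately chattering action
        cost += (tracking_term(1.0, 0.9, u) + f2(u)) * ts
    print(cost)
```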

The search interval is [0, 1.5] for the proportional gain and [0, 20] for the integral gain. The stopping criterion is the maximum number of cost function calls, chosen equal to 500 (i.e., the whole experiment lasts 375 s, although all the algorithms converge in about half of this interval). The average results are summarized in Table IV. Also in this case, the rcGA is able to obtain better PI controllers, in spite of the higher standard deviation of the controller gains (noise, actuator saturation, and other nonlinearities in this case make the considered cost function less sensitive to small variations of the controller gains).

Fig. 13 shows the evolution of the cost function and of the proportional gain in two typical runs of the bcGA and the rcGA. The effects of nonpersistent elitism are clearly visible in the evolution of both algorithms, which have a remarkably similar behavior. To provide a quantitative comparison with other available design strategies, Fig. 14 compares the result of a PI controller optimized by the rcGA with a PI designed with the IM model-based symmetrical-optimum approach. The design technique can be summarized as follows (further details can be found in [4] and [9]). The gains of the current controllers are selected so as to achieve a first-order closed-loop response with a prescribed time constant. In order to design the speed controller, the plant is approximated with a first-order system having an equivalent time constant equal to the sum of all the delays found in the speed control loop (current control, speed low-pass filter, and delays due to the digital implementation of the control scheme). Once the first-order model is defined, it can be easily shown [9] that the open-loop transfer function reduces to the following:

(10)

where s is the complex Laplace transform variable. Then, according to the design theory of the symmetrical optimum [9], the gains of the PI can be obtained with the following formula:

(11)
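For orientation, the classical textbook form of the symmetrical-optimum rule can be recalled; this is the generic rule from the literature, not necessarily identical to the drive-specific expressions (10) and (11) above. For an open-loop plant approximated as an integrator in series with a first-order lag of equivalent time constant T_eq and gain K,

\[ G(s) \approx \frac{K}{s\,(1 + s\,T_{eq})}, \qquad K_p = \frac{1}{2\,K\,T_{eq}}, \qquad T_i = 4\,T_{eq}, \qquad K_i = \frac{K_p}{T_i}. \]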

The differences between the model-based PI and the cGA-based PI are illustrated in Figs. 14 and 15. In particular, Fig. 15(c) compares the closed-loop responses obtained using the same model-based controller in simulation and during one experiment on the actual hardware. The similarity between the two responses confirms that the mathematical model used for the symmetrical optimum design is fairly accurate. Nevertheless, the model-based procedure obtains a less satisfactory result (larger overshoot, larger effects of the load torque). It has to be mentioned that it is a fairly common practice in all PID model-based design methods to perform a final hand-tuning session directly on the controlled hardware and refine the performance of the closed-loop system. Thus, it is reasonable to believe that the symmetrical-optimum PI could be made more satisfactory with hand-tuning refinements. On the other hand, this possibility does not make the result of the cGA less significant, since the latter is obtained with a fully automated model-free procedure in less than 3 min.

VI. CONCLUSION

This paper proposed a variant of an effective elitist cGA working with a real-valued PV. In a wide set of simulated and experimental investigations, the variant exhibited satisfactory behavior, with reduced computational requirements and repeatable and generally accurate results, comparable to those of other state-of-the-art compact and population-based GAs. As in the case of population-based GAs, a real-coded implementation of a cGA is an interesting tradeoff between simplicity of the code, interpretability of the PVs and chromosomes, and accuracy of the final solutions.

The efficacy of the proposed algorithm is confirmed by a real-world application in the context of embedded microcontroller optimization. On the proposed platform, the effects of the actual and unknown high-order phenomena and nonlinearities are fully accounted for in the cost measurement, and the final controller is ready for use with known performance. The cGAs require a very low computational cost, and therefore can be directly implemented on the same processor running the control algorithm, overcoming the typical limitations related to the conventional population-after-population algorithmic scheme of sGAs.

The presented research leaves many directions open for further investigations. In principle, the cGAs seem suitable to also solve more challenging online tuning problems, e.g., optimizing a nonlinear interpolator such as a neural network. However, further research is necessary to investigate the potential limitations related to the typical "order-one" behavior of cGAs in these larger size optimization problems. Another promising direction regards the simultaneous optimization of the four controllers operating in the IM drive; such a simultaneous design problem is particularly challenging because of the unpredictable interactions between the various controllers.

APPENDIX

Derivation of the Update Rules: Let us consider a population of N individuals and focus on one generic gene i. Let us group the values of the gene in the population in the vector x_i^t = [x_i^t(1), x_i^t(2), ..., x_i^t(N)], where t is a generic iteration index. Let us indicate with μ_i^t and σ_i^t the mean and standard deviation of x_i^t. Let us assume that x_i^t is subject to a binary tournament selection involving the generic individuals w and l, where index w corresponds to the winner and l to the loser. The vector of genes after the tournament becomes x_i^{t+1}, obtained from x_i^t by replacing x_i^t(l) with a copy of x_i^t(w). The mean value of x_i^{t+1} can be immediately computed as

\[ \mu_i^{t+1} = \frac{1}{N}\left(\sum_{k=1}^{N} x_i^t(k) - x_i^t(l) + x_i^t(w)\right) = \mu_i^t + \frac{x_i^t(w) - x_i^t(l)}{N} \tag{A1} \]

obtaining the straightforward update rule for the mean introduced in Fig. 2. Analogously, the standard deviation after the tournament is defined as

\[ \left(\sigma_i^{t+1}\right)^2 = \frac{1}{N}\sum_{k=1}^{N}\left(x_i^{t+1}(k) - \mu_i^{t+1}\right)^2. \tag{A2} \]

Substituting (A1) in (A2), we obtain

\[ \left(\sigma_i^{t+1}\right)^2 = \frac{1}{N}\left\{\sum_{k \neq l}\left[x_i^t(k) - \mu_i^t - \frac{x_i^t(w) - x_i^t(l)}{N}\right]^2 + \left[x_i^t(w) - \mu_i^t - \frac{x_i^t(w) - x_i^t(l)}{N}\right]^2\right\}. \tag{A3} \]

Expanding the squared terms in brackets, and after some straightforward manipulation, the latter equation can be rewritten as

\[ \left(\sigma_i^{t+1}\right)^2 = \left(\sigma_i^{t}\right)^2 + \left(\mu_i^{t}\right)^2 - \left(\mu_i^{t+1}\right)^2 + \frac{\left(x_i^t(w)\right)^2 - \left(x_i^t(l)\right)^2}{N} \tag{A4} \]

which is the update rule for the standard deviation used in the rcGA in Fig. 2.
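As a quick sanity check of (A1)–(A4) as reconstructed above, the following Python sketch compares the compact update with a brute-force recomputation of the mean and standard deviation after the loser is replaced by the winner in an explicit population.

```python
import random
import statistics

def check_update(N=50, trials=1000):
    """Verify that the compact mean/variance update matches recomputing the
    statistics of the explicit population after a binary tournament."""
    for _ in range(trials):
        pop = [random.uniform(-1.0, 1.0) for _ in range(N)]
        mu = statistics.fmean(pop)
        var = statistics.pvariance(pop)                  # population variance
        w, l = random.sample(range(N), 2)                # winner and loser indices
        new_mu = mu + (pop[w] - pop[l]) / N              # (A1)
        new_var = var + mu**2 - new_mu**2 + (pop[w]**2 - pop[l]**2) / N   # (A4)
        pop[l] = pop[w]                                  # replace loser with winner
        assert abs(new_mu - statistics.fmean(pop)) < 1e-12
        assert abs(new_var - statistics.pvariance(pop)) < 1e-12
    return True

if __name__ == "__main__":
    print(check_update())
```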

ACKNOWLEDGMENT

The authors wish to thank the anonymous reviewers, the associate editor, and the editor-in-chief for their valuable suggestions.

REFERENCES

[1] C. W. Ahn and R. S. Ramakrishna, "Elitism-based compact genetic algorithms," IEEE Trans. Evol. Comput., vol. 7, no. 4, pp. 367–385, Aug. 2003.
[2] R. Baraglia, J. I. Hidalgo, and R. Perego, "A hybrid heuristic for the traveling salesman problem," IEEE Trans. Evol. Comput., vol. 5, no. 6, pp. 613–622, Dec. 2001.
[3] W. J. Cody, "Rational Chebyshev approximations for the error function," Math. Comp., vol. 8, pp. 631–638, 1969.
[4] F. Cupertino, E. Mininno, D. Naso, B. Turchiano, and L. Salvatore, "On-line genetic design of anti-windup unstructured controllers for electric drives with variable load," IEEE Trans. Evol. Comput., vol. 8, no. 4, pp. 347–364, Aug. 2004.
[5] F. Cupertino, G. L. Cascella, L. Salvatore, and N. Salvatore, "A simple stator flux oriented induction motor control," EPE Journal, vol. 15, no. 3, 2005.
[6] F. Cupertino, A. Lattanzi, and L. Salvatore, "A new fuzzy logic-based controller design method for DC and AC impressed-voltage drives," IEEE Trans. Power Electron., vol. 15, no. 6, pp. 974–982, Nov. 2000.
[7] D. B. Fogel, Evolutionary Computation. Piscataway, NJ: IEEE Press, 1995.
[8] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley, 1989.
[9] H. Grob, J. Hamann, and G. Wiegartner, Electrical Feed Drives in Automation. Alpharetta, GA: Siemens, 2001.
[10] G. Harik, F. G. Lobo, and D. E. Goldberg, "The compact genetic algorithm," IEEE Trans. Evol. Comput., vol. 3, pp. 287–297, Nov. 1999.
[11] W. Leonhard, Control of Electrical Drives. Berlin, Germany: Springer-Verlag, 1985.
[12] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, 2nd ed. New York: Springer-Verlag, 1994.
[13] S. D. Müller, J. Marchetto, S. Airaghi, and P. Koumoutsakos, "Optimization based on bacterial chemotaxis," IEEE Trans. Evol. Comput., vol. 6, pp. 16–29, Feb. 2002.
[14] G. Rudolph, "Self-adaptive mutations may lead to premature convergence," IEEE Trans. Evol. Comput., vol. 5, pp. 410–414, Aug. 2001.
[15] J. C. Spall, Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control. New York: Wiley, 2003.
[16] A. Törn and A. Zilinskas, Global Optimization. Berlin, Germany: Springer-Verlag, 1989, vol. 350, Lecture Notes in Computer Science.
[17] The Matlab User's Guide, The MathWorks, Inc., Natick, MA, 2006. [Online]. Available: www.mathworks.com
[18] P. Larrañaga and J. A. Lozano, Eds., Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation. Norwell, MA: Kluwer, 2001.
[19] M. Sebag and A. Ducoulombier, "Extending population-based incremental learning to continuous search spaces," in Parallel Problem Solving from Nature—PPSN V, A. E. Eiben et al., Eds. Amsterdam: Springer, 1998, pp. 418–427.
[20] M. Gallagher, "An empirical investigation of the user-parameters and performance of continuous PBIL algorithms [population-based incremental learning]," in Proc. IEEE Signal Process. Soc. Workshop on Neural Netw. Signal Processing X, Dec. 11–13, 2000, vol. 2, pp. 702–710.
[21] B. Yuan and M. Gallagher, "Playing in continuous spaces: Some analysis and extension of population-based incremental learning," in Proc. IEEE Congr. Evol. Comput., Dec. 2003, vol. 1, pp. 443–450.
[22] M. Schmidt, K. Kristensen, and T. Randers Jensen, "Adding genetics to the standard PBIL algorithm," in Proc. IEEE Congr. Evol. Comput., Jul. 6–9, 1999, vol. 2.
[23] C. González, J. A. Lozano, and P. Larrañaga, "Mathematical modeling of the UMDAc algorithm with tournament selection: Behavior on linear and quadratic functions," Int. J. Approximate Reasoning, vol. 31, no. 3, pp. 313–340, 2002.
[24] D. Y. Cho and B. T. Zhang, "Continuous estimation of distribution algorithms with probabilistic principal component analysis," in Proc. IEEE Congr. Evol. Comput., May 27–30, 2001, vol. 1, pp. 521–526.
[25] C. Yunpeng, X. Sun, and P. Jia, "Probabilistic modeling for continuous EDA with Boltzmann selection and Kullback-Leibler divergence," in Proc. GECCO, 8th Annu. Conf. Genetic Evol. Comput., 2006, vol. 1, pp. 389–396.
[26] J. C. Gallagher, S. Vigraham, and G. Kramer, "A family of compact genetic algorithms for intrinsic evolvable hardware," IEEE Trans. Evol. Comput., vol. 8, pp. 111–126, Apr. 2004.
[27] P. J. Fleming and R. C. Purshouse, "Evolutionary algorithms in control system engineering: A survey," Control Eng. Practice, vol. 10, pp. 1223–1241, 2002.
[28] Y. Hong, Q. Ren, and J. Zeng, "Adaptive population size for univariate marginal distribution algorithm," in Proc. IEEE Congr. Evol. Comput., Sep. 2–5, 2005, vol. 2, pp. 1396–1402.
[29] T. Goslin, N. Jin, and E. Tsang, "Population based incremental learning with guided mutation versus genetic algorithms: Iterated prisoners dilemma," in Proc. IEEE Congr. Evol. Comput., Sep. 2–5, 2005, vol. 1, pp. 958–965.

Ernesto Mininno was born in Bari, Italy, in December 1977. He received the M.Sc. and Ph.D. degrees in electrical engineering from the Politecnico di Bari, Bari, in 2002 and 2007, respectively.

Since December 2003, he has been with the Italian National Research Council (CNR) and Sintesi-scpa, working in the field of industrial automation. He is also currently working with the Converters, Electrical Machines, and Drives Research Team at the Polytechnic of Bari. His main research interests focus on intelligent control of electrical machines and real-time control systems for industrial robotics.

Francesco Cupertino was born in Italy in December 1972. He received the Laurea degree and the Ph.D. degree in electrical engineering from the Technical University of Bari, Bari, Italy, in 1997 and 2001, respectively.

From 1999 to 2000, he was with the PEMC Research Group, University of Nottingham, Nottingham, U.K. Since July 2002, he has been an Assistant Professor with the Electrical and Electronic Engineering Department, Technical University of Bari. He teaches several courses in electrical drives at the Technical University of Bari. His main research interests cover intelligent motion control, design, and fault diagnosis of electrical machines. He is the author or coauthor of more than 50 scientific papers on these topics.

David Naso received the Laurea degree (Hons.) in electronic engineering and the Ph.D. degree in electrical engineering from the Polytechnic of Bari, Bari, Italy, in 1994 and 1998, respectively.

He was a Guest Researcher with the Operations Research Institute, Technical University of Aachen, Aachen, Germany, in 1997. Since 1999, he has been an Assistant Professor of Automatic Control and Technical Head of the Robotics Laboratory at the Department of Electric and Electronic Engineering, Polytechnic of Bari. His research interests include computational intelligence and its application to control, industrial automation and robotics, numerical and combinatorial optimization, and distributed control of manufacturing systems. He is the author of 78 journal and conference papers on these topics.