17
Immune secondary response and clonal selection inspired optimizers Maoguo Gong a, * , Licheng Jiao a , Lining Zhang a , Haifeng Du b a Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education of China, Institute of Intelligent Information Processing, Xidian University, Xi’an 710071, China b School of Public Policy and Administration, Xi’an Jiaotong University, Xi’an 710049, China Received 7 March 2008; received in revised form 12 May 2008; accepted 19 May 2008 Abstract The immune system’s ability to adapt its B cells to new types of antigen is powered by processes known as clonal selection and affinity maturation. When the body is exposed to the same antigen, immune system usually calls for a more rapid and larger response to the antigen, where B cells have the function of negative adjustment. Based on the clonal selection theory and the dynamic process of immune response, two novel artificial immune system algorithms, secondary response clonal programming algorithm (SRCPA) and secondary response clonal multi-objective algorithm (SRCMOA), are presented for solving single and multi-objective optimization problems, respectively. Clonal selection operator (CSO) and secondary response operator (SRO) are the main operators of SRCPA and SRCMOA. Inspired by the clonal selection theory, CSO reproduces individuals and selects their improved maturated progenies after the affinity mat- uration process. SRO copies certain antibodies to a secondary pool, whose members do not participate in CSO, but these antibodies could be activated by some external stimulations. The update of the secondary pool pays more attention to maintain the population diversity. On the one hand, decimal-string representation makes SRCPA more suitable for solving high-dimensional function optimiza- tion problems. Special mutation and recombination methods are adopted in SRCPA to simulate the somatic mutation and receptor edit- ing process. Compared with some existing evolutionary algorithms, such as OGA/Q, IEA, IMCPA, BGA and AEA, SRCPA is shown to be able to solve complex optimization problems, such as high-dimensional function optimizations, with better performance. On the other hand, SRCMOA combines the Pareto-strength based fitness assignment strategy, CSO and SRO to solve multi-objective optimization problems. The performance comparison between SRCMOA, NSGA-II, SPEA, and PAES based on eight well-known test problems shows that SRCMOA has better performance in converging to approximate Pareto-optimal fronts with wide distributions. Ó 2008 National Natural Science Foundation of China and Chinese Academy of Sciences. Published by Elsevier Limited and Science in China Press. All rights reserved. Keywords: Clonal selection; Secondary response; Artificial immune systems; Evolutionary algorithms 1. Introduction Artificial immune systems (AIS) make use of the mech- anism of vertebrate immune system in terms of the model of information processing, and they construct new intelli- gent methods for solving problems. These methods provide the evolutionary learning mechanism like robustness, unsu- pervised learning, self-organization, and memory. There- fore, AIS have received a significant amount of interest from researchers and industrial sponsors in the recent years. Some of the first work in applying immune system paradigms was undertaken in the area of fault diagnosis [1]. 
Later work applied immune system paradigms to the field of computer security [2], which seemed to act as a cat- alyst for further investigation of the immune system as a metaphor in many areas, such as anomaly detection [3,4], pattern recognition [5–8], job shop scheduling [9,10], opti- mization [11–13] and engineering design optimization [14,15]. As de Castro and Timmis said, the field of AIS is showing great promise of being powerful computing para- digms [16]. 1002-0071/$ - see front matter Ó 2008 National Natural Science Foundation of China and Chinese Academy of Sciences. Published by Elsevier Limited and Science in China Press. All rights reserved. doi:10.1016/j.pnsc.2008.05.026 * Corresponding author. Tel.: +86 29 88202661; fax: +86 29 88201023. E-mail address: [email protected] (M. Gong). www.elsevier.com/locate/pnsc Available online at www.sciencedirect.com Progress in Natural Science 19 (2009) 237–253

SIA-_2009_Gong

Embed Size (px)

DESCRIPTION

Gong

Citation preview

  • cl

    a,

    ge

    sing

    School of Public Policy and Administration, Xian Jiaotong University, Xian 710049, China

    pervised learning, self-organization, and memory. There-fore, AIS have received a signicant amount of interest

    pattern recognition [58], job shop scheduling [9,10], opti-mization [1113] and engineering design optimization[14,15]. As de Castro and Timmis said, the eld of AIS isshowing great promise of being powerful computing para-digms [16].

    * Corresponding author. Tel.: +86 29 88202661; fax: +86 29 88201023.E-mail address: [email protected] (M. Gong).

    Available online at www.sciencedirect.com

    Progress in Natural Science 19Keywords: Clonal selection; Secondary response; Articial immune systems; Evolutionary algorithms

    1. Introduction

    Articial immune systems (AIS) make use of the mech-anism of vertebrate immune system in terms of the modelof information processing, and they construct new intelli-gent methods for solving problems. These methods providethe evolutionary learning mechanism like robustness, unsu-

    from researchers and industrial sponsors in the recentyears. Some of the rst work in applying immune systemparadigms was undertaken in the area of fault diagnosis[1]. Later work applied immune system paradigms to theeld of computer security [2], which seemed to act as a cat-alyst for further investigation of the immune system as ametaphor in many areas, such as anomaly detection [3,4],Received 7 March 2008; received in revised form 12 May 2008; accepted 19 May 2008

    Abstract

    The immune systems ability to adapt its B cells to new types of antigen is powered by processes known as clonal selection and anitymaturation. When the body is exposed to the same antigen, immune system usually calls for a more rapid and larger response to theantigen, where B cells have the function of negative adjustment. Based on the clonal selection theory and the dynamic process of immuneresponse, two novel articial immune system algorithms, secondary response clonal programming algorithm (SRCPA) and secondaryresponse clonal multi-objective algorithm (SRCMOA), are presented for solving single and multi-objective optimization problems,respectively. Clonal selection operator (CSO) and secondary response operator (SRO) are the main operators of SRCPA and SRCMOA.Inspired by the clonal selection theory, CSO reproduces individuals and selects their improved maturated progenies after the anity mat-uration process. SRO copies certain antibodies to a secondary pool, whose members do not participate in CSO, but these antibodiescould be activated by some external stimulations. The update of the secondary pool pays more attention to maintain the populationdiversity. On the one hand, decimal-string representation makes SRCPA more suitable for solving high-dimensional function optimiza-tion problems. Special mutation and recombination methods are adopted in SRCPA to simulate the somatic mutation and receptor edit-ing process. Compared with some existing evolutionary algorithms, such as OGA/Q, IEA, IMCPA, BGA and AEA, SRCPA is shown tobe able to solve complex optimization problems, such as high-dimensional function optimizations, with better performance. On the otherhand, SRCMOA combines the Pareto-strength based tness assignment strategy, CSO and SRO to solve multi-objective optimizationproblems. The performance comparison between SRCMOA, NSGA-II, SPEA, and PAES based on eight well-known test problemsshows that SRCMOA has better performance in converging to approximate Pareto-optimal fronts with wide distributions. 2008 National Natural Science Foundation of China and Chinese Academy of Sciences. Published by Elsevier Limited and Science inChina Press. All rights reserved.Immune secondary response and

    Maoguo Gong a,*, Licheng JiaoaKey Laboratory of Intelligent Perception and Ima

    Institute of Intelligent Information Procesb1002-0071/$ - see front matter 2008 National Natural Science Foundation oand Science in China Press. All rights reserved.

    doi:10.1016/j.pnsc.2008.05.026onal selection inspired optimizers

    Lining Zhang a, Haifeng Du b

    Understanding of Ministry of Education of China,

    , Xidian University, Xian 710071, China

    www.elsevier.com/locate/pnsc

    (2009) 237253f China and Chinese Academy of Sciences. Published by Elsevier Limited

  • tion problem. In order to compare these algorithmsclearly, we summarize them in Table 1 where SOPs areshort for single-objective optimization problems andMOPs are short for multi-objective optimizationproblems.

    In this paper, based on the clonal selection theory andthe dynamic process of immune response, two novel arti-cial immune system algorithms, called secondaryresponse clonal programming algorithm (SRCPA) andsecondary response clonal multi-objective optimizationalgorithm (SRCMOA), are presented. Clonal selectionoperator (CSO) and secondary response operator (SRO)are the main operators of SRCPA and SRCMOA.Inspired by the clonal selection theory, CSO performs agreedy search which reproduces individuals and selectstheir improved maturated progenies after the anity mat-

    tural Science 19 (2009) 237253The immune systems ability to adapt its B cells to thenew types of antigen is powered by processes known asclonal selection and anity maturation [17]. Mostimmune system-inspired optimization algorithms arebased on the applications of the clonal selection principle[18]. Clonal selection algorithms have been popularizedmainly by de Castro and Von Zubens CLONALG [6].CLONALG selects part ttest antibodies to clone propor-tionally to their antigenic anities. The hypermutationoperator generates the matured clone population by per-forming an anity maturation process inversely propor-tional to the tness values. After computing theantigenic anity of the matured clone population, CLO-NALG creates randomly part new antibodies to replacethe lowest tness antibodies in current population andretain best antibodies to recycle. Subsequently, de Castroand Timmis proposed an articial immune network calledopt-aiNet [19] for multimodal optimization. In opt-aiNet,antibodies are part of an immune network and the deci-sion about the individual which will be cloned, suppressedor maintained depends on the interaction established bythe immune network. Kelsey and Timmis proposed theB cell Algorithm (BCA) in Ref. [20]. Through a processof evaluation, cloning, mutation and selection, BCAevolves a population of individuals (B cells) towards aglobal optimum. Each member of the B cell populationcan be considered as an independent entity. Garrett haspresented an attempt to remove all the parameters fromthe clonal selection algorithm [21]. This method, whichis called ACS for short, attempts to self-evolve variousparameters during a single run. Cutello, Nicosia and Pav-one proposed an immune algorithm for optimizationcalled opt-IA [22,23]. Opt-IA uses three immune opera-tors, i.e. cloning, hypermutation and aging. In hypermuta-tion operator, the number of mutations is determined bymutation potential. The aging operator eliminates oldindividuals to avoid premature convergence. Opt-IA alsouses a standard evolutionary operator, (l+k)-selectionoperator. As far as multi-objective optimization is con-cerned, MISA [24,25] may be the rst attempt to solvegeneral multi-objective optimization problems using arti-cial immune systems. MISA encodes the decision vari-ables of the problem to be solved by binary strings,clones the Pareto-optimal and feasible solutions, andapplies two types of mutation to the clones and otherindividuals, respectively. More recently, Freschi and Rep-etto [26] proposed a vector articial immune system(VAIS) for solving multi-objective optimization problemsbased on the opt-aiNet. VAIS adopted the owchart ofopt-aiNet and the tness assignment method in SPEA2with some simplication that for Pareto-optimal individu-als the tness is the strength dened in SPEA2 and fordominated individuals the tness is the number of individ-uals which dominate them. Cutello, Narzisi and Nicosia[27] modied the (1+1)-PAES using two immune inspired

    238 M. Gong et al. / Progress in Naoperators, cloning and hypermutation, and applied theimproved PAES to solving the protein structure predic-uration process, and so single individuals will be opti-mized locally and the newcomers yield a broaderexploration of the search space. SRO copies certain anti-bodies to a secondary pool, whose members do not par-ticipate in CSO, but they could be activated by someexternal stimulation. The update of the secondary poolpays more attention to maintain the population diversity.SRCPA adopts decimal representation rather than binaryrepresentation and uses special mutation and recombina-tion methods designed for large parameter optimizationproblems. The experimental results indicate that SRCPAconverges faster than some existing evolutionary algo-rithms, such as BGA [28], AEA [29], OGA/Q [30], IEA[31] and IMCPA [32], for solving high-dimensional func-tion optimization problems. SRCMOA tries to get morevarious non-dominated solutions by combining CSO,SRO and previous MOEAs techniques such as Pareto-strength based tness assignment strategy [33]. The per-formance comparison between SRCMOA, NSGA-II[34], SPEA [35], and PAES [36] based on the eight well-known test problems shows that SRCMOA has a goodperformance in converging to approximate Pareto-optimalfronts with wide distributions.

    Table 1Main optimization algorithms inspired by immune system.

    Algorithm Key operators Applications

    CLONALG Elitist selection, cloning, mutation, death SOPsOpt-aiNet Elitist selection, cloning, mutation, network

    suppression, deathSOPs

    BCA Cloning, metadynamics, contiguousmutation, selection

    SOPs

    ACS Elitist selection, cloning, mutation, death SOPsopt-IA l k-selection, cloning, hypermutation,

    hypermacromutation, agingSOPs

    MISA Elitist selection according to tness anddiversity, cloning, uniform mutation andnon-uniform mutation, archive update

    MOPs

    VAIS Cloning, mutation, clonal selection,suppression, death, archive update

    MOPsI-PAES Cloning, mutation, clonal selection, (1+1)-selection, archive update

    MOPs

  • 2. Immunology background and terms

    2.1. Clonal selection and immune response

    The ability of the immune system to respond to an anti-gen exists before it ever encounters that antigen. Theimmune system relies on the prior formation of an incredi-bly diverse population of B cells and T cells. The specicityof both the B-cell receptors (BCRs) and the T-cell receptors(TCRs), i.e. the epitope to which a given receptor can bind,is created by a remarkable genetic mechanism. Each recep-

    tors determining the communication degree include cogniz-ing or mistake, memory or forgetting, and knowledge orignorance. The recent studies primarily clarify that the abil-ity of organism in selecting and recognizing adventitiousantigens comes from the selection and deletion of lympho-cytes during the growth and dierentiation process ofimmune system. The change of antibodies quality andquantity is closely inuenced by antigens, B and T cells,and other accessorial cells. B cells have the function of neg-ative adjustment, maybe clonal deletion or clonal anergy[37]. If the dierence between B cells and antibodies is

    M. Gong et al. / Progress in Natural Science 19 (2009) 237253 239tor is created even though the epitope it recognizes maynever have been present in the body. If an antigen with thatepitope should enter the body, those few lymphocytes ableto bind to it will do so. If they also receive the second co-stimulatory signal, they may begin repeated rounds of mito-sis. In this way, clones of antigen-specic lymphocytes (Band T) provide the basis of the immune response. This phe-nomenon is called clonal selection [18], which is a dynamicprocess of the immune system self-adaptive against antigenstimulation. According to Burnet, biological clonal selec-tion occurs according to the degree that a B-cell matchesan antigen. A strong match causes a B-cell to be clonedmany times, and a weak match results in little cloning.

    As shown in Fig. 1, after recovering from an infection,the concentration of antibodies against the infectious agentgradually declines over the ensuing weeks, months, or evenyears. A time may come till antibodies against that agentcan no longer be detected. Nevertheless, the individualoften is still protected against the second case of the dis-ease, i.e. the person is still immune. In fact, the secondexposure to the agent usually calls for a more rapid and lar-ger response to the antigen. This is called the secondaryresponse. The secondary response reects a larger numberof antigen-specic cells than those existed before the pri-mary response. During the initial expansion of clones,some of the progeny cells neither went on dividing nordeveloped into plasma cells. Instead, they reverted to smalllymphocytes bearing the same BCR on their surface thattheir ancestors had. This lays the foundation for a morerapid and massive response when the next time the antigenenters the body.

    In immunology, the association among immune systemsis usually described as links of communication, and the fac-Fig. 1. Primary and secondignored, the action of an antibody may be proliferation,anergy or deletion during the immune response.

    2.2. Terms in this paper

    In order to describe the algorithms well, we dene thenotations in our algorithms as follows, where boldface ita-lic capital letters indicate matrices and boldface italic smallletters indicate vectors.

    (i) Antigen: the objective function(s) to be optimized(maximized or minimized).

    (ii) Antibody: the representation of solution candidates.The limited-length character string a is the coding of vari-able vector x, denoted by a = encode(x); and x is called thedecoding of antibody a, expressed as x = decode(a).

    Set I is called antibody space, namely a 2 I . The anti-body population space is

    In A : A a1; a2; . . . ; an; ak 2 I ; 1 6 k 6 nf g; 1where the positive integer n is the size of antibody popula-tion, and the antibody population A fa1; a2; . . . ; ang is ann-dimensional group of antibody a, and it is a spot in theantibody space I.

    (iii) Anity: the tness measurement for an antibody.For single-objective optimization problems, the anityF P 0 is a linear or nonlinear transformation of thevalue of the objective function f for a given antibody.For multi-objective optimization problems, the anityF is computed based on the Pareto-strength based tnessassignment strategy in Ref. [33] but with a modication ondiversity preservation, which will be described in Section4.2, and F is to be maximized both in SRCPA and inSRCMOA.ary immune responses.

  • 3. Clonal selection operator and secondary response operator

    The mechanism of immune response and its characteris-tics, such as selfnon-self recognition, clonal selection,memory learning, antibody diversity, immune toleranceand adaptive regulation, are all important paradigms ofthe research on AIS. Clonal selection operator (CSO) andsecondary response operator (SRO) are two operators sim-ulating the dynamic process of immune response. CSO,including clonal proliferation operation, anity maturationoperation, and clonal selection operation, reproduces anti-bodies and selects their improved progenies after the anity

    tion operation of the secondary pool. A diversity mainte-

    Ak!TCPYk!T

    AMZk [ Ak!T

    CSAk 1 2

    where Y ik T CP aik ei aik; i 1; 2; . . . ; n, andei is a qi-dimensional identity column vector.

    There are various methods for calculating qi. In thisstudy, we compute it as follows:

    qik N c F aikPnj1F ajk

    & i 1; 2; . . . ; n 4

    where Nc > n is a given integer called clonal size. The valueof qi(k) is proportional to the value of F(ai(k)).

    After clonal proliferation, the population becomes

    Yk Y1k;Y2k; . . . ;Ynkf g 5

    yijk aik; j 1; 2; . . . ; qi i 1; 2; . . . ; n 6

    zijk TAMyijk; j 1; 2; . . . ; qi i 1; 2; . . . ; n 8

    240 M. Gong et al. / Progress in Natural Science 19 (2009) 237253The main operation of CSO is shown in Fig. 2.

    3.1.1. Clonal proliferation operation

    Dene

    Yk T CP Ak T CP a1kT CPa2k; . . . ; T CP ankT3nance strategy in the update of the secondary pool canmaintain the population diversity eciently.

    3.1. Clonal selection operator

    Inspired by the clonal selection theory, the CSO imple-ments the clonal proliferation operation (T CP ), anity mat-uration operation (TAM) and clonal selection operation (T

    CS )

    on the antibody population Ak, where the antibody pop-ulation at time k is represented by the time-dependent var-iable matrix Ak fa1k; a2k; . . . ; ankg. The evolutionprocess can be described asmaturation process, respectively, and so single individualswill be optimized locally and the newcomers yield a broaderexploration of the search space. SRO copies certain antibod-ies to the secondary pool and the antibodies in the secondarypool do not participate in CSO, but they could be activatedby some external stimulation, which is denoted as the activa-Fig. 2. The main operati3.1.3. Clonal selection operation

    Dene 8i 1; 2; . . . ; n; bik 2 Z ik is the best anti-body (the antibody with the highest anity) in Zi(k), then3.1.2. Anity maturation operation

    Inspired by immune response process, the anity matu-ration operation TAM is diversied basically by two mecha-nisms: hypermutation and receptor editing [12,38,39].

    Random changes are introduced into the genes, i.e.mutation. Such changes may lead to an increase in theanity of the clonal antibody occasionally. Antibodieshad deleted their low-anity receptors (genes) and devel-oped entirely new ones through recombination, i.e. recep-tor editing [12]. Receptor editing oers the ability toescape from local optima on an anity landscape.

    After anity maturation operation, the populationbecomes

    Zk fZ1k;Z2k; . . . ;Znkg 7where

    Z ik fzijkg fzi1k; zi2k; . . . ; ziqikg andwhere

    Y ik fyijkg fyi1k; yi2k; . . . ; yiqikg andonal process of CSO.

  • there are two cases to be considered. In the rst case,

    turaik 1 T CS Z ik [ aik

    bik if F aik < F bikaik if F aikP F bik

    9

    The newcome population is

    Ak 1 T CS Zk [ Ak T CS Z1k [ a1k; T CS Z2k [ a2k; . . . ;

    T CS Znk [ ankT fa1k 1; a2k 1; . . . ; ank 1g 10

    where aik 1 T CS Z ik [ aik i 1; 2; . . . ; n.Similar to the l k evolution strategies, i.e.

    l k ES, CSO also selects new populations amongthe ospring and parents together and employsself-adapting genetic variation mechanisms. But here itis necessary to point out that CSO has its owncharacteristics.

    Firstly, the clonal selection operation T CS in CSO isdierent from the truncation selection in l k ES[40]. T CS selects the best single individual ai(k+1) from thesub-population Z ik [ aik, respectively, and all the bestindividuals aik 1; i 1; 2; . . . ; n constitute the newpopulation. Therefore, single individuals will be optimizedlocally (exploitation of the surrounding space) by the an-ity maturation process. However, truncation selection withdeterministic choice of the best l individuals from a set of kospring and l parents emphasizes on the global selectionon the whole population.

    Secondly, l k ES often uses Gaussian or Cauchymutations and deterministic rules to control the step size.For CSO, the mutation is implemented based on somesomatic mutation-inspired methods.

    In fact, the mechanism of CSO is based on the clonalselection principle rather than on the natural evolution.

    3.2. Secondary response operator

    In this study, we construct a secondary pool, which iscalled SP for short. The SP is a special antibody populationwhose members are not involved in CSO. The initialSP M0 fm10;m20; . . . ;ms0g, where mi0; i 1; 2; . . . ; s, is generated randomly. The SP at time k is rep-resented by the time-dependent variable matrix Mk fm1k;m2k; . . . ;mskg.

    SRO includes update operation of SP, and activationoperation of SP. Since the problem features of single-objec-tive and multi-objective optimization are dierent, wedesign dierent update operations of SP for SRCPA andSRCMOA, respectively.

    3.2.1. Update operation of SPFor single-objective optimization problems, dene

    ak 1 2 Ak 1 as the best antibody in antibody pop-

    M. Gong et al. / Progress in Naulation A(k+1), then the update operation of SP can bedescribed as follows.dj0 6 d0, where d0 is the minimum distance between everytwo individuals of Mk, then compare the anities ofak 1 and mj(k), and let>

    mjk 1 ak 1 if F ak 1 > F mjkmjk else

    11

    In the second case, dj0 > d0, then add the antibodyak 1 to SP, and delete the worst individual of SP.

    For multi-objective optimization problems, SP pre-serves the current non-dominated individuals of the anti-body population. So the update operation of SP isperformed by copying all the non-dominated antibodiesfrom the combined population of A(k) and M(k) toM(k+1). The individuals of SP can survive for severalgenerations, unless they are removed when dominatedby other new members or the current number of non-dominated individuals s0 exceeds its upper limited s. If s0

    is larger than s, then excess individuals should be dis-carded using the environmental selection strategy in Ref.[33]. Therefore, SP serves as an elite set which is usefulin maintaining the diversity and in improving the perfor-mance of SRCMOA.

    After the update operation, the SP becomesMk 1 fm1k 1;m2k 1; . . . ;msk 1g.

    3.2.2. Activation operation of SP

    The individuals of SP can retroact at antibodypopulation. The activation operation of Mk fm1k;m2k; . . . ;mskg on the antibody population Ak fa1k; a2k; . . . ; ankg is described as follows: randomlyselect t bT% sc antibodies of M(k+1) to replace theworst t antibodies of A(k), where T% is a constant calledactivation ratio and s is the population size of SP. It canbe implemented as the following Algorithm 1:

    Algorithm 1. Activation operation of SP

    Step 1: Sort the antibodies in population A(k) as theirafnities and set A0k fa01k; a02k; . . . ;a0nkg to the resulting population, whereF a0ikP F a0i1k i 1; 2; . . . ; n 1.

    Step 2: Calculate the number of antibodies that shouldbe activated: t bT% sc.

    Step 3: Antibody population updating: Ak fa01k;a02k; ; a0ntk;m1k;m2k; ;mtkg.

    4. SRCPA and SRCMOA

    Based on CSO and SRO, two algorithms are proposedCalculating the distance between ak 1 and eachmember of M(k). If the distance between mj(k) andak 1 is the minimum one, and its value is dj0, then

    al Science 19 (2009) 237253 241in this section. SRCPA is designed for solving single-objec-tive optimization problems and SRCMOA is designed for

  • tursolving multi-objective function optimization problems.The main power of SRCPA and SRCMOA arises mainlyfrom CSO and SRO. In addition, decimal representationmakes SRCPA more suitable for solving high-dimensionalfunction optimization problems than other clonal selectionalgorithms with binary representation. Special mutationand recombination methods are adopted in SRCPA to sim-ulate the hypermutation and receptor editing process.SRCMOA adopts the Pareto strength based tness assign-ment strategy.

    4.1. Secondary response clonal programming algorithm

    SRCPA is designed for being an ecient general-pur-pose optimization algorithm for solving large-scale optimi-zation problems. We consider the following optimizationproblem

    maximize f x; x x1; x2; . . . ; xD 2 X 12where f(x) is the objective function, x is a variable vector inhD, X# hD denes the feasible solution space which is a D-dimensional space bounded by the parametric constraintsxi 6 xi 6 xi; i 1; 2; . . . ;D. Thus, the feasible solutionspace X x; x, where x x1; x2; . . . ; xD andx x1;x2; . . . ;xD.

    4.1.1. Antibody representation and anity measure

    In this study, an antibody represents a search point in fea-sible solution space. For the single-objective optimizationproblem described in Eq. (12), an antibody a a1; a2;. . . ; al encodex is a normalized real-valued string, i.e.a encodex x x:=x x 13and so l = D, and 0 6 ai 6 1; i 1; 2; . . . ;D.

    We adopt real-valued representation rather than bin-ary representation since the test problems that we aredealing with have continuous spaces, and real encodingshould be preferred to avoid problems related to ham-ming clis.

    For the single-objective optimization problem describedin Eq. (12), the anity measure F(a) > 0 for antibody a is amapping of the value of the objective function f(x) wherea encodex; x 2 X. In this study, we adopt the followingmapping function:

    F a ef decodea 14

    The exponential function is an increasing function and itholds that F(a) > 0 for all antibodies. Furthermore, the rateof grade of the exponential function also increases with theincreasing variant value. It will redound to accelerate theconvergence during nal iterations.

    4.1.2. Main loop of SRCPA

    The simple SRCPA can be described as follows:

    242 M. Gong et al. / Progress in NaAlgorithm 2. Secondary response clonal programmingalgorithm (SRCPA)ness assignment strategy, CSO and SRO, for solving

    SRCMOA incorporates the Pareto strength-based t-

    multi-objective optimization problems. The simple SRC-Step 1: Initialization: Randomly generate the initial anti-body populationA0 fa10; a20; . . . ; an0g 2 InRandomly generate the SPM0 fm10;m20; . . . ;ms0g 2 I sCalculate the anity of all antibodies in A(0)and M(0), k:=0.

    Step 2: CSO:

    Step 2.1: Clonal proliferation operation: Y(k) Get byapplying T CP to A(k).

    Step 2.2: Anity maturation operation: Get Z(k) byapplying TAM to Y(k).

    Step 2.3: Evaluation: Calculate the anity of all anti-bodies of Z(k).

    Step 2.4: Clonal selection operation: Get A(k+1) byapplying T CS to Zk [ Ak.

    Step 3: SRO:

    Step 3.1: Update operation of SP: Get the new SPM(k+1) by update operation of M(k).

    Step 3.2: Activation operation of SP: Update A(k+1)by activation operation of M(k+1).

    Step 4: Termination test: If a stopping condition is satis-ed, stop the algorithm; otherwise, k:=k+1, goto Step 2.

    In SRCPA, we design the hypermutation and receptorediting of CSO as follows.

    For the antibody yijk; i 1; 2; . . . ; n; j 1; 2; . . . qiin the antibody population Y(k), replace its certain num-bers by a random integer between 0 and 9. For example,if a three-dimensional optimization problem is to be solved,the antibody yij(k) = (1.233567,12.334567,0.123356), andthe second variables fth number is selected to mutate.Then we can randomly generate an integer between 0 and9 to take the place of the number 4 in bold.

    The receptor editing is designed similar to uniform recom-bination. Let a a1; a2; . . . ; aD be the antibody to be edi-ted. Then randomly select a member m m1;m2; . . . ;mDfrom the current SP M(k). Randomly generate a Booleanvector r r1; r2; . . . ; rD, where ri 2 f0; 1g; i 1; 2; . . . ;D.Then the new antibody a0 is expressed as a0 fa01; a02; . . . ;a0Dg, where

    a0i ai if ri 0mi if ri 1

    ; i 1; 2; . . . ;D 15

    Furthermore, the hypermutation and receptor editing inCSO is applied to each antibody of the antibody popula-tion Y(k) with probability 1.

    4.2. Secondary response clonal multi-objective optimization

    algorithm

    al Science 19 (2009) 237253MOA can be written as follows:

  • Algorithm 3. Secondary response clonal multi-objectiveoptimization algorithm (SRCMOA).

    Step 1: Initialization: Randomly generate the initialantibody population

    A(0) = {a1(0), a2(0), ... , an(0)} 2 InRandomly generate the SPM(0) = {m1(0), m1(0), ... , ms0)} 2 IsCalculate the anity of all antibodies in A0and M0k : 0.

    Step 2: CSO:

    Rbi 0 indicates that the individual bi is not domi-nated by any other antibodies, i.e. a non-dominated indi-vidual, while a high Rbi value means that bi isdominated by many individuals.

    For each individual bi the distances (in objective space) to allindividuals bj e B are calculated and stored in a list. After sortingthe list in increasing order, the sum of two smallest elements givesthe distance sought, denoted by dpi. The density Dbi isdened asDbi 1dpi1. Afterwards, the anity of bi is dened asF bi N Rbi Dbi: 18

    Here, N is added in the anity to make F bi positiveand is maximized in the evaluation. Those individuals with

    used by human breeders, and is a recombination of evo-

    M. Gong et al. / Progress in Natural Science 19 (2009) 237253 243i1 i1f3x

    PDi1Pi

    j1xj2f4x

    PDi1ix

    4i random0; 1

    f5x 1DPD

    i1x4i 16x2i 5xif6x 10D

    PDi1x2i 10 cos2pxi

    f7x PD

    i1xi sinjxijp

    f8x PD

    i1x2i

    4000QD

    i1 cos xip 1Step 2.1: Clonal proliferation operation: Get Yk byapplying T CP to Ak.

    Step 2.2: Anity maturation operation: Get Zk byapplying TAM to Yk.

    Step 2.3: Evaluation: Calculate the anity of all anti-bodies of Zk, Ak and Mk.

    Step 2.4: Clonal selection operation: Get A(k+1) byapplying T CS to Zk [ Ak.

    Step 3: SRO:Step 3.1: Update operation of SP: Get the new SP

    M(k+1) by update operation of Mk.Step 3.2: Activation operation of SP: Update A(k+1)

    by activation operation of M(k+1).Step 4: Termination test: If a stopping condition is satis-

    ed, stop the algorithm; otherwise, k:=k+1, goto Step 2.

    The main dierence between the ows of SRCMOA andSRCPA is the evaluation in Step 1 and Step 2.3.

    In SRCMOA, each antibody bieB (in Step 1, B = A(0);in Step 2.3, B Zk [ Ak [Mk is assigned an anityF bi as follows.

    Each individual bi e B is assigned a strength value Sbi,representing the number of individuals it dominates

    Sbi jfbjjj 2 B ^ bi bjgj 16where the symbol corresponds to the Pareto-dominancerelation. Then let

    Rbi Xj

    Sbjjbj 2 B ^ bj bi: 17

    Table 2Benchmark functions for SRCPA.

    Test functions

    f1x PD

    i1x2i

    f2x PD jxij QD jxiji

    f9x 20 exp 0:21D

    q PDi x

    2i

    exp 1D

    PDi1 cos2pxi

    20 elution strategies and genetic algorithms.BGAuses trun-cation selection asperformedbybreeders.This selectionscheme is similar to the (l,k) strategy in evolution strat-egies. The search process of BGA is mainly driven byrecombination, making BGA a genetic algorithm.

    Parameter domain Optimum

    [100,100] 0 (min)[10,10] 0 (min)[100,100] 0 (min)[1.28,1.28] 0 (min)[5,5] 78.33236 (min)[5.12,5.12] 0 (min)[500,500] 412.9829D (min)[600,600] 0 (min)anity between [N 1,N] are considered to be non-domi-nated individuals. We adopt binary representation and aconstant clonal size in SRCMOA. The hypermutationand receptor editing in CSO are designed as the conven-tional mutation operation, e.g. bit-inverse mutation.

    5. Evaluation of SRCPAs eectiveness

    Experiments were carried out to evaluate the perfor-mance of SRCPA by comparing with some existing EAsusing nine benchmark functions gleaned from the litera-tures. The test function, parameter domain and global opti-mum are listed in Table 2. In this study, when the variabledimension is between 10 and 1000, the functions are calledhigh-dimensional functions. When the variable dimensionis larger than 1000, the functions are called superhigh-dimensional functions.

    Before comparing SRCPA with BGA [28], AEA [29],OGA/Q [30], IEA [31], and IMCPA [32] in the followingexperiments, we rst give a brief description of the vealgorithms.

    (1) BGA: It is based on an articial selection similar to that[30,30] 0 (min)

  • (2) AEA: This is a modied version of BGA. Besides thenew recombination operator and the mutation oper-ator, each individual of AEA is coded as a vectorwith components all in the unit interval, and inver-sion is applied with some probability to the parentsbefore recombination is performed.

    (3) OGA/Q: This is a modied version of the classicalgenetic algorithm (CGA). It is the same as CGA, exceptthat it uses the orthogonal design to generate the initialpopulation and the ospring of the crossover operator.

    (4) IEA: This is also an evolutionary algorithm based on theorthogonal design. IEA uses a novel intelligent gene col-lector (IGC) in recombination. Based on the orthogonalexperimental design, IGC uses a divide-and-conquerapproach, which includes adaptively dividing two indi-viduals of parents into N pairs of gene segments, eco-nomically identifying the potentially better one of twogene segments of each pair, and systematically obtaininga potentially good approximation to the best one of all

    IEA and OGA/Q are obtained from Refs. [332]. The popu-lation size of SRCPA is 10, the clonal size is 15, the size of SPis 5, and the activation ratio is 50%. The termination criterionof SRCPA is that the given optimal value is reached or thebest solution cannot be further improved in successive 30 iter-ations, which is the same as that in Ref. [31]. Each result ofSRCPA is obtained from 50 independent runs. All the testfunctions can be categorized into three classes by carefullyexamining the simulation results in Table 3 as follows:

    (1) Functions f1, f2, f3, f6, and f8: The globally optimal solu-tion exists in the orthogonal array-based initial popula-tions generated using the methods of OGA/Q and IEA,respectively. Therefore, OGA/Q and IEA can obtain theoptimal solution with a zero standard deviation. Owingto the termination criterion, the solution quality ofSRCPA is worse than that of OGA/Q and IEA, butSRCPA can reduce the number of function evaluationssuciently. SRCPA performs better than IMCPA in

    eantand

    CP

    592.674224.472206.862576.74078.3

    .339

    763.678125

    .034

    656.743

    244 M. Gong et al. / Progress in Natural Science 19 (2009) 237253but the practical handles have some dierences, andthey have dierent functions: the antibody popula-tion is the basic population acting on antigens andemphasizes the global search, while SP emphasizesthe self-adaptive local search in the dening space.

    5.1. Performance comparisons on high-dimensional functions

    Table 3 shows the statistical results of SRCPA, IMCPA,IEA and OGA/Q, where the reported results of IMCPA,

    Table 3Performance comparison of SRCPA, IMCPA, IEA and OGA/Q.

    f D Mean number of function evaluations M(s

    SRCPA IMCPA IEA OGA/Q SR

    f1 30 1,238 2,173 16,840 112,559 1.(2

    f2 30 1,304 2,729 8,420 112,612 2.(3

    f3 30 2,636 4,196 16,840 112,576 2.(2

    f4 30 8,244 22,402 8,420 112,652 2.(1

    f5 100 2,752 5,806 184,711 245,930 (2

    f6 30 1,295 2,087 8,420 224,710 7.(1

    f7 30 2,083 4,973 54,706 302,166 (1

    f8 30 1,624 3,516 16,840 134,000 1.(2combinations using at most 2N tness evaluations.(5) IMCPA: IMCPA is a modied clonal selection algo-

    rithm for solving the global single-objective optimiza-tion problem. IMCPA runs at two populations,namely antibody population and SP, side-by-side.The main operations on two populations are similar,f9 30 1,993 4,295 8,420 112,421 1.882(2.490terms of both solution quality and computational cost.(2) Functions f4 and f9: The globally optimal solution

    exists in the orthogonal array-based initial populationsgenerated using the method of IEA, but not in that ofOGA/Q. Therefore, IEA can obtain the optimal solu-tion with a zero standard deviation. The solution qual-ity of SRCPA is worse than that of IEA, but SRCPAperforms better than IMCPA and OGA/Q in termsof both solution quality and computational cost.

    (3) Functions f5 and f7: The globally optimal solutiondoes not exist in either the orthogonal array-basedinitial populations generated using the method ofIEA or that of OGA/Q. SRCPA performs much bet-ter than OGA/Q, IEA, and IMCPA, not only interms of the solution quality but also in terms of com-putational cost.

    function valueard deviation)

    A IMCPA IEA OGA/Q

    031 107927 107)

    2.038235 107(3.170165 107)

    0

    (0)

    0

    (0)

    406 109134 109)

    2.443007 109(3.669766 109)

    0

    (0)

    0

    (0)

    835 1010637 1010)

    2.575336 1010(3.175653 1010)

    0

    (0)

    0

    (0)

    724 104283 104)

    4.381959 104(2.897946 104)

    0

    (0)

    6.301 103(4.069 104)

    3234

    041 108)78.33119(2.902509 104)

    78.33232(6.353 106)

    78.3000296(6.288 103)

    717 1012557 1011)

    1.158469 1011(2.334151 1011)

    0

    (0)

    0

    (0)

    69.49

    761 106)12569.49(5.946202 106)

    12569.49(6.079 103)

    12569.4537(6.447 104)

    5 1015451 1015)

    1.931788 1015(2.773929 1015)

    0

    (0)

    0

    (0)16 15 169 10

    513 1017)3.019807 10(3.432247 1015)

    0

    (0)

    4.440 10(3.989 1017)

  • OGA/Q and IEA apply the orthogonal design to gener-ate an initial population of points that are scattered uni-formly over the feasible solution space, so that thealgorithm can evenly scan the search space once to locategood points for further exploration in subsequent itera-tions. They also use the recombination methods based onthe orthogonal design. Therefore, OGA/Q and IEA canobtain an accurate solution, but a large number of functionevaluations are needed.

    SRCPA reproduces individuals and selects theirimproved maturated progenies after the anity maturationprocess, thus single individuals will be optimized locallyand the newcomers yield a broader exploration of thesearch space. The update operation of SP pays more atten-tion to maintain the population diversity, and the activa-tion operation of SP accelerates the convergence speed.Therefore, SRCPA can obtain a satisfying solution, eventhough it is not the optimal solution with a much smallernumber of function evaluations.

    Because the size of the search space and the number oflocal minima increase with the problem dimension, thehigher the dimension is, the more dicult the problem is.Therefore, the following experiment studies the perfor-mance of SRCPA on functions with 201000 dimensions.

    M. Gong et al. / Progress in NaturThe experiment results of SRCPA, IMCPA, AEA andBGA optimizing the functions f6, f7, f8, and f9 with variousdimensions are shown in Table 4. The termination criterionof SRCPA is that one of the objectives, j fbest fmin j< ej fmin j or j fbest j< e if fmin 0, is reached. We usee 101 for f6, e 104 for f7 and e 103 for f8 and f9,

    Table 4Performance comparisons of SRCPA, IMCPA, AEA and BGA.a

    Functions D Mean number of function evaluations

    SRCPA IMCPA AEA BGA

    f6 20 693 1469 1247 3608100 2453 4988 4798 25040200 3877 5747 10370 52948400 6455 12563 23588 1126341000 11592 24408 46024 337570

    f7 20 1197 3939 1603 16100100 3789 11896 5106 92000200 6990 16085 8158 248000400 11722 26072 13822 6998031000 22375 60720 23687 /

    f8 20 1035 2421 3581 40023100 3181 6713 17228 307625200 5037 8460 36760 707855400 8564 15365 61975 16009201000 15138 30906 97600 /

    f9 20 803 1776 7040 197420100 2943 5784 22710 53860200 4859 9728 43527 107800400 6092 13915 78216 220820

    1000 13362 26787 160940 548306

    a) / denotes that relevant data were not found.which is the same as that in BGA, AEA and IMCPA. Eachresult of SRCPA is obtained from 30 independent runs.The reported results of IMCPA, AEA, and BGA areobtained from their references.

    As can be seen from Table 4, for all the four functions,the number of function evaluations of SRCPA is muchsmaller than those of IMCPA, AEA and BGA at all dimen-sions. Therefore, SRCPA obtains satisfying solutions at alower computational cost than BGA, AEA and IMCPA,and it displays a good performance in solving large param-eter optimization problems.

    5.2. Performance comparisons on superhigh-dimensional

    functions

    In order to further test the scalability of SRCPA alongthe problem dimension further, we use SRCPA and otheralgorithms to optimize f6, f7, f8 and f9 with higher dimen-sions, respectively.

    With the same parameters as used in Section 5.1, theproblem dimension is increased from 2000 to 30,000. Thetermination criterion of SRCPA is that one of the objec-tives, j fbest fmin j< e j fmin j or j fbest j< e if fmin 0, isreached. We use e 101 for f6, e 104 for f7 ande 103 for f8 and f9. Table 5 shows the mean number offunction evaluations of SRCPA and IMCPA obtainedfrom 30 independent runs. AEA and BGA cannot obtainsatisfying solution in 12 h, so we do not list them in thetable. When N > 10,000, IMCPA also cannot attain satisfy-ing solution in 12 h which denoted by / in the table. Thesimulation results show that SRCPA also has a fast conver-gence speed for superhigh-dimensional functions, which isimpossible for some other algorithms.

    Further analysis indicates that, under the same condi-tion, the numbers of function evaluations are approxi-mately linear with the problem dimension. The numbersof function evaluations for f6, f7, f8, f9 versus the problemdimension are shown in Fig. 3. It can be seen that whenthe variable dimension added one, the mean number offunction evaluations increases no more than 13.

    5.3. Sensitivity in relation to parameters

    In order to study the eects of the main parameters onalgorithm performance when running SRCPA, the follow-ing experiments consider the problems of f6, f7, f8, f9 to besolved by SRCPA with various antibody population size,clonal size, secondary pool size, and activation ratio.

    5.3.1. Sensitivity in relation to antibody population size and

    clonal size

    The experimental results of SRCPA on optimizing thefunctions f6, f7, f8, f9 with the antibody population size nincreasing from 5 to 70 and the ratio between clonal sizeand population size, i.e. N c=n, called clonal scale, increas-

    al Science 19 (2009) 237253 245ing from 1 to 20 are shown in Fig. 4. The values of theother parameters are as follows: the SP size is 5, the activa-

  • The results in Fig. 4 show that, antibody population sizeand clonal size have large eects on the performance ofSRCPA. The number of function evaluations increasesalmost linearly with the increasing antibody populationsize or clonal scale. After approximating the number offunction evaluations by Ol n N c, we nd that whenthe antibody population size increases by one the averagenumbers of function evaluations on optimizing functionsf6, f7, f8, f9 increase by about lf6 43:7067, lf7 73:4533, lf8 57:96673, and lf9 57:96673, respectively.When the clonal size increases by one, the average numbersof function evaluations on optimizing functions f6, f7, f8, f9increase about lf6 16:3847, lf7 21:5700, lf8 20:6946,and lf9 18:9660, respectively. Thus, increasing the clonal

    Table 5Performance comparison of SRCPA and IMCPA on superhigh-dimensional functions.

    D f6 f7 f8 f9

    SRCPA IMCPA SRCPA IMCPA SRCPA IMCPA SRCPA IMCPA

    2000 21346 37879 42794 90001 23153 43003 19480 418805000 32590 87245 64352 136869 36088 125847 28858 8312510000 55730 143700 123600 235840 58698 147037 80716 13848720000 103546 / 196578 / 143654 / 118078 /30000 151990 / 269402 / 213096 / 145719 /

    Fig. 3. The mean number of function evaluations versus problemdimensions.

    246 M. Gong et al. / Progress in Natural Science 19 (2009) 237253tion ratio is 50%, and the problem dimension is 20. The ter-mination criterion is the same as that in Section 5.2. Thedata are the statistical results obtained from 50 indepen-

    dent runs.

    Fig. 4. SRCPA sensitivity in relation to antibody populsize will result in a less increment in the number of functionevaluations than increasing the antibody population size.In addition, the bigger clonal size also helps to extend thesearching scope. However, we also found in experimentthat the larger antibody size improves the diversity ofpopulation.ation size and clonal size. (a) f6; (b) f7; (c) f8; (d) f9.

  • 5.3.2. Sensitivity in relation to SP size and activation ratio

    The experimental results of SRCPA on optimizing func-tions f6, f7, f8, f9 with the secondary pool size s increasingfrom 1 to 10 and activation ratio T% increasing from 0to 1 are shown in Fig. 5. The values of the other parametersare as follows: the antibody population size is 10, the clonalsize is 15, and the problem dimension is 20. The termina-tion criterion is the same as that in Section 5.2. The dataare the statistical results obtained from 50 independentruns.

    From the above results, we conclude that the eect of SPsize on the number of function evaluations is weaker thanthose of antibody population size and clonal size. The lar-ger the SP size is, the fewer the number of function evalu-ations is. The activation ratio has a rather weak inuenceon the performance, especially when the variable dimen-sion is low. Generally, when T% is set around 50%, thenumber of function evaluations is rather few.

    6. Evaluation of SRCMOAs eectiveness

    In this section, we compare the performance of SRC-MOA with NSGA-II, SPEA and PAES through eightwell-known multi-objective test functions. In order to dem-onstrate the workings of these methods, we give both the

    6.1. Test functions and performance metrics

    At rst, we describe the eight test functions in this study.Veldhuizen have cited most of these test functions in Ref.[41], and here we choose three of them, i.e. Fonsecas sec-ond problem, Polonis problem and Schaers rst prob-lem, and call them FON, POL and SCH, respectively.We also choose ve test problems which followed theguidelines suggested by Deb [42] and then designed by Zil-ter, Deb and Thiele in 2000 [43]. We call them ZDT1,ZDT2, ZDT3, ZDT4, ZDT6 here. Table 6 summarizesthe numbers of variables, the variable domain, the descrip-tion of objective functions of these test problems and thecomments about their Pareto-optimal fronts.

    The goal of the multi-objective optimization algorithmsis to nd a Pareto-optimal set or approximate it. On onehand, the smaller the distance to the Pareto-optimal set,the better it would be; on the other hand, the more diversethe achieved non-dominated solution, the better it wouldbe. Zitzler et al. [44] suggested that for a k-objective optimi-zation problem, at least k performances are needed to com-pare two or more solutions and an innite number ofmetrics to compare two or more sets of solutions. Severalwidely used metrics have been proposed. Since there hasnot been such a performance metric that can deal with both

    M. Gong et al. / Progress in Natural Science 19 (2009) 237253 247statistical results of two performance metrics, the conver-gence metric and the diversity metric. In the last part of thissection, we also describe the additional experiments toshow the behavior of SRCMOA with dierent parametersettings.Fig. 5. SRCPA sensitivity in relation to SP size atasks adequately, we adopt two performance metrics sug-gested by Deb [34], the convergence metric c and the diver-sity metric D, which are more direct in evaluating the abovetwo goals in a solution set and make a complement in thewhole performance comparison.nd activation ratio. (a) f6; (b) f7; (c) f8; (d) f9.

  • Table 6Test problems for SRCMOA.

    Problem n Variable domain Objective function Comments

    FON 3 4; 4f1x 1 exp

    Xni1

    xi 1np 2 !

    f2x 1 exp Xni1

    xi 1np 2 !

    Non-convex

    POL 2 p;p f1x 1 A1 B12 A2 B22f2x x 32 y 12A1 0:5 sin 1 2 cos 1 sin 2 1:5 cos 2;A2 1:5 sin 1 cos 1 2 sin 2 0:5 cos 2;B1 0:5 sin x 2 cos x sin y 1:5 cos y;B2 1:5 sin x cos x 2 sin y 0:5 cos y

    Non-convex disconnected

    SCH 1 103; 103 f1x x2f2x x 22

    Convex

    ZDT1 30 0; 1 f1x x1f2x gx 1

    x1=gx

    ph i

    gx 1 9Xni2

    xi

    !,n 1

    Convex

    ZDT2 30 0; 1 f1x x1f2x gx 1 x1=gx2

    h i

    gx 1 9Xni2

    xi

    !,n 1

    Non-convex

    ZDT3 30 0; 1 f1x x1f2x gx 1

    x1=gx

    p x1gx sin10pxi

    gx 1 9Xni2

    xi

    !,n 1

    Convex disconnected

    ZDT4 10 0; 1 f1x x1f2x gx1

    x1=gx

    p

    gx 1 10n 1 Xni2

    x2i 10 cos4pxi

    Non-convex

    ZDT6 10 0; 1 f1x 1 exp4x1 sin64px1f2x gx1 f1x=gx2

    gx 1 9Xni2

    xi=n 10:25

    Non-convex non-uniformlyspaced

    248 M. Gong et al. / Progress in Natural Science 19 (2009) 237253

  • 6.2. Performance comparisons with NSGA-II, SPEA and

    PAES

    In the following experiments, we performed 10 indepen-dent runs for each algorithm on each test function. The ter-mination criterion is to run 250 generations. For NSGA-II,SPEA and PAES, we directly use the results in Ref. [34].Both binary coded and real-coded NSGA-II are able toconverge to the Pareto-optimal front on these eight testproblems, but real-coded NSGA-II is able to nd a betterspread of solutions than binary-coded NGSA-II on mostproblems. Therefore, we use real-coded NSGA-II as thecompared algorithm here. For SRCMOA, we have somecommon parameter settings the same as the other com-pared algorithms, including the antibody population sizeof 100. Each decision variable is also binary-coded with30 bits. The special parameters concerned with the CSOand SRO in IFCOMA are summarized as follows: the sizeof SP size is 100, the clonal scale is 3, and the activationratio is 10%.

    The nal non-dominated solutions obtained by IFCO-MA at the end of 250 generation are reported. Tables 7and 8 show the direct comparison of SRCMOA withNSGA-II, SPEA and PAES based on the two performance

    metrics. The mean and the variance of the results are pre-sented for each test problem.

    As can be seen from Tables 7 and 8 SRCMOA can con-verge to the true Pareto-optimal fronts with a good distri-bution for most test problems except ZDT4, for whichSRCMOA does not nd a well-converged non-dominatedsolution set. The mean values of diversity metric obtainedby SRCMOA are the best for all the test problems,although the mean values of convergence metric obtainedby SRCMOA are not always the best. SRCMOA can nda better convergence in ZDT3, while NSGA-II does betterin FON, POL and ZDT4, SPEA in ZDT1 and ZDT2, andPAES in SCH and ZDT6. However, the dierence in themean values of c between SRCMOA and the algorithmswhich can nd better convergence is small, as well as thevariance of c in the 10 runs. Therefore, Tables 7 and 8 indi-cate the eective convergence ability in the objective spaceand the reasonable distribution obtained by SRCMOA forsolving most test problems.

    6.3. Sensitivity in relation to parameters

    In this section, we have some additional experimentsbased on various secondary pool size, antibody population

    Table 7Statistical values of the convergence metric c.

    Algorithm FON POL SCH ZDT1 ZDT2 ZDT3 ZDT4 ZDT6

    SRCMOA

    Mean 0.021221 0.026850 0.001618 0.022175 0.026488 0.014208 5.739007 0.2632320000

    03340047

    0017

    0000

    08200086

    T1

    1894

    0018

    39030018

    78450044

    M. Gong et al. / Progress in Natural Science 19 (2009) 237253 249Variance 0.000004 0.000002 0 0.

    NSGA-II

    Mean 0.001391 0.015553 0.003391 0.Variance 0 0.000001 0 0.

    SPEA

    Mean 0.125692 0.037812 0.003403 0.Variance 0.000038 0.000088 0 0.

    PAES

    Mean 0.151263 0.030864 0.001313 0.Variance 0.000905 0.000431 0.000003 0.

    Table 8Statistical values of the diversity metric D.

    Algorithm FON POL SCH ZD

    SRCMOA

    Mean 0.137296 0.012475 0.146967 0.Variance 0.001519 0.000001 0.000715 0.

    NSGA-II

    Mean 0.378065 0.452150 0.477899 0.Variance 0.000639 0.002868 0.003471 0.

    SPEA

    Mean 0.792352 0.972783 1.021110 0.Variance 0.005546 0.008475 0.004372 0.

    PAESMean 1.162528 1.020007 1.063288 1.2297Variance 0.008945 0 0.002868 0.004824 0.000048 0.000047 5.635990 0.001702

    82 0.072931 0.114500 0.513053 0.29656450 0.031689 0.007940 0.118460 0.013135

    99 0.001339 0.047517 7.340299 0.22113810 0 0.000047 6.572516 0.000449

    85 0.126276 0.023872 0.854816 0.08546979 0.036877 0.000010 0.527238 0.006664

    ZDT2 ZDT3 ZDT4 ZDT6

    93 0.252527 0.126205 0.310665 0.311588

    03 0.002000 0.000576 0.018005 0.007565

    07 0.430776 0.738540 0.702612 0.66802576 0.004721 0.019706 0.064648 0.009923

    25 0.755148 0.672938 0.798463 0.84938940 0.004521 0.003587 0.014616 0.00271394 1.165942 0.789920 0.870458 1.15305239 0.007682 0.001653 0.101399 0.003916

  • size, clonal scale, and activation ratio. We made theseeorts to nd the eects of the main parameters on algo-rithm performance when running SRCMOA, in order tochoose a group of more reasonable parameters forSRCMOA.

6.3.1. Sensitivity in relation to SP size and antibody population size

First, we keep the other parameters the same as before, but increase the secondary pool size to 150. Table 9 presents the statistical results of 10 independent runs based on the convergence metric and the diversity metric for the eight problems. We can see that after increasing the secondary pool size, SRCMOA converges to the true Pareto-optimal front better and obtains a better distribution on most test problems.

In Table 9, SRCMOA converges better on the test problems FON, ZDT1, ZDT2 and ZDT3. For POL and SCH, the mean values of γ are similar to the values obtained with the secondary pool size of 100. But for ZDT4 and ZDT6, it seems difficult for SRCMOA to improve the convergence by simply increasing the secondary pool size. Although SRCMOA behaves differently on the above test problems in terms of the convergence metric γ, the distribution of the non-dominated solutions is improved except on ZDT4. So the ability to converge to the true Pareto-optimal front and find a diverse set is not a difficulty for SRCMOA in solving the above test problems except ZDT4 and ZDT6.

Therefore, we do additional experiments on ZDT4 and ZDT6 with a larger antibody population size, from 150 to 300, and a secondary pool size of 200. For them, 10 independent runs are performed with the other parameters the same as those in Section 6.2. Fig. 6 shows the distribution of the convergence metric with the different parameter settings. We can see that when the antibody population size and the secondary pool size are increased, the performance of SRCMOA on ZDT4 and ZDT6 is greatly improved. A larger population size may provide more opportunities for the algorithm to find global optimal solutions, and a larger secondary pool size may maintain a better diversity, which also helps the algorithm to jump out of local optimal solutions.

Table 9
Statistical values of the convergence metric γ and diversity metric Δ.

Metric          FON       POL       SCH       ZDT1      ZDT2      ZDT3      ZDT4      ZDT6
γ   Mean        0.020419  0.027437  0.001657  0.017363  0.018886  0.008582  6.398918  0.318327
    Variance    0.000004  0.000003  0         0.000003  0.000017  0.000002  3.003854  0.002116
Δ   Mean        0.113090  0.012107  0.133890  0.191964  0.215481  0.103501  0.484570  0.287365
    Variance    0.000927  0         0.000777  0.000601  0.001137  0.000104  0.034426  0.009445

Fig. 6. The distributions of the convergence metric for ZDT4 and ZDT6. (a) ZDT4; (b) ZDT6. The values of the horizontal coordinate correspond to four situations, from left to right: (150, 200), (200, 200), (250, 200), and (300, 200), where the first number is the antibody population size and the second one is the secondary pool size.

6.3.2. Sensitivity in relation to activation ratio and clonal scale

Experiments are designed in this section to find a reasonable activation ratio T% with which SRCMOA has a better performance on most test problems. The activation ratio T% is assigned values from 10% to 50% while keeping the other parameters the same as those in Section 6.2. Fig. 7 shows the distribution of the convergence metric on POL and ZDT1.

Fig. 7 shows that we get the better evaluation values when T% is set to 10%. When T% becomes higher, the values of the convergence metric γ rise too, which indicates that the quality of the non-dominated solutions obtained by SRCMOA declines. Since the secondary pool stores the current non-dominated solutions, a too high activation ratio in the activation operation of SP will lead to a low diversity in the antibody population and make SRCMOA prone to fall into local optimal solutions.

Fig. 7. The distributions of the convergence metric for POL and ZDT1. (a) POL; (b) ZDT1. The values of the horizontal coordinate correspond to different values of the activation ratio, from left to right: 10%, 20%, 30%, 40%, and 50%.

Fig. 8 shows the distribution of the convergence metric of SRCMOA with different clonal scales obtained from 10 independent runs on ZDT1, ZDT2, ZDT3 and ZDT6. The clonal selection operation reproduces antibodies and selects their improved progenies after the affinity maturation process. A higher clonal scale allows a more diverse population around each individual, so both the convergence and the diversity would be improved obviously. However, the number of function evaluations increases with the increasing clonal scale. For a roughly equivalent number of function evaluations with the other algorithms, the clonal scale is set to be 3 in the normal tests.

Fig. 8. The distributions of the convergence metric for ZDT1, ZDT2, ZDT3 and ZDT6. (a) ZDT1; (b) ZDT2; (c) ZDT3; (d) ZDT6. The values of the horizontal coordinate correspond to different values of the clonal scale, from left to right: 2, 3, 4, and 5.

7. Conclusions

In this study, we have proposed two novel artificial immune system algorithms, SRCPA and SRCMOA, which use two immune operators, CSO and SRO, to solve single and multi-objective optimization problems. CSO performs a greedy search, which reproduces individuals and selects their improved maturated progenies after the affinity maturation process, so that single individuals are optimized locally and the newcomers yield a broader exploration of the search space. SRO copies certain antibodies to the secondary pool, whose members do not participate in CSO. Furthermore, the update of the secondary pool pays more attention to maintaining the population diversity. SRCPA adopts decimal representation rather than binary representation and uses special mutation and recombination methods designed for large parameter optimization problems. The experimental results indicate that SRCPA converges faster than some existing evolutionary algorithms such as BGA, AEA, OGA/Q, IEA and IMCPA, in terms of solving complex problems such as high-dimensional function optimizations. SRCMOA combines the CSO, SRO and previous MOEA techniques to obtain more diverse non-dominated solutions. The numerical experiments on the well-known test problems show that SRCMOA has good performance in converging to approximate Pareto-optimal fronts with wide distributions on most of the problems.

    Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 60703107 and 60703108), the National High Technology Research and Development Program (863 Program) of China (Grant Nos. 2006AA01Z107 and 2008AA12Z2475853), the National Basic Research Program (973 Program) of China (Grant No. 2006CB705700), and the Program for Cheung Kong Scholars and Innovative Research Team in University (Grant No. IRT0645).

    References

[1] Ishida Y. Fully distributed diagnosis by PDP learning algorithm: towards immune network PDP model. In: Proceedings of the international joint conference on neural networks, San Diego, 1990:777–82.

[2] Forrest S, Perelson AS, Allen L, et al. Self-nonself discrimination in a computer. In: Proceedings of the IEEE symposium on research in security and privacy. Los Alamitos, CA: IEEE Computer Society Press, 1994:202–12.

[3] Dasgupta D. Artificial immune systems and their applications. Berlin, Heidelberg, New York: Springer-Verlag; 1999.

[4] Gonzalez F, Dasgupta D, Kozma R. Combining negative selection and classification techniques for anomaly detection. In: Proceedings of the special sessions on artificial immune systems in congress on evolutionary computation, 2002 IEEE world congress on computational intelligence. Honolulu, Hawaii, May 2002.

[5] Carter JH. The immune system as a model for pattern recognition and classification. J Am Med Inform Assoc 2000;7(3):28–41.

[6] de Castro LN, Von Zuben FJ. The clonal selection algorithm with engineering applications. In: Proceedings of GECCO'00, workshop on artificial immune systems and their applications, 2000:36–7.

[7] Tarakanov A, Skormin V. Pattern recognition by immunocomputing. In: Proceedings of the special sessions on artificial immune systems, congress on evolutionary computation, 2002 IEEE world congress on computational intelligence. Honolulu, Hawaii, May 2002.

[8] García-Pedrajas N, Fyfe C. Immune network based ensembles. Neurocomputing 2007;70(7–9):1155–66.

[9] Hart E, Ross P. The evolution and analysis of a potential antibody library for use in job-shop scheduling. In: New ideas in optimization. Columbus: McGraw-Hill; 1999, p. 185–202.

[10] Coello Coello CA, Rivera DC, Cortes NC. Use of an artificial immune system for job shop scheduling. In: Proceedings of the second international conference on artificial immune systems, September 1–3, 2003. Napier University, Edinburgh, UK, 2003.

[11] Jiao LC, Wang L. A novel genetic algorithm based on immunity. IEEE Trans Syst Man Cybernet Part A 2000;30(5):552–61.

[12] de Castro LN, Von Zuben FJ. Learning and optimization using the clonal selection principle. IEEE Trans Evol Comput 2002;6(3):239–51.

[13] Gong MG, Jiao LC, Du HF, et al. Multi-objective immune algorithm with Pareto-optimal neighbor-based selection. Evol Comput 2008;16(2):225–55.

[14] Hajela P, Yoo J, Lee J. GA based simulation of immune networks: applications in structural optimization. Eng Optimization 1997;29(1–4):131–49.

[15] Gong MG, Du HF, Jiao LC. Optimal approximation of linear systems by artificial immune response. Sci Chin: Ser F Inform Sci 2006;49(1):63–79.

[16] de Castro LN, Timmis JI. Artificial immune systems: a new computational intelligence approach. London: Springer-Verlag; 2002.

[17] Garrett SM. How do we evaluate artificial immune systems? Evol Comput 2005;13(2):145–78.

[18] Burnet FM. The clonal selection theory of acquired immunity. Cambridge: Cambridge University Press; 1959.

[19] de Castro LN, Timmis J. An artificial immune network for multimodal function optimization. In: Proceedings of the 2002 IEEE congress on evolutionary computation, 2002:699–704.

[20] Kelsey J, Timmis J. Immune inspired somatic contiguous hypermutation for function optimisation. In: Proceedings of the genetic and evolutionary computation conference, 2003:207–18.

[21] Garrett SM. Parameter-free, adaptive clonal selection. In: Proceedings of the 2004 IEEE congress on evolutionary computing. Portland, OR, 2004:1052–8.

[22] Cutello V, Nicosia G, Pavone M. Exploring the capability of immune algorithms: a characterization of hypermutation operators. In: Proceedings of the third international conference on artificial immune systems. Catania, Italy, September 13–16, 2004, Lecture Notes in Computer Science 2004;3239:263–76.

[23] Cutello V, Narzisi G, Nicosia G, et al. Clonal selection algorithms: a comparative case study using effective mutation potentials. In: Proceedings of the fourth international conference on artificial immune systems, Banff, Canada, August 14–17, 2005, Lecture Notes in Computer Science 2005;3627:13–28.

[24] Coello Coello CA, Cruz Cortés N. An approach to solve multi-objective optimization problems based on an artificial immune system. In: Proceedings of the first international conference on artificial immune systems, University of Kent at Canterbury, UK, September 9–11, 2002, 212–21.

[25] Coello Coello CA, Cruz Cortés N. Solving multi-objective optimization problems using an artificial immune system. Genet Prog Evolvable Mach 2005;6:163–90.

[26] Freschi F, Repetto M. Multi-objective optimization by a modified artificial immune system algorithm. In: Proceedings of the fourth international conference on artificial immune systems, Banff, Canada, August 14–17, 2005, Lecture Notes in Computer Science 2005;3627:248–61.

[27] Cutello V, Narzisi G, Nicosia G. A class of Pareto archived evolution strategy algorithms using immune inspired operators for ab-initio protein structure prediction. In: Proceedings of the third European workshop on evolutionary computation and bioinformatics, EvoWorkshops 2005, Lausanne, Switzerland, 2005. Lecture Notes in Computer Science 2005;3449:54–63.

[28] Mühlenbein H, Schlierkamp-Voosen D. Predictive models for the breeder genetic algorithm. Evol Comput 1993;1(1):25–49.

[29] Pan ZJ, Kang LS. An adaptive evolutionary algorithm for numerical optimization. In: Simulated evolution and learning. Berlin: Springer-Verlag, 1997:27–34.

[30] Leung YW, Wang YP. An orthogonal genetic algorithm with quantization for global numerical optimization. IEEE Trans Evol Comput 2001;5(1):41–53.

[31] Ho SY, Shu LS, Chen JH. Intelligent evolutionary algorithms for large parameter optimization problems. IEEE Trans Evol Comput 2004;8(6):522–40.

[32] Du HF, Gong MG, Jiao LC, et al. A novel artificial immune system algorithm for high-dimensional function numerical optimization. Prog Nat Sci 2005;15(5):463–71.

[33] Zitzler E, Laumanns M, Thiele L. SPEA2: improving the strength Pareto evolutionary algorithm. In: Proceedings of the evolutionary methods for design, optimization and control with applications to industrial problems. Athens, Greece, 2002:95–100.

[34] Deb K, Pratap A, Agarwal S, et al. A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 2002;6(2):182–97.

[35] Zitzler E, Thiele L. Multi-objective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans Evol Comput 1999;3(4):257–71.

[36] Knowles J, Corne D. The Pareto archived evolution strategy: a new baseline algorithm for multi-objective optimization. In: Proceedings of the 1999 congress on evolutionary computation. Piscataway, NJ: IEEE Service Center, 1999:397–414.

[37] Akdis CA, Blaser K, Akdis M. Genes of tolerance. Eur J Allergy Clin Immunol 2004;59(9):897–913.

[38] Berek C, Ziegner M. The maturation of the immune response. Immunol Today 1993;14(8):400–2.

[39] George AJT, Gray D. Receptor editing during affinity maturation. Immunol Today 1999;20(4):196.

[40] Beyer HG, Schwefel HP. Evolution strategies: a comprehensive introduction. Nat Comput 2002;1(1):3–52.

[41] Van Veldhuizen DA. Multi-objective evolutionary algorithms: classification, analyses, and new innovations. PhD thesis, Air University, 1999.

[42] Deb K. Multi-objective genetic algorithms: problem difficulties and construction of test problems. Evol Comput 1999;7(3):205–30.

[43] Zitzler E, Deb K, Thiele L. Comparison of multi-objective evolutionary algorithms: empirical results. Evol Comput 2000;8(2):173–95.

[44] Zitzler E, Thiele L, Laumanns M, et al. Performance assessment of multi-objective optimizers: an analysis and review. IEEE Trans Evol Comput 2003;7(2):117–32.

