


Computer Physics Communications 185 (2014) 1188–1191


MCMC2 (version 1.1): A Monte Carlo code for multiply-charged clusters

David A. Bonhommeau a,∗, Marius Lewerenz b, Marie-Pierre Gaigeot c,d

a GSMA, CNRS UMR 7331, Université de Reims Champagne-Ardenne, Campus Moulin de la Housse, BP 1039, 51687 Reims Cedex 2, France
b Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 Blvd Descartes, 77454 Marne-la-Vallée, France
c LAMBE, CNRS UMR 8587, Université d’Evry Val d’Essonne, Blvd F. Mitterrand, Bât Maupertuis, 91025 Evry, France
d Institut Universitaire de France, 103 Blvd St Michel, 75005 Paris, France

Article info

Article history:
Received 4 September 2013
Accepted 30 September 2013
Available online 16 October 2013

Keywords:
Monte Carlo simulations
Coarse-grained models
Charged clusters
Charged droplets
Electrospray ionisation
Parallel Tempering
Parallel Charging

Abstract

This new version of the MCMC2 program for modeling the thermodynamic and structural properties of multiply-charged clusters by means of parallel classical Monte Carlo methods provides some enhancements and corrections to the earlier version [1]. In particular, histograms for negatively and positively charged particles are separated, parallel Monte Carlo simulations can be performed by attempting exchanges between all the replica pairs and not only one randomly chosen pair, a new random number generator is supplied, and the contribution of the Coulomb repulsion to the total heat capacity is corrected. The main functionalities of the original MCMC2 code (e.g., potential-energy surfaces and Monte Carlo algorithms) have not been modified.

New version program summary
Program title: MCMC2
Catalogue identifier: AENZ_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENZ_v1_1.html
Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland.
Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 148028
No. of bytes in distributed program, including test data, etc.: 1501936
Distribution format: tar.gz
Programming language: Fortran 90 with MPI extensions for parallelization
Computers: x86 and IBM platforms
Operating system:

1. CentOS 5.6, Intel Xeon X5670 2.93 GHz, gfortran/ifort (version 13.1.0) + MPICH2
2. CentOS 5.3, Intel Xeon E5520 2.27 GHz, gfortran/g95/pgf90 + MPICH2
3. Red Hat Enterprise 5.3, Intel Xeon X5650 2.67 GHz, gfortran + IntelMPI
4. IBM Power 6 4.7 GHz, xlf + PESSL (IBM parallel library)

Has the code been vectorised or parallelized?: Yes, parallelized using MPI extensions. Number of CPUs used: up to 999.
RAM (per CPU core): 10–20 MB. The physical memory needed for the simulation depends on the cluster size; the values indicated are typical for small clusters (N ≤ 300–400).
Classification: 23
Catalogue identifier of previous version: AENZ_v1_0
Journal reference of previous version: Comput. Phys. Comm. 184 (2013) 873
Nature of problem: We provide a general parallel code to investigate structural and thermodynamic properties of multiply-charged clusters.
Solution method: Parallel Monte Carlo methods are implemented for the exploration of the configuration space of multiply-charged clusters. Two parallel Monte Carlo methods were found appropriate to achieve such a goal: the Parallel Tempering method, where replicas of the same cluster at different temperatures

∗ Corresponding author. Tel.: +33 (0)3 26 91 33 33; fax: +33 (0)3 26 91 31 47.
E-mail addresses: [email protected] (D.A. Bonhommeau), [email protected] (M.-P. Gaigeot).

0010-4655/$ – see front matter © 2013 Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/j.cpc.2013.09.026


are distributed among different CPUs, and Parallel Charging, where replicas (at the same temperature) having different particle charges or numbers of charged particles are distributed on different CPUs.
Restrictions: The current version of the code uses Lennard-Jones interactions as the main cohesive interaction between spherical particles, and electrostatic interactions (charge–charge, charge-induced dipole, induced dipole–induced dipole, polarisation). The Monte Carlo simulations can only be performed in the NVT ensemble in the present code.
Unusual features: The Parallel Charging methods, based on the same philosophy as Parallel Tempering but with particle charges and number of charged particles as parameters instead of temperature, are an interesting new approach to explore energy landscapes. Splitting of the simulations is allowed and averages are accordingly updated.
Running time: The running time depends on the number of Monte Carlo steps, the cluster size, and the type of interactions selected (e.g., polarisation turned on or off, and the method used for calculating the induced dipoles). Typically a complete simulation can last from a few tens of minutes or a few hours for small clusters (N ≤ 100, not including polarisation interactions), to one week for large clusters (N ≥ 1000, not including polarisation interactions), and several weeks for large clusters (N ≥ 1000) when including polarisation interactions. A restart procedure has been implemented that enables a splitting of the simulation accumulation phase.
Reasons for new version: The new version corrects some bugs identified in the previous version. It also provides the user with some new functionalities, such as the separation of histograms for positively and negatively charged particles, a new scheme to perform parallel Monte Carlo simulations, and a new random number generator.

Summary of revisions

1. Additional features of MCMC2 version 1.1

(a) Histograms for positively and negatively charged particles. The first version of MCMC2 was able to produce a large variety of histograms to investigate the structure of charged clusters and their propensity to undergo evaporation: angular histograms for charged particles (‘‘angdist-yyy.dat’’ and ‘‘surfang-yyy.dat’’ files), radial histograms (‘‘rhist-x-yyy.dat’’ files), and histograms for tracking the number of charged surface particles (‘‘surfnb-yyy.dat’’ files) or the number of particles that tend to evaporate (‘‘evapnb-yyy.dat’’ files). Although the program could handle clusters composed of both positively and negatively charged particles, these histograms did not separate these two classes of particles. This has been corrected by the addition of two columns in the histogram files. These columns are labelled with the signs ‘‘+’’ and ‘‘−’’, which refer to positively and negatively charged particles, respectively. The study of clusters composed of both positively and negatively charged particles should therefore be made easier [2]. Most of the keywords corresponding to histograms have not been changed, except the keyword for radial histograms, which now includes the possibility not to print any histogram (see keyword ‘‘RADIAL’’ in the ‘‘Modified keywords’’ section).
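As an illustration, here is a minimal Python sketch for reading one of these histogram files. The column layout is an assumption made for illustration (whitespace-separated numeric columns, ‘‘#’’ comment lines, with the ‘‘+’’ and ‘‘−’’ histograms in the last two columns) and should be checked against the actual MCMC2 output files.

    # Minimal sketch: read an MCMC2 histogram file and separate the columns
    # for positively and negatively charged particles.
    # Assumed layout (not specified in this paper): whitespace-separated
    # numeric columns, '#' comment lines, first column = grid point,
    # last two columns = '+' and '-' histograms.
    def read_histogram(filename):
        grid, pos, neg = [], [], []
        with open(filename) as f:
            for line in f:
                fields = line.split()
                if not fields or fields[0].startswith('#'):
                    continue  # skip blank and comment lines
                values = [float(x) for x in fields]
                grid.append(values[0])
                pos.append(values[-2])  # '+' column
                neg.append(values[-1])  # '-' column
        return grid, pos, neg

    # Example use (hypothetical file name following the 'rhist-x-yyy.dat' pattern):
    # r, h_plus, h_minus = read_histogram('rhist-1-001.dat')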

(b) Full replica pair exchange in Monte Carlo simulations. In previous publications [2,3] the exchange between replica configurations was based on two steps: the random selection of one replica pair and the calculation of the Parallel Tempering or Parallel Charging criterion to accept or reject the exchange of configurations between replicas. By default these exchanges were attempted every ns = 10 Monte Carlo sweeps, where a sweep is composed of N Monte Carlo steps for an N-particle cluster. The main goal of parallel Monte Carlo algorithms is to improve the convergence speed, and we might thus be interested in attempting more than one exchange of replica configurations every ns sweeps, provided that ns is large enough for the final configuration (after the ns sweeps) to be decorrelated from the initial configuration (before the ns sweeps). In the present version of the code we have implemented a second way to perform parallel Monte Carlo simulations, where N/2 exchanges of replica configurations are attempted every ns sweeps by alternately selecting odd pairs (i.e., pairs involving replicas 1–2, 3–4, etc.) and even pairs (i.e., pairs involving replicas 2–3, 4–5, etc.). The two methods should be equivalent for large numbers of sweeps, although the second method proposed in the present version is deemed to converge faster than the method based on random choices of replica pairs. Modifications brought to keywords ‘‘PC’’ and ‘‘PT’’ are reported in the ‘‘Modified keywords’’ section.
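To make the two exchange schemes concrete, the following Python sketch (illustrative only, not extracted from the Fortran source) contrasts the random-pair scheme of version 1.0 with the alternating odd/even scheme of version 1.1, using the standard Parallel Tempering acceptance rule min{1, exp[(β_i − β_j)(E_i − E_j)]} for neighbouring replicas:

    import math, random

    def attempt_exchanges(energies, betas, attempt_index, scheme='alternating'):
        # energies[i]: potential energy of replica i; betas[i] = 1/(kB*T_i).
        # scheme = 'random': one randomly chosen neighbouring pair (version 1.0).
        # scheme = 'alternating': odd pairs (0-1, 2-3, ...) on one call and
        # even pairs (1-2, 3-4, ...) on the next (version 1.1).
        n = len(energies)
        if scheme == 'random':
            i = random.randrange(n - 1)
            pairs = [(i, i + 1)]
        else:
            start = attempt_index % 2
            pairs = [(i, i + 1) for i in range(start, n - 1, 2)]
        for i, j in pairs:
            # Standard Parallel Tempering (Metropolis) acceptance criterion
            delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
            if random.random() < math.exp(min(0.0, delta)):
                # In the real code the full replica configurations are swapped;
                # here the energies stand in for them.
                energies[i], energies[j] = energies[j], energies[i]

In MCMC2 itself one replica runs per CPU and exchanges are attempted every ns sweeps; the sketch only shows the pair-selection and acceptance logic.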

(c) Lagged Fibonacci random number generator. Random numbers were generated by means of a LAPACK random number generator in the first version of MCMC2 [1]. Knowing that the main goal of MCMC2 is to handle Monte Carlo simulations on large clusters (from hundreds up to thousands of particles at least), we were particularly concerned with delivering a random number generator already thoroughly tested on neutral or charged clusters of such large sizes. One of us (ML) developed a lagged Fibonacci random number generator based on the works by Kirkpatrick et al. [4] and Bhanot et al. [5]. In particular, this random number generator was successfully used in accurate diffusion quantum Monte Carlo investigations of the structure and energetics of small helium clusters [6], a benchmark convergence study of the dissociation energy of the HF dimer [7], and the fragmentation dynamics of ionized doped helium clusters [8,9]. Modifications brought to keyword ‘‘SEED’’ are reported in the ‘‘Modified keywords’’ section. As an example, heat capacity curves of neutral A100 clusters obtained after performing Parallel Tempering simulations with the two random number generators used in MCMC2 are plotted in Fig. 1 of the Supplementary materials. We can notice that the melting peak is perfectly defined by all four methods; some small deviations may occur for the premelting peak, whose convergence is harder to achieve.
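For readers unfamiliar with this family of generators, the sketch below implements a lagged Fibonacci generator of the type introduced by Kirkpatrick and Stoll [4], x_n = x_{n−103} XOR x_{n−250} (often called R250). The lags, seeding strategy, and word size used in the actual MCMC2 implementation may differ; this is only a structural illustration.

    import random

    class R250:
        # Lagged Fibonacci generator x_n = x_{n-103} XOR x_{n-250} over 32-bit
        # words, kept in a circular buffer of length 250. Illustrative only;
        # not the implementation shipped with MCMC2.
        def __init__(self, seed):
            seeder = random.Random(seed)  # simple seeder to fill the lag table
            self.buf = [seeder.getrandbits(32) for _ in range(250)]
            self.idx = 0

        def next_u32(self):
            # buf[idx] holds x_{n-250}; x_{n-103} sits 147 slots ahead of it.
            new = self.buf[(self.idx + 147) % 250] ^ self.buf[self.idx]
            self.buf[self.idx] = new
            self.idx = (self.idx + 1) % 250
            return new

        def uniform(self):
            return self.next_u32() / 2.0**32  # uniform deviate in [0, 1)

    g = R250(12345)
    samples = [g.uniform() for _ in range(5)]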

2. Modifications or corrections to MCMC2 version 1.0

(a) In MCMC2 (version 1.0) we computed the heat capacity related to Coulomb energy fluctuations, which we improperly called the ‘‘Coulomb part of the heat capacity’’. Indeed, when defining


the potential energy U of a charged cluster as the sum of the Lennard-Jones (LJ) interactions V and the Coulomb interactions V_c, we can define several quantities:

    Total heat capacity:          C_V = C_V^Coul + C_V^LJ = (1/(k_B T^2)) (⟨U^2⟩ − ⟨U⟩^2)
    Coulomb contribution to C_V:  C_V^Coul = (1/(k_B T^2)) (⟨U V_c⟩ − ⟨U⟩⟨V_c⟩)
    LJ contribution to C_V:       C_V^LJ = (1/(k_B T^2)) (⟨U V⟩ − ⟨U⟩⟨V⟩)
    Coulomb heat capacity:        C_V,fluct^Coul = (1/(k_B T^2)) (⟨V_c^2⟩ − ⟨V_c⟩^2)
    LJ heat capacity:             C_V,fluct^LJ = (1/(k_B T^2)) (⟨V^2⟩ − ⟨V⟩^2)

where we call ‘‘Coulomb heat capacity’’ and ‘‘LJ heat capacity’’ the heat capacities obtained by calculating the fluctuations of the Coulomb and LJ interactions, respectively. After some algebra, we find two formulas expressing C_V as a function of C_V^Coul, C_V^LJ, C_V,fluct^Coul, and C_V,fluct^LJ:

    C_V = 2 C_V^Coul − C_V,fluct^Coul + C_V,fluct^LJ
    C_V = 2 C_V^LJ + C_V,fluct^Coul − C_V,fluct^LJ

which leads to

    C_V,fluct^Coul − C_V,fluct^LJ = C_V^Coul − C_V^LJ.    (1)

The difference between the Coulomb and LJ heat capacities thus matches the difference between the Coulomb and LJ contributions to the heat capacity. However, only the latter quantities have a clear physical meaning for charged clusters bound by both LJ and Coulomb interactions, and we have therefore replaced the calculation of C_V,fluct^Coul by C_V^Coul in the present version of MCMC2. Note that the contribution of the polarisation energy to the heat capacity has also been added when polarisation is included. (A numerical check of identity (1) is sketched at the end of this section.)

(b) Several minor corrections were brought into version 1.1:

i. The word ‘‘RMS’’ (which usually stands for Root Mean Square) was wrongly written several times in the MCMC2-yyy.out output files instead of ‘‘standard deviation’’, abbreviated ‘‘std dev.’’. This is corrected in the present version of the code.

ii. The syntax used to generate some file names seemed not to be recognized by recent pgf95 compilers and has been modified. This does not affect the user, since the file names themselves have not changed.

iii. The default minimum temperature for the geometric temperature scale is set to 10^-10 instead of 0, which would have led to a divergence if not changed by the user.

iv. The test cases are slightly modified to take into account the new features of version 1.1, and the README.txt files are more detailed. The START folders and the obsolete OpenPBS scripts are removed from the distribution.
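To make the decomposition in point (a) concrete, the sketch below estimates the five heat-capacity quantities from sampled LJ and Coulomb energies and checks identity (1) numerically. The sample data and reduced units are hypothetical stand-ins for a real Monte Carlo trajectory.

    import random

    kB, T = 1.0, 0.3  # Boltzmann constant and temperature in reduced units (assumption)

    def mean(xs):
        return sum(xs) / len(xs)

    def cov(xs, ys):
        return mean([x * y for x, y in zip(xs, ys)]) - mean(xs) * mean(ys)

    # Hypothetical sampled energies standing in for an MC trajectory
    rng = random.Random(0)
    V = [rng.gauss(-100.0, 2.0) for _ in range(10000)]  # LJ energies
    Vc = [rng.gauss(20.0, 1.0) for _ in range(10000)]   # Coulomb energies
    U = [v + vc for v, vc in zip(V, Vc)]                # total potential energy

    pref = 1.0 / (kB * T**2)
    C_V = pref * cov(U, U)             # total heat capacity (potential part)
    C_coul = pref * cov(U, Vc)         # Coulomb contribution to C_V
    C_lj = pref * cov(U, V)            # LJ contribution to C_V
    C_coul_fluct = pref * cov(Vc, Vc)  # ''Coulomb heat capacity''
    C_lj_fluct = pref * cov(V, V)      # ''LJ heat capacity''

    # C_V = C_coul + C_lj, and identity (1) holds up to round-off:
    assert abs(C_V - (C_coul + C_lj)) < 1e-6 * abs(C_V)
    assert abs((C_coul_fluct - C_lj_fluct) - (C_coul - C_lj)) < 1e-6 * abs(C_V)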

3. Modified keywords

We present in this section a list of the modified keywords.

• PC METHOD imcpc REPLICAS N_repc EVERY pc_every TEMPERATURE kt0 QREF q_ref: Setting of parameters for running MCPC simulations. imcpc is an integer dedicated to the choice of the parallel scheme to be used when performing Parallel Charging simulations (0 = random choice of one replica pair with MCPC2, 1 = random choice of one replica pair with MCPC1, 2 = use of even or odd replica pairs with MCPC2, and 3 = use of even or odd replica pairs with MCPC1). N_repc is the number of replicas. This number must equal the number of CPUs used for the simulation: the MCMC2 code assumes that one replica is run on one CPU. Any different choice will lead to job abortion. Configuration swapping between replicas is tested every pc_every MC sweeps. kt0 is the reference temperature of the simulation (identical for all the replicas). An MCPC simulation will always use kt0, whatever the originally selected temperature scale (constant, geometric, or adjusted; see keyword ‘‘TEMPERATURE’’). Defining a temperature scale in the setup file will only result in the replacement of these temperatures by kt0 (beware: if the user does not give a value to kt0, the default will be used). q_ref is the reference charge used for MCPC1 simulations. For MCPC2 simulations, the charges q are read from the input configuration files; the simulation is stopped if the input files have charges different from q_ref. For deactivating MCPC simulations, choose pc_every = 0.
Default: imcpc = 0, N_repc = 0, pc_every = 0, kt0 = 0.1, q_ref = 0.

• PT METHOD imcpt REPLICAS N_rept EVERY pt_every: Setting of parameters for running MCPT simulations. imcpt is an integer dedicated to the choice of the parallel scheme to be used when performing Parallel Tempering simulations (0 = random choice of one replica pair and 1 = alternate use of even and odd replica pairs). N_rept is the number of replicas. This number must equal the number of CPUs used for the simulation: the MCMC2 code assumes that one replica is run on one CPU. Any different choice will lead to job abortion. Configuration swapping between replicas is tested every pt_every MC sweeps. For deactivating MCPT simulations, choose pt_every = 0.
Default: imcpt = 0, N_rept = 0 and pt_every = 0.

• RADIAL USE lrad GRIDRCOM deltagrid stepgrid PARTICLE radtyp: Setting of parameters for plotting one-particle radial histograms (when lrad = .true.). The radial histograms cannot extend beyond the radius of the Monte Carlo container (see keyword ‘‘CONTAINER’’); for graphical purposes we however allow the user to add a small distance deltagrid to the grid size (= radius + deltagrid). stepgrid is the grid step for one-particle radial histograms with respect to the cluster center of mass.


The grid origin is hardcoded to zero, since these histograms are calculated with respect to the cluster center of mass. The number N_grid of grid points is automatically determined in the code from the grid size and the grid step. radtyp enables the user to specify the type of particles to be considered for plotting radial histograms (0 = no distribution, 1 = all the particles without any distinction, 2 = charged particles only, 3 = neutral particles only, 4 = all the previous histograms (4 = 1 + 2 + 3)).
Default: lrad = .false., deltagrid = 0, stepgrid = 0.1, radtyp = 0.

• SEED METHOD irand INITIALIZATION seed(1) seed(2) seed(3) seed(4) SCALING nsc(1) nsc(2) nsc(3) nsc(4): Choice of the random number generator. irand is an integer that enables the user to select a random number generator (0 = LAPACK random number generator and 1 = lagged Fibonacci random number generator). Depending on the value of irand, the random seeds and scaling factors may differ. If irand = 0, seed(j) (j ∈ {1, 2, 3, 4}) are positive integer seeds that must be below 4095, and seed(4) must be odd. N_rep (number of replicas) secondary seeds are generated from these primary seeds and the knowledge of the scaling factors nsc(j) (j ∈ {1, 2, 3, 4}). By default, these secondary seeds are obtained as seedi(j) = seed(j) + i × nsc(j) for j ∈ {1, 2, 3} and seedi(4) = seed(4) + 2i × nsc(4) (since all the seeds seedi(4) must remain odd), where i is the replica number. If irand = 1, only one seed (namely seed(1)) is expected by the program, and secondary seeds are likewise produced as seedi(1) = seed(1) + i × nsc(1).
Default: seed(j) = 0 for j ∈ {1, 2, 3}, seed(4) = 1, nsc(j) = 1 for j ∈ {1, 2, 3, 4} (i.e., seedi(j) = i for j ∈ {1, 2, 3} and seedi(4) = 1 + 2i).
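The per-replica seeding rule quoted above can be written compactly. The following Python sketch reproduces the documented formulas; it is illustrative only and not extracted from the Fortran source.

    def secondary_seeds(seed, nsc, i, irand):
        # Secondary seeds for replica number i (i = 1, 2, ..., N_rep).
        # irand = 0: LAPACK generator, four seeds, the fourth must stay odd.
        # irand = 1: lagged Fibonacci generator, only seed[0] is used.
        if irand == 0:
            out = [seed[j] + i * nsc[j] for j in range(3)]
            out.append(seed[3] + 2 * i * nsc[3])  # doubled increment keeps the fourth seed odd
            return out
        return [seed[0] + i * nsc[0]]

    # With the documented defaults, replica i gets seeds (i, i, i, 1 + 2i):
    assert secondary_seeds([0, 0, 0, 1], [1, 1, 1, 1], 5, 0) == [5, 5, 5, 11]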

References

[1] D.A. Bonhommeau, M.-P. Gaigeot, Comput. Phys. Commun. 184 (2013) 873–884.
[2] D.A. Bonhommeau, R. Spezia, M.-P. Gaigeot, J. Chem. Phys. 136 (2012) 184503.
[3] M.A. Miller, D.A. Bonhommeau, C.J. Heard, Y. Shin, R. Spezia, M.-P. Gaigeot, J. Phys.: Condens. Matter 24 (2012) 284130.
[4] S. Kirkpatrick, E.P. Stoll, J. Comput. Phys. 40 (1981) 517.
[5] G. Bhanot, D. Duke, R. Salvador, Phys. Rev. B 33 (1986) 7841.
[6] M. Lewerenz, J. Chem. Phys. 106 (1997) 4596.
[7] M. Mladenović, M. Lewerenz, Chem. Phys. Lett. 321 (2000) 135.
[8] D. Bonhommeau, P.T. Lake, Jr., C. Le Quiniou, M. Lewerenz, N. Halberstadt, J. Chem. Phys. 126 (2007) 051104.
[9] D. Bonhommeau, M. Lewerenz, N. Halberstadt, J. Chem. Phys. 128 (2008) 054302.

Acknowledgements

Dr. Mark A. Miller is gratefully acknowledged for providing some routines and valuable advice during the development of the code. The IDRIS national computer center and the ROMEO computer center of Champagne-Ardenne are also acknowledged. This work was supported by the ‘‘Génopôle d’Evry’’ through a post-doctoral fellowship (DAB), by the ‘‘Centre National de la Recherche Scientifique’’ (CNRS) through an excellence chair (DAB), and by a Partenariat Hubert Curien Alliance Program (MPG).



Appendix A. Supplementary material

Supplementary material related to this article can be found online at http://dx.doi.org/10.1016/j.cpc.2013.09.026.