Bryan Eisenhower- Targeted Escape in Large Oscillator Networks


  • 8/3/2019 Bryan Eisenhower- Targeted Escape in Large Oscillator Networks

    1/94


    Acknowledgements

    There are many people who helped make the work in this thesis possible. First and foremost I would like to thank my faculty advisor Igor Mezic, who gave me invaluable insight while also giving me the breathing room to chart some aspects of my own research direction. Other faculty in the Mechanical Engineering department at UCSB also helped guide me through this work, including Jeff Moehlis, who is an excellent instructor. Karl Johan Astrom, who is only at UCSB for portions of the year, gave me good research ideas and also acted as a role model with respect to everything else involved in academic life. His energy and passion for this field are extraordinary. I also received much help from many outside of UCSB (at UTRC), including Clas Jacobson, Scott Bortoff, and Andrzej Banaszuk (Andrzej has been a mentor of mine for over a decade now). I would also like to acknowledge the informal discussions with my colleagues at other universities, including Thordur Runolfsson (UO), Prabir Barooah (UF), Prashant Mehta (UIUC), and Umesh Vaidya (ISU). It was pleasant to work with the other members of Igor's group, and I especially thank Yueheng Lan for his long technical discussions about my research. For the DNA work, there was some initial interaction with Jerry Marsden and Philip DuToit (both from Caltech) which helped give me a head start on my results. None of this work would have been possible without the support of my friends and family, most notably my yoga gurus (Heather Tiddens and Deb Dobbin). Finally, I acknowledge that this work would not have been possible without funding (through grants awarded to Igor) in part by DARPA DSO under AFOSR contract FA9550-07-C-0024 and AFOSR Grant FA9550-06-1-0088.


    Contents

    1 Introduction
        1.1 Organization

    Part I General Concepts

    2 General Concepts
        2.1 Background of Hamiltonian Systems
        2.2 Canonical Coordinate Transformations
            2.2.1 Generating Functions
            2.2.2 Testing a Canonical Transform
            2.2.3 Modal Coordinates
            2.2.4 Action-Angle Coordinates
        2.3 Averaging
            2.3.1 General Averaging (Non-Periodic Functions)
            2.3.2 Periodic Averaging
            2.3.3 Multiple Frequency Averaging and Resonance
            2.3.4 Averaging Performed in this Thesis
        2.4 Geometric Numerical Integration
            2.4.1 Symplectic Integrators
            2.4.2 Conservation of Energy with Symplectic Integrators
            2.4.3 Evaluation of Different Geometric Numerical Integration Methods
            2.4.4 Methods Used in this Study
        2.5 Molecular Simulation
            2.5.1 Nose-Hoover Thermostats
            2.5.2 Langevin Simulation
        2.6 Internal Resonance
            2.6.1 Identification of Internal Resonance
            2.6.2 Resonance in our System
        2.7 Dynamical Systems on Graphs

    3 Stochastic Dynamics, the Langevin Equation, and Escape Rate Theory
        3.1 Introduction
        3.2 Conservative Dynamics
        3.3 Equilibrium Statistical Mechanics and Boltzmann Statistics
        3.4 Nonlinear Langevin Equation
        3.5 Linear Langevin Equation and the Fluctuation Dissipation Theorem
        3.6 Displacement of a Brownian Particle
        3.7 The Fokker-Planck Equation
        3.8 Klein-Kramers Equation


            3.8.1 The Smoluchowski Equation
        3.9 Escape Rate Theory
            3.9.1 Transition State Theory (TST)
            3.9.2 Kramers Escape Theory
            3.9.3 Summary of Rate Equations
            3.9.4 Intermediate to High Damping (IHD)
            3.9.5 Very Low Damping (VLD)
            3.9.6 Very High Damping (VHD)
            3.9.7 Mean First Passage Times

    4 Dynamics, Symmetry, and Escape of Oscillator Arrays
        4.1 Oscillator Synchronization
        4.2 The Effect of Symmetry on the Response of an Oscillator Array
        4.3 Energy Transfer and Resonance in Oscillator Arrays
        4.4 Localization in Oscillator Arrays
        4.5 Escape of Oscillator Arrays

    Part II Studies: DNA-inspired model, and Bi-Stable Cubic Model

    5 Morse Model
        5.1 Background on Biomolecules
            5.1.1 Structure of DNA
            5.1.2 DNA Dynamical Behavior
        5.2 Typical DNA Models
            5.2.1 Nonlinear DNA Models
        5.3 Our Model
        5.4 Activation Thresholds
            5.4.1 Averaging
        5.5 Energy Transfer Mechanisms During Activation
        5.6 Summary

    6 Analysis of the Coupled Duffing Equation
        6.1 Introduction
        6.2 The Model
        6.3 Modal Projection
            6.3.1 Coupling Terms
            6.3.2 Nonlinear Terms
            6.3.3 Kinetic Energy in Modal Coordinates
            6.3.4 Hamilton's Equations of Motion in Modal Coordinates
            6.3.5 Numerical Simulation
        6.4 Action-Angle Coordinates
            6.4.1 Coupling Terms in Action-Angle Variables
            6.4.2 Nonlinear Terms in Action-Angle Variables
            6.4.3 Kinetic Energy in Action-Angle Variables
            6.4.4 Hamilton's Equations in Action-Angle Coordinates
        6.5 Averaging
            6.5.1 Coupling Terms in the Averaged Sense
            6.5.2 Nonlinear Terms in the Averaged Sense
            6.5.3 Kinetic Energy in the Averaged Sense
            6.5.4 Hamilton's Equations of Motion in an Averaged Sense
        6.6 Quantification of Activation Behavior
            6.6.1 Numerical Quantification of Activation Behavior
            6.6.2 Analytical Quantification of Activation Behavior


            6.6.3 Bifurcation Analysis
        6.7 Calculation of Activation Time
        6.8 Energy Cascade
        6.9 Stochastic Activation
            6.9.1 Effect of Damping
            6.9.2 Effect of Nonlinearity
            6.9.3 Effect of Number of Oscillators
            6.9.4 Comparison with Kramers Estimates
            6.9.5 Effect of Coupling
            6.9.6 Stochastic Activation with Targeted Perturbation

    7 Conclusion

    References


    1

    Introduction

    In this thesis we study the dynamical behavior of networks of interacting multi-stable subsystems. On their own, many physical systems exhibit multiple stable operating points, each of which may offer different operational benefits. Because of this, it is typically important to control the dynamics to remain near one specific equilibrium. At the same time, security, efficiency, and performance are driving engineering designs to be more networked than they have been in the past. When these networks are built from a series of multi-stable subsystems, the entire network is left with multiple global conformations. Again, each conformation offers different benefits at certain times, and therefore a controller is needed to transition between these conformations. Fortunately, instabilities exist in the global dynamics which provide an efficient means to escape from one conformation to another with very little actuation. In this thesis we draw on tools from dynamical systems and chemical kinetics to qualify and quantify these actuation requirements and other aspects of the mechanics of this process.

    As a test-bed for our theory, we study a chain of biologically-inspired nonlinear oscillators which are bi-stable (two equilibria), resulting in a network with two global conformations. We investigate re-conformation between these two states and how a specific, or targeted, disturbance affects the transition process. It turns out that the strong coupling creates a basis dominated by Fourier modes, and the transition process is therefore driven when resonance allows energy transfer between these dominating modes. To better understand this, we derive a multi-phase averaged approximation which illustrates the influence of the canonical actions of the Fourier modes. An activation condition that predicts the minimum amount of energy needed to achieve activation is then derived as a function of the energy in the Fourier modes. We also find an unexpected inverse energy cascade which funnels energy from all modes to one coarse coordinate during the activation process. The prediction tools are derived for deterministic dynamics, while we also present analogous behavior in the stochastic setting and ultimately a divergence from Kramers activation behavior under targeted activation conditions.

    Many networks (whether man-made or not) contain multiple equilibria, and understanding the preference towards one particular equilibrium is often one of the most important aspects of their analysis. In biological systems, multi-stability is an essential part of cellular function (Angeli et al. [2004]), including the opening and closing of the DNA helix. Different stable conformations exist depending on how molecules interact with their environment (including interaction with other molecules, or temperature, for instance). Similarly, in physical chemistry, reaction kinetics follow the principle that reactants and products exist on a potential landscape separated by a high-energy saddle. A chemical reaction is realized as a transition of a reaction coordinate across an energy barrier, allowing it to proceed to the other side of the potential. This process is typically analyzed in the chemical kinetics community as an activation or escape process. In this thesis we study a similar potential while divorcing ourselves from the strict application to chemical reaction theory.

    There are certainly other examples of systems which undergo global conformation change. As a second example, consider the network of nonlinear systems that constitutes a regional power grid. Clearly, it is desired that the power grid be immune to any perturbation, yet in some situations a specific perturbation affects the system in a grand and malicious way (e.g. a tree branch fault resulting in major grid failure, as with the 2003 northeast US power blackout). An example of analysis of this type may be found in Susuki et al. [2008]. This is another example of a large networked system undergoing an escape process.

    Control design for multi-stable systems has been an ongoing topic due to the desire to operate at one selected equilibrium. An example where a nonlinear controller was designed to stabilize an inverted pendulum at its upright equilibrium can be found in Astrom et al. [2008]. Examples where multi-stability is one of the design


    considerations are surfacing in networked or coordinated control systems (see Paley et al. [2004] or Bai et al. [2009] for example). For instance, autonomous search vehicles possess local control for avoidance (repulsion at close distances) and coordination (attraction at far distances), which creates potential forces similar to what a molecule experiences (e.g. the Lennard-Jones or Morse potential). At a high level, supervisory control may schedule the network of agents to be in one conformation when traveling towards a search location and then switch this conformation to notably different motion upon arrival (to chaotic search, for example). Understanding the switching behavior between these two global equilibria is important for sensitivity analysis and controller synthesis. There is an abundance of other biological, chemical, and physical systems with similar functional behavior (e.g. MEMS devices, neural systems, superconductor arrays).

    The escape process that we study occurs in a chain of strongly coupled oscillators. Different phenomena, including breathers, drive the escape process when the coupling becomes weak (see Hennig et al. [2008]). In our case we observe a collective and coordinated escape process because the strong coupling creates a spatial backbone for the dynamics. We study the case where this backbone is disturbed by asymmetry, which shows that the escape process is then impeded. This breaking of symmetry is discussed in the context of noise mitigation in jet engines, where acoustic oscillations were drastically reduced by altering the symmetry of the design. In that case, design of the root dynamics, rather than supplemental control, was the solution to a real-world engineering problem.

    In summary, in this thesis we study network reconfiguration in both a static (energy requirements) and dynamic (rates, energy cascades) sense. Much of the prior art in this field is tied to either biology or physical chemistry. However, as engineered systems become more inspired by biological or chemical processes, we expect the methods of analysis in this thesis to become a staple of engineering design in the years to come.

    1.1 Organization

    This dissertation is organized in two parts: Part 1: Theory, and Part 2: Applications. In Part 1, we introduce many of the concepts that are needed for the results in Part 2. The contents of Part 1 include introductions to Canonical Transformations, Nonlinear Resonance, Averaging, Geometric Numerical Integration, Molecular Simulation, and a very brief introduction to Dynamical Systems on Graphs. Following this, a thorough review of Statistical Mechanics is presented, leading up to a summary of Transition State Theory and Dynamics of Coupled Oscillators. All of the content in Part 1 is a review of previous work; no unique results are presented there. At the end of each section we briefly describe how the tools fit into the applications in this thesis. If the reader is familiar with the topics listed in bold above, this part may be skipped with no loss.

    The second part of the thesis applies the techniques reviewed in Part 1 to two different example systems. The first system we analyze is a model inspired by the macro-level dynamics of DNA. As it turns out, the nonlinearity in this model hinders analytical insight (it is too complex), and the results in that chapter are all numerical. In the following chapter, a generic model is studied which conveniently exhibits dynamic behavior similar to the bio-inspired model while lending itself to analytical results, which solidify both qualitative and quantitative understanding of the transition properties of these high-dimensional oscillator systems.


    Part I

    General Concepts Including Canonical Coordinate Transformations, Resonance, Averaging, and Geometric Numerical Integration


    2

    General Concepts

    In this chapter we review general tools in dynamical systems that will be utilized for the remainder of the thesis. We begin with basic concepts of Hamiltonian systems, including canonical transformations, and then introduce resonance, including internal and nonlinear resonance phenomena. We then outline the perturbative analysis of averaging and geometric numerical integration, and include some basic insight into molecular simulation. All concepts here are deterministic, which is a precursor for the next chapter, where the stochastic behavior of dynamical systems undergoing meta-stable transitions is addressed.

    2.1 Background of Hamiltonian Systems

    William Hamilton first introduced a structural form for dynamical systems in 1834 and its importance was immediately recognized. Hamiltonian systems are useful in many areas such as classical mechanics, celestial mechanics, and classical and semi-classical molecular dynamics (good introductory resources are Abraham and Marsden [1978], Arnold [1989]). In this formalism, for a classical conservative system of particles, $q$ denotes the vector of generalized coordinates and $p$ the conjugate momentum vector (in a mechanical system without constraints, $q$ is a position vector $x$). A Hamiltonian system is then a system of differential equations derived from a scalar function $H(p,q)$:

    $$\dot q_i = \frac{\partial H}{\partial p_i}, \qquad \dot p_i = -\frac{\partial H}{\partial q_i},$$ where $i = 1, \ldots, N$ indexes the states (or masses) of the system and the over-dot denotes the time derivative.

    In traditional mechanics, the Hamiltonian is the total energy of the system, which in this formulation is conserved. Conservation of energy follows by noting that the time derivative of the Hamiltonian vanishes: $\dot H = \sum_i \left( \frac{\partial H}{\partial q_i}\dot q_i + \frac{\partial H}{\partial p_i}\dot p_i \right) = 0$. A special case of a Hamiltonian system occurs when the Hamiltonian is separable, $H(q, p) = T(p) + U(q)$, where $T$ and $U$ are the kinetic and potential energies respectively.
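As a concrete instance (standard textbook material, added here only for illustration), the one-dimensional harmonic oscillator with mass $m$ and stiffness $k$ has a separable Hamiltonian, and conservation can be checked directly:

```latex
H(q,p) = \underbrace{\frac{p^2}{2m}}_{T(p)} + \underbrace{\frac{k q^2}{2}}_{U(q)},
\qquad
\dot q = \frac{\partial H}{\partial p} = \frac{p}{m},
\qquad
\dot p = -\frac{\partial H}{\partial q} = -kq,
\qquad
\dot H = \frac{\partial H}{\partial q}\,\dot q + \frac{\partial H}{\partial p}\,\dot p
       = kq\cdot\frac{p}{m} + \frac{p}{m}\cdot(-kq) = 0.
```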

    The phase space on which the Hamiltonian is defined is a symplectic manifold, which means it has unique properties in terms of differential geometry (discussed later). Furthermore, by Liouville's theorem, trajectories on this manifold preserve volume (or area in 2D). To study this volume preservation we define the dynamics as a symplectic operator which takes any initial condition $q(0), p(0)$ to a final state $q, p$. If we denote this operator on the manifold by $\Phi_t$, the evolution is written as

    $$\begin{pmatrix} p \\ q \end{pmatrix} = \begin{pmatrix} p(t) \\ q(t) \end{pmatrix} = \Phi_t \begin{pmatrix} p(0) \\ q(0) \end{pmatrix}.$$

    This operator will become important later in the discussion of geometric numerical integration because it characterizes the discretized Hamiltonian system. In the two-dimensional case, area conservation is confirmed by the determinant condition $\det\!\left(\frac{\partial \Phi_t(p,q)}{\partial (p,q)}\right) = 1$. In higher dimensions, if we define $d\bar p, d\bar q$ as the differentials of $\Phi_t$, the operator is symplectic under the differential-form condition $d\bar p \wedge d\bar q = dp \wedge dq$. The symplectic nature of a vector field ensures preservation of certain features of the problem (geometric structures, conservation laws, symmetries, asymptotic behaviors, order, etc.). With this in mind we note that all Hamiltonian systems are symplectic (volume preserving), while not all volume-preserving systems are Hamiltonian.
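The determinant condition can be checked numerically. The sketch below is my own illustration (not code from the thesis): it uses the exact flow of the harmonic oscillator $H = (p^2 + q^2)/2$ and a finite-difference Jacobian; the helper names `flow` and `jacobian` are hypothetical.

```python
import numpy as np

def flow(z, t):
    """Exact Hamiltonian flow of H = (p^2 + q^2)/2.

    Hamilton's equations q' = p, p' = -q rotate the point (p, q)
    in phase space; this rotation is the symplectic operator Phi_t.
    """
    p, q = z
    return np.array([p * np.cos(t) - q * np.sin(t),
                     q * np.cos(t) + p * np.sin(t)])

def jacobian(f, z, eps=1e-6):
    """Central-difference Jacobian of the map f at the point z."""
    n = len(z)
    J = np.zeros((n, n))
    for j in range(n):
        dz = np.zeros(n)
        dz[j] = eps
        J[:, j] = (f(z + dz) - f(z - dz)) / (2.0 * eps)
    return J

# Area preservation: det(d Phi_t / d(p, q)) should equal 1 for any t.
J = jacobian(lambda z: flow(z, 1.7), np.array([0.3, 1.2]))
det = np.linalg.det(J)
print(det)  # ~ 1.0 up to finite-difference rounding
```

Any choice of time $t$ and initial point gives the same determinant, since the flow of a Hamiltonian system is symplectic for all $t$.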


    2.2 Canonical Coordinate Transformations

    Coordinate transformations are typically sought to better understand particular dynamics of a system (i.e. to look at them from a more illustrative angle or viewpoint, so to speak). We will use this approach to simplify the motion of a network of oscillators into a coarse description that better captures the global behavior of the dynamics. A coordinate transformation introduces no new physics, while some information in the dynamics may be masked or hidden, which may make the equations easier to solve or analyze. Unfortunately, information may also be mathematically lost if care is not taken. For example, for Hamiltonian systems, which possess specific structure, an improper transform will destroy the conjugate structure or may add dissipation to the dynamics. A canonical transformation maps a system from one conjugate set of variables to a second conjugate set of variables in which the new Hamilton's equations are the correct equations of motion (i.e. Liouville's theorem still applies). Because of the importance of canonical transforms, this section describes methods to generate or test whether a set of transform rules is canonical, and concludes by presenting two specific transforms which will be used in the following chapters of this thesis. A good introduction to canonical transforms is in either Hand and Finch [1998] or Lichtenberg and Lieberman [1992], while a more thorough discussion may be found in Goldstein [1959] or Whittaker [1944].

    Two approaches may be taken to define a canonical transformation. One is to use a generating function, which guarantees that the transformation is canonical. The other is to heuristically define a transform and later determine whether it is canonical. Both approaches are described below.

    2.2.1 Generating Functions

    A generating function provides an automatic way to perform canonical transformations. It is a function that mixes one of the old coordinates ($q$ or $p$) with one of the new coordinates ($\bar q$ or $\bar p$). Considering all the permutations of these cases, there are four types of generating functions. Generating functions are typically chosen by an educated guess but can also be derived in some cases (as we will see below with a harmonic oscillator). They do have a functional requirement: the necessary and sufficient (nondegeneracy) condition is that the mixed second derivative with respect to the old and new arguments is nonzero, e.g. $\frac{\partial^2 W_1}{\partial q\,\partial \bar q} \neq 0$. For each of the four types of generating functions there exist partial differential equations that define the transform rules between the two coordinate systems. The four generating functions and their associated rules are given below.

    W1 = W1(q, q, t), pi =W1

    qi

    , pi =W1

    qi

    (2.1)

    W2 = W2(q, p,t), pi =W2qi

    , qi =W2pi

    (2.2)

    W3 = W3(p, q, t), qi = W3pi

    , pi = W2qi

    (2.3)

    W4 = W4(p, p,t), qi = W4pi

    , qi =W2pi

    . (2.4)

    2.2.2 Testing a Canonical Transform

If one does not prefer to use a generating function, or if the dynamics otherwise limit its use, the transformation must be verified by hand after it is constructed. Three different methods for assessing whether a transformation is canonical are given below: testing whether the Poisson bracket is invariant, using differential forms, and identifying whether the Jacobian of the transformation is a symplectic matrix (one satisfying conditions defined below).

    Using Poisson Brackets to Determine if a Transformation is Canonical

One way to determine if a transformation is canonical is to calculate the appropriate Poisson brackets between the old and new variables. A transformation (q, p) \to (\bar{q}, \bar{p}) is canonical if \{\bar{q}_i, \bar{p}_j\}_{q,p} = \delta_{ij}, \{\bar{p}_i, \bar{p}_j\}_{q,p} = 0, and \{\bar{q}_i, \bar{q}_j\}_{q,p} = 0. This is equivalent to requiring that the Jacobian \partial(\bar{q}, \bar{p})/\partial(q, p) is a symplectic matrix.
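Since this bracket test is easy to mechanize, the following is a brief numerical sketch (Python, not part of the thesis toolchain; the helper `poisson_bracket` and the example maps are illustrative choices of ours) for a single degree of freedom:

```python
import math

def poisson_bracket(Q, P, q, p, h=1e-6):
    """Evaluate {Q, P}_{q,p} = dQ/dq dP/dp - dQ/dp dP/dq at (q, p)
    using central finite differences."""
    dQdq = (Q(q + h, p) - Q(q - h, p)) / (2 * h)
    dQdp = (Q(q, p + h) - Q(q, p - h)) / (2 * h)
    dPdq = (P(q + h, p) - P(q - h, p)) / (2 * h)
    dPdp = (P(q, p + h) - P(q, p - h)) / (2 * h)
    return dQdq * dPdp - dQdp * dPdq

# A phase-space rotation is canonical: the bracket evaluates to 1 everywhere.
a = 0.7
qbar = lambda q, p: q * math.cos(a) + p * math.sin(a)
pbar = lambda q, p: -q * math.sin(a) + p * math.cos(a)
print(poisson_bracket(qbar, pbar, 0.3, -1.2))   # ~1.0

# A bare rescaling qbar = 2q, pbar = p is NOT canonical: the bracket is 2.
print(poisson_bracket(lambda q, p: 2 * q, lambda q, p: p, 0.3, -1.2))  # ~2.0
```

The same finite-difference helper extends directly to the \delta_{ij} conditions for several degrees of freedom.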


    Using Differential Forms to Determine if a Transformation is Canonical

Using differential forms is just another approach to determine whether the mapping between new and old variables is canonical. In some cases it is algebraically easier to perform the test using this approach. The conservation of projected area, which is equivalent to showing that a transformation is canonical, is:

\sum_i d\bar{p}_i \wedge d\bar{q}_i = \sum_i dp_i \wedge dq_i.    (2.5)

    Testing whether the Transformation is Symplectic

If the Jacobian of a coordinate transform is symplectic, the transformation is canonical. This also signifies that the sum of the projected areas in the final coordinate system is equal to that of the original coordinate system, \oint \sum_i \bar{p}_i \, d\bar{q}_i = \oint \sum_i p_i \, dq_i. Let S be the skew-symmetric matrix built from 2 \times 2 blocks and J be the Jacobian of the transformation:

S = \begin{bmatrix} 0 & 1 & & & & \\ -1 & 0 & & & & \\ & & \ddots & & & \\ & & & & 0 & 1 \\ & & & & -1 & 0 \end{bmatrix}, \qquad J = \begin{bmatrix} \frac{\partial \bar{q}_1}{\partial q_1} & \frac{\partial \bar{q}_1}{\partial p_1} & \frac{\partial \bar{q}_1}{\partial q_2} & \cdots & \frac{\partial \bar{q}_1}{\partial p_N} \\ \frac{\partial \bar{p}_1}{\partial q_1} & \frac{\partial \bar{p}_1}{\partial p_1} & \frac{\partial \bar{p}_1}{\partial q_2} & \cdots & \frac{\partial \bar{p}_1}{\partial p_N} \\ \frac{\partial \bar{q}_2}{\partial q_1} & \frac{\partial \bar{q}_2}{\partial p_1} & \frac{\partial \bar{q}_2}{\partial q_2} & \cdots & \frac{\partial \bar{q}_2}{\partial p_N} \\ \vdots & & & \ddots & \vdots \\ \frac{\partial \bar{p}_N}{\partial q_1} & \frac{\partial \bar{p}_N}{\partial p_1} & \frac{\partial \bar{p}_N}{\partial q_2} & \cdots & \frac{\partial \bar{p}_N}{\partial p_N} \end{bmatrix}.    (2.6)

A necessary and sufficient condition for the transformation to be symplectic is that J S J^T = S. Now that we have defined what a good coordinate transformation is, we present two different examples that will facilitate the analysis of the problems studied in this work.
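The J S J^T = S test is equally easy to carry out numerically. The following sketch (Python; an illustration of ours, not from the thesis) verifies it for a one-degree-of-freedom phase-space rotation, which is a canonical map:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

# Skew-symmetric S for one degree of freedom, variable ordering (q, p).
S = [[0.0, 1.0], [-1.0, 0.0]]

# Jacobian of the linear phase-space rotation (qbar, pbar) = R(a)(q, p).
a = 0.7
J = [[math.cos(a), math.sin(a)], [-math.sin(a), math.cos(a)]]

JSJT = matmul(matmul(J, S), transpose(J))
print(JSJT)  # recovers S, so the rotation is symplectic (hence canonical)
```

For a nonlinear transformation the Jacobian J varies with the point, so the same check is applied at sample points in phase space.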

    2.2.3 Modal Coordinates

For a linear chain of coupled oscillators it is often useful to project or otherwise transform the dynamics onto a decoupled set of coordinates. These coordinates are the linear normal modes, and the transformation can be obtained using a normalized discrete Fourier transform. In the 1950s, Fermi, Pasta, and Ulam formulated a now famous system of coupled oscillators. Their analysis includes a canonical transformation onto modal coordinates Fermi et al. [1955]:

\bar{q}_k = \sqrt{\frac{2}{N}} \sum_{i=1}^{N} q_i \sin\left(\frac{ik\pi}{N}\right)    (2.7)

\bar{p}_k = \sqrt{\frac{2}{N}} \sum_{i=1}^{N} p_i \sin\left(\frac{ik\pi}{N}\right)    (2.8)

where \bar{q}_k and \bar{p}_k are modal positions and velocities respectively and the linear modal frequencies are Lichtenberg et al. [2007]

\omega_k = 2\sin\left(\frac{k\pi}{2N+2}\right)    (2.9)

\omega_k = 2\sin\left(\frac{k\pi}{2N}\right) \quad \text{with periodic boundary.}    (2.10)

The energy in each mode becomes simply

E_k = \frac{1}{2}\left(\bar{p}_k^2 + \omega_k^2 \bar{q}_k^2\right).    (2.11)
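As a numerical aside (a Python sketch of ours, with N chosen arbitrarily), the sine transform of Eq. (2.7) can be assembled as a matrix and checked to be orthogonal, which is one way to see that the FPU modal projection is canonical:

```python
import math

N = 16  # oscillators in a fixed-end chain; the nontrivial modes are k = 1..N-1

# T[k][i] ~ sqrt(2/N) sin(i k pi / N), following the structure of Eq. (2.7)
T = [[math.sqrt(2.0 / N) * math.sin(i * k * math.pi / N)
      for i in range(1, N)] for k in range(1, N)]

# Discrete sine orthogonality: sum_i sin(ik pi/N) sin(il pi/N) = (N/2) delta_kl,
# so T T^T = I and the transform equals its own inverse-transpose.
for k in range(N - 1):
    for l in range(N - 1):
        dot = sum(T[k][i] * T[l][i] for i in range(N - 1))
        assert abs(dot - (1.0 if k == l else 0.0)) < 1e-12
print("FPU modal transform is orthogonal")
```

Because the same matrix acts on both positions and momenta, orthogonality immediately gives the symplectic condition of Section 2.2.2.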


This transformation is canonical, which can be confirmed by using any of the three methods described above. For our case, we are interested in analyzing the zeroth mode of the dynamics (which is the average of all positions), and this is not included in the FPU transformation. To deal with this, a second transformation was defined in DuToit et al. [2009] (written componentwise for the zeroth, cosine, alternating, and sine mode groups)

\bar{q}_k = \sqrt{\frac{2}{N}}\left[\frac{1}{\sqrt{2}}\,q_0,\;\; \sum_{i=1}^{N/2-1} q_i \cos(2\pi i k/N),\;\; \frac{\cos(k\pi)}{\sqrt{2}}\,q_{N/2},\;\; \sum_{i=1}^{N/2-1} q_{N/2+i} \sin(2\pi i k/N)\right]    (2.12)

\bar{p}_k = \sqrt{\frac{2}{N}}\left[\frac{1}{\sqrt{2}}\,p_0,\;\; \sum_{i=1}^{N/2-1} p_i \cos(2\pi i k/N),\;\; \frac{\cos(k\pi)}{\sqrt{2}}\,p_{N/2},\;\; \sum_{i=1}^{N/2-1} p_{N/2+i} \sin(2\pi i k/N)\right]    (2.13)

and the frequencies with periodic boundary conditions are

\omega_k = \sqrt{2 - 2\cos(2\pi k/N)}.    (2.14)

The frequencies take a sinusoidal form wherein they increase linearly at low mode numbers and gather at higher mode numbers (at a value of 2.0). These frequencies are irrationally related along the way, which is important as it means there is no possibility of resonance in the linear normal modes.
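A quick numerical check of this dispersion relation (a Python sketch of ours; the half-angle identity 2 - 2\cos\theta = 4\sin^2(\theta/2) gives \omega_k = 2\sin(\pi k/N) for k \le N/2) confirms the near-linear growth at low k and the accumulation at 2.0:

```python
import math

N = 64
# Frequencies of the periodic-chain modes up to the band edge k = N/2.
omega = [math.sqrt(2.0 - 2.0 * math.cos(2.0 * math.pi * k / N))
         for k in range(N // 2 + 1)]

assert abs(omega[1] - 2.0 * math.pi / N) < 1e-3        # ~linear at low mode numbers
assert abs(omega[N // 2] - 2.0) < 1e-12                # gathering at the value 2.0
assert all(omega[k] < omega[k + 1] for k in range(N // 2))  # monotone toward the edge
print("dispersion checks pass")
```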

In this transformation, the first term \bar{q}_0 is proportional to the averaged coordinate or the zeroth mode. The coordinates \bar{q}_1, \ldots, \bar{q}_{N/2-1} are the typical Fourier modes. Note that these are full mode shapes, while in the case of the FPU transformation half-wave modes are included (this does not have any implications; we are just pointing it out). As before, this transformation is canonical, and projecting the dynamics onto these modes will not introduce any dissipation or have any other adverse effects on the Hamiltonian structure.

Both of the modal transforms described above are linear, and there is no coupling between the generalized coordinates and momenta in the equations; therefore the transform can be represented as a matrix M. The elements of this matrix are equivalent to the elements of the Jacobian (2.6) with some rearrangement. The matrix M, which maps between standard and modal coordinates (x, \dot{x}) = (M\bar{x}, M\dot{\bar{x}}), is

M = \sqrt{\frac{2}{N}} \begin{bmatrix} \frac{1}{\sqrt{2}} & \cos(\kappa_1 \cdot 1) & \cdots & \cos(\kappa_{N/2-1} \cdot 1) & \frac{(-1)^1}{\sqrt{2}} & \sin(\kappa_{N/2+1} \cdot 1) & \cdots & \sin(\kappa_{\bar{N}} \cdot 1) \\ \frac{1}{\sqrt{2}} & \cos(\kappa_1 \cdot 2) & \cdots & \cos(\kappa_{N/2-1} \cdot 2) & \frac{(-1)^2}{\sqrt{2}} & \sin(\kappa_{N/2+1} \cdot 2) & \cdots & \sin(\kappa_{\bar{N}} \cdot 2) \\ \vdots & & & & & & & \vdots \\ \frac{1}{\sqrt{2}} & \cos(\kappa_1 \cdot N) & \cdots & \cos(\kappa_{N/2-1} \cdot N) & \frac{(-1)^N}{\sqrt{2}} & \sin(\kappa_{N/2+1} \cdot N) & \cdots & \sin(\kappa_{\bar{N}} \cdot N) \end{bmatrix}    (2.15)

with \bar{N} = N - 1, where the wave number is:

\kappa_i = \frac{2\pi i}{N}, \quad i = \{1, 2, \ldots, N\}.    (2.16)
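As with the FPU transform, the orthogonality of M can be verified numerically. The sketch below (Python; the helper `M_entry`, which encodes the column structure of (2.15), is our own naming) checks M M^T = I for a small even N:

```python
import math

N = 8  # even number of oscillators (periodic chain)

def M_entry(i, j):
    """Column j of the modal matrix, evaluated at site i (i = 1..N):
    zeroth (mean) mode, cosine modes, alternating mode, sine modes."""
    s = math.sqrt(2.0 / N)
    if j == 0:
        return s / math.sqrt(2.0)
    if j == N // 2:
        return s * ((-1) ** i) / math.sqrt(2.0)
    if j < N // 2:
        return s * math.cos(2.0 * math.pi * i * j / N)
    return s * math.sin(2.0 * math.pi * i * j / N)

M = [[M_entry(i, j) for j in range(N)] for i in range(1, N + 1)]

# M is orthogonal (M M^T = I), hence the modal projection is canonical.
for i in range(N):
    for j in range(N):
        dot = sum(M[i][k] * M[j][k] for k in range(N))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-12
print("modal matrix is orthogonal")
```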

There are special properties of this matrix when applied to a homogeneous oscillator array which help in the analysis to follow and are presented in the Appendix. With this projection, the original position variables become a sum of M different modal contributions:

x_i = \sum_{j=0}^{M-1} M_{i,j} \bar{x}_j.    (2.17)

We start with 0 as the index because \bar{x}_0 will then represent the amplitude of the zeroth mode. Note that with this convention, the first column of the M matrix is the 0th column, which is not standard notation but convenient for our purposes.

The discussion so far regarding the coordinate transformations between Cartesian and modal coordinates was introduced for a purely linear problem. In this case the dynamics become uncoupled in the new coordinate system.


For the cases studied in Part II, however, the problems will be nonlinear. When the dynamics are nonlinear, we use the same transforms, while the nonlinearity typically just adds a perturbation to the variables (i.e. the energy, the frequencies, and the modal variables themselves). This procedure is also used for the projection onto action-angle coordinates.

    2.2.4 Action-Angle Coordinates

The modal coordinate system we have been discussing coarsens the global spatial characteristics of the dynamics of the oscillator array we are studying. A second canonical transform, ubiquitous in the physics community, is action-angle coordinates. These coordinates characterize 1) a notion of energy and 2) adiabatic invariants in the motion of a nonlinear system. For time-invariant Hamiltonian systems, N constants of motion exist if the system is integrable (these constants must be in involution: their mutual Poisson brackets vanish identically). These constants of motion, when set in canonical coordinates, are the actions (canonical momenta), and the generalized coordinates are the angles. Intuitively, when mapping the state space of a harmonic oscillator, the action is the area spanned by the periodic orbit, and the angle is the location on the circumference of the trajectory (see Figure 2.1). One of the main benefits of using action-angle coordinates is the characterization of energies, while another benefit is that one may solve for the frequencies of oscillation without solving for the details of the motion.

    Fig. 2.1. State space of an oscillator pointing out the action and angle variables

One interesting property of action variables is that they depict adiabatic invariance in the system. That is, in a system in which parameters are varied slowly in time, the general behavior of the system may vary quickly but the action variables remain nearly constant. This quality is a powerful concept in mechanics, and considering these invariants as discrete quantities led to fundamental understanding in quantum mechanics.

At this point we derive the coordinate transform rules for a harmonic oscillator in action-angle coordinates. For a harmonic oscillator, where we assume the motion is periodic (in q, p), the generating function W(q, \theta) will be periodic in \theta. Because of this, we have \oint dW = 0. From the fundamental properties of the generating function we have dW = p\,dq - J\,d\theta, and taking the contour integral we have the equation for the action in a harmonic oscillator

\oint p\,dq = \oint J\,d\theta \;\;\Rightarrow\;\; J = \frac{1}{2\pi}\oint p\,dq.    (2.18)

The generating function for a harmonic oscillator can be found by evaluating the indefinite integral of dW. In doing this we have \int dW = \int p\,dq - \int J\,d\theta, where p = \sqrt{2E - \omega^2 q^2} (E being the constant energy contour). This results in the generating function W = \frac{1}{2}\omega q^2 \cot\theta. Using the rules for a type-one generating function (p = \partial W_1/\partial q, J = -\partial W_1/\partial \theta) we can then derive the action-angle coordinate transform rules

q_i = \sqrt{\frac{2J_i}{\omega_i}}\sin\theta_i, \qquad p_i = \sqrt{2J_i\omega_i}\cos\theta_i.    (2.19)
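These transform rules are easy to sanity-check numerically. In this Python sketch (an illustration of ours, with an arbitrary \omega), the Hamiltonian collapses to \omega J and the bracket \{q, p\}_{\theta, J} evaluates to one:

```python
import math

omega = 1.7   # oscillator frequency (arbitrary test value)

def q_of(J, th): return math.sqrt(2.0 * J / omega) * math.sin(th)
def p_of(J, th): return math.sqrt(2.0 * J * omega) * math.cos(th)

def H(q, p): return 0.5 * (p * p + omega * omega * q * q)

J, th = 0.9, 2.3
# The Hamiltonian becomes H = omega * J, independent of the angle.
assert abs(H(q_of(J, th), p_of(J, th)) - omega * J) < 1e-12

# The transform is canonical: {q, p}_{theta, J} = 1 (central differences).
h = 1e-6
dqdth = (q_of(J, th + h) - q_of(J, th - h)) / (2 * h)
dqdJ  = (q_of(J + h, th) - q_of(J - h, th)) / (2 * h)
dpdth = (p_of(J, th + h) - p_of(J, th - h)) / (2 * h)
dpdJ  = (p_of(J + h, th) - p_of(J - h, th)) / (2 * h)
assert abs(dqdth * dpdJ - dqdJ * dpdth - 1.0) < 1e-6
print("action-angle transform checks pass")
```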

For periodic Hamiltonian systems the action-angle transformation takes a Hamiltonian H(q, p) to \bar{H}(J), which is only a function of the action J. Hamilton's equations then become:

\dot{J}_i = -\frac{\partial \bar{H}}{\partial \theta_i} = 0, \qquad \dot{\theta}_i = \frac{\partial \bar{H}}{\partial J_i} = \omega_i.    (2.20)


The rate of change of the angle is the angular frequency \tilde{\omega}_i(J) = \dot{\theta}_i, where we use the tilde to denote a time-dependent frequency which may not be constant.

    2.3 Averaging

Since exact solutions for nonlinear differential equations are typically out of reach, we often seek an approximate solution that closely captures the original dynamics. These approximations include both numerical (Section 2.4) and asymptotic solutions that bridge the gap between the full nonlinear system and a model which is tractable for analysis. In this section we discuss approximate perturbed solutions to differential equations using formal averaging. The theory of averaging began in the 18th century with the study of celestial motion, when analysis progressed from the two-body problem to its perturbation, which was expected to better capture full celestial interactions (i.e., three-body interaction). The theory of averaging began with Laplace and Lagrange, while the needed formalism for Hamiltonian systems was introduced by Jacobi (with canonical perturbation theory). The theory of averaging was advanced further with the work of Poincare and other scientists including Van der Pol (a short history is available in Sanders et al. [2007]). A good set of references on averaging and perturbation methods in general is Arnold [1988], Samoilenko and Petryshyn [2004], Sanders et al. [2007], Lichtenberg and Lieberman [1992], Grebenikov et al. [2004], and Neishtadt [1997].

When a conservative system is slightly perturbed, the first integrals of motion begin to evolve with time (they go from being constant to slowly varying). The averaging approach allows one to write an analytical expression for this slow variation. This analytical expression is developed by expanding in terms of a small parameter (\epsilon), resulting in a series solution. For a single degree of freedom, Lindstedt and Poincare devised a way to develop series expansions that are convergent. In higher-dimensional systems where resonance may occur (often leading to chaos), the series diverges and perturbation theory will not capture the actual solution. Fortunately, secular perturbation approaches have been developed to account for this situation when the topology of state space changes across resonance islands.

The terminology of averaging is synonymous with evaluating an integral over some independent variable. The connection between this integration process (in time for typical cases) and a series expansion of the dynamics can be explained in the context of Hamiltonian systems. In these systems, the classical perturbation approach is to develop a change of coordinates using a generating function where the new Hamiltonian is a function of the averaged action alone (no angle dependence). In order to do this, characteristics of the generating function are chosen such that angle dependence is eliminated, and to accomplish this the angle-dependent part of the Hamiltonian must be integrated over the angle variable (this independent variable plays a role similar to time in a conservative system).

    There are many benefits to averaging, most notably reduction of system dimension. An additional effect is that theperturbation in the dynamics is such that the time scales that remain after averaging are better suited for numericalintegration. That is, fast time scales have been removed causing the system to be less stiff. Unfortunately, there arealso disadvantages to averaging; all complex dynamics are reduced to a system that is everywhere integrable and anychaotic behavior of the system is lost. In addition, adiabatic invariants that are derived from perturbation theoryare not conserved for time scales much larger than their slow time evolution resulting in non-conservative behaviorknown as Arnold diffusion. These drawbacks are typically justified and formal averaging is often used as a productiveanalytical tool.

The formal theory of averaging is typically divided into two different classes: periodic averaging and general averaging. In periodic averaging we usually obtain approximations on time scales of 1/\epsilon with O(\epsilon) accuracy. On the other hand, for general averaging, the accuracy limit becomes a function of the form \delta(\epsilon). These bounds can be extended to all positive time when the dynamics are limited to special circumstances, including the presence of an attractor (Sanders et al. [2007]). The bounds for both of these types of situations are briefly discussed below.

    2.3.1 General Averaging (non periodic functions)

We will devote most of our attention to averaging in periodic systems because it is more relevant to the remainder of the study. However, for completeness we present averaging for general systems as well. If g(t, x) is continuous and bounded it is said to have an average if the limit Khalil [1996]

\bar{g}(x) = \lim_{T \to \infty} \frac{1}{T}\int_t^{t+T} g(\tau, x)\,d\tau    (2.21)

exists and the 2-norm

\left\| \frac{1}{T}\int_t^{t+T} g(\tau, x)\,d\tau - \bar{g}(x) \right\| \le \sigma(T)    (2.22)

is bounded by a continuous function \sigma(T) that is strictly decreasing and goes to zero as T \to \infty (\sigma(T) \to 0 as T \to \infty). What this says is that the averaged solution stays close to the actual solution if the function (the right hand side of the ODE) is well behaved. The function \sigma(T) is a convergence function and in the periodic case is \sigma(T) = 1/T.

    2.3.2 Periodic Averaging

As the name implies, periodic averaging takes into account oscillatory dynamics (dynamics on tori). In this approach one is typically interested in separating the fast-scale dynamics of the oscillation itself from the slower dynamics of, say, a mean or coarse observable. These slow dynamics are approximate integrals of motion, otherwise known as adiabatic invariants. There are at least two differences between the general and periodic averaging approaches. Unlike in general averaging, averaging in periodic systems need only be performed over one periodic cycle. The second difference deals with the case of resonance between multiple frequencies, which we will discuss later.

To perform the averaging procedure, the system must be placed in standard form. The perturbed system may be placed into standard form using the method of variation of parameters

\dot{x} = \epsilon f_1(x, t) + \epsilon^2 f_2(x, t, \epsilon), \quad x(0) = a    (2.23)

where the f_i are Taylor coefficients. By truncating the equation at first order in \epsilon and averaging over the time variable t we have the averaged equation

\dot{z} = \epsilon \bar{f}_1(z), \quad z(0) = a    (2.24)

\bar{f}_1(z) = \frac{1}{T}\int_0^T f_1(z, s)\,ds    (2.25)

where the f_i are periodic in T. The averaging theorem states Sanders et al. [2007]

\|x(t) - z(t)\| \le c\epsilon \quad \text{for } 0 \le t \le L/\epsilon    (2.26)

where L and c are positive real constants. Equation 2.25 is an example of first order averaging; greater accuracy can be achieved with higher order averaging by increasing the order at which the series is truncated.
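The estimate (2.26) can be illustrated with a classic scalar example (a Python sketch; the system \dot{x} = \epsilon x \sin^2 t is our choice, not from the thesis), for which both the perturbed and the averaged equations solve in closed form:

```python
import math

# Perturbed system:  xdot = eps * x * sin(t)^2,  x(0) = 1
# Averaged system:   zdot = eps * z / 2,         z(0) = 1   (mean of sin^2 is 1/2)
def x_exact(t, eps): return math.exp(eps * (t / 2.0 - math.sin(2.0 * t) / 4.0))
def z_avg(t, eps):   return math.exp(eps * t / 2.0)

for eps in (0.1, 0.01):
    T = 1.0 / eps   # validity window 0 <= t <= L/eps
    err = max(abs(x_exact(k * T / 400, eps) - z_avg(k * T / 400, eps))
              for k in range(401))
    print(eps, err)
    assert err < eps   # the error stays O(eps) over the whole 1/eps time scale
```

Halving \epsilon roughly halves the maximum deviation, exactly as the first-order theorem predicts.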

The theory of averaging becomes more clear in classical action-angle coordinates, where handling multiple frequencies is more straightforward. Consider a 2m-dimensional system where J \in \mathbb{R}^m are first integrals or action variables (slow variables) and \theta \in \mathbb{T}^m are angles (fast variables)

\dot{J} = \epsilon f(J, \theta, \epsilon)    (2.27)

\dot{\theta} = \omega(J) + \epsilon g(J, \theta, \epsilon)    (2.28)

where f and g are periodic in \theta. The averaging approach takes this system to

\dot{\bar{J}} = \epsilon F(\bar{J})    (2.29)

\text{where } F(\bar{J}) = \frac{1}{(2\pi)^m}\int_{\mathbb{T}^m} f(\bar{J}, \theta, 0)\,d\theta    (2.30)

where \mathbb{T}^m is the m-dimensional torus. This approximate system will be accurate on time intervals up to 1/\epsilon.

    2.3.3 Multiple Frequency Averaging and Resonance

The presence of multiple frequencies complicates both the behavior and the analysis of any system. With respect to perturbation methods and averaging, multiple frequencies open up many challenges, some of which are currently open problems. One of the easiest ways to illustrate the complications to analysis is to consider a single degree of freedom oscillator (say a simple model of a bridge) forced by a single frequency source (people walking across the bridge, for example). A simple model of the bridge is second order, which has a frequency response with a resonant peak. The frequency response function for this system contains a numerator and denominator, and at this peak, when the forcing frequency is resonant with the natural frequency, the denominator goes to zero. Physically this means that if people walking over the bridge walk at this resonant frequency, the response of the bridge will grow without bound and the bridge will collapse (this is why the military instructs soldiers to break step when walking over bridges!).

In systems with more than two frequencies, or time-varying frequencies, the complexity due to small denominators is compounded. The resonance between frequencies will either alter or destroy adiabatic invariants. In the case where a pair or more of frequencies are commensurate in an m-frequency system, the trajectories of the unperturbed motion fill a torus with a dimension smaller than m, so averaging over \mathbb{T}^m in equation 2.30 will not capture the dynamics correctly. The previously dense set of angle trajectories falls onto a smaller set (sub-torus) in the resonant case. Another way of thinking about this is that in resonance, non-oscillatory terms appear, and averaging out these dynamics leads to erroneous results. The general idea of how to deal with resonance during averaging is to make a canonical change of variables onto a coordinate system that rotates along with the resonant frequency and then capture only the slow variation around this frame of reference.

In the case where only some of the frequencies are resonant, the method of partial averaging Arnold [1988] is employed. In partial averaging, the vector of angles is parsed into non-resonant and resonant angles (\theta = \{\phi, \psi\}). The averaged system is found by averaging over \phi, leaving a description with the semi-fast variables \psi which vary slowly in resonance zones and rapidly elsewhere. The averaged Hamiltonian becomes

\bar{H}(J, \psi) = \frac{1}{(2\pi)^P}\int_0^{2\pi}\!\!\cdots\int_0^{2\pi} H(J, \phi, \psi, 0)\,d\phi_1 \ldots d\phi_P    (2.31)

where \phi is the P-dimensional vector of non-resonant angles. In many actual physical systems, since the frequency and amplitude of oscillation are interdependent, the parsing into resonant and non-resonant angles is not a static process. That is, due to the varying frequencies, passage through resonance may occur and the relative dimensions of the resonant and non-resonant sets vary in time. In this situation, the parsing procedure as described above is performed along a trajectory. Another advanced topic in averaging is performing averaging sequentially to break down a system by parts. In this approach one would average over the fast variables and then start again, considering the previously slow variables as fast variables, and average once more. This is called secondary averaging, and it results in a hierarchy of adiabatic invariants.

    2.3.4 Averaging Performed in this Thesis

We will use averaging extensively in the work that is presented here. Our high dimensional system is composed of strong linear dynamics with an added nonlinear perturbation. The strong linear dynamics contain many frequencies which are not resonant. However, when the nonlinearity is turned on by passing through certain portions of state space, these frequencies change in time and resonances occur. We find that formal averaging leads to useful results, although we do not use this technique to attempt to fully capture the dynamics. We perform partial averaging over regions outside of resonance and find that the reduced order dynamical system resulting from this procedure illuminates key features of the transition behaviors of this system. Further details about this procedure and what we mean by partial averaging will be presented in context when the applications are discussed.

    2.4 Geometric Numerical Integration

Although both Runge-Kutta and linear multistep methods are well developed in the field of numerical integration, physical problems that deal with long term behavior (i.e. molecular dynamics, astronomical dynamics, etc.) have motivated the development of new types of integrators that preserve structure in the dynamics, which makes them better candidates for these types of problems. Geometric Numerical Integration (GNI) is a class of numerical integration techniques that achieves this by explicitly considering the geometry of the dynamics when performing the time discretization. This is important for Hamiltonian systems, which conserve certain quantities such as energy, angular momentum, state space volume, and other symmetries. To preserve the Hamiltonian structure, the discretization in time, which is simply a mapping, needs to be symplectic. That is, the mapping \Phi_t : (q(0), p(0)) \to (q(t), p(t)) must be a canonical transformation (see Section 2.2). In this section we present a few numerical methods which adhere to this constraint and numerically contrast them to illustrate the performance of GNI methods on a relatively simple model.


    2.4.1 Symplectic Integrators

Since volume preservation is a characteristic of Hamiltonian systems, it was natural to find numerical methods that share this property. There are many approaches to developing these methods, including using a generating function (as in Section 2.2.1), Runge-Kutta methods, and variational approaches (which are good for PDEs); see Sanz-Serna and Calvo [1994], Hairer et al. [2002], Scovel [1989], and Channel and Neri [1996]. Below we present three methods derived from these approaches. We specifically focus on a suite of methods that are available in a Matlab package written by Ernst Hairer (www.unige.ch/~hairer/) because they are easy to implement (although slow compared to the C code that was written to obtain most of the results in this thesis).

Symplectic Euler Method: The most fundamental symplectic integrator is the Euler method, and a first-order scheme can be represented as

p_{n+1} = p_n - h H_q(p_n, q_{n+1}), \qquad q_{n+1} = q_n + h H_p(p_n, q_{n+1})

where H_q is the derivative of the Hamiltonian with respect to q (and H_p with respect to p). This method is implicit in general and explicit if the Hamiltonian is separable. Since this method is rather simple we can illustrate its conservation properties using a simple pendulum model. The Hamiltonian and equations of motion for the pendulum are:

H(p, q) = \frac{1}{2}p^2 - \cos q, \qquad \frac{dq}{dt} = p, \qquad \frac{dp}{dt} = -\sin q.    (2.32)

First we define the discrete flow \Phi_h : (p_n, q_n) \to (p_{n+1}, q_{n+1}). Using an explicit Euler method on this set of equations we have:

q_{k+1} = q_k + h p_k, \qquad p_{k+1} = p_k - h\sin q_k    (2.33)

where k is the step number, h is the stepsize, and the right hand sides in system 2.32 have been used for the function evaluations. To check area preservation, we calculate the determinant:

\det\frac{\partial \Phi_h}{\partial(q_k, p_k)} = \begin{vmatrix} 1 & h \\ -h\cos q_k & 1 \end{vmatrix} = 1 + h^2\cos q_k \ne 1.    (2.34)

Since the determinant is not equal to one, area preservation is lost. If we now move to the symplectic Euler method, we have both the discretized equations and determinant as:

q_{k+1} = q_k + h p_k, \qquad p_{k+1} = p_k - h\sin q_{k+1}, \qquad \det\frac{\partial \Phi_h}{\partial(q_k, p_k)} = 1.    (2.35)

Since the determinant is equal to one, we have area preservation, which illustrates that this method is symplectic.

Partitioned Runge-Kutta (RK) Methods: Similar to the method above, an appropriately designed Runge-Kutta method can be symplectic as well. That is, although an explicit Runge-Kutta method cannot be symplectic, a properly implemented implicit Runge-Kutta method may be. The partitioned Runge-Kutta method treats the generalized coordinates differently than the momenta. The method and its coefficients for the discretization are:

k_i = f\left(q_n + h\sum_{j=1}^s a_{ij}k_j,\; p_n + h\sum_{j=1}^s \hat{a}_{ij}l_j\right)    (2.36)

l_i = g\left(q_n + h\sum_{j=1}^s a_{ij}k_j,\; p_n + h\sum_{j=1}^s \hat{a}_{ij}l_j\right)    (2.37)

q_{n+1} = q_n + h\sum_{i=1}^s b_i k_i    (2.38)

p_{n+1} = p_n + h\sum_{i=1}^s \hat{b}_i l_i    (2.39)

where s is the number of stages and the a's and b's are coefficients that define the method. In the partitioning process, the system is solved by using two methods simultaneously. Each method has a different coefficient list, (a_{ij}, b_i) for the first method and (\hat{a}_{ij}, \hat{b}_i) for the second. The necessary and sufficient conditions for a partitioned RK method to be symplectic are

b_i\hat{a}_{ij} + \hat{b}_j a_{ji} = b_i\hat{b}_j, \quad \text{for } i, j = 1, \ldots, s    (2.40)

b_i = \hat{b}_i, \quad \text{for } i = 1, \ldots, s.    (2.41)

Only (2.40) needs to be satisfied if the Hamiltonian is separable. Specific coefficients for methods with a different number of stages can be obtained in the literature.
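As a concrete check of conditions (2.40)-(2.41), the sketch below (Python, ours) tests the two-stage coefficient pair of the well-known Stormer-Verlet method, with values taken from the standard geometric-integration literature:

```python
# Stormer-Verlet written as a partitioned RK pair (s = 2 stages):
a   = [[0.0, 0.0], [0.5, 0.5]];  b   = [0.5, 0.5]   # method applied to q
a_h = [[0.5, 0.0], [0.5, 0.0]];  b_h = [0.5, 0.5]   # "hatted" method applied to p

s = 2
for i in range(s):
    assert b[i] == b_h[i]                            # condition (2.41)
    for j in range(s):
        lhs = b[i] * a_h[i][j] + b_h[j] * a[j][i]    # condition (2.40)
        assert lhs == b[i] * b_h[j]
print("tableau satisfies the symplecticity conditions")
```

Every (i, j) combination reduces to 1/4 = 1/4, so the pair is symplectic; an arbitrary explicit tableau would fail the same loop.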

Stormer-Verlet Method (2nd order): The Stormer-Verlet method is a symmetric and symplectic 2nd order method. It is a combination of a half-step of a partitioned Euler method (explicit in q, implicit in p) and a half-step of its adjoint (explicit in p, implicit in q). This is a popular integration method, especially in the molecular dynamics community, while its drawback is that it is only 2nd order. The equations for the method are

p_{n+1/2} = p_n - \frac{h}{2}H_q(p_{n+1/2}, q_n)

q_{n+1} = q_n + \frac{h}{2}\left[H_p(p_{n+1/2}, q_n) + H_p(p_{n+1/2}, q_{n+1})\right]

p_{n+1} = p_{n+1/2} - \frac{h}{2}H_q(p_{n+1/2}, q_{n+1}).    (2.42)

The Stormer-Verlet method is a particular case of a partitioned Runge-Kutta method, and the coefficients can be conveniently placed in table form; see Ascher and Petzold [1998] for a description of coefficient tables for these types of methods. The two tableaus of the pair are

\begin{array}{c|cc} 0 & 0 & 0 \\ 1 & 1/2 & 1/2 \\ \hline & 1/2 & 1/2 \end{array} \qquad\qquad \begin{array}{c|cc} 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ \hline & 1/2 & 1/2 \end{array}    (2.43)

Since these coefficients satisfy equations (2.40) and (2.41), the method is symplectic.

Partitioned Linear Multistep Methods: The linear multistep methods for geometric integration are extensions of the original methods by Adams and Bashforth. It has been shown that there is no partitioned linear multistep algorithm with an order greater than one which is also symplectic. However, it has also been shown that we can still obtain near conservation of energy by studying the so-called underlying one-step method of a partitioned linear multistep method. This says that the dynamics of the linear multistep method (LMM) can be approximated by those of a one-step method in certain cases.

Composition Methods / Higher order methods: A composition of one-step methods can be used in series in order to increase the order of a method (of, say, the Verlet method for instance). That is, given a one-step method \phi_h, a composition of the methods becomes

\Phi_h = \phi_{h_1} \circ \phi_{h_2} \circ \cdots \circ \phi_{h_s}    (2.44)

where each \phi_{h_i} is a symplectic one-step method. Because each one-step method is symplectic, the composition of all the methods is symplectic as well. The challenge is to find parameters h_i so as to achieve a certain order. There are procedures to accomplish this up to order 8 Yoshida [1990]. In this method, for an s-stage integrator each stage is defined as

q_{i+1} = q_i + h c_i \frac{\partial T}{\partial p}(p_i)    (2.45)

p_{i+1} = p_i - h d_i \frac{\partial U}{\partial q}(q_{i+1})    (2.46)

where (q_{i=1}, p_{i=1}) = (q(t), p(t)) and the final stage yields (q(t+h), p(t+h)). The coefficients c_i, d_i may be found using algebraic relations. Up to 4th order these coefficients are found directly, while for higher orders they are obtained numerically. The 4th order Yoshida method was used (in C code) for most of the final results in this work and, for completeness, the coefficients for this method are below.

c_1 = c_4 = \frac{1}{2(2 - 2^{1/3})}, \qquad c_2 = c_3 = \frac{1 - 2^{1/3}}{2(2 - 2^{1/3})}    (2.47)

d_1 = d_3 = \frac{1}{2 - 2^{1/3}}, \qquad d_2 = -\frac{2^{1/3}}{2 - 2^{1/3}}, \qquad d_4 = 0.    (2.48)
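A short Python sketch (ours; we use the separable harmonic oscillator T = p^2/2, U = q^2/2 as the test problem) applies the stage loop of (2.45)-(2.46) with these coefficients and confirms 4th-order convergence over one period:

```python
import math

w = 2.0 ** (1.0 / 3.0)
c = [1.0 / (2.0 * (2.0 - w)), (1.0 - w) / (2.0 * (2.0 - w)),
     (1.0 - w) / (2.0 * (2.0 - w)), 1.0 / (2.0 * (2.0 - w))]
d = [1.0 / (2.0 - w), -w / (2.0 - w), 1.0 / (2.0 - w), 0.0]

def yoshida4_step(q, p, h):
    # One full step: drift with c_i (dT/dp = p), kick with d_i (dU/dq = q)
    for ci, di in zip(c, d):
        q = q + h * ci * p
        p = p - h * di * q
    return q, p

def error_after_one_period(n):
    q, p, h = 1.0, 0.0, 2.0 * math.pi / n
    for _ in range(n):
        q, p = yoshida4_step(q, p, h)
    return math.hypot(q - 1.0, p)   # exact solution returns to (1, 0)

e1, e2 = error_after_one_period(64), error_after_one_period(128)
assert e1 < 1e-2
assert e1 / e2 > 8.0   # halving h cuts the error ~2^4: the method is 4th order
print(e1, e2)
```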


    2.4.2 Conservation of Energy with Symplectic Integrators

A common misconception is that geometric integration methods conserve quantities exactly. This is not the case: as with any numerical scheme, quantities are only as exact as numerical precision. With this in mind, the invariants of Hamiltonian systems are conserved with geometric integration to within round-off error. The key difference between geometric and non-geometric integration methods is that this error builds in time with non-geometric approaches while oscillating about zero for GNI approaches. For an rth order integrator, the global error in the Hamiltonian is Hairer et al. [2002]

H(q_n, p_n) - H(q_0, p_0) = O(t h^r), \quad \text{General Method}    (2.49)

H(q_n, p_n) - H(q_0, p_0) = O(h^r), \quad \text{Symplectic Method.}    (2.50)

    This holds for extremely long times when the trajectories stay in a compact set. These conditions illustrate that for ageneric scheme, error in the Hamiltonian grows with time, while for a symplectic discretization, the error is boundedand small.
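This contrast is easy to reproduce (a Python sketch of ours; we use the harmonic oscillator H = (p^2 + q^2)/2, for which explicit Euler multiplies the energy by exactly 1 + h^2 per step, rather than the pendulum of Section 2.4.1):

```python
h, steps = 0.1, 1000
H = lambda q, p: 0.5 * (p * p + q * q)   # harmonic oscillator energy
H0 = H(1.0, 0.0)

# Explicit (non-geometric) Euler: energy grows by (1 + h^2) every step.
q, p = 1.0, 0.0
for _ in range(steps):
    q, p = q + h * p, p - h * q
H_explicit = H(q, p)

# Symplectic Euler: the p-update uses the already-updated q.
q, p = 1.0, 0.0
for _ in range(steps):
    q = q + h * p
    p = p - h * q
H_symplectic = H(q, p)

assert H_explicit / H0 > 100.0            # secular growth, as in Eq. (2.49)
assert abs(H_symplectic - H0) / H0 < 0.1  # bounded O(h) oscillation, Eq. (2.50)
print(H_explicit, H_symplectic)
```

The explicit-Euler energy grows without bound, while the symplectic variant oscillates within a narrow band about the initial energy for arbitrarily long runs.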

    2.4.3 Evaluation of Different Geometric Numerical Integration Methods

In this section we outline the performance of both traditional and geometric integration methods on a small test problem. We study six different methods: four algorithms from the GNI/Matlab package (downloadable at http://www.unige.ch/~hairer/), the standard Matlab RK45 routine, and a hand-coded GNI method (Yoshida [1990]) implemented in the C language. The GNI-Hairer package implements three different fixed-stepsize, high-order geometric integrators based on the Implicit Runge-Kutta Method (irk), the Partitioned Multistep Method (lmm), and a Composition Method (comp). These implementations are summarized below.

Partitioned Implicit Runge-Kutta Method (gni irk2) This is an implementation of Gauss methods, built on n-point Gaussian quadrature with the associated Legendre polynomials. It is an implicit method of order 2n which is symplectic and symmetric. The options for the method can be set to values G4, G8, and G12, indicating 4th-, 8th-, and 12th-order Gauss methods respectively; the default selection is G12. The parameter MaxIter specifies the maximum number of fixed-point iterations performed at every integration step to solve the nonlinear system of equations; the default value is 50.

Partitioned Multistep Method (gni lmm2) The formula for partitioned multistep methods for second-order differential equations is

Σ_{j=0}^{k} Aj q_{n+j} = h² Σ_{j=0}^{k} Bj g(q_{n+j}),

which is similar to the standard linear multistep formulation. To obtain the coefficients that uniquely determine the method, the generating polynomials ρ and σ are solved for (see Ascher and Petzold [1998]). The gni irk2 algorithm with Method set to G12 is used to provide starting approximations. This method is of order 8; as a multistep method it cannot be symplectic, but it is symmetric and nearly symplectic (conserving energy). The Method selector can be set to values 801, 802, and 803, specifying different sets of coefficients, all of them yielding 8th-order methods.

Composition Method (gni comp) Given a basic one-step method Φ_h and step-size multipliers γ1, ..., γs, the composition method is the concatenation Ψ_h = Φ_{γs h} ∘ ... ∘ Φ_{γ2 h} ∘ Φ_{γ1 h}. The composition method is symplectic if the basic method is symplectic, and it is symmetric if the basic method is symmetric. A composition method is characterized by the set of coefficients and the basic method itself; the default basic method for this implementation is the Störmer/Verlet method. Users may specify a Matlab file defining their own basic method through the selector PartialFlow. The Method selector for this implementation specifies different predefined sets of coefficients and can be set to the values 21, 43, 45, 67, 69, 815, 817, 1033. The first part of the value specifies the order (i.e., these methods have order 2, 4, 6, 8, and 10 respectively) and the second number specifies the number of coefficients, or compositions. Using the option 21 is the special case that gives the basic one-step Verlet method itself.
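The coefficients (2.47)-(2.48) can be turned into a fourth-order integrator directly: each step alternates position "drifts" weighted by the c_i with momentum "kicks" weighted by the d_i (the trailing d4 = 0 kick is a no-op). A minimal Python sketch, illustrative only (the thesis implementation is in C):

```python
CBRT2 = 2.0 ** (1.0 / 3.0)
W = 2.0 - CBRT2
# Coefficients from (2.47)-(2.48): c1 = c4, c2 = c3, d1 = d3, d4 = 0
C = (1.0 / (2.0 * W), (1.0 - CBRT2) / (2.0 * W),
     (1.0 - CBRT2) / (2.0 * W), 1.0 / (2.0 * W))
D = (1.0 / W, -CBRT2 / W, 1.0 / W, 0.0)

def yoshida4_step(q, p, h, force):
    """One 4th-order Yoshida step for H = p^2/2 + V(q), with force = -V'(q)."""
    for c, d in zip(C, D):
        q += c * h * p          # drift
        p += d * h * force(q)   # kick (no-op for d4 = 0)
    return q, p
```

On the harmonic oscillator (force(q) = −q, exact solution q(t) = cos t), halving the step size reduces the global error by roughly 2⁴ = 16, confirming fourth-order behavior.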

    Test Problem The Kepler problem is one of a few standard test problems for geometric integrators (Hairer andothers use it to profile integrator performance). It is a two-body problem with Hamiltonian:

H = (1/2)(p1² + p2²) − 1/√(q1² + q2²)    (2.51)

    This results in the equations of motion:


q̇1 = p1,    q̇2 = p2,    ṗ1 = −q1/(q1² + q2²)^(3/2),    ṗ2 = −q2/(q1² + q2²)^(3/2)    (2.52)

We will perform initial value simulations with initial conditions p1(0) = 0, p2(0) = √((1 + e)(1 − e)^(−1)), q1(0) = 1 − e, q2(0) = 0, with the eccentricity parameter e = 0.6, for 5000 time steps. These conditions are standard and are also used by Hairer. Two different test cases were performed: Test A has a stepsize of 0.01 for all fixed-step solvers (all of the GNI methods) and relative/absolute tolerances of 1.0 × 10^(−5) / 1.0 × 10^(−8) for the RK45 method, while Test B has a stepsize of 0.005 and tolerances of 1.0 × 10^(−6) / 1.0 × 10^(−9).

The error in the conserved energy is presented in Figure 2.2 for both test cases. As we can see, all of the GNI methods have error in their energy estimates but no secular terms; the RK45 method, on the other hand, has an error in energy that grows with time.
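The bounded-error behavior is easy to reproduce. The sketch below (Python for illustration; the initial conditions and stepsize follow Test A in the text, while the choice of plain Störmer/Verlet as integrator is ours) tracks the worst-case energy error over the run:

```python
import math

def force(q1, q2):
    """Kepler force from (2.52): -q / (q1^2 + q2^2)^(3/2)."""
    r3 = (q1 * q1 + q2 * q2) ** 1.5
    return -q1 / r3, -q2 / r3

def energy(q1, q2, p1, p2):
    """Hamiltonian (2.51)."""
    return 0.5 * (p1 * p1 + p2 * p2) - 1.0 / math.sqrt(q1 * q1 + q2 * q2)

e, h = 0.6, 0.01
q1, q2 = 1.0 - e, 0.0
p1, p2 = 0.0, math.sqrt((1.0 + e) / (1.0 - e))
E0 = energy(q1, q2, p1, p2)   # equals -1/2 for this orbit

worst = 0.0
f1, f2 = force(q1, q2)
for _ in range(5000):
    # Stormer/Verlet: half kick, drift, half kick
    p1 += 0.5 * h * f1; p2 += 0.5 * h * f2
    q1 += h * p1;       q2 += h * p2
    f1, f2 = force(q1, q2)
    p1 += 0.5 * h * f1; p2 += 0.5 * h * f2
    worst = max(worst, abs(energy(q1, q2, p1, p2) - E0))
```

The energy error stays small and bounded over the whole run, with no secular growth, consistent with the symplectic curves in Figure 2.2.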

[Figure: error in conserved energy vs. time (log scale, 10^(−18) to 10^(−2), over 5000 time units) for the RK, IRK, LMM, Comp, Verlet, and Yoshida methods, for both test cases.]

Fig. 2.2. Energy error for different GNI methods (test A & B)

[Figure: state-space (q1, q2) orbits for the IRK, LMM, COMP, Verlet, RK45, and Yoshida methods, for both test cases.]

Fig. 2.3. State space for different GNI methods (test A & B)


Below is a rough comparison of CPU times for each of the experiments. It should be noted that the Yoshida method was implemented in C code, which biases the comparison slightly because C is much more efficient for calculations like this. Also note that the RK45 method is a variable-step method, which increases its CPU efficiency.

Table 2.1. CPU time for entire simulation using different GNI methods

         IRK    LMM    Comp   Verlet   RK45   Yoshida
Case A   338    199    822    206      18     14.2
Case B   970    1919   700    744      24     26

    2.4.4 Methods used in this Study

The tests above are just a brief introduction to the performance of numerical integrators on Hamiltonian problems. A more in-depth study can be found in duChene et al. [2006], where these algorithms were studied on the Kepler problem as well as a larger system (with Morse oscillators). In that study not only CPU time but also function evaluations vs. error was investigated. It was found that, for the precision we desired, none of the standard Matlab integrators were effective. Of the GNI-Hairer integrators, the composition method was the most useful. However, because of these inefficiencies, for most of the numerical work here we use C code with an implementation of the Yoshida method mentioned above (which is itself a composition method).

    2.5 Molecular Simulation

Most of the numerical work that has been performed on systems of N masses comes from the molecular dynamics field, and because of this we briefly discuss the basics of molecular simulation here. One of the questions that molecular modeling often seeks to answer is the identification of stable conformations of long chains of molecules. To accomplish this, force fields and system parameters are chosen and the mechanics of the system are optimized in some way to find a stable equilibrium/energy minimum. There are many available software tools for this type of analysis, listed in Ramachandran et al. [2008], Schlick [2002], and many other references. However, with the advance of highly efficient computational engines, the idea of tracking trajectories at the atomic level is becoming more of a reality. There are also free numerical packages for this purpose available on the web. One of the functions such software needs to provide is ensemble simulation (i.e., constant pressure, temperature, or number of molecules, for instance). There are a few approaches to accomplish this, one of which (thermostatting) gave us poor results. To help avoid future pitfalls in simulation, we discuss this approach and its alternative below.

2.5.1 Nosé-Hoover Thermostats

The Nosé thermostat is a structural form of a molecular dynamics system model which is derived to ensure that ensemble constraints are met during simulation. To generate heat (or nonzero temperature) in a typical molecular dynamics simulation, a positive damping constant is introduced, either in the Langevin setting or as a general drag term. In the Nosé formulation, however, the friction constant is not pinned to be positive or negative; it varies in time with a time average of zero at equilibrium. As originally derived, the friction term becomes a differential control variable in the equations of motion (Hoover [1986]). In this way a proportional control scheme is implemented, where the damping is chosen to drive the error between the actual temperature (or even pressure) and a setpoint to zero. It was Nosé who introduced the idea of integral control to accomplish ensemble modeling (i.e., constant EVN, or VTP, etc.; see Nosé [1984a] and Nosé [1984b]). In this case the damping is chosen proportional to the time integral of the error. Fortunately, this approach provides an equation system which is time reversible (unlike the other prior control approaches) and can be connected to Gibbs statistics easily.

In general, the Nosé approach to constant-temperature molecular simulation adds an additional degree of freedom to the system (an oscillator). The natural frequency of this oscillator is tuned such that it behaves as a limit cycle, and this oscillatory response excites the N masses being simulated to generate the kinetic energy needed to


achieve the correct temperature distribution. Hoover is credited with associating Nosé's formulation with Gauss's principle of least constraint and with rescaling time, both of which improved the applicability of the method.
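In its simplest single-thermostat form, the Hoover equations for one particle read q̇ = p, ṗ = F(q) − ξp, ξ̇ = (p² − kT)/Q, where ξ is the time-varying friction and Q the thermostat "mass"; the ξ̇ equation is exactly the integral-control action described above. A Python sketch for a harmonic particle (the parameter values and function names here are illustrative choices, not from the thesis):

```python
def nose_hoover_rhs(q, p, xi, kT=1.0, Q=1.0):
    """Nose-Hoover equations for a harmonic particle (F = -q).

    xi acts as a time-varying friction; its dynamics integrate the
    'temperature error' p^2 - kT, driving the average of p^2 toward kT.
    """
    return p, -q - xi * p, (p * p - kT) / Q

def rk4_nh_step(state, dt=0.01, rhs=nose_hoover_rhs):
    """One classical RK4 step for the 3-state thermostatted system."""
    add = lambda s, k, a: tuple(si + a * ki for si, ki in zip(s, k))
    k1 = rhs(*state)
    k2 = rhs(*add(state, k1, 0.5 * dt))
    k3 = rhs(*add(state, k2, 0.5 * dt))
    k4 = rhs(*add(state, k3, dt))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 0.0, 0.0)       # (q, p, xi)
for _ in range(50_000):
    state = rk4_nh_step(state)
```

As discussed below, a single thermostat on a small system like this is not ergodic, which is precisely the motivation for thermostat chains.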

One of the basic assumptions in the thermostat formulation is that the dynamics are ergodic. This may be the case when the system of particles is very large, but for smaller systems the Nosé formulation will not give the correct equilibrium distributions. To overcome this, extensions from the single thermostat to a chain of thermostats were derived (Martyna et al. [1992]). In this case a chain of oscillators is designed, while only one of the oscillators directly forces the system of particles. With the addition of multiple oscillators in the thermostat, more frequencies are available for tuning and therefore the correct distribution may be achieved.

The Nosé-Hoover thermostat produces canonically distributed configurations, but the addition of the thermostatting variables breaks the Hamiltonian structure of the system. Recently, work has been performed to preserve the Hamiltonian structure in this process (Bond et al. [1999]). Although the development of thermostatting dynamics has come a long way, this approach is not well suited for our type of system. Even though methods exist for tuning the thermostat oscillators, even in the most advanced formulations (Leimkuhler and Sweet [2005], for example), the transitions that we study take the dynamics through large variations in the state space. It is hard to imagine that the thermostats will function the same when the system is at the bottom of a potential well as at the summit between two wells. In fact, we have found that these methods do not perform well numerically in these instances, and we instead used the more traditional method of Langevin simulation to perform the stochastic simulation.

    2.5.2 Langevin Simulation

A second approach to molecular simulation is simply to numerically integrate the Langevin equation (we discuss the Langevin equation at length in Chapter 3). We use this method because of the issues with the thermostat approach discussed above. There are many sophisticated ways to integrate the stochastic ODE; some of these methods address numerical efficiency, including how the random numbers are generated. In our case, however, the numerical experiments are not that expensive and we choose a very simple 4th-order integration scheme. We do take care to ensure that the random influence is scaled properly so that the fluctuation-dissipation theorem holds in our simulation.
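As a concrete (and deliberately simple) illustration, the sketch below integrates the Langevin equation for one particle in a double-well potential with an Euler-Maruyama step; the thesis uses a higher-order scheme, and the potential, parameters, and names here are our own. The important detail is the noise amplitude √(2γkT·dt), which enforces the fluctuation-dissipation balance between the drag −γp and the random kicks.

```python
import math
import random

def langevin_step(q, p, force, gamma, kT, dt, rnd):
    """Euler-Maruyama step of dq = p dt, dp = (F(q) - gamma*p) dt + sqrt(2*gamma*kT) dW."""
    kick = rnd.gauss(0.0, math.sqrt(2.0 * gamma * kT * dt))
    p = p + (force(q) - gamma * p) * dt + kick
    q = q + p * dt
    return q, p

# Double-well potential V(q) = (q^2 - 1)^2 / 4, so force = -dV/dq = -q(q^2 - 1)
force = lambda q: -q * (q * q - 1.0)

rnd = random.Random(0)
gamma, kT, dt = 0.5, 0.1, 1e-3
q, p = -1.0, 0.0                   # start at the bottom of the left well
p2_sum, n_samples = 0.0, 0
for step in range(200_000):
    q, p = langevin_step(q, p, force, gamma, kT, dt, rnd)
    if step >= 100_000:            # discard transient, then sample
        p2_sum += p * p
        n_samples += 1
mean_p2 = p2_sum / n_samples       # equipartition: should approach kT
```

With the correct noise scaling, the long-time average of p² approaches kT; with an unscaled noise term the sampled "temperature" would depend on the step size.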

    2.6 Internal Resonance

A ubiquitous behavior in high-dimensional nonlinear systems is internal energy transfer. This includes transfer of energy between different entities (elements, masses, modes, etc.). In our array of oscillators, resonance permits energy transfer between individual oscillators or global modes. In oscillatory systems, energy transfer is most effective at resonance. For example, in a forced linear oscillator, maximum energy transfer occurs when the forcing frequency is tuned to the natural frequency of the unforced system. As the two frequencies become mistuned, the energy transfer becomes periodic with a period dependent on the separation in frequencies (beating). If the same system is autonomous (unforced), internal resonance occurs when internal frequencies are resonant (or commensurate), and energy transfer is therefore built into the underlying function of the system.

In nonlinear systems, where frequencies change in different zones of the state space (with the amplitude of oscillation), the system may fall in and out of resonance as time evolves. Since frequency and amplitude are interdependent, once energy is transferred (and amplitudes change) frequencies that were once resonant may become non-resonant, which halts any further energy transfer and leaves what had been transferred in a sink. This process is termed energy pumping or funneling and is becoming a popular replacement for linear passive vibration control (see Larsen et al. [2004], Panagopoulos et al. [2007], Kerschen et al. [2007], and Vakakis et al. [2004]).

    2.6.1 Identification of Internal Resonance

Knowing or identifying that internal resonance may occur is an important part of system analysis. The resonance condition for a system with M frequencies {Ωi} is (see Arnold [1988]):

|(Ω, k)| < 1/(c|k|^ν)    (2.53)


where (Ω, k) = k0Ω0 + k1Ω1 + ... + k_{M−1}Ω_{M−1}, the ki are any integers, and c and ν are positive constants. When the integers ki are chosen appropriately, the quantity on the left-hand side of the inequality goes to zero when the frequencies are rationally commensurate (exact resonance), and the term on the right-hand side accounts for resonance in small regions where the frequencies are almost commensurate. Characteristics of the resonance zone, both the size of the region in state space and the time spent inside this region during evolution, are related to this value.
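Condition (2.53) suggests a simple numerical diagnostic: scan integer vectors k up to some size and find the smallest value of |(Ω, k)|. A Python sketch (brute force, adequate only for small M; the function name is our own):

```python
from itertools import product

def closest_resonance(omega, kmax):
    """Return (k, |<k, omega>|) minimizing the left side of (2.53)
    over nonzero integer vectors k with |k_i| <= kmax."""
    best_k, best_val = None, float("inf")
    for k in product(range(-kmax, kmax + 1), repeat=len(omega)):
        if not any(k):
            continue  # skip k = 0
        val = abs(sum(ki * wi for ki, wi in zip(k, omega)))
        if val < best_val:
            best_k, best_val = k, val
    return best_k, best_val
```

For commensurate frequencies such as (1, 2) the minimum is exactly zero (e.g. k = (2, −1)); for (1, √2) the best value with |ki| ≤ 3 is 3 − 2√2 ≈ 0.172, so no low-order resonance exists.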

    2.6.2 Resonance in our System

Identifying whether resonance occurs becomes challenging when studying nonlinear systems. In linear, fixed-frequency systems the condition (2.53) is evaluated once and the possibility of resonance is determined for all time. For nonlinear systems with changing frequencies, however, this inequality may be satisfied only at specific times in the evolution. That is, we will show that the ith frequency for our nonlinear system has the form ωi = Ωi + εfi(J, θ, ε), which varies with time. In our system the vector of the Ωi is not commensurate, and so resonance only occurs with nonzero ε. That is, ε is a tuning knob that promotes internal resonance, and thus internal energy transfer, which leads to efficient conformation change. This concept is important and will be explored with numerical simulation and averaging in the later chapters of this thesis.

    2.7 Dynamical Systems on Graphs

As we mentioned above, internal resonance opens pathways for energy transfer, but we have made no comment on the direction in which energy transfers. Clearly, the directional characteristics of energy transfer are important, as a subset of oscillators or modes in a system may be more sensitive or important than others. Graph-theoretic tools allow one to gain insight into this directional behavior.

The analysis of dynamical systems on graphs is useful for shortest-path analysis, numerical analysis and computation, and model reduction, and for gaining other useful insight into the underlying structure (see Dellnitz et al. [2006] and Varigonda et al. [2004]). With these methods, a seemingly disorganized high-dimensional system may be decomposed into a series of sequential functional components. One such method is the Horizontal-Vertical Decomposition (HVD) method, developed in Mezic [2004]. This method isolates modules, which may contain one or more highly coupled dynamic states, from others (the horizontal class of elements). These modules are then ordered in a way in which they unidirectionally interact with other modules (the vertical structure). Using the HVD method, one can easily observe how the flow of a process is affected by the removal of one key module. In this way, understanding these strongly coupled components is key to quantifying the robustness (weak links) of a large system.

Rearranging a system into its horizontal-vertical components was further automated in Lan and Mezic [2009]. That study focused predominantly on cellular networks, wherein series of modules or motifs dominate the dynamical behavior. Unfortunately, the dimension of such systems typically puts stability or input-output analysis out of reach because of the computational burden. Here the HVD structure is found and special attention is paid to the directionality of coupling between different modules. The system is parsed into forward-only modules which are essential to the function of the cellular network, as they are used as generators. On top of this is a series of feedbacks that regulate what is generated to the desired level of output. A schematic representing a dynamical system before and after Horizontal-Vertical Decomposition is presented in Figure 2.4 (note that this is a generic example and that the dotted arrows were deemed weak interconnections and thus ignored in the simplified schematic).

In order to analyze the topology of systems using these methods, an adjacency matrix is calculated to denote interactions. This matrix contains nonzero elements where edges appear in the interconnection graph (when components are coupled) and zeros where no interaction is present. This interconnection topology, and the matrix which holds the same information, offers insight into system stability, periodic orbits, uniqueness of equilibria, and other system-wide concepts.
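A sketch of how the adjacency matrix is used: take adj[i][j] nonzero when state j appears in the equation for state i, collapse strongly connected components (candidate "horizontal" modules), and list them in topological ("vertical") order. This is only the connectivity step, not the full HVD algorithm of Mezic [2004]; Kosaraju's algorithm and the example system are our own illustrative choices.

```python
def modules_in_order(adj):
    """Strongly connected components of the graph j -> i (adj[i][j] != 0),
    returned in topological order (drivers before driven), via Kosaraju."""
    n = len(adj)
    fwd = [[i for i in range(n) if adj[i][j]] for j in range(n)]  # j drives i
    rev = [[j for j in range(n) if adj[i][j]] for i in range(n)]  # inputs of i
    order, seen = [], [False] * n
    def dfs1(u):
        seen[u] = True
        for v in fwd[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)                 # record finishing order
    for u in range(n):
        if not seen[u]:
            dfs1(u)
    comp = [-1] * n
    def dfs2(u, c):
        comp[u] = c
        for v in rev[u]:
            if comp[v] < 0:
                dfs2(v, c)
    c = 0
    for u in reversed(order):           # decreasing finish time
        if comp[u] < 0:
            dfs2(u, c)
            c += 1
    modules = [[] for _ in range(c)]
    for u, cu in enumerate(comp):
        modules[cu].append(u)
    return modules

# Example: x0 <-> x1 strongly coupled, driving x2, which drives x3
adj = [[0, 1, 0, 0],
       [1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0]]
```

Here the coupled pair {x0, x1} forms one horizontal module, followed vertically by {x2} and then {x3}.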

In the work that follows we provide no new development of graph tools for dynamical systems; what we report is their use on coupled oscillator systems going through transition processes. In Varigonda et al. [2004] it was found that the HVD structure can be discovered in the dynamics by appropriate re-ordering of the state variables. In this work, on the other hand, we find that the HVD structure dominates with no modification of the structure of the system. These results are found by linearizing the action-angle dynamics along a trajectory and using the action dynamics as the interacting modules.


    Fig. 2.4. Example of the graphical structure of a dynamical system before (left) and after (right) simplification and horizontal-vertical decomposition (HVD)


    3

    Stochastic Dynamics, the Langevin Equation, and Escape Rate Theory

    3.1 Introduction

This chapter outlines concepts in the dynamical behavior of particles under stochastic environmental influence. Although most of the theory was either founded on or is otherwise associated with Brownian motion of particles, most if not all of it can be extended to the stochastic behavior of general inertial masses (i.e., engineered systems). The ultimate goal of this chapter is to introduce tools for the analysis of transitions in large systems of particles (escape rate theory). For completeness, the rest of this chapter presents the supporting topics for this theory, including the basics of conservative particle mechanics and equilibrium and non-equilibrium statistical mechanics.

There are many good textbooks which outline the basics (and details) of stochastic mechanics. For the equilibrium case, Dill and Bromberg [2003] provides an excellent introduction, and McQuarrie [2000] and Chandler [1987] offer more detail. For non-equilibrium statistical mechanics, a very brief but readable introduction is Balakrishnan [2008], and a brief but more in-depth review can be found in Zwanzig [2001]. The book by Van Kampen [2007] is a classic and contains a wealth of information. For the Fokker-Planck equation in particular, Risken [1984] is an essential resource.

    3.2 Conservative Dynamics

Stochastic mechanics is the theory of the time evolution of inertial objects under random influence. It is built on aspects of deterministic mechanics (specifically the Liouville equation). This will be useful later as a comparison for Langevin dynamics (the Liouville equation is to Hamiltonian dynamics as the Fokker-Planck equation is to Langevin dynamics). Since stochastic mechanics is derived mostly in the realm of particle dynamics, the dimension of the system is very high (O(10^23)), and to get any traction we typically study the propagation of densities in state space rather than the individual trajectories of particles. In the special case of conservative systems, this evolution is such that the principle of density and volume conservation holds. The density ρ evolves such that

ρ(q(0), p(0), 0) = ρ(q(t), p(t), t)    (3.1)

where q is a vector of generalized coordinates and p the momenta. That is, for a conservative system, as the densities evolve there is no tendency to crowd into any portion of state space. Similarly, volume conservation is such that

    dq(0)dp(0) = dq(t)dp(t). (3.2)
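Property (3.2) can be checked numerically for any Hamiltonian map: the Jacobian determinant of the flow over one step must equal one. The sketch below does this for a pendulum advanced by one Störmer/Verlet step (a symplectic, hence volume-preserving, map); the example system and step size are our own illustrative choices.

```python
import math

def pendulum_verlet_step(q, p, h=0.1):
    """One Stormer/Verlet step for the pendulum H = p^2/2 - cos(q)."""
    p -= 0.5 * h * math.sin(q)
    q += h * p
    p -= 0.5 * h * math.sin(q)
    return q, p

def jacobian_det(q, p, eps=1e-6):
    """Central-difference Jacobian determinant of the one-step map;
    it equals 1 for a volume-preserving map, cf. (3.2)."""
    qa, pa = pendulum_verlet_step(q + eps, p)
    qb, pb = pendulum_verlet_step(q - eps, p)
    qc, pc = pendulum_verlet_step(q, p + eps)
    qd, pd = pendulum_verlet_step(q, p - eps)
    dq_dq = (qa - qb) / (2 * eps); dp_dq = (pa - pb) / (2 * eps)
    dq_dp = (qc - qd) / (2 * eps); dp_dp = (pc - pd) / (2 * eps)
    return dq_dq * dp_dp - dq_dp * dp_dq
```

Up to finite-difference error, the determinant is 1 at every phase-space point, i.e. the map neither expands nor contracts volume.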

Furthermore, probability is conserved (a particle must be somewhere at any time). Letting Γ = (q, p) denote the entire state space,

∫ dΓ ρ(Γ, t) = 1.    (3.3)

When an invariant like this holds, other conservation laws may be derived. One such law is the continuity equation; to obtain it, we study the number of particles within a given volume V. The number of points n within this volume in state space, and its time derivative, can be written as


n = N ∫_V dΓ ρ(Γ, t)    (3.4)

dn/dt = N ∫_V dΓ ∂ρ/∂t    (3.5)

where N is a normalization factor. We can arrive at the same quantity by considering the flux through the surface of V and using Gauss's theorem to obtain

dn/dt = −N ∫_V dΓ ∇·(ρu)    (3.6)

where u is the 6N-dimensional flow vector (velocities and accelerations). Combining 3.5 and 3.6 (the volume V is arbitrary), we have the continuity equation

∂ρ/∂t + ∇·(ρu) = 0    (3.7)

where

∇·(ρu) = Σ_{j=1}^{3N} ( (∂ρ/∂qj) q̇j + (∂ρ/∂pj) ṗj ) + ρ Σ_{j=1}^{3N} ( ∂q̇j/∂qj + ∂ṗj/∂pj ).    (3.8)

In the context of Hamiltonian mechanics (ṗi = −∂H/∂qi, q̇i = ∂H/∂pi) the second term vanishes and equation 3.7 becomes

∂ρ/∂t + Σ_{j=1}^{3N} ( (∂H/∂pj)(∂ρ/∂qj) − (∂H/∂qj)(∂ρ/∂pj) ) = 0.    (3.9)

If we rewrite this equation in terms of the Poisson bracket

{ρ, H} = Σ_{j=1}^{3N} ( (∂H/∂pj)(∂ρ/∂qj) − (∂H/∂qj)(∂ρ/∂pj) )    (3.10)

We have the Liouville equation

∂ρ/∂t = −{ρ, H} = −Lρ    (3.11)

where L is the Liouville operator. This operator has many convenient properties, and 3.11 is one of the fundamental equations in statistical mechanics; the full solution in time can be written as

ρ(Γ, t) = e^(−Lt) ρ(Γ, 0).    (3.12)

From the Liouville operator we can derive the response function and susceptibility (non-equilibrium statistical mechanics), the analogs of the partition function in equilibrium statistical mechanics. The Liouville operator is also used with many other analysis tools in non-equilibrium statistical mechanics, including linear response theory.

    3.3 Equilibrium Statistical Mechanics and Boltzmann Statistics

The origins of equilibrium statistical mechanics lie in the theoretical explanation of heat at the molecular level and the random Brownian motion that creates it. In theoretical terms, Maxwell sought probability distributions for the random velocities of gases in a fixed volume at equilibrium. Boltzmann generalized Maxwell's results to systems with conservative force fields and total energy E() =