Jds 000001

Embed Size (px)

Citation preview

  • 7/31/2019 Jds 000001

    1/13

    Bei Gu

    Student Member ASME

    H. Harry AsadaFellow ASME

    dArbeloff Laboratory for Information Systems

    and Technology,

    Department of Mechanical Engineering,

    Massachusetts Institute of Technology,

    Co-Simulation of AlgebraicallyCoupled Dynamic SubsystemsWithout Disclosure of ProprietarySubsystem Models

    A method for simultaneously running a collection of dynamic simulators coupled byalgebraic boundary conditions is presented. Dynamic interactions between subsystems

    are simulated without disclosing proprietary information about the subsystem models, asall the computations are performed based on input-output numerical data of encapsulated

    subsystem simulators coded by independent groups. First, this paper describes a system ofinteracting subsystems with a causal conflict as a high-index, Differential-Algebraic

    Equation (DAE), and develops a systematic solution method using Discrete-Time SlidingMode control. Stability and convergence conditions as well as error bounds are ana lyzed

    by using nonlinear control theory. Second, the algorithm is modified such that the sub-system simulator does not have to disclose its internal model and state variables for

    solving the overall DAE. The new algorithm is developed based on the generalized Kirch-

    hoff Laws that allow us to represent algebraic boundary constraints as linear equations ofthe subsystems outputs interacting to each other. Third, a multi-rate algorithm is devel-

    oped for improving efficiency, accuracy, and convergence characteristics. Numerical ex-

    amples verify the major theoretical results and illustrate features of the proposedmethod. DOI: 10.1115/1.1648307

    1 Introduction

    The role of simulators is not only to facilitate engineering

    analysis and design but also to provide a powerful means with

    which engineers communicate to each other. Engineers now com-

    municate by exchanging simulators representing the whole behav-

    ior of the components and systems that they have developed 1.Two examples illustrate the new role and utility of simulators:

    In the automobile industry, for example, car makers request

    that suppliers provide simulators of supply parts, and evaluate

    them by connecting the supply part simulators to the engine simu-

    lator and body simulator of the automobile. In turn, carmakers

    provide the suppliers with simulators depicting the conditions of

    the automobile system so that the supplier can develop the right

    parts to meet the specifications. Todays manufacturer can com-

    municate with thousands of suppliers through a supply chain man-

    agement system over the Internet.

    In the air conditioner industry, former competitors are now

    forming alliances to use common components and integrate their

    products. Simulators representing detailed behavior of individual

    units are exchanged to streamline communications among engi-

    neers of alliance partners, thus allowing them to complete thor-

    ough engineering analysis and product development in a limited

    time.

    The role of simulation as a tool of engineering communication hasthe potential to grow as more vendors, suppliers, and alliance

    partners are integrated over the global network.

    Simulation technology is a vital communication tool in the era

    of alliances and partnerships as well as supply chain management.

    Undoubtedly, renewed features and functionality are now needed

    to keep up with this expected growth. A critical problem in ex-

    changing simulators is how to combine various simulators codedby different groups. For instance, automobile makers must inte-grate numerous simulators that have been developed and codedseparately by diverse suppliers. The software aspect of this prob-

    lem has been a focal point in the research community, and manysoftware environments that encapsulate individual simulators andmake them portable have been developed. CORBA, for example,has been used in the DOME Distributed, Object-oriented, Mod-eling Environment project and others 2.

    The modeling aspect of this problem is still unresolved: The

    problem of simulating dynamic interactions among physical sys-tems is more complicated due to the bi-directional nature of physi-

    cal interactions. One subsystem affecting another subsystem isalso influenced by the counteraction from the other. These inter-actions often create conflicting boundary conditions that constrain

    both sides of the interacting sub-simulators. Furthermore, theseconstraints cannot be resolved algebraically, especially in caseswhere nonlinear elements are involved. As a result, the total simu-lation model becomes a set of Differential-Algebraic Equations

    DAEs. Although DAE models are generally difficult to compute,several DAE solution codes, such as DASSL, RADAU5, etc.,have been developed 3,4. The code DASSL has widely beenused for solving DAEs 3. Object-oriented modeling languages,e.g., Dymola, Omola, Modelica, etc. have adopted these DAEsolvers to deal with large coupled dynamic systems 5 8.

    These DAE solvers and object-oriented modeling languages,however, do not meet the functional requirements for the type ofsimulation environment that would allow alliance partners andsuppliers to communicate by exchanging simulators:

    The simulators developed by suppliers and alliance partnersare often proprietary and hence the actual model and detailedstructure of the simulator cannot be disclosed to others. In the airconditioner industry, models of key components, e.g., compres-sors, are the most important intellectual properties. Car makersmust be able to test out the performance of supply parts in diverseconditions, and the suppliers must provide enough information toconvince the car maker, but the complete supply part models and

    Contributed by the Dynamic Systems, Measurement, and Control Division of T HE

    AMERICAN SOCIETY OF MECHANICAL ENGINEERS for publication in the ASME

    JOURNAL OF DYNAMIC SYSTEMS, MEASUREMENT, AND CONTROL. Manuscript

    received by the ASME Dynamic Systems and Control Division March 2002; final

    revision July 2003. Associate Editor: Y. Hurmazlu.

    Journal of Dynamic Systems, Measurement, and Control MARCH 2004, Vol. 126 1Copyright 2004 by ASME

  • 7/31/2019 Jds 000001

    2/13

    model structures cannot be disclosed in many situations. It istherefore essential that multiple simulators, although algebraicallycoupled and forming a DAE, can be co-run without disclosure ofthe individual models and structures. The current DAE solvers,however, need to know each subsystems dynamic equations inorder to set up the solver, and hence the model cannot be keptconfidential.

    The current DAE solvers do not apply to a collection of simu-lators independently coded. Simulators coded by different partiesmust be substantially changed or even completely recoded in or-der to use those DAE solvers. Therefore, the current DAE solvers

    are inappropriate for exchanging and combining existing sub-system simulators.

    In an attempt to allow different parties to communicate by ex-changing subsystem simulators without disclosing proprietary in-formation, this paper will present a new simulation environment,termed as Co-Simulation environment. Co-Simulation Environ-ment is a software environment for simultaneously running a col-lection of subsystem simulators without revealing the subsystemsdynamic model. Subsystem simulators may be dynamicallycoupled with each other through boundary conditions representedas algebraic constraints. Communications with individual sub-system simulators are limited to input-output inquiries. Each sub-system simulator is required to respond to specific input data, butit does not have to disclose how the response is computed. Eachsubsystem is treated as a complete black box, and the internal

    state and model are kept confidential to other subsystems and theCo-Simulation Environment.

    The objective of this paper is to develop a computational algo-rithm to enable Co-Simulation. To this goal, we will first obtain anew DAE solver based on discrete-time sliding mode control, andthen modify the algorithm so that the internal model is not neededfor solving the DAE. The problem formulation, assumptions, andbackground information that are necessary before deriving thecomputational algorithm are presented in the following section.

    2 Formulating Co-Simulation Problems

    2.1 Combining Multiple Simulators That Dynamically In-teract. Consider a nonlinear dynamic system consisting of twointeracting subsystems A and B. The subsystems are representedby simulators A and B, which are coupled as shown in Fig. 1.Each subsystem simulator receives input u and produces output ywhile updating its internal state x. The objective is to simulta-neously run the two simulators in order to examine their coupledbehavior. In some instances, the two subsystems interact in such away that the sole effect of B on A is manifested in that all or partof subsystem As input uA is explicitly determined by subsystemBs output yB and vice versa. In this case, the coupled dynamicscan be simply simulated by feeding yB to uA and yA to uB . Eachsub-simulator can be viewed as a black box, and the coupledsimulation can be performed without knowing the internal statevariables and the associated dynamic model of the subsystem.

    The situation, however, is different when the two subsystemshave a causal conflict in input-output relationship. Consider a re-frigeration cycle illustrated in Fig. 2. The system consists of acompressor, a condenser, two sets of a series combination of

    evaporator and expansion valve, and an accumulator, along withpipes connecting those components. Refrigerant is first pressur-ized at the compressor, liquidized in the condenser, and branchedout to the two indoor units installed in different rooms. The outlets

    of the evaporators are merged, and the refrigerant is collected atthe accumulator. Interactions occur at this merging point, wheremass flow rates from the two evaporators, uA , uB , sum to themass flow rate in the pipe reaching the accumulator, m:

    uAuBm (1)

    Let z be an independent variable, called a boundary variable, in-troduced to rewrite the above condition as:

    uAz , uBmz . (2)

    When the two evaporators are placed in adjacent rooms connected

    by a short pipe, there is no tangible pressure difference betweenthe two evaporators outlets, denoted yA and yB . Namely,

    yAyB (3)

    In describing the dynamics of each evaporator the outlet massflow rate appears as a subset of the inputs and the outlet pressureappears as a subset of the outputs, and their input-output relation-ship cannot be reversed 9. Therefore, a conflict occurs whencombining the two evaporators, as shown in the figure. Both sub-systems provide outputs that must be the same, while the inputsmust conform to Eq. 2.

    This type of problem is often encountered when combiningmultiple subsystems. Subsystem inputs and outputs must conformto certain algebraic constrains, hence the total coupled system isdescribed by DAEs. Two robot arms connected at the endpoints

    and a four-bar-linkage system are classical examples of DAE. Ingeneral, energetically interacting dynamic subsystems can be de-scribed by a collection of dynamic state equations of the indi-vidual subsystems and a collection of algebraic equations repre-senting boundary conditions constraining the subsystems:

    xf x,u,t (4)

    0g x,u,t (5)

    where xRn is the overall state vector comprising subsystem state

    variables, e.g., xA and xB , and uRm is the overall input vector

    comprising uA , uB , etc. It should be noted that input vector ucomprises two kinds of inputs; one is boundary variables associ-ated with interactions among the subsystems, and the other isexogenous inputs from disturbance sources or control actions gen-

    erated in the subsystems. When simulation is performed, the lattertype inputs are given as functions of state variables or prescribedtime functions. Therefore, the functions on the right-hand side ofEqs. 4 and 5 are functions of x, z, and t.

    More than two subsystems may interact with each other, andthe interactions may include both causally conflicting and non-conflicting interactions. Thus, Eqs. 4 and 5 represent a generalsystem with diverse interactive structures and boundary condi-tions. In the following sections, however, we will consider thecase that involves only two conflicting subsystems subject to onealgebraic constraint with a single boundary variable, z. The pair ofindoor heat exchangers described above is an example of suchinteracting subsystems. Extensions to more than two coupled sub-Fig. 1 Interacting dynamic simulators with no causal conflict

    Fig. 2 Refrigeration cycle: an example of causal conflict

    2 Vol. 126, MARCH 2004 Transactions of the ASME

  • 7/31/2019 Jds 000001

    3/13

    systems, and to a mixture of conflicting and non-conflicting sub-systems will be discussed after the main theoretical results areobtained, as their description requires more complex notation. Fi-nally, we assume that the interacting subsystems satisfy general-ized Kirchhoffs Laws. As will be shown later in Section 7, thealgebraic constraint of such physical subsystems is given as alinear equation in terms of the outputs produced by the individualsubsystems. Therefore, for two conflicting subsystems, A and B, itis given by:

    0b 0bAyAbByB (6)

    We will exploit this linear property of algebraic constraints indeveloping a co-simulation algorithm.

    We summarize the assumptions made as follows:

    1. Only one algebraic constraint and one boundary variable areinvolved in each pair of conflicting subsystems;

    2. The algebraic constraint is a linear function of the outputvariables, and is sufficiently differentiable;

    3. The DAE system given by Eq. 4 subject to Eq. 5 andwith a consistent set of initial conditions i.e., initial condi-tions that satisfy the algebraic constraint has a well-definedsolution for x and z;

    4. The DAE system has a finite index r explained below andthe index does not change in a region of interest D

    (x,t,z) xRn,tR,zR;

    5. Subsystems are stable, and the time rate of change of eachstate variable is bounded.

    With regard to the last assumption, we further assume that eachsubsystem simulator can be run stably during co-simulation.

    2.2 DAE Index and Sliding Mode Control. If the alge-braic constraint equation explicitly includes boundary variable z,and the constraint equation is solvable for z, it is a straightforwardproblem to satisfy the constraint. In general, however, the alge-braic constraint does not explicitly include z, as in the case of thetwo robot arms and the refrigeration cycle. The constraint equa-tion is then described as:

    0g x,t (7)

    To relate boundary variable z to state variables x, it is necessary to

    differentiate the algebraic constraint along the state equation 4 asmany times as necessary until the boundary variable z shows up.This differentiation process gives rise to the notion of DAE indexr. The minimum number of times that the constraint equation 5must be differentiated with respect to time in order to solve for zas a continuous function of t, x, and z is the index of the DAEs 4and 5 3.

    Assuming that the DAE has an index r, by definition of theDAE index, the following set of algebraic equations must be sat-isfied by any exact solution of the DAE:

    0g x,t

    0dg x,t

    dt(8)

    0dr1

    d tr1g x,t

    DAE systems with index 2 or higher, for which all the derivativeconstraints in Eq. 8 must be satisfied along with the originalalgebraic constraint, are referred to as high index DAEs. Althoughhigh-index DAE systems are difficult to solve in general, they canbe reduced to an index-one DAE by using a compound constraintequation comprising a weighted sum of the original constraint andits derivatives along the state Eq. 4:

    s x,z ,t dd t1

    r1

    g0 (9)

    where 0. This new constraint equation requires only one dif-ferentiation to solve for z . To our knowledge, this index reductionmethod was first found in 10. There are also other ways ofreducing DAE index 11.

    The above constraint equation can be viewed as a sliding mani-fold in nonlinear control theory. Moreover a control algorithm todrive the system to the constraint surface, i.e. the sliding manifold,can be imported from the theory of sliding mode control. The

    adoption of this theory provides a rigorous proof of convergenceand a systematic way of confining the state within a tunable dis-tance from the sliding manifold 9,12. Equation 9 represents acritically damped dynamic system with (r1) modes having thesame time constant . If the sliding manifold equation is identi-cally satisfied, all modes of the sliding manifold subspace decayto zero asymptotically with time constant . The derivatives of theoriginal algebraic constraint as well as the weighted sum of theconstraint and the derivatives will remain zero all the time if theoriginal algebraic constraint Eq. 7 is satisfied at the initial time.Thus the original high index algebraic constraint can be replacedby the reduced index algebraic constraint Eq. 9.

    The direct application of continuous sliding mode control to theDAE solver, however, incurs difficulties in dealing with numericalcomputation. Instability due to discretization, chattering, and stiffdynamics are major roadblocks for the Singularly Perturbed Slid-ing Manifold SPSM approach 9. Discrete formulation is appro-priate for addressing these issues 13. In this paper the problemwill be reformulated as a discrete-time sliding mode DTSM con-trol, and a stable, completely numerical solution method will bedeveloped. Furthermore, the algorithm will be extended such thatsubsystem simulators do not have to disclose their proprietaryinternal models.

    3 Discrete-Time Sliding Mode Control

    3.1 Discrete-Time DAE. In converting the continuous-timeexpressions into discrete-time, the independent variable kt iswritten as tk , and the dependent variables evaluated at time ktare written with a subscript k: For example, the state variablex(kt) is written as xk and the boundary variable z(ktt) is

    written as z k1 . Also, all functions evaluated at (xk ,tk ,z k) arewritten with subscript k. For example, the sliding variable s de-fined in Eq. 9 is written simply as s k , when evaluated at(xk ,tk ,z k), unless it is necessary to manifest x, t and z.

    Use of this notation yields the following discrete-time high-index DAE system;

    xk1x ktf xk ,z k ,tk (10)

    0g xk1 ,tk1 (11)

    where f is an effective derivative corresponding to any explicitnumerical integration method. If Eulers forward method is used,the effective derivative will be just f(x,z ,t). We will use f instead

    of f in the rest of the paper for simplicity. Note that the algebraicconstraint Eq. 11 must be satisfied at the new state xk1 ,tk1 as

    well as at the current state before transition. Note also that thealgebraic constraint does not explicitly depend on the boundaryvariable, z, since we are interested in high index problems. UsingEq. 9, we define a sliding manifold to replace the original alge-braic constraint Eq. 11. The algebraically coupled dynamic sys-tem with reduced index in the numerical simulation form is givenby:

    xk1x ktf xk ,z k ,tk (12)

    s k10 (13)

    Note that the sliding variable is defined by Eq. 9 in continuoustime but is evaluated at ( xk1 ,tk1 ,z k1) in the above expression.

    Journal of Dynamic Systems, Measurement, and Control MARCH 2004, Vol. 126 3

  • 7/31/2019 Jds 000001

    4/13

    For this discrete-time system, the dynamic system is meaningfulonly at each sampling point and the constraint equation needs tobe satisfied only at the sampling point as well.

    We want to design a discrete-time controller that enforces thereduced constraint equation, Eq. 13, by driving the boundaryvariable z:

    z k1z kk (14)

    where k is the control input to be designed based upon outputvariables y ks from the subsystem simulators that are connectedthrough the algebraic constraint. In the above expression, the

    boundary variable z k is treated as a state variable of the controlsystem that coordinates a pair of conflicting sub-simulators. Thiscoordinating system, called Boundary Condition Coordinator orBCC, converts the original DAE into an approximate ODE byreplacing incompatible boundary conditions among sub-simulators by compatible ones with tunable dynamics. This is atype of realization approach to solving DAEs.

    For a continuous-time system, discontinuous control is requiredto make the dynamic system exhibit a sliding mode, i.e. for thetrajectories to perpetually point to the sliding manifold, s0.However, discontinuous control is not required for a discrete-timedynamic system to exhibit a sliding mode 14,15. Our goal is todesign a continuous control law for the above discrete-time DAEsystem given by Eqs. 12 and 13 so that the system may staywithin a tolerable distance of the sliding manifold.

    3.2 Control Law. From the assumptions, the sliding vari-able s is sufficiently differentiable with respect to x, t, and zaround the point s0. Using Eulers Forward Method again, s k1can be expanded as follows:

    s k1s kJxk xk1xkJtktJz kz k1z kR k(15)

    where

    Jxks

    x xk ,tk ,z k, Jtk

    s

    t xk ,tk ,z k, Jz k

    s

    z xk ,tk ,z k

    (16)

    and R k is the second-order remnant term. If we let w

    xT,t,z T, then R k is given by:

    R kwkT w sw

    T

    wkwk

    wk (17)

    where 01, according to the Mean Value Theorem. Substitut-ing Eqs. 12 and 14 into Eq. 15 yields,

    s k1s kkJz kkR k (18)

    where

    kJxkf xk ,z k ,tktJtkt (19)

    Terms k and R k can be interpreted as external influences or dis-turbances applied to the control system regulating the sliding vari-able, s. Term Jz kk is where control k can be applied to influ-ence s k1 . Note that Jz k is non-zero and invertible since theexistence of a well defined index, r, is assumed and hence the

    (r1)st derivative of function g contains a non-zero term in z.The objective of our control is to drive the sliding variable s k1

    to zero and keep it null for all time steps thereafter. In Eq. 18, sets k1 to zero and solve for k the input that drives the next timesteps sliding variable to zero. Unfortunately, Eq. 18, includesthe term R k that is unknown in general. Therefore, we ignore the

    Rk term for the time being, and we consider the following nominalcontrol:

    O kJz k1 s kk (20)

    Figure 3 conceptually illustrates the possible trajectories of thesliding variable s within the extended state space (x,z), when

    driven by the above nominal control. The goal is to push thetrajectory towards the s0 curve, ultimately letting it slide on thes0 curve once the trajectory enters the vicinity of the curve. Inother words, the goal is for the system to exhibit a sliding mode;The transition of state x is to be confined to an invariant set in thevicinity of the s0 curve.

    Due to the unaccounted effect of the R k term, the trajectorydoes not exactly reach the curve s0. The resulting error is pro-portional to the magnitude of Rk . We intend to reduce this errorby devising a more suitable control law, thus guaranteeing that thetrajectory stays within an invariant set satisfying acceptable errorbounds. To make this happen, the following requirements must bemet:

    It must be guaranteed that the trajectory stably converges tothe invariant set. The nominal control obtained above is a type ofNewton-Raphson algorithm, which is often unstable for highlynonlinear systems. Typically, the control magnitude must be lim-ited to ensure stability.

    From Eq. 17, the remnant term is apparently a quadraticform in x, t, and z , and it tends to zero as t and zapproach zero:

    lim R k

    t

    0z0

    0 (21)

    Note that x goes to zero as t approaches zero, since the timerate of change of state variables, x/t, is assumed to bebounded, as stated in Section 2.1. To reduce R, time step t mustbe reduced and control input kz k1z kz k must bebounded as well.

    On the other hand, the controller must be able to generatecontrol input k that is large enough to overcome disturbancesacting on the system, thus keeping the system within the invariantset. The exogenous input k acting on the system tends to deviatethe sliding variable. Furthermore the uncertainty due to the unac-counted remnant R tends to deviate the sliding variable. We mustguarantee that the system is kept within the invariant set despitethese disturbances and uncertainty.

    To meet the first two requirements, we limit the control input kusing a saturation function:

    k0 sat O k0

    (22)where the magnitude of control input is limited to the thresholdvalue 0 . Note that the threshold value must be chosen to be largeenough to meet the third requirement. We can derive the conditionfor the threshold value from the worst-case scenario. Consideringthe largest disturbances and uncertainty within the region of inter-est D, we will examine the following condition for the thresholdvalue:

    Fig. 3 Conceptual profiles of trajectories using discrete-timesliding mode control

    4 Vol. 126, MARCH 2004 Transactions of the ASME

  • 7/31/2019 Jds 000001

    5/13

    0supD

    Jz1supD

    tR t,0 (23)

    where the is a small positive constant and Jz1, (t) andR(t,0) are assumed to be upper bounded. If there existthreshold 0 and sampling interval t along with bounds for

    Jz 1, (t) and R(t,0) satisfying the condition given byEq. 23, a convergence proof can be given for the control lawdescribed by Eqs. 20 and 22.

    3.3 Proof.

    a. The Invariant Set. We want to show that the closed-loopsystem with the above control law possesses an invariant set de-fined by:

    ssupD

    R (24)

    in the vicinity of the s0 curve. For s k belonging to this set,taking the absolute value of the nominal input given by Eq. 20and comparing it with the threshold 0 yield

    O ksupD

    Jz k1sup

    D

    Rkk0 (25)

    Since this means that the nominal input for a trajectory startingfrom the region satisfying Eq. 24 is bounded by 0 , the actualcontrol input k is given by the nominal input:

    k0sat Ok

    0 O k (26)This input brings the next-step sliding variable to:

    s k1s kkJz kJz k1 s kk R kR k (27)

    Therefore, the next-step sliding variable remains in s k1supDR; This shows that Eq. 24 defines an invariant set.Next, we will show that the invariant set is attractive.

    b. Proof of Convergence. To prove that the invariant setaround s0 is attractive, define a candidate Lyapunov function:

    V tk ,s kVks k (28)

    This function V is positive definite and decrescent, since thereexist positive constants ab such that a s kVkb s k.

    Consider the case that the sliding variable s k is outside theinvariant set, and the nominal input is larger than the threshold0 :

    O k0supD

    Jz1 supD

    tR t,0 (29)

    The control input k is given by 0sign( O k) in accordance withEqs. 20 and 22. With this input, the next step sliding variable ismoved to:

    s k1 s kk 1 0O kR k (30)The forward difference of the Lyapunov function candidate,VkVk1Vk , is therefore obtained as:

    Vks k1s k s kk 1

    0

    O k Rks k (31)The forward difference function Vk can then be bounded by:

    Vks kk 1 0O k R ks k

    s kk0

    Jz k1R ks k

    kR k0

    Jz k1

    (32)

    From Eq. 29, this implies:

    VkkR k0

    Jz k10 (33)

    This shows that the absolute value of the sliding variable keepsdecreasing as long as the nominal input is larger than the threshold0 .

    Starting with a finite initial value, the magnitude of the slidingvariable s k becomes smaller than supDR in finite steps for afinite . When this happens, the nominal input becomes:

    O kJz k1

    s kkJz k1

    supRk0 (34)Therefore the input k is the same as the nominal input O k , whichbrings the sliding variable to

    s k1s kkJz kJz k1 s kk R kR k (35)

    Namely, the sliding variable is within the invariant set: s k1supDR. When the trajectory starts from a point outside theinvariant set but the nominal input is smaller than the threshold,the nominal input is then given to the system without goingthrough saturation function and, as shown in Eq. 35, the invari-ant set is reached in a single step. This concludes the proof onconvergence of the trajectory into the invariant set.

    4 Co-Simulation With Minimum Information Disclo-sure

    The DTSM method described in the previous section assumesfull knowledge of the interacting subsystems. Subsystem simula-tors have to supply state equations and output equations in orderto build the DTSM-based Boundary Condition Coordinator BCCprior to the start of the simulation. This requires the vendors of thesubsystem components to disclose all the key elements of themodel, information that is often proprietary in nature. The goal ofthis section is to modify the DTSM algorithm so that Co-Simulation of coupled subsystems may be performed without dis-closure of sub-simulator models. Required information from indi-vidual sub-simulators should be limited to input-output numericaldata instead of symbolic or mathematical expressions of the entiremodel. In other words, the Boundary Condition Coordinator

    should request each sub-simulator to simply return output valuesto the inputs specified by the BCC.

    In the DTSM algorithm, the BCC needs to compute the follow-ing nominal control:

    O kJz k1 s kk (20)

    kJxkf xk ,z k ,tktJtkt (19)

    The above expressions include state variables, and calculatingthese expressions require explicit knowledge of the state equationsof the neighboring subsystems and the algebraic constraints rep-resenting their interaction. The objective of this section is to en-capsulate all the computations involving state variables and stateequations within the subsystem simulator, and to let the BCCperform coordination control based solely on outputs from the

    subsystem simulators, so that the BCC does not explicitly needthe internal model and the state variables of the subsystems. Thereare three terms involved in the equivalent control given by Eq.

    20: s k , Jz k1, and k . The following two methods allow us to

    replace these terms by the ones computed from the outputs andtheir derivatives alone.

    1 Computation of Sliding Variable sk , Jacobian Jz k , andDAE Index r. As shown in Eq. 8, the sliding variable com-prises the derivatives of the constraint equation g (x,t). However,as shown in Eq. 6, the algebraic constraint is a linear combina-tion of output variables of the subsystem simulators. Namely,

    Journal of Dynamic Systems, Measurement, and Control MARCH 2004, Vol. 126 5

  • 7/31/2019 Jds 000001

    6/13

    s dd t1

    r1

    gb 0bA ddt1 r1

    yA

    bB ddt1 r1

    yB (36)

    Therefore, the sliding variable s can be computed simply from theoutputs of the subsystem and their time derivatives to the ( r

    1) st order. The actual model of each subsystem is not needed forcomputing s.

    According to the definition, the Jacobian Jz is the partial de-rivative of the sliding function with respect to the boundary vari-able z. This is a seemingly complex function comprising manyderivative terms of the constraint equations. However, examiningthe sliding variable definition, Eq. 9, and the definition of the

    DAE index reveals that only the ( r1) st derivative g (r1) is afunction of the boundary input z. In other words, according to thedefinition of index, the boundary variable z appears for the first

    time in the ( r1) st derivative. Other components of the slidingfunction are only functions of state x and time t, and thus do notcontribute to the partial derivative with respect to z. Therefore, inorder to numerically evaluate the Jacobian, Jz, the subsystem

    simulators only need to supply the numerical values of y (r1)

    computed at two values of the boundary variable z:

    Jzr1 bA yA

    r1

    z

    yAr1

    z

    2

    bByB

    r1 zyBr1z

    2 (37)

    where yA and yB are the outputs of subsystems A and B, respec-tively. This approach may result in additional computation andcommunication for the subsystem simulator, but no additional in-formation about the subsystem need be disclosed, which satisfiesour requirements.

    Prior to the above computations the DAE index r must be de-

    termined and the output derivatives up to the ( r1) st order mustbe made available for co-simulator. DAE index r is determined inthe following manner for subsystems, A and B, coupled by a linear

    algebraic constraint, g. Let qA be the number of times that theoutput yA must be differentiated in order to relate it to the bound-ary variable z, while qB be that of subsystem B. This derivativenumber is analogous to the relative order of a nonlinear controlsystem. The DAE index r is given by the minimum of the tworelative orders: rmin(qA ,qB)1, because boundary variable zappears in the derivative of constraint function g, once the deriva-tive of either output, yA or yB , begins to contain boundary vari-able z in it. Taking one more derivative of the constraint functiong yields an expression that can be solved for z . Note that relativeorder qA and qB are properties of the individual subsystems andtherefore they do not vary although they are connected to differentsubsystems. The DAE index r, on the other hand, varies depend-ing on which subsystems are connected to each other. However,no matter which subsystem is connected to one subsystem with a

    relative order of q, the DAE index is no larger than q1. There-fore, the simulator of each subsystem needs to have only q deriva-tives of its output. In developing co-simulation environment, eachsubsystem simulator should have as many output derivatives as itsrelative order.

    2 Eliminating the Computation of the k Term. The term represents the influence of x and t upon sliding variable swhen they vary from time-step k to k1 while z is kept constant.Therefore,

    s xk1 ,tk1 ,z ks xk ,tk ,z kkhigher order terms(38)

    where s(xk1 ,tk1 ,z k) represents the sliding variable evaluated at(xk1 ,tk1 ,z k). Combining Eq. 38 with the original Taylor ex-pansion in Eq. 18 yields,

    s k1s xk1 ,tk1 ,z kJz kkRk (39)

    where Rk represents the second-order remnant given by:

    Rk1

    2

    2s

    z 2

    z kk

    k2

    (40)

    Use of Eq. 39 eliminates the need for computing k , since the

    following nominal control that brings s k1 to zero in Eq. 39does not contain k :

    O kJz k1s xk1 ,tk1 ,z k (41)

    Note that this nominal control comprises Jz k ands(xk1 ,tk1 ,z k), both of which can be computed based on thesubsystem simulator outputs and their relevant derivatives by us-ing the following procedure:

    A. Let the subsystem simulators compute xk1 for z k given byBCC; Eq. 12.

    B. Let the subsystem simulators compute outputs y k1(0 )

    ,

    y k1(1 ) , . . . , y k1

    (r2) , y k1(r1) and y (r1) (xk1 ,tk1 ,z k), and re-

    turn them to the BCC.C. Based on Step B let the BCC compute s(xk1 ,tk1 ,z k) by

    using Eq. 36 and Jz k by Eq. 37.To stabilize the BCC, we need to limit the nominal control byusing the following actual control with a saturation function:

    k0 satJz k1s xk1 ,tk1 ,z k/0 (42)

    This new control law requires a new threshold value to be deter-mined, and a thorough proof is needed once again to guaranteeconvergence and stability. We will show the proof after makingone more improvement, Multi-Rate computation, in the followingsection.

    5 Multi-Rate DTSM

    5.1 Error Dynamics. It has been shown in Section 3 thatthe DTSM controller can stably bring the sliding variable s to zerowithin an error bounded by supDR. As the sliding variable tendsto converge, the errors in the algebraic constraint, g(x,t), and itsrelevant derivatives tend to converge as well. The dynamic rela-tionship between the sliding variable, s(x,z ,t), and the constraintfunction, g (x,t), is governed by Eq. 9 and is illustrated by theblock diagram in Fig. 4. The sliding variable s is thus filtered by(r1) consecutive low-pass filters having ( r1) repeated polesat 1/ resulting in the error in constraint equation, g(x,t), asshown in the figure. It is clear from this block diagram that themagnitude of the output of each block does not exceed the mag-nitude of the input to that block propagated from the original inputs. Therefore the following proposition holds:

    Proposition If the sliding variable is bounded s supDR,the error in constraint g, also has the same upper bound:

    g

    supDR.

    Fig. 4 Dynamic relationship between sliding variable s andconstraint function g

    6 Vol. 126, MARCH 2004 Transactions of the ASME

  • 7/31/2019 Jds 000001

    7/13

    The smaller the parameter , the faster g(x,t) converges. Asmall is therefore desirable to some extent, but making toosmall may incur an ill-conditioned problem. Examine Jz , the sen-sitivity of s to z,

    Jzj

    bjr1

    y r1

    z(43)

    Remember that by definition of the index, r, the sensitivity Jz

    depends only on the ( r1)st order derivative of output y, and is

    proportional to r1. Therefore, as decreases, Jz decreases ( r1) times faster than . As a result, the nominal control O k andthe lower bound on threshold 0 , both of which include the re-ciprocal of Jz, tend to increase rapidly, creating a large errorbound, supDR.

    To alleviate this problem of diminishing sensitivity Jz leadingto increasing error bound supDR, the computation must be per-formed with a reduced time step t; otherwise the error boundincreases and the system may become unstable. Use of a smalltime step t is costly, especially when the dimension of x is large.Such a small time step, however, is not needed for the computa-tion of the individual subsystems. It is not efficient to reduce thestep size for the entire system merely for the stability and errorbound of the sliding controller. Although the smaller step size willimprove accuracy in general, the extra accuracy gained from thesubsystem simulators for computation of the state, x, will not besignificant. Thus it is of great interest to decouple the overall stepsize of the Co-Simulation from the stability requirement of thesliding mode controller. Namely, we can compute the BCC using

    a time step smaller than the integration step size of the subsystemsimulators. This leads to the algorithm of multi-rate simulation.

    5.2 The Multi-Rate Control Law. To implement the multi-rate simulation, we divide one step of the BCC computation into nsmall steps. Let the integration of the subsystem simulators be inthe time scale k, and the BCC computation in the time scale iwhich is n times faster than the time scale k.

    Figure 5 shows how the multi-rate computation proceeds in theextended state space. The horizontal axis represents the aggre-gated state variables and time, x and t, while the vertical axisrepresents boundary variable z. In the Single-Rate method, com-putation proceeds from ( xk ,tk ,z k) t o (xk1 ,tk1 ,z k1) in one

    cycle, where the sliding variable is kept within the invariant set atboth points in the x-t-z plane. In the Multi-Rate DTSM, the firststep is to shift to point (xk1 ,tk1 ,z k) following the procedure ofcomputing s(xk1 ,tk1 ,z k) described in Section 4.2. The secondstep is to move vertically towards s(xk1 ,tk1 ,z k1). This step isnow divided into n small steps z k1/n , z k2/n , . . . ,z kn/n , wheresubscript i/n means the i-th step of n fast rate computations.

    Modifying the previous control law, Eq. 41, in accordancewith this subdivision yields the new nominal control:

    O xk1 ,tk1 ,z ki/njz xk1 ,tk1 ,z ki/n1

    s xk1 ,tk1 ,z ki/n (44)

    and the control input with a saturation function:

    ki/n0 sat O xk1 ,tk1 ,z ki/n/0 (45)

    The threshold value 0 must be determined to guarantee conver-gence to the invariant set.

    In finding the new threshold, an important condition for provingconvergence is to guarantee:

    s xk1 ,tk1 ,z k1 s xk ,tk ,z k (46)

    In each small step i/n we can guarantee:

    s xk1 ,tk1 ,z ki1/ns xk1 ,tk1 ,z ki/n (47)

    in the same way as the previous proof for the Single-Rate DTSM.

    However, this is not sufficient to satisfy Eq. 46, since the slidingmode control in the fast time scale starts from s(xk1 ,tk1 ,z k)and not from s(xk ,tk ,z k), as illustrated in Fig. 5. Therefore, bydefining

    s xk1 ,tk1 ,z ks xk ,tk ,z k (48)

    we must satisfy

    s xk1 ,tk1 ,z k1 s xk1 ,tk1 ,z k (49)

    through n small step computations.To this end, consider the following threshold value:

    0supD

    Jz1 supD

    nsup

    D

    R02

    n (50)Comparison of Eq. 50 with Eq. 23 shows that the upper boundof k is now replaced by the term. In the previous threshold

    given by Eq. 23, both and R decrease as time step t getssmaller. Likewise, the term and R become smaller as the num-ber of small time steps, n, increases. In other words, increasing nbrings about the same effect as decreasing t. Therefore, updat-ing the discrete-time sliding control at a faster rate than the sub-system simulators may guarantee convergence to the sliding mani-fold given by Eq. 24. The multi-rate approach simply runs theBoundary Condition Coordinator at a faster rate without changingthe step size of other subsystem simulators. This is one of themajor advantages over the single-rate approach.

    The above argument of multi-rate computation is related to thatof stiff problems. A number of effective methods have been de-veloped for solving stiff problems in the DAE community 16.

    One critical difference between those existing methods and ourmulti-rate DTSM is that ours does not need the internal states andmodels of interacting subsystems, while the existing methodsneed explicit information about them.

    5.3 Convergence Conditions for the Multi-Rate DTSMReplacing the single rate control law in Section 3 by the one given

    in Eqs. 44 and 45, we can show that supDR(02) is an invari-

    ant set:

    ssupD

    R02. (51)

    Fig. 5 Multi-Rate DTSM

    Journal of Dynamic Systems, Measurement, and Control MARCH 2004, Vol. 126 7

  • 7/31/2019 Jds 000001

    8/13

    Convergence proof can be made in the same way as the single rateDTSM. When the nominal control is larger than the thresholdvalue:

    O xk1 ,tk1 ,z ki/n0 (52)

    the control input is:

    ki/n0Jz xk1 ,tk1 ,z ki/n1s xk1 ,tk1 ,z ki/n

    O xk1 ,tk1 ,z ki/n1 (53)

    The sliding variable s at step (xk1 ,tk1 ,z ki1/n) is then broughtto:

    s xk1 ,z k1 ,z ki1n s xk1 ,tk1 ,z ki/n 10O xk1 ,tk1 ,z ki/n

    1

    R z ki/n2 (54)

    Taking absolute values and considering the initial condition, wehave:

    s xk1 ,tk1 ,z ki1n s xk1 ,tk1 ,z k1/n 10O xk1 ,tk1 ,z ki/n

    1

    R

    z ki/n2

    s xk1 ,tk1 ,z ki/n0

    s xk1 ,tk1 ,z ki/n

    O xk1 ,tk1 ,z ki/n 1

    R z ki/n2 (55)

    Substituting the equivalent control and the threshold value again,we arrive at:

    s xk1 ,tk1 ,z ki1n s xk1 ,tk1 ,z ki/n 0Jz xk1 ,tk1 ,z ki/nR z ki/n

    2

    n

    n0 (56)

    Therefore the sliding variable s(xk1 ,tk1 ,z ki/n) keeps de-creasing monotonically as long as the nominal control is largerthan 0 . When the nominal control is smaller than the thresholdvalue, the sliding variable enters the invariant set in one step:

    s xk1 ,tk1 ,z ki1/nR z ki/nsupR02 (57)

    The above equations show that the invariant set Eq. 51 is attrac-tive and the convergence rate is larger than ()/n per step inthe fast time scale.

    Note that the threshold 0 that satisfies Eq. 50 exists for a

    sufficiently large n if supJz1 and sup involved in the righthand side of the inequality are finite in domain D and the second

    order partial derivative 2s/z 2 in the remnant R

    k is boundedwithin domain D. Since Rk is proportional to the square of 0 asgiven by Eq. 40, the inequality 50 is always satisfied by asmall positive 0 and a large n. By using such a value of0 , BCCcan bring the trajectory into the invariant set in at most n steps.

    6 Numerical Examples and Implementation

    6.1 Numerical Examples. The major theoretical results onconvergence obtained in the previous sections will be verifiedusing numerical examples. Consider the following subsystemssubject to an algebraic constraint.Subsystem A:

    xA,1xA ,2

    xA ,2xA ,1xA ,2xA ,3f ctz

    xA ,3xA ,1xA,2 (58)

    where f ct(z) is a nonlinear function of z to be defined later.Subsystem B:

    xB,1xB ,2

    xB ,2xB ,12xB,2xB ,3z

    xB ,3xB,1xB ,2z (59)

    Algebraic constraint

    gx ,txA ,1xB ,10 (60)

    This is an index 3 DAE. The subsystems are simulated usingEulers forward method with t0.1, and the algebraic constraintis enforced by the Multi-Rate DTSM Eqs. 44 and 45. Althoughthis example is rather simple, instability occurs for parameter val-ues not satisfying the convergence conditions. The following three

    cases are typical failure scenarios that we often encounter in se-lecting the parameter values.

    a. A Small Incurs Instability As discussed in Section 5, theerror in algebraic constraint g decreases more quickly, as para-meter becomes smaller. However, the sensitivity of s withrespect to z, i.e., Jz, becomes smaller as reduces. As a result, thethreshold 0 must be large enough to compensate for the effects ofx and t on s, otherwise instability is incurred. Figure 6 illustratesthis phenomenon.

    In Fig. 6, tan (z) was used for the nonlinear function f ct(z) ,and the initial values of xA ,1 , xA ,2 , and xA ,3 were set 1 while

    xB ,1 , xB ,2 , and xB ,3 started from 1. Setting the initial value of zto 1 yields an inconsistent initial condition: s0. First the Single-Rate DTSM, i.e. n1, was used for the computation in Fig. 6.Note that, as was reduced from 0.5 to 0.1, the sliding variable s

    diverged even for the relatively large threshold value, 01, sincethe stability condition Eq. 23 was not satisfied for the small .The Multi-Rate DTSM solves this problem. Repeating the z com-putation n times yields the effect that is equivalent to increasingthe threshold value n times: n0 . Figure 6 shows that n10provides a stable result even for 0.1. We can use a small 0that can still satisfy the stability condition, Eq. 50. A smaller yields a smaller error in the algebraic constraint g, as shown inFig. 7. Figure 8 shows the behavior of the boundary variable z inthese three cases.

    b. When an Excessively Large Input is Allowed, InstabilityOccurs The convergence condition requires the existence of 0

    Fig. 6 Time responses of sliding variable s for different valuesof

    8 Vol. 126, MARCH 2004 Transactions of the ASME

  • 7/31/2019 Jds 000001

    9/13

    and t that satisfy Eq. 50 or Eq. 23. The remnant R(02) in-

    volved in the right hand side of Eq. 50 is a function of0

    and isnonzero when the sliding variable s is a nonlinear function ofboundary variable z. As 0 increases, the remnant term sharplyincreases if the nonlinearity is significant. As a result, an evenlarger threshold 0 may be required for satisfying Eq. 50, whichmay lead to divergence of the remnant. In consequence, the sys-tem may become unstable. Figures 911 illustrate this phenom-enon.

    In Figs. 911, a tan (z) was used for the nonlinear functionf ct(z), and the same initial conditions were used as in the aboveexample except for z. The initial value of z was set to 10, whichyielded an inconsistent initial condition s0. Parameter wasfixed to 0.5. When 0 was set to 20, i.e. a very large thresholdvalue, the response of z diverged as shown in Fig. 9, and thesliding variable s did not converge as shown in Fig. 10. When 0was reduced to 1, the magnitude of R(0

    2) significantly reduced,

    creating stable responses as shown in both figures. Note that n wasset to 20 for 01 and n1 for 020, so that the product n0 remained the same for both cases. From these results we findthat, when the threshold 0 is too large and the nonlinearity be-

    comes significant, the change in z tends to be too large to regulate,although the magnitude of the control limit is extended to a larger

    threshold value, e.g., 20. The plot of Jz1R shown in Fig. 11

    verifies this stability condition; the absolute value of Jz

    1R

    ismuch larger than 020, hence the stability condition Eq. 50 isnot satisfied and the remnant term tends to be erratic.

    c. When n0 is too Small, Divergence Occurs. In the co-simulation computation, the dynamics of x and t acts as exog-enous disturbances to the problem of regulating the sliding vari-able s. In Multi-Rate DTSM, n0 must be large enough tocounteract the effects of x and t in order to stabilize the slidingvariable s. When n0 is too small, the s dynamics diverges.Figures 12 and 13 illustrate this phenomenon. In these figures, allthe initial conditions and parameter values remain the same exceptthat different combinations of n and 0 were used. Figure 12shows that n010 is large enough to stabilize the s dynamics,while n03 is too small to stabilize. Figure 13 shows that theboundary variable z changes too slowly when n0 is small. Es-

    sentially, the sliding variable becomes negative around t1 sec-ond, but the boundary variable is still in the far negative regiontrying to correct the positive s in the past. This eventually leads toan unstable BCC.

    Fig. 7 Time responses of algebraic constraint g for differentvalues of . For 0.1, n1, the trajectory immediately di-verged; it became converging as n increased to 10.

    Fig. 8 Boundary variable under different

    Fig. 9 Time responses of boundary variable z for different val-ues of 0

    Fig. 10 Time responses of sliding variable for different valuesof threshold 0

    Journal of Dynamic Systems, Measurement, and Control MARCH 2004, Vol. 126 9

  • 7/31/2019 Jds 000001

    10/13

    6.2 Implementation. Since the objective of Co-Simulationis to facilitate communications among engineers in different com-panies and organizations over the Internet, we need to implementthis software on a distributed network environment. The sub-system simulators and the BCC need to be independent softwaremodules that communicate through network connections to ex-change inputs and outputs. Each module would update its stateand provide its outputs after it has received all the necessaryinputs.

    Any explicit integration algorithm can be used in each of themodules to update its state. It is thus the responsibility of theowner/developer of each subsystem simulator to compute its dy-namic equations stably and accurately and to return the outputvalues requested by the BCC. The system integrator, whose func-tion is to simulate the coupled dynamics of multiple subsystems,has the tasks of defining all the boundary conditions and tuningthe parameters, , n, and o , such that the convergence condi-tions are satisfied and that the error bound requirement is met. Aslong as the subsystem simulators are stable and those conditions

    assumed in the previous sections for stability and error boundanalysis are held, these parameters can be tuned based on Eq. 50and the guidelines demonstrated in the numerical examples. Oneproblem, however, is that parameter may affect the numerical

    stability of individual subsystem simulators 17 has analyzed therelationship between the value of parameter and the stability ofsubsystem simulators, and developed a procedure for finding asuitable parameter to avoid instability in the subsystem simula-tors. The procedure is straightforward for a class of subsystemswhile complex nonlinear subsystems having diverse time scalesneed more steps to find an appropriate value for parameter . Formore details, see 17.

    Since Co-Simulation is a distributed computation environment,we wish to minimize the amount of communication among simu-lation modules. When the Multi-Rate DTSM algorithm is used inBCC, we have to evaluate s and Jz multiple times during one timestep of subsystem computation. Since both s and Jz depends on

    y (r1) (z), computation of s and Jz can be performed solely on the

    BCC side if the subsystem simulators can provide the ( r1)st

    derivative as a function of z, i.e. y ( r1) (z), in lieu of numerical

    values for y (r1) . It should be noted that the subsystem simulator

    does not have to disclose the complete function y (r1) (x,t,z) withvariables x, t, and z as a whole. It is required to provide only thefunction evaluated at the current state and time, xk and tk . Thestate variables and their values stay in the subsystem simulator,and the BCC would not know the state variables and their rela-

    tionship with the output derivative y (r1) . It should be noted thateach subsystem simulator must contain codes for derivatives ofthe output up to the q -th order, that is, the relative order of thesubsystem in relating y to boundary variable z. As mentioned inSection 4, although DAE index r varies depending on the combi-nation of subsystems, it does not exceed the relative order of eachsubsystem plus 1. Therefore, the necessary derivatives are at mostthe q-th order.

    The Co-Simulation software environment has been developedusing JAVA and will be reported in a separate paper. Here, wesimply list pseudo codes to illustrate the core of the Co-Simulation environment used for solving DAEs resulting fromconflicting sub-simulators. Table 1 recapitulates the computationsperformed at the BCC the left column and the ones at eachsubsystem simulator the right column as well as communica-tions in between. Starting at point A in the table, the BCC suppliesthe boundary variable z k to the connected subsystem simulators.Using z k and stored xks, each subsystem simulator updates itsinternal state for one step, calculates outputs and their derivatives

    y k1 , y k1 . . . y k1(r2) , and sends these values to the BCC to-

    gether with the z-function y (r1) (z) evaluated at xk and tk . Hav-

    Fig. 11 Jz1R for an unstable case

    Fig. 12 Time responses of sliding variable for different valuesof product n"0

    Fig. 13 Time responses of boundary variable for different val-ues of product n"0 . a Only one subsystem providing an ef-fort variable, b No subsystem providing an effort variable, cMultiple subsystems providing effort variables.

    10 Vol. 126, MARCH 2004 Transactions of the ASME

  • 7/31/2019 Jds 000001

    11/13

    ing received these data, the BCC calculates a temporary variable,before proceeding to the fast time scale computation. This tempo-rary variable is a part of the sliding variable s, but is not affectedby z. The purpose is to avoid unnecessary repetition of computa-tion. Starting from point B, the boundary variable z is updated nconsecutive times. This is to evaluate the control law given byEqs. 44 and 45 in Section 5. After the BCC completes n steps

    of computation for z, the computational thread goes back to pointA and the BCC sends the newly updated z to connected simula-tors. This completes one time step of computation.

    7 Extension to More Than Two Subsystems

    So far the DTSM algorithm has been developed only for twointeracting subsystems having incompatible boundary conditions.In this section, the DTSM method is extended to general multiplesubsystems, the boundary conditions of which are described by aclass of energetic junction conditions. Furthermore, we intend toprovide physical foundations underpinning the assumptions andformulations we have made for boundary conditions in the previ-ous sections. The treatment and notation of boundary conditionswe will use in this section are analogous to those of bond graph

    18. We assume that all the subsystem models are connectedthrough energetic junctions governed by generalized KirchhoffsLaws as used in bond graph 18.

    In bond graph, there are two types of energetic junctions de-scribing connections among elements and subsystems. Let ej andfj , respectively, be effort and flow variables associated with thepower bond of the j-th element or subsystem. The two types ofconnections are described by the following junction equations:

    ej

    m

    fj0 (61)

    and

    fj

    m

    ej0 (62)

    where m is the number of elements and subsystems connected to the same junction. Equation (61) describes a common effort junction, at which the effort variables, such as force, pressure, voltage, etc., are the same for all m elements. The flow variables associated with the common effort junction must sum to zero, i.e., the generalized Kirchhoff current law. Equation (62) defines a common flow junction, which is symmetric to the common effort junction and represents the generalized Kirchhoff voltage law. We use the common effort junction in the following discussion, since the common flow junction can be treated in the same manner.
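    As a toy illustration of these two junction equations (the class and method names below are made up for this sketch, not code from the paper), the residuals that a coordinator would drive to zero can be written as:

```java
// Toy illustration, not code from the paper: residuals of the two junction
// equations. At a common effort junction the flows must sum to zero (Eq. (61));
// at a common flow junction the efforts must sum to zero (Eq. (62)).
public final class JunctionResiduals {

    /** Flow residual of a common effort junction, Eq. (61): sum of the f_j. */
    public static double commonEffort(double[] flows) {
        double sum = 0.0;
        for (double f : flows) sum += f;
        return sum;   // should be driven to zero
    }

    /** Effort residual of a common flow junction, Eq. (62): sum of the e_j. */
    public static double commonFlow(double[] efforts) {
        double sum = 0.0;
        for (double e : efforts) sum += e;
        return sum;   // should be driven to zero
    }
}
```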

    It is convenient to use the bond graph notation to represent such energetic junctions and the causal relations among the connected subsystems [18]. Figure 14 shows three types of common effort junction with m connected subsystems. A short bar attached to either end of a power bond is called a causal stroke, indicating which variable, effort or flow, is determined by the connected subsystem. Figure 14(a) shows the case where one and only one subsystem provides an effort to the junction, which is transmitted to the other (m-1) subsystems connected to the junction. In this case, the interacting subsystems are free of causal conflict. Subsystem 1 provides an effort to the junction, and the junction passes the effort to all other connected subsystems. This is a generalization of the example shown in Fig. 1.

    If no subsystem determines the effort variable at the common effort junction, as in the case of Fig. 14(b), we have to use the generalized Kirchhoff current law for the flow variables in order to find out the common effort variable e:

    g \equiv \sum_{j=1}^{m} f_j = 0     (63)

    Table 1 Computations at and communications between subsystem simulators and BCC.




    Note that this constraint equation does not explicitly include the effort variable e, but each flow variable f_j and the effort variable are related via the dynamic equation of the individual subsystem. Differentiating the above constraint, Eq. (63), (r-1) times yields an explicit relation between the f_j and e. Regarding the common effort variable e as a boundary variable z, we find that the system is a high-index DAE. In Section 2, we assumed that the algebraic constraint is a linear function of the output variables. However, that assumption turns out to be a consequence of the assumption that subsystems interact through energetic junctions. This assumption has a profound physical sense and applies to a broad class of physical systems. If this constraint is the only interaction among the subsystems under consideration, it is a scalar case of causal conflict, and the same solution method developed in the previous sections applies to this class of systems.
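    One way to see the index count for Fig. 14(b) is to track when the effort first appears under repeated differentiation of Eq. (63). The sketch below introduces relative orders q_j for this purpose; this notation is an assumption made here for illustration and is not taken from the paper.

```latex
% Sketch of the index argument for Fig. 14(b). Assumption for this sketch:
% subsystem j has relative order q_j from the common effort e to its flow f_j,
% so e first appears explicitly in the q_j-th derivative of f_j.
\frac{d^{k}}{dt^{k}} \sum_{j=1}^{m} f_j
  \;=\; \sum_{j=1}^{m} f_j^{(k)}(x_j, e) \;=\; 0
% The effort enters explicitly once k reaches min_j q_j, so treating z = e as
% the boundary variable yields a constraint of index min_j q_j + 1, consistent
% with the bound "relative order plus 1" quoted earlier.
```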

    On the other hand, if multiple subsystems provide effort variables as outputs to the common effort junction, a different type of causal conflict is incurred, as shown in Fig. 14(c). In this case, we have to solve for p flow variables (p>1) to supply to subsystems 1 through p, so that all p subsystems provide the same effort output, satisfying the continuity condition of the common effort junction. We have p unknown flow variables, which can be treated as boundary variables z_1, z_2, . . . , z_p. Also, we can obtain (p-1) independent equations from the continuity condition:

    e_1 = e_2 = \cdots = e_p     (64)

    This can be expressed as (p-1) independent equations, such as

    g_1 \equiv e_j - e_1 = 0
    \vdots
    g_{j-1} \equiv e_j - e_{j-1} = 0
    g_j \equiv e_j - e_{j+1} = 0          (65)
    \vdots
    g_{p-1} \equiv e_j - e_p = 0

    where e_j can be the effort of any one of the p subsystems. Also, we have one equation from the generalized Kirchhoff current law:

    g_p \equiv \sum_{j=1}^{m} f_j = \sum_{i=1}^{p} z_i + \sum_{j=p+1}^{m} f_j = 0     (66)

    where the boundary variables are z_1 = f_1, z_2 = f_2, . . . , z_p = f_p. Therefore, we have p equations and p unknown boundary variables z_i, and hence the junction is solvable. This is a vector case of causal conflict. The index number of the algebraic constraint derived from Kirchhoff's law is one, while the indices of the other constraints, i.e., Eq. (65), depend on the p subsystems.
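    A possible way for the BCC to assemble the residual vector [g_1, . . . , g_p] of Eqs. (65) and (66) is sketched below; the class and parameter names are hypothetical, and the efforts e_1, . . . , e_p would come from the subsystem simulators' outputs at each step.

```java
// Hypothetical helper, following Eqs. (64)-(66): assemble the p constraint
// residuals for the vector causal conflict, where subsystems 1..p return
// efforts e_1..e_p and the remaining flows f_{p+1}..f_m are known.
public final class VectorConstraint {

    /**
     * @param e      efforts e_1..e_p returned by the p conflicting subsystems
     * @param z      boundary variables z_1..z_p (the unknown flows f_1..f_p)
     * @param fOther remaining flows f_{p+1}..f_m at the same junction
     * @param j      1-based index of the reference effort e_j used in Eq. (65)
     * @return residual vector [g_1, ..., g_{p-1}, g_p]
     */
    public static double[] residuals(double[] e, double[] z, double[] fOther, int j) {
        int p = e.length;
        double[] g = new double[p];
        int row = 0;
        for (int i = 0; i < p; i++) {
            if (i == j - 1) continue;          // skip the trivial e_j - e_j
            g[row++] = e[j - 1] - e[i];        // Eq. (65): e_j - e_i = 0
        }
        double sum = 0.0;
        for (double zi : z) sum += zi;         // Eq. (66): unknown flows ...
        for (double f : fOther) sum += f;      // ... plus the known flows
        g[p - 1] = sum;                        // generalized Kirchhoff current law
        return g;
    }
}
```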

    For this type of vectorial causal conflict, the DTSM algorithm must be extended. First, the sliding manifold must be extended to a vectorial expression using a vector sliding variable:

    s = [ s_1 , s_2 , \ldots , s_j , \ldots , s_p ]^T     (67)

    Each component of the vector s is defined in accordance with Eq. (9), with its own index number. To enforce the sliding manifold, the DTSM must be modified to a multi-input, multi-output (MIMO) control.

    The Jacobian J_z becomes a matrix quantity in the MIMO control, and a matrix norm must be used for the convergence conditions. If the algebraic constraints are in the form of Eq. (65), the Jacobian will be sparse and easily inverted. See [17] for details.
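    One model-free way to form such a matrix J_z is by finite differences on the residual vector g(z), using nothing but repeated residual evaluations; this is an illustrative sketch under that assumption, not necessarily the procedure used in [17], and the class name and eps parameter are hypothetical.

```java
import java.util.function.Function;

// Illustrative sketch only: a forward-difference estimate of the Jacobian
// J_z = dg/dz used by a MIMO DTSM update. Only residual evaluations are
// needed, so no subsystem model information has to be disclosed.
public final class JacobianEstimate {

    public static double[][] jacobian(Function<double[], double[]> g, double[] z, double eps) {
        double[] g0 = g.apply(z);
        double[][] jz = new double[g0.length][z.length];
        for (int col = 0; col < z.length; col++) {
            double[] zPerturbed = z.clone();
            zPerturbed[col] += eps;                       // perturb one boundary variable
            double[] g1 = g.apply(zPerturbed);
            for (int row = 0; row < g0.length; row++) {
                jz[row][col] = (g1[row] - g0[row]) / eps; // forward difference
            }
        }
        return jz;
    }
}
```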

    A special case of the vector causal conflict is the refrigeration cycle discussed in Section 2. In that system, the two pipes coming from the two evaporators merge and are connected to the pipe leading to the accumulator. The mass flow rates of the three pipes sum to zero (the flow to the accumulator is negative), and the pressure is common to the three pipes. These conditions are described by Eqs. (1) and (3), which correspond to Eqs. (66) and (65), respectively. Although two algebraic constraints are present, they can be reduced to one. Since the constraint due to Kirchhoff's law is index one, the boundary variables are explicitly involved in the constraint equation. Therefore, the constraint can be solved explicitly for one boundary variable, which is thereby eliminated algebraically. In Eq. (2), the mass flow rate of subsystem B was solved explicitly and represented by the boundary variable of subsystem A. Therefore, the system has only one boundary variable and one algebraic constraint. In general, the number of boundary variables and algebraic constraints can be reduced by at least one, since the generalized Kirchhoff laws provide index-one linear algebraic constraints that can be solved explicitly.

    8 Conclusion

    A method for co-simulating coupled subsystems is presented in this paper. The importance of this method is that it allows us to simulate coupled dynamics without revealing the internal state and model of each simulator. The major technical contributions of this paper are:

    1. A Discrete-Time Sliding Mode Control method has been developed to compute high-index Differential-Algebraic Equations resulting from the integration of subsystem simulators having incompatible boundary conditions.

    2. A computational algorithm has been developed to execute co-simulation by merely exchanging input and output data with the individual subsystem simulators, thereby preserving proprietary information.

    3. A multi-rate co-simulation method has been developed to improve computational efficiency, reduce constraint error, and enhance stability.

    Fig. 14 Three types of common effort junction




    4. Convergence conditions and constraint error bounds for the above co-simulation methods have been obtained, and numerical examples have verified the theoretical results.

    5. The physical foundations underpinning the major assumption and the resultant algorithm have been provided. The linearity of the constraint equations, the key assumption used for developing the algorithm, stems from the generalized Kirchhoff laws.

    This paper has addressed, for the first time, how multiple subsystem simulators can be combined without disclosure of the internal states and models of the individual subsystems. This opens up a new research area: dealing with proprietary information in numerical analysis. A number of challenging issues emerge that need further investigation in the future. For example, a better protection mechanism will be needed to prevent a malicious user from stealing proprietary simulator information. The co-simulation method presented in this paper does not require internal state and model information, but it does not guarantee that the proprietary information is protected. Another critical issue is to facilitate the tuning of co-simulation parameters so that, despite limited access to proprietary subsystem information, users can perform co-simulation stably and efficiently without special knowledge. Tuning becomes more complicated as we deal with more heterogeneous subsystems having diverse time scales and granularities. In this paper, basic convergence conditions and guidelines for selecting the parameters have been obtained, assuming that the individual subsystem simulators are stable. Selection of step sizes, however, is a difficult task when the interacting subsystem simulators differ significantly in time scale, accuracy, and stability margin. More powerful methods for selecting parameter values, or adaptive methods, may be needed in the future.

    References

    [1] Gu, B., Gordon, B. W., and Asada, H. H., 2000, "Co-Simulation of Coupled Dynamic Subsystems: A Differential-Algebraic Approach Using Singularly Perturbed Sliding Manifolds," Proceedings of the American Control Conference, Chicago, IL, June 28-30, 2000, pp. 757-761.
    [2] Wallace, D. R., Abrahamson, S., Senin, N., and Sferro, P., 2000, "Integrated Design in a Service Marketplace," Comput.-Aided Des., Vol. 32, No. 2, pp. 97-107.
    [3] Brenan, K., Campbell, S., and Petzold, L., 1989, Numerical Solution of Initial Value Problems in Differential-Algebraic Equations, North-Holland, Amsterdam; also in SIAM Classics in Applied Mathematics, SIAM, Philadelphia, 1996.
    [4] Hairer, E., and Wanner, G., 1991, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, Springer-Verlag, New York.
    [5] Cellier, F. E., and Elmqvist, H., 1993, "Automated Formula Manipulation Supports Object-Oriented Continuous-System Modeling," IEEE Control Syst., Vol. 13, No. 2.
    [6] Andersson, M., 1990, "An Object-Oriented Language for Model Representation," Licentiate thesis TFRT-3208, Department of Automatic Control, Lund Institute of Technology, Lund, Sweden.
    [7] Mattsson, S. E., Elmqvist, H., and Otter, M., 1998, "Physical System Modeling with Modelica," Control Eng. Pract., Vol. 6, pp. 501-510.
    [8] Sinha, R., Paredis, C. J. J., Liang, V.-C., and Khosla, P. K., 2001, "Modeling and Simulation Methods for Design of Engineering Systems," ASME J. Comput. Inf. Sci. Eng., Vol. 1, pp. 84-91.
    [9] Gordon, B. W., and Asada, H. H., 2000, "Modeling, Realization, and Simulation of Thermo-Fluid Systems Using Singularly Perturbed Sliding Manifolds," ASME J. Dyn. Syst., Meas., Control, Vol. 122, pp. 699-707.
    [10] Baumgarte, J., 1972, "Stabilization of Constraints and Integrals of Motion in Dynamical Systems," Comput. Methods Appl. Mech. Eng., Vol. 1, pp. 1-16.
    [11] Mattsson, S. E., and Söderlind, G., 1993, "Index Reduction in Differential-Algebraic Equations Using Dummy Derivatives," SIAM J. Sci. Comput., Vol. 14, No. 3, pp. 667-692.
    [12] Gordon, B. W., Liu, S., and Asada, H. H., 2000, "Realization of High Index Differential Algebraic Systems Using Singularly Perturbed Sliding Manifolds," Proceedings of the 2000 American Control Conference, Chicago, IL, June 2000, pp. 752-756.
    [13] Gu, B., and Asada, H. H., 2001, "Co-Simulation of Algebraically Coupled Dynamic Sub-Systems," Proceedings of the 2001 American Control Conference, Arlington, VA, pp. 2273-2278.
    [14] Drakunov, S. V., and Utkin, V. I., 1992, "Sliding Mode Control in Dynamic Systems," Int. J. Control, Vol. 55, No. 4, pp. 1029-1037.
    [15] Utkin, V., Guldner, J., and Shi, J., 1999, Sliding Mode Control in Electromechanical Systems, Taylor & Francis Inc.
    [16] Ascher, U. M., and Petzold, L. R., 1998, Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, Society for Industrial and Applied Mathematics, Philadelphia.
    [17] Gu, B., 2001, "Co-Simulation of Algebraically Coupled Dynamic Subsystems," Ph.D. thesis, Department of Mechanical Engineering, Massachusetts Institute of Technology.
    [18] Karnopp, D., Margolis, D., and Rosenberg, R., 1990, System Dynamics: A Unified Approach, Wiley-Interscience, New York, NY.
