Simulation of FDDI-based Distributed Energy Management Systems
Tetiana Lo, Felix F. Wu, Shau-Ming Luo
Department of Electrical Engineering and Computer Sciences University of California, Berkeley
Berkeley, California, USA
Abstract
We have developed a simulator to evaluate the instantaneous performance of FDDI-based Energy Management Systems(EMS) and show by example the ease of modifying operating parameters of the network configuration, and constructing various distributed EMS design alternatives. Hence, we demonstrate the flexibility of our simulation approach and FDDI's enormous potential as a backbone network for an energy management system.
Keywords : SCADA/EMS hardware implementation and design, system architecture, networks, open systems, local area network performance evaluation, object-oriented simulation
1. Introduction
Traditional EMS design is basically centralized: the configuration consists of centrally located processors that receive data from the entire power system. Recent advances in computer and communications technology have increased the number of options available to EMS designers. Microprocessors are capable of operating at increasingly higher rates, and computer networking has become more sophisticated, supporting a much wider range of applications. Together, these advances have resulted in higher-level performance and lower implementation costs. It is now technologically and economically feasible to distribute the processing load of an EMS among several processors, leading to the concept of a Distributed EMS[1].
A distributed EMS may possess several possible configurations. Currently, all proposed distributed EMS designs are local area network(LAN)-based (e.g., [2], [3]). A prototype of a functionally distributed system was proposed and implemented to validate EMS performance in [4]. EMS functions were executed in separate workstations connected by a dual LAN. This approach allowed the evaluation of processing performance and the verification of system operation. However, prototyping has its limitations; it is difficult to consider many EMS configuration options in advance and implement future configurations to which the system could evolve. Using simulation methods is the most practical and effective way of evaluating EMS performance and comparing design options. In [5], a method for simulating a generic LAN-based EMS for a typical operating scenario in a control center was presented. The simulation results
showed how the average performance of the distributed system could be predicted. However, while an EMS may have acceptable average performance, it still may fail to meet the real-time response requirements. These examples demonstrate the need for a new approach to EMS performance evaluation and design.
At UC Berkeley an integrated planning and analysis environment for communication networks, Netplan, has been developed, based on object-oriented technology[6]. A LAN simulator, LANSIM, has been developed as one of the tools in Netplan. In this study we use LANSIM to examine a type of distributed EMS based on FDDI, a high-speed LAN. Specifically, we evaluate the instantaneous performance of this particular type of distributed EMS under heavily-loaded conditions, to observe if the EMS is still capable of satisfying its real-time requirements.
Using our simulator, LANSIM, we show by example the ease of modifying operating parameters of the network configuration, and constructing various distributed EMS design alternatives. Hence, we demonstrate the flexibility of our simulation approach and FDDI's enormous potential as a backbone network for an energy management system.
This paper is an extension of our previous work in the simulation of LAN-based distributed EMSs[7]. Whereas the objective of the previous paper was to compare EMS performance based on Ethernet and FDDI, the focus of the present paper is on the potential provided by FDDI to further improve EMS performance by tuning various network parameters.
The paper is organized as follows. In Section 2 we describe our LAN simulator, LANSIM, and the station models developed with it. Section 3 presents the FDDI model and key characteristics of the FDDI network. The distributed EMS configurations and traffic are described in Section 4, and in Section 5 we present the simulation cases and discuss the results. We conclude in Section 6 and propose future directions.
2. LAN Simulator
2.1 Ptolemy
Our LAN simulator, LANSIM, is based on the Ptolemy software environment, an object-oriented system developed at UC Berkeley[8]. Programmed in C++
and running on a UNIX workstation, Ptolemy is a very flexible framework, supporting heterogeneous system specification, simulation, and design. Each model of computation is called a domain and consists of an extensible library of functional blocks. The basic unit of computation in Ptolemy is the block, represented graphically by an icon with terminals, corresponding to its portholes. A block may be atomic (star) or composite (galaxy). Applications are constructed graphically by connecting blocks. At runtime, a scheduler determines the order in which the blocks are executed. Data is exchanged between the blocks in the form of particles: discrete units which may be of several types: integer, real, complex, or a general structure such as a data packet. Blocks may have states, user-settable data structures which may be monitored from one execution to another. Ptolemy's strength and uniqueness lie in its capability of supporting the many aspects of communication system modeling and simulation and the ease with which new application-specific design environments may be built.
2.2 DE Domain
Individual nodes of the FDDI network are modeled as stars in the discrete-event(DE) domain. DE stars function as event-processing units which receive and process particles from the outside and generate output events after a user-given latency. A ring interconnection of stars constitutes the local area network. A data packet, defined as one type of particle, has an associated time-stamp generated by the block producing the particle and represents an event corresponding to a change of system state. The DE scheduler processes events in chronological order until the global time reaches a user-specified "stop time." The global event queue contains the particles currently in the system, sorted by time stamp. The scheduler retrieves the event at the head of the queue, the earliest event, and sends it to an input porthole of its destination block. When a new event appears at the input portholes of a star, the scheduler retrieves and sends all other simultaneous events destined to the same star. The star is then executed (fired). After execution, events generated at the output portholes, i.e., packets to be transmitted to another station, are placed in the global event queue. The scheduler repeats this retrieving and firing process until the given terminating condition is met.
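The event-processing loop described above can be condensed into a short sketch. This is an illustrative Python reconstruction, not code from Ptolemy or LANSIM; the class and method names (DEScheduler, post, run) are our own.

```python
import heapq
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass(order=True)
class Event:
    time: float                          # time-stamp of the particle
    seq: int                             # tie-breaker: simultaneous events keep FIFO order
    dest: Any = field(compare=False)     # destination block (star)
    payload: Any = field(compare=False)  # the particle itself

class DEScheduler:
    """Processes events in chronological order until a user-given stop time."""
    def __init__(self, stop_time: float):
        self.queue: list[Event] = []     # global event queue, sorted by time stamp
        self.stop_time = stop_time
        self.now = 0.0                   # global simulation time
        self._seq = 0

    def post(self, time: float, dest: Callable, payload) -> None:
        """Place a new event (e.g. a packet to be transmitted) in the queue."""
        heapq.heappush(self.queue, Event(time, self._seq, dest, payload))
        self._seq += 1

    def run(self) -> None:
        """Retrieve the earliest event and fire its destination star, repeatedly."""
        while self.queue and self.queue[0].time <= self.stop_time:
            ev = heapq.heappop(self.queue)
            self.now = ev.time
            ev.dest(self, ev.payload)    # firing may post new output events
```

A star here is any callable that receives the scheduler and the particle, and may post further events via `post`.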
2.3 Station Model
A workstation model has been developed for the simulation of a distributed EMS, as shown in Figure 1. We model an individual FDDI station as possessing two layers: the traffic generator(upper) and the medium access control(MAC) layer(lower). Upon receiving a traffic generator protocol data unit(Pdu), the MAC layer appends header and control fields, and pads the resulting MAC Pdu, if necessary, to meet the minimum packet size requirement. The packet is then placed in the appropriate queue, a star obtained from the Ptolemy library which accumulates packets in a finite capacity FIFO queue and
produces outputs on demand, and awaits transmission onto the medium.
Figure 1 Station Model
The receiver and traffic generator are peer entities. The receiver accumulates and processes Pdus produced by all traffic generators and passes them to the report generators. The traffic generator in each station utilizes the adjoining traffic models to generate messages of desired lengths and distributions, uniform and exponential, for example. A message is represented by a data structure with several information fields including length, timestamp, source and destination addresses, and type of traffic.
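The message data structure described above might look like the following sketch; the field names are our assumptions, chosen to match the fields listed in the text, and are not taken from the LANSIM source.

```python
from dataclasses import dataclass

@dataclass
class Message:
    length: int        # message length in bytes
    timestamp: float   # generation time, later used to compute frame delay
    source: int        # source station address
    destination: int   # destination station address
    traffic_type: str  # type of traffic, e.g. "synchronous" or "asynchronous"
```

The traffic generator would fill in `length` by sampling one of the adjoining traffic models (uniform, exponential, etc.) and stamp `timestamp` with the current global time.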
The user may build a particular station, a generation processor(GEN), for example, by first selecting a LAN technology(MAC layer) from the Protocol Library, and various applications such as economic dispatch(ED) and automatic generation control(AGC) from the Application Library. Outputs may be chosen then for report generation or statistical information, such as delay calculations and distributions, from the Report Library. Finally, stations are interconnected to form a LAN.
A simulation case may be constructed based on the particular configuration of the LAN-based EMS we wish to simulate. This includes specifying the number of workstations, functions distributed in each station, and the communication protocol operating in the system. Individual workstation models may be built by selecting blocks from the class libraries. Each station model is defined as a composite block and represented as an icon in the LANSIM user interface. These icons are connected graphically according to the system topology. The simulation case is then executed. Parameters such as execution frequency of a function and propagation delays in the
communication protocol may be easily entered or changed by using pop-up menus at runtime.
Alternative EMS configurations may be implemented by creating and connecting additional station models. For example, the communication protocol can be changed by replacing the original protocol block with an alternative block from the protocol library in each station model. The distribution of the EMS functions can also be varied: to study the effects of executing SE in a separate processor, an additional station is created, containing the newly-defined SE traffic model. New EMS functions and communication technologies may be incorporated by developing models for them in the LANSIM class libraries.
In our model the transmission media and nodes are assumed error-free; packets are transmitted and arrive at destinations without error. Furthermore, it is assumed that no node failures or ring malfunctions occur, and the network operates at a fixed transmission rate of 100 Mbps.
3. FDDI Modeling
FDDI is a 100 Mbps local area network having an optical fiber dual ring topology and using a timed token rotation protocol[9]. The FDDI Standard complies with the OSI Reference Model, with the network providing services specified by the Data Link Layer and Physical Layer.
The MAC schedules and performs data transfers on the ring. A node gains access to the medium to begin transmission by capturing the token, a unique frame which is circulated sequentially. Immediately after completing transmission, the node issues and forwards a new token. Each node repeats the frames it receives to its downstream neighbor. If the destination address of a frame matches that of the MAC's, and no error is indicated, the frame is copied into a local buffer. The MAC modifies indicator symbols as the frame is repeated to indicate the detection of an error, address recognition, and the copying of the frame. Frames returning to the originating node are stripped by not being retransmitted.
Two modes of transmission are supported by FDDI: synchronous and asynchronous. Synchronous traffic utilizes a preallocated bandwidth and has a guaranteed maximum response time. This traffic may be transmitted by a node whenever it receives the token. Asynchronous traffic bandwidth is allocated dynamically from the remaining unused and unallocated bandwidth. Transmission is not allowed if the time since receiving the last token exceeds the operational target token rotation time (T_Opr), the expected token rotation time; a value negotiated during ring initialization. Eight asynchronous priority levels are provided within each node.
A single FDDI ring is modeled based on the ANSI Standard for FDDI[10]. Queues in each station distinguish between traffic received from the ring and that generated by the station itself, and separate synchronous and asynchronous data. The user assigns the address, synchronous bandwidth, T_Opr, and eight asynchronous priority thresholds(T_Pri) for each node. Initially, it is assumed that an arbitrarily chosen node possesses the token. The token is a fixed-length frame generated by the MAC layer of the station sending the token, uniquely identified by its control field bits. During the first token rotation, all node late count registers are set to 1; only synchronous data may be transmitted. Thereafter, normal ring operation ensues.
MAC protocol
Several important parameters used in the timed token rotation protocol are defined below:
a. TTRT: Target token rotation time; a station-dependent value; the expected time between successive receptions of the token.
b. T_Opr: Operating target token rotation time; the lowest of all station TTRT values is selected as T_Opr of the system.
c. TRT: Token rotation timer; each station has a TRT, reset by T_Opr.
d. ST: Synchronous bandwidth timer; preallocated bandwidth for synchronous frames. Initially, ST is assigned as a percentage of T_Opr. The total synchronous bandwidth of all stations should not exceed T_Opr.
e. LCT: Late count register; within each station the register is incremented each time TRT expires, and cleared when the token is received.
f. THT: Token holding timer; if LCT is zero when the token arrives, the current value of TRT is placed into THT. Asynchronous frames may be transmitted only under this condition, after synchronous frames are served.
Upon receiving the token, the station carries out several processes:
1. Check LCT. If LCT is zero, the current TRT value is placed into THT, and TRT is reset to T_Opr. If LCT is greater than zero, TRT is not reset.
2. Transmit synchronous frames until ST expires or no synchronous frames are available.
3. Transmit single priority asynchronous frames (if TRT was reset earlier) until THT expires or no asynchronous frames are available. For multi-priority asynchronous frames, serve the ones of higher priority first. THT is required to be greater than a threshold value T_Pri(n) before a frame of priority level n may be transmitted. If either THT or TRT expires while a station is transmitting,
the station completes the transmission of the current frame before forwarding the token.
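The three steps above can be condensed into a sketch of one token visit at a station. This is our own single-priority approximation (all times in ms), not the LANSIM implementation; the dict keys and function name are illustrative. A frame already begun is completed whole even if it exhausts THT, matching the rule that a station finishes the current frame before forwarding the token.

```python
def serve_token(st):
    """One token visit, following steps 1-3 above.

    `st` is a dict: 'lct', 'trt', 't_opr', 'st' (synchronous bandwidth),
    't_pri' (asynchronous threshold), and FIFO lists 'sync'/'async' of
    frame transmission times in ms. Single asynchronous priority for brevity.
    """
    tht = None
    if st['lct'] == 0:
        tht = st['trt']           # step 1: bank remaining rotation time in THT
        st['trt'] = st['t_opr']   # and reset TRT; if LCT > 0, TRT is not reset
    st['lct'] = 0                 # token received: clear the late count

    sent = []
    budget = st['st']             # step 2: preallocated synchronous bandwidth
    while st['sync'] and st['sync'][0] <= budget:
        frame = st['sync'].pop(0)
        budget -= frame
        sent.append(('sync', frame))

    if tht is not None:           # step 3: asynchronous only if TRT was reset
        while st['async'] and tht > st['t_pri']:
            frame = st['async'].pop(0)
            tht -= frame          # a frame already begun is completed whole
            sent.append(('async', frame))
    return sent
```

With an early token (LCT = 0, banked THT above the threshold) the station serves synchronous frames up to ST and then asynchronous frames; with a late token (LCT > 0) only synchronous frames go out.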
It is important to note that TRT is enabled during the transmission of both synchronous and asynchronous frames. However, THT is enabled only during the transmission of asynchronous frames. The FDDI MAC protocol is modeled as shown in the flow chart in Figure 2 [11].
Figure 2 FDDI MAC Protocol Model
Packet transmission
Packet transmission is modeled as occurring in two phases: the transmission of a start-of-transmission (SOT) event, followed later by the data packet. We assume that the transmitting station possesses the token, a data packet from the appropriate queue is available, and, if the data is asynchronous, the threshold criterion is satisfied. The station transmits a SOT event onto the ring; the end-of-transmission timer (eotTimer) is set to the transmission time of the packet. Expiration of the eotTimer signals that transmission has been completed and triggers the station to transmit the data packet to the neighboring node, without resetting the eotTimer.
When a station receives a SOT event, the event is passed to the downstream station. Upon receiving a data packet, the station examines the destination address, stripping the packet and placing it in the appropriate queue if a match is found, or forwarding the packet otherwise.
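The two-phase scheme can be sketched as follows, using the 100 Mbps rate assumed in Section 2; the function and variable names are ours, not LANSIM's.

```python
import heapq

def transmit(events, now, packet_bytes, rate_bps=100e6):
    """Schedule a SOT event immediately and the data packet at the moment
    the eotTimer expires (one packet transmission time later)."""
    eot = now + (8 * packet_bytes) / rate_bps   # eotTimer = transmission time
    heapq.heappush(events, (now, 'SOT'))        # phase 1: start of transmission
    heapq.heappush(events, (eot, 'DATA'))       # phase 2: the data packet itself
    return eot
```

Separating the SOT event from the data packet lets downstream stations begin repeating the frame before the full packet event is processed, mirroring the cut-through behavior of the ring.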
Ring states
Under light traffic loading conditions, stations may not have packets available for transmission when the token arrives. As a result, upon arrival, the token is forwarded immediately; the token traverses the ring continuously. To decrease event-processing time and improve simulation efficiency while maintaining ring functionality, two ring states are defined and implemented: normal and deterministic. In the normal state, the token is circulated sequentially by the stations, as defined in the FDDI protocol. Since we may determine the time between successive arrivals of the token at any node, and hence, the time at which a node will receive the token, the token passing process may be eliminated when there are no packet transmissions on the ring for one token rotation time. During this period the ring is said to be in the deterministic state. The token-passing process resumes, and the ring returns to the normal state when any station receives a self-generated packet for transmission.
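The two-state optimization reduces to a simple transition function; the names and signature below are our own sketch, not LANSIM's.

```python
def next_state(state, idle_for, token_rotation_time, has_packet):
    """Ring-state transition for the optimization described above.

    `idle_for` is the time with no packet transmissions on the ring;
    `has_packet` is True when some station has a self-generated packet.
    """
    if state == 'normal' and idle_for >= token_rotation_time:
        return 'deterministic'   # stop simulating individual token passes
    if state == 'deterministic' and has_packet:
        return 'normal'          # resume explicit token passing
    return state
```

In the deterministic state no token-pass events enter the event queue at all, which is where the simulation-time savings come from.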
4. Distributed EMS
4.1 System Configuration
In this study, we investigate the FDDI LAN technology in terms of its ability to satisfy EMS performance requirements. Our base configuration, shown in Figure 3,
Figure 3 Base Configuration
is basically the same as that used in [5], except 5 dispatcher consoles are assumed here. The front-end processor(FEP) collects data from the remote terminal units(RTUs). AGC and ED functions run in the GEN processor. Security analysis functions, including state estimation, are performed in the NET processor. The DB processor contains the shared database management system. The Consoles are MMI processors.
The configurations studied are generic models for a distributed EMS, and are not intended to represent any specific supplier's configuration. Other configurations, such as those using additional LANs and alternate highspeed interconnects, are also possible, but have not been included to keep the analysis generic.
In addition to system configuration, certain user-settable parameters are also significant factors affecting FDDI-based EMS performance. These variables may be assigned by the system manager based on the traffic distribution and application requirements, and include T_Opr and the asynchronous priorities and thresholds. In this study we vary these values to evaluate their impact on system performance.
An EMS has several stringent real-time requirements. For example, digital status data should be processed within 10 ms. These real-time requirements must be met even under the worst-case loading conditions, since the EMS still must be capable of performing its basic function, controlling the power system. Our simulations focus on the worst-case, short-term behavior of the system under peak loading conditions, rather than the average performance over a longer observation period[6]. While a network may have acceptable average performance, it may fail to meet stringent real-time requirements.
4.2 Traffic
Our study is based on EMS requirements typical of a medium-sized utility, with 50 dispatchable generator units, 500 buses in the internal network model, and 500 buses in the external network model.
We assume that during peak conditions, study functions are suspended. The front-end processor polls the RTUs every 2 seconds, acquiring analog and digital status data which is sent to the SCADA database processor(DB). The state estimation(SE) results from the network processor(NET) are sent to DB; this process is assumed to occur every minute in the base case. The output from AGC and ED in the generation processor(GEN) is also sent to DB. Alarm data is sent from DB to Consoles; this occurrence is assumed random, with a uniform distribution. The Consoles send alarm acknowledgments to DB. There is also data from DB to Consoles for updating the local database. Here, we assume that the picture data is stored in the local database of the Consoles, consistent with the client-server architecture of present-day full-graphics MMI. Supervisory control requires data to be sent from the Consoles to FEP; this traffic is assumed random with a Poisson distribution. Other background traffic includes that from DB to GEN and DB to NET. For the configuration where a dedicated workstation is used for state estimation, we assume that 150 kbytes are transmitted from SE to NET after each execution of SE. The packet rate and length of each type of traffic are given in Table 1.
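The random supervisory-control traffic can be generated by drawing exponential interarrival gaps, the standard way to realize a Poisson arrival process. This generator is our own sketch, not the LANSIM traffic model; the function name and parameters are illustrative.

```python
import random

def poisson_arrivals(rate_per_s, horizon_s, seed=0):
    """Arrival times over [0, horizon_s) with exponential interarrival gaps,
    so arrivals form a Poisson process with the given expected rate."""
    rng = random.Random(seed)   # seeded for reproducible simulation runs
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_per_s)
        if t >= horizon_s:
            return times
        times.append(t)
```

The same skeleton serves the uniformly distributed alarm traffic by swapping `expovariate` for `uniform` gaps.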
For traffic modeled as random, the packet rate is to be interpreted as its expected value. In this paper, we study EMS performance with both synchronous and asynchronous traffic; however, we focus primarily on the delay of real-time data, such as digital status data from FEP to DB.
FEP -> DB: 4000/sec, digital (8); 500/sec, analog (12)
NET -> DB: 1/min (150k)
GEN -> DB: 1/2 sec, AGC (1k); 1/5 sec, ED (800)
DB -> Console: 10-15/sec (100); DB -> NET: 1/8 sec (50k); DB -> GEN: 5/sec (100)
Console -> DB: 5/sec (100); Console -> FEP: 1/50 sec (100), 1/25 sec (200)
Table 1 Traffic in Distributed EMS Study
{ Packet rate in packets per time interval ( packet length in bytes ) }
5. Simulation Results
A 30-second period is used to ensure that only one peak traffic load occurs when SE data are sent from NET to DB. Our observation period is defined as a one-second period which contains the peak traffic load.
The average and maximum delay values obtained during our observation period for the base case are shown in Table 2.

SCADA data (FEP to DB): average delay 0.429 ms, maximum delay 7.834 ms
SE (NET to DB): average delay 6.4 ms, maximum delay 13.6 ms
(T_Opr = 8 ms)
Table 2 Delay for the Base Case
We define the frame delay as the duration of time since the frame was generated until it is received by the destination. The average and maximum delays are well within real-time requirements (10 ms for digital data and 100 ms for analog data).
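The frame-delay definition above maps directly to a small computation over (generation, reception) time pairs; a minimal sketch, with names of our own choosing:

```python
def delay_stats(frames):
    """Average and maximum frame delay, where each frame is a
    (generated_at, received_at) pair in ms, per the definition above."""
    delays = [received - generated for generated, received in frames]
    return sum(delays) / len(delays), max(delays)
```

In the simulator the generation time comes from the message timestamp stamped by the traffic generator, and the reception time is the global time at which the destination queues the packet.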
The digital delay distribution over the observation period for the base case is shown in Figure 4.
Figure 4 Packet Delay of Digital Data
The peak delay is a result of the transmission of a large amount of data from NET to DB. However, the maximum delay is always bounded above by T_Opr. This demonstrates FDDI's ability to guarantee response time by its bandwidth allocation scheme.
We also examine the maximum delay of two types of traffic: one consisting of small packets with fixed interarrival times (FEP to DB), and another, a large amount of data generated over a short period of time (NET to DB). For a fixed asynchronous threshold value, the delays are unaffected by increasing the traffic load from NET to DB, and are in fact bounded by T_Pri. The results are shown in Figure 5.
Figure 5 Maximum Delay vs. Load
Figures 6 and 7 show the maximum delay of traffic from NET to DB for different values of T_Opr and T_Pri, respectively. A comparison of these two graphs reveals that both T_Opr and T_Pri may be used to control the maximum response time of the system. It should be noted that T_Opr is a system-dependent variable while T_Pri varies from station to station and may be assigned to optimize EMS performance.
Figure 6 Effect of T_Opr on Packet Delay
Figure 7 Effect of T_Pri on Packet Delay
We run the SE and AGC functions in a single processor and assign different priority levels to the corresponding traffic loads. The results are listed in Table 3. From this example we observe that the use of priorities can improve the performance of time-stringent applications such as AGC.
Digital data (FEP to DB): average delay 0.701 ms, maximum delay 7.878 ms
AGC (GEN to DB): average delay 6.202 ms, maximum delay 12.326 ms
SE (NET to DB): average delay 18.546 ms, maximum delay 24.715 ms
(T_Opr = 8 ms)
Table 3 SE and AGC in One Processor
Next we investigate the effects of assigning different asynchronous thresholds to EMS applications distributed across processors. We assume that SE and AGC, two applications running in separate processors, transmit large amounts of data to DB simultaneously. In the first case, the asynchronous priority thresholds of the generated traffic for both applications are set equal to zero ms. In the second case, we reduce the available asynchronous bandwidth for SE by setting its T_Pri to 7.5 ms; T_Pri for AGC is set to 4 ms. As shown in the results listed in Table 4, appropriate settings of T_Pri can significantly improve the performance of time-stringent applications such as AGC.
Case I (T_Opr = 8 ms, T_Pri(SE) = 0 ms, T_Pri(AGC) = 0 ms): SE average delay 8.1 ms, maximum delay 21.9 ms; AGC average delay 16.1 ms, maximum delay 30.0 ms
Case II (T_Opr = 8 ms, T_Pri(SE) = 7.5 ms, T_Pri(AGC) = 4 ms): SE average delay 40.8 ms, maximum delay 52.8 ms; AGC average delay 14.1 ms, maximum delay 24.9 ms
Table 4 Delay with Different T_Pri Values
6. Conclusion
We have developed a LAN performance simulation tool to evaluate the instantaneous network performance of an FDDI-based distributed EMS under peak loading conditions. The FDDI network provides exceptional real-time performance for a distributed EMS under normal operating conditions and is capable of supporting traffic loads of much higher magnitude without performance degradation. We have shown the effects of varying several FDDI protocol parameters on system performance.
Future work includes enhancing the existing LAN models to simulate the behavior of distributed EMSs
under various failure conditions, the development of additional LAN models, the further analysis of the relationship between MAC protocol parameters and ring performance, and the tuning of FDDI network parameters to optimize FDDI-based EMS performance.
Acknowledgment
We thank Dr. Amitava Sen and Mr. Bob Burn of ABB Systems Control for providing the traffic data and performance requirements used in this paper.
References
[1] L. Murphy and F.F. Wu, 'An Open Design Approach for Distributed Energy Management Systems', paper 92SM447-3, presented at IEEE/PES Summer Meeting, Seattle, WA, July 12-16, 1992.
[2] G. Ockwell and R. Kreger, 'The Impact of Hardware on Open Architecture Design', paper 92WM159-4, presented at IEEE/PES Winter Meeting, New York, Jan. 26-30, 1992.
[3] R. Podmore, 'Criteria for Evaluating Open Energy Management Systems', paper 92WM157-8, presented at IEEE/PES Winter Meeting, New York, Jan. 26-30, 1992.
[4] M. Kunugi, M. Yohda, et al., 'Performance Validation of a Functionally Distributed Energy Management Architecture', IEEE Trans. on Power Systems, Vol. 7, No. 2, pp. 820-827, May 1992.
[5] K. Kato and H.R. Fudeh, 'Performance Simulation of Distributed Energy Management Systems', IEEE Trans. on Power Systems, Vol. 7, No. 2, pp. 828-834, May 1992.
[6] S.M. Lun, F.F. Wu, et al., 'Netplan: An Integrated Network Planning Environment', Int. Workshop on Modeling, Analysis, and Simulation of Computer and Telecomm. Systems, Jan. 1993.
[7] S.M. Lun, T. Lo, F.F. Wu, et al., 'LANSIM and Its Applications to Distributed EMS', 1993 IEEE PICA Conference.
[8] J. Buck, S. Ha, E.A. Lee, and D.G. Messerschmitt, 'Ptolemy: A Platform for Heterogeneous Simulation and Prototyping,' Proc. European Simulation Conf., Copenhagen, June 1991.
[9] J. Walrand, Communication Networks: A First Course, Aksen Associates, 1991.
[10] American National Standards Institute, 'FDDI Token Ring Media Access Control,' American National Standard, ASC X3T9.5, 1986.
[11] R. Sankar and Y.Y. Yang, 'Performance Analysis of FDDI,' 14th Conf. of Local Computer Networks, pp. 328-332, June 1989.