
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, VOL. 9, 151-159 (1996)

AN EFFICIENT MEDIUM ACCESS SCHEME TO SUPPORT VBR TRAFFIC ON A SLOTTED RING

DAVID GOODALL
Communications Unit, University of New South Wales, Sydney 2052, Australia

SUMMARY

This paper describes a proposed efficient integrated medium access control scheme for a high-speed unidirectional slotted ring which employs destination release. The mechanism uses separate cycles of access for synchronous and asynchronous traffic and is suitable for variable bit rate (VBR) traffic. The paper includes a comparison with the Orwell protocol under various simulated traffic loads. The use of separate cycles of access was found to be more efficient than the sharing of a single access cycle, at the cost of two bits in the cell header for each cycle, an insertion buffer for each cycle apart from the highest priority cycle, and associated node complexity. The proposed mechanism was able to meet delay requirements for synchronous traffic while maintaining high levels of overall utilization, thus making it suitable for use in a ring-based asynchronous transfer mode (ATM)-compatible local area network (LAN).

KEY WORDS: ATM; MAC; slotted ring; VBR

1. INTRODUCTION

The topology of a unidirectional slotted ring is shown in Figure 1. Established standards for slotted rings include the ISO/IEC 8802-5 Token Ring¹ and the fibre distributed data interface (FDDI).² These support a variable packet size and were not originally designed for synchronous traffic. Slotted rings which are under standardization include the asynchronous transfer mode ring (ATMR)³ and Orwell.⁴ In general these are designed to support both synchronous and asynchronous traffic, i.e. integrated traffic, carried in asynchronous transfer mode (ATM) cells.

Figure 1. Topology of a unidirectional slotted ring

*This work was carried out while the author was with the Computer and Systems Technology Laboratory at UNSW.

In supporting integrated traffic in a slotted ring which employs a single access cycle as a fairness mechanism, such as Orwell or ATMR, overall utilization, and particularly the efficiency of asynchronous transmission, can be reduced. This problem is caused by the need to give priority to synchronous traffic in the proportional sharing of the cycle, the requirement for short access cycles to limit cell delay, and because all traffic is delivered in an end-to-end fashion. This paper addresses the problem of providing an access cycle-based medium access control (MAC) mechanism for a unidirectional slotted ring which will meet delay requirements for variable bit rate (VBR) synchronous traffic, but which is still efficient for asynchronous traffic.

Suitability for VBR traffic distinguishes this mechanism from our previous work on the AMNET project.⁷,⁸ AMNET (A Multimedia NETwork) employs a table mechanism for guaranteeing bandwidth to constant bit rate (CBR) traffic sources⁸ and allows only asynchronous sources to make use of leftover synchronous bandwidth. The mechanism described in this paper uses a separate centrally controlled access cycle for each class of traffic. This allows fair sharing of bandwidth between the sources of each traffic class. The protocol is an extension of the work described in Goodall and Burston,⁷ and uses the same idea of end-to-end transmission for synchronous traffic and hop-by-hop transmission for asynchronous traffic by use of an insertion buffer.

This is purely a simulation study as our implementation resources are currently allocated to the AMNET project. Background on the use of access cycles in slotted rings is given in Section 2. The design of the proposed protocol is presented in Section 3. Section 4 discusses the simulation results. The simulations compared the proposed scheme with the Orwell protocol. The use of separate access cycles was found to be more efficient than the sharing of a single access cycle in meeting delay requirements and sustaining high utilization under integrated traffic loads, particularly asymmetrical loads. Conclusions and possible future directions for this research are presented in Section 5. The Appendix contains a more formal description of the algorithms used by the mechanism.

CCC 1074-5351/96/030159-09 © 1996 by John Wiley & Sons, Ltd.
Received October 1995; Revised May 1996

2. BACKGROUND

In an access cycle scheme each node is given an allocation which represents the number of cells which it may send in any cycle. A new cycle starts when all nodes have either sent their allocation or have nothing to send. The cycle may be under central control⁷,⁸ or distributed control.³⁻⁵,⁹⁻¹¹ The advantage of distributed control is the shorter time required to detect the end-of-cycle condition, and therefore shorter pauses in transmission. Theoretically, during end-of-cycle detection no cells are sent and utilization is reduced.

The scheme described by Rubin and Wu¹¹ employs a local fairness mechanism which can use available bandwidth at this time. The scheme described by Goodall and Burston⁷ uses an elastic insertion buffer, called the upstream buffer, which achieves a similar result. A pipeline effect is created which increases utilization by allowing end-of-cycle detection while, potentially, cells are still being sent from the upstream buffer at each node.

The mechanisms used in ATMR,³ Orwell,⁵ BCMA¹⁰ and FECCA¹¹ share a single access cycle among all traffic classes. Priority is effected within terminals by having separate queues for each priority, separate allocation limits per priority, and by sending the highest priority data first, where that traffic class has not reached its allocation limit. The effect of trying to send both synchronous and asynchronous traffic with a single access cycle is to lower the throughput of asynchronous traffic. This is because the cycle must be short and the per-priority slice of allocation must favour synchronous traffic to meet delay requirements. That is, if the allocation limit for the node is 5, then perhaps synchronous sources will be allowed to send four cells per cycle and asynchronous sources one cell per cycle, under heavy loads.

In addition, in a single access cycle scheme, if VBR traffic extends the cycle length beyond a set delay limit, an automatic reset scheme must be employed to pre-empt asynchronous traffic so that guaranteed delay times for synchronous traffic can be met.³,⁵ In Orwell it seems possible that this may introduce unfairness into the management of asynchronous traffic, because the access cycle may be terminated at any time and all bandwidth given over to synchronous traffic for the period of congestion. When transmission for asynchronous traffic restarts, all nodes have their asynchronous allocation refreshed. In ATMR³ sub-cycles for different levels of priority are nested. When an automatic reset occurs, only the highest priority sub-cycle continues whereas lower priority sub-cycles are paused. As congestion eases, sub-cycles resume in order of priority. Because lower priority sub-cycles are paused

rather than terminated and refreshed, fairness should be preserved for the affected traffic class. Potential utilization in both schemes is reduced because no lower priority traffic is sent until congestion eases.

One answer to the overall problem is to increase the speed of the network, for example to gigabit rates. Access cycles can then be longer in terms of the number of cells sent in any given period, which restores utilization for an equivalent latency. As an alternative, the scheme presented in this paper explores the use of a separate access cycle for the different classes of synchronous and asynchronous traffic. Because these cycles are not nested and the lower priority cycle may continue while any bandwidth is available, we have called the mechanism concurrent cycles of access (CCA). The access cycle for synchronous traffic can be made short to ensure that delay for synchronous traffic is kept below a limit, whereas the asynchronous access cycle can be made longer to allow higher utilization.

The asynchronous access cycle is buffered to allow pre-emption by synchronous traffic. The buffered access cycle (BAC) was introduced by Goodall and Burston,⁷ and is an extension of a centrally controlled access cycle fairness mechanism to a buffer insertion ring. It allows synchronous traffic to be delivered end-to-end, and asynchronous traffic hop-by-hop, via the insertion buffer. This increases send opportunities for the latter class, and thus increases overall utilization. The next section describes the CCA protocol in more detail.

3. PROTOCOL DESCRIPTION

Both Orwell and CCA employ destination release and are based on a 56-byte slot, which consists of a 3-byte slot header and a 53-byte ATM cell. Orwell supports four levels of priority, but in this study just two classes of synchronous and asynchronous traffic were considered. The Orwell simulator was based on the flow chart of the MAC algorithm for a non-monitor node found in Falconer and Adams,⁴ except that a node may transmit in a slot that it has received and released rather than passing the empty slot to the next node. This change allows higher utilization and is incorporated in the latest reported Orwell design in Adams.⁵

In CCA the 3-byte slot header is made up of a delimiter byte, an access control byte and a destination address byte. The access control byte is shown in Table I.

Table I. CCA access control byte

    Full/empty
    Payload priority
    Synchronous cycle continuance
    Synchronous cycle reset
    Asynchronous cycle continuance
    Asynchronous cycle reset
    Monitor
    Spare


The payload priority bit indicates whether the cell has synchronous or asynchronous priority. The continuance and reset bits for each cycle are explained below. The monitor bit is required to allow the ring controller to detect and strip full cells which have not been received after a complete slot revolution. Depending on how much delay is tolerable per node, it may be possible to derive the payload priority and the destination address from the virtual circuit number in the ATM cell header. These potential savings were not considered in the simulations for the sake of comparison.
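For illustration, the fields of Table I can be treated as single-bit flags within the access control byte. The bit positions below are an assumption made for this sketch; the paper does not specify an ordering.

    #include <stdint.h>

    /* Hypothetical bit assignments for the CCA access control byte.
       The ordering is illustrative; only the set of fields comes from Table I. */
    #define ACB_FULL              (1u << 0)   /* full/empty                      */
    #define ACB_PAYLOAD_PRIORITY  (1u << 1)   /* synchronous/asynchronous payload */
    #define ACB_SYNC_CONTINUANCE  (1u << 2)   /* synchronous cycle continuance   */
    #define ACB_SYNC_RESET        (1u << 3)   /* synchronous cycle reset         */
    #define ACB_ASYNC_CONTINUANCE (1u << 4)   /* asynchronous cycle continuance  */
    #define ACB_ASYNC_RESET       (1u << 5)   /* asynchronous cycle reset        */
    #define ACB_MONITOR           (1u << 6)   /* set and checked by the ring controller */
    /* bit 7 is spare */

    typedef uint8_t access_control_byte;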

Appendix I contains a formal description of the CCA algorithms. Informally, the access cycle mechanism for synchronous traffic works as follows. All nodes are given an allocation limit of synchronous cells which can be sent during a cycle. Whenever the controller issues a slot, it resets the synchronous cycle continuance bit. Nodes may transmit if they receive an empty slot or if they receive and strip a full slot addressed to themselves, provided that they have not exhausted their allocation and have something to send. Whenever a synchronous cell is sent an allocation counter within the node is incremented. A node sets the synchronous cycle continuance bit of each passing slot if the counter has not yet reached the allocation limit and it has something to send. When the controller receives the slot, if the continuance bit has not been set by any node, it starts a new cycle by asserting the synchronous cycle reset bit provided that it is not already resetting the cycle. When a node receives a slot in which the synchronous cycle reset bit is asserted, it resets its synchronous allocation counter.
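The controller's part of this mechanism can be condensed as follows (a sketch only, reusing the illustrative bit masks defined above; the full algorithm is Figure 10 in Appendix I).

    /* Controller handling of the synchronous cycle bits in each passing slot,
       condensed from Figure 10 of Appendix I.  State and helper names are
       illustrative.                                                           */
    static int resetting_sync_cycle = 0;

    void controller_handle_sync_bits(access_control_byte *acb)
    {
        if (*acb & ACB_SYNC_RESET) {
            /* the reset bit has completed a revolution: every node has seen it */
            *acb &= ~ACB_SYNC_RESET;
            resetting_sync_cycle = 0;
        } else if (!(*acb & ACB_SYNC_CONTINUANCE) && !resetting_sync_cycle) {
            /* no node asked to prolong the cycle, so start a new one */
            *acb |= ACB_SYNC_RESET;
            resetting_sync_cycle = 1;
        }
        /* clear the continuance bit so downstream nodes must re-assert it */
        *acb &= ~ACB_SYNC_CONTINUANCE;
    }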

The asynchronous access cycle is a buffered access cycle (BAC).⁷ From the controller's point of view this cycle works in the same way as the synchronous access cycle. From the node's point of view it is slightly different. Each node contains an insertion buffer, called the upstream buffer, which allows locally generated synchronous traffic to pre-empt asynchronous traffic already in the ring. The depth of the upstream buffer is incorporated into the asynchronous access cycle mechanism so that the node will prolong the asynchronous cycle, by setting the slot's asynchronous cycle continuance bit, while the node has asynchronous allocation and something to send or while its upstream buffer is over a threshold.

While the depth of the upstream buffer is below the threshold, the node's own asynchronous traffic has priority over the contents of the buffer. The situation is reversed once the threshold is reached or exceeded. A timeout counter is required to ensure that cells are not stranded in the upstream buffer in the case when there is an absence of further asynchronous traffic from upstream, the buffer is below the threshold, and the node is using all available passing slots for its own asynchronous traffic. The algorithms in Appendix I include the necessary actions with respect to the timeout counter. Timeouts

do not occur in the simulations, as these are based on continuous traffic arrivals. Timeouts do not allow pre-emption of synchronous traffic by asynchronous traffic.
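The node-side rule can be condensed into two small predicates (a sketch only, with illustrative names; the complete transmit algorithm, including the timeout bookkeeping, is Figure 9 in Appendix I).

    /* Nonzero when an empty slot should carry a cell from the upstream buffer
       rather than from the node's own asynchronous queue (first test of Figure 9). */
    int serve_upstream_first(int upstream_queue, int threshold, int upstream_timeout_ctr,
                             int async_allocation_ctr, int async_limit, int async_queue)
    {
        return upstream_timeout_ctr == 0
            || upstream_queue > threshold
            || (upstream_queue > 0
                && (async_allocation_ctr == async_limit || async_queue == 0));
    }

    /* Nonzero when the node should assert the asynchronous cycle continuance bit
       in a passing slot (final test of Figure 9).                                 */
    int prolong_async_cycle(int upstream_queue, int threshold,
                            int async_allocation_ctr, int async_limit, int async_queue)
    {
        return upstream_queue > threshold
            || (async_allocation_ctr < async_limit && async_queue > 0);
    }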

The maximum depth that the upstream buffer will reach at a bottleneck node is determined as follows:⁷

MaxBufferSize = N(1 - K) × Allocation + Threshold

where N is the number of nodes attempting to send beyond the bottleneck, and K is the minimum bandwidth available to asynchronous traffic at the bottleneck node, expressed as a fraction. For 256 nodes, an allocation of 20 and a threshold of 10, the maximum buffer size required at a bottleneck node in the absence of available bandwidth (K = 0) is 5130 cells in depth, or 271 890 bytes, assuming that 53-byte ATM cells are buffered. For a ring supporting a 140 Mbit/s bit rate this figure can be met with currently available video RAM products. Examples of bottleneck situations are shown in Section 4, with resulting upstream buffer depths summarized in Table IV. It is possible to increase utilization by increasing the size of the buffer threshold. This creates more of a pipeline effect at the expense of latency for lightly loaded nodes sandwiched between heavily loaded nodes.⁷ It should be noted that the length of timeouts in upper layer asynchronous protocols may determine the depth of the upstream buffer which is practical before cells are assumed lost. This will require further investigation with a working system.
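The quoted example can be reproduced directly from the formula; a minimal sketch follows, with illustrative variable names.

    #include <stdio.h>

    /* Worked example of MaxBufferSize = N(1 - K) x Allocation + Threshold
       for the figures quoted above.                                        */
    int main(void)
    {
        int    n          = 256;   /* nodes sending beyond the bottleneck          */
        double k          = 0.0;   /* fraction of bandwidth left for async traffic */
        int    allocation = 20;
        int    threshold  = 10;

        double cells = n * (1.0 - k) * allocation + threshold;
        double bytes = cells * 53.0;          /* 53-byte ATM cells are buffered */

        printf("%.0f cells, %.0f bytes\n", cells, bytes);  /* 5130 cells, 271890 bytes */
        return 0;
    }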

It is possible to implement further levels of priority by using a separate access cycle for each level. The cost of each extra cycle is 2 bits in each slot header, and, at each node, an insertion buffer and the design complexity associated with controlling the buffer and routing its data. Figure 2 shows the insertion buffers and multiplexing required to support four access cycles. Two access cycles are probably adequate for a local area network (LAN), and require only a single insertion buffer.

4. SIMULATION RESULTS

This section shows the results from simulations of Orwell and CCA under various traffic loads. The number of nodes for the simulations was 24 and the ring contained six slots. Each point on the graphs was produced from 312 500 iterations of the simulator and represents one second of real time at a bit rate of 140 Mbit/s. Synchronous traffic sources were modelled as CBR sources to avoid the necessity of simulating the automatic reset mechanism in Orwell. Each synchronous source was started at a randomly selected time.
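As a rough check on these figures (assuming each iteration of the simulator corresponds to one slot time of the 56-byte slot defined in Section 3):

    56 bytes × 8 = 448 bits per slot
    140 × 10⁶ bit/s ÷ 448 bit/slot = 312 500 slot times/s

so 312 500 iterations correspond to one second of real time at 140 Mbit/s.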

Asynchronous traffic sources were modelled as being approximately self-similar.¹²⁻¹⁴ The source at each node was produced by multiplexing eight fixed-rate on-off sub-sources, where the on-off times were drawn from a Pareto distribution with a shape parameter of 1.0 and a minimum on-off time of four slots.


Figure 2. Insertion buffers and multiplexing to support four access cycles

Theoretically, using this method self-similar traffic is produced from N multiplexed sources.¹² On the graphs which follow, the asynchronous traffic loads are averages drawn from a file which recorded the load generated in each slot time.
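For illustration, one way to draw the on and off periods for such a sub-source is by inverse-transform sampling from the Pareto distribution; the sketch below is an assumption about how this could be coded and is not the original simulator.

    #include <stdlib.h>
    #include <math.h>

    /* Draw an on (or off) period, in slot times, from a Pareto distribution
       with the stated shape parameter and minimum period, using
       inverse-transform sampling.  Illustrative only.                       */
    double pareto_period(double shape, double min_slots)
    {
        /* uniform variate strictly inside (0, 1) */
        double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        return min_slots / pow(u, 1.0 / shape);   /* x = x_min * u^(-1/shape) */
    }

    /* For the traffic model above: eight sub-sources per node, each
       alternating between on and off periods of pareto_period(1.0, 4.0). */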

4.1. Asynchronous traffic loads

In these experiments the performance of CCA was compared with Orwell and a first-come-first-served (FCFS) discipline. The allocation limit for CCA and Orwell was 25. For CCA the threshold of the upstream buffer was set at 10. Figure 3 shows results from applying a symmetrical asynchronous load. For an evenly distributed load, the advantage of CCA over Orwell is slight. The results in Figure 4 show a clearer increase in efficiency.

This graph was produced by applying an asymmetrical load, where node 17 produced one third of the total. The changing shape of the curves is due to the changing proportion of traffic delivered by node 17. The proportion from this node was reduced as the other nodes increased their contribution, as the overall load increased.

For these experiments addressing was random. Where the traffic pattern is very local in nature, an FCFS protocol will achieve a higher utilization and lower average delays than CCA, but possibly at the cost of starvation for certain nodes. The level of utilization achieved under an average load of 2.0 for the above experiments is summarized in Table II.

4.2. Integrated loads

In these experiments the performance of Orwell and CCA is compared under an integrated load consisting of 50 per cent CBR and 50 per cent asynchronous traffic. Figure 5 shows results from applying a symmetrical load. For both protocols the allocation limits were 15 for synchronous traffic and 10 for asynchronous traffic. In Orwell's normal operation the allocation limit for asynchronous traffic would have been progressively reduced as the overall load increased beyond 2.0 to maintain the delay limits for synchronous traffic. In CCA the delay experienced by one priority class was not affected by the size of the other priority's allocation limit. Delays within any priority class were only affected by the amount of bandwidth available to that class. Therefore the delays for synchronous traffic were lower than for Orwell. Asynchronous traffic reached a higher level of utilization due to the hop-by-hop method of transmission.

Figure 6 shows results from applying an asymmetrical load, where node 17 provided one third of the overall load for both traffic classes. In this experiment the allocation limits for CCA were 46 for synchronous traffic at node 17, four for synchronous traffic at all other nodes, and 20 for asynchronous traffic at all nodes. For Orwell, the allocation limits were the same as for CCA until the overall load reached approximately 1.5. After this point the asynchronous allocation limit was dropped to one for each node, to maintain low delays for synchronous traffic.


Figure 3. Symmetrical asynchronous load (curves: FCFS, Orwell-async, CCA-async)

Figure 4. Asymmetrical asynchronous load

In Orwell's normal operation such transitions would be managed by a process monitoring the reset rate, and would be less abrupt. In Figure 6, to maintain the shape of the curve, the delay values at the change point for Orwell have been omitted. The allocation limits for asynchronous traffic in this experiment were equal to maintain strict fairness. Provided that low delays for synchronous traffic were maintained, a higher allocation could be given to asynchronous traffic at node 17, thus lowering average delays for all asynchronous traffic. CCA maintained fairness within each priority class.

Table II. Utilization under asynchronous loads

    Protocol    Symmetrical load    Asymmetrical load
    Orwell      1.89                1.52
    CCA         1.96                1.73
    FCFS        2.00                1.74



Figure 7 shows results from the application of a 'starvation load'. A virtual circuit between node 0 and node 23 used 50 per cent of available bandwidth for synchronous traffic. The rest of the bandwidth was available to a symmetrical asynchronous load provided by all nodes. In this experiment the maximum delay for synchronous traffic was kept below 500 ms. In Orwell this was accomplished by keeping the total allocation for all nodes, for all traffic classes, below 156. Thus for Orwell, node 0 had an allocation of 78 for synchronous traffic and

all nodes had allocations of 3 for asynchronous traffic. For CCA the allocation limits were 15 for synchronous traffic at node 17, and 10 for asynchronous traffic at all nodes. Because the access cycles in CCA are independent, and the synchronous load was constant, the delay for synchronous traffic was also constant.

Table III summarizes the utilization achieved in the above experiments under average integrated loads of 2.0. The results show that employing separate cycles for each class of traffic is more efficient than sharing a single access cycle, under the simulated conditions.



Table III. Utilization under integrated loads

    Protocol         Symmetrical load (Figure 5)    Asymmetrical load (Figure 6)    Starvation load (Figure 7)
    CCA - sync       1.00                           1.00                            0.50
    CCA - async      0.96                           0.69                            0.98
    Orwell - sync    1.00                           1.00                            0.50
    Orwell - async   0.84                           0.39                            0.80


4.3. Upstream buffer performance

Upstream buffer performance is summarized in Table IV. The potential depth of the buffer is of most concern in CCA and is greatest at nodes producing the highest synchronous load. The formula in Section 3 gives the maximum potential buffer depth.

Table IV. Upstream buffer performance

    Load type                  Average load    Depth (cells)    Affected node
    Symmetrical (Figure 5)     1.99            17               Any node
    Asymmetrical (Figure 6)    1.98            108              17
    Starvation (Figure 7)      1.49            23               0

Under purely asynchronous loads no upstream buffer fills beyond the threshold, because no pre-emption by higher priority traffic takes place.

The extra efficiency provided by the buffer will be wasted if higher layer asynchronous protocols time out due to extra delays during congested periods. Control of call admission, through monitoring of the asynchronous cycle reset rate, could be used to avoid this problem and achieve a smaller upstream buffer size. Therefore managing the ring would require many of the ideas originating from Orwell.

5. CONCLUSIONS AND FUTURE DIRECTIONS

This paper has described the CCA MAC protocol for a unidirectional slotted ring. Under simulation the protocol achieved low delays for synchronous traffic which were unaffected by the asynchronous traffic load. Relatively high efficiencies were achieved for asynchronous traffic, due to the use of a buffered access cycle which allowed hop-by-hop transmission and created a pipeline effect. The use of separate access cycles for support of integrated traffic was found to be more efficient than the use of a single shared access cycle, particularly under asymmetrical loads.

The cost of each access cycle is 2 bits in each slot header, and an insertion buffer for each cycle apart from the highest priority one at each node. There are potential cost savings in the removal of an automatic reset scheme to ensure that asynchronous traffic does not extend cycles, and hence delays, for synchronous traffic. However, when dealing with VBR synchronous traffic within its own priority class, some form of automatic reset may still be necessary to achieve the highest efficiency.

The mechanism appears suitable for a high-speed ring-based ATM-compatible LAN supporting two basic classes of traffic. CCA may also be suitable for a MAN, or a Gb/s LAN, depending on the cost of deeper upstream buffers. That is, the size of the upstream buffer is directly related to the size of the allocation limit, which tends to increase as ring speed increases. Further simulation studies are required to see how well CCA scales. Finally, CCA can be adapted to a dual bus topology by reversing the direction of the cycle continuance and reset bits in the slot header. This will be the subject of further research.

ACKNOWLEDGEMENTS

I thank Keith Burston for reading the original manuscript, and the referees for their comments and the choice of title. Thanks also go to Vern Paxson for advice on the simulator framework. Any errors are the responsibility of the author.

APPENDIX I

With respect to both synchronous and asynchronous cycles, the actions of a node upon receipt of a slot are shown in Figure 8. After receiving the slot, the node checks to see whether it can transmit. As shown in Figure 9, it sets the continuance bit for the synchronous and/or asynchronous cycles as necessary. The algorithm used by the controller when processing a slot and managing the access cycles is shown in Figure 10.

Node_Receive() {
    if (slot is full) {
        if (destination == this node) {
            copy_cell()
            mark_slot_empty()
        } else if (cell has asynchronous priority) {
            add_cell_to_upstream_buffer_queue()
            mark_slot_empty()
        }
    }
    if (synchronous cycle reset bit is asserted) {
        zero_sync_allocation_ctr()
    }
    if (asynchronous cycle reset bit is asserted) {
        zero_async_allocation_ctr()
    }
}

Figure 8. Algorithm for node to receive

Node_Transmit() {
    if (slot_empty and sync_allocation_ctr < sync_limit and sync_queue > 0) {
        send_synchronous_cell()
        set_slot_full()
        increment_sync_allocation_ctr()
    }
    if (sync_allocation_ctr < sync_limit and sync_queue > 0) {
        assert_synchronous_cycle_continuance_bit()
    }
    if (slot_empty) {
        if (upstream_timeout_ctr == 0 or upstream_queue > threshold or
            (upstream_queue > 0 and
             (async_allocation_ctr == async_limit or async_queue == 0))) {
            send_asynchronous_cell_from_upstream_queue()
            set_slot_full()
            upstream_timeout_ctr = async_limit + 1
        } else if (async_allocation_ctr < async_limit and async_queue > 0) {
            send_asynchronous_cell_from_local_queue()
            set_slot_full()
            increment_async_allocation_ctr()
            if (upstream_queue > 0) {
                decrement_upstream_timeout_ctr()
            }
        }
    }
    if (upstream_queue > threshold or
        (async_allocation_ctr < async_limit and async_queue > 0)) {
        assert_async_cycle_continuance_bit()
    }
}

Figure 9. Algorithm for node to transmit


Controller_Process_Slot() {
    if (synchronous cycle reset bit is asserted) {
        deassert_synchronous_cycle_reset_bit()
        resetting_sync_cycle = 0
    } else if ((synchronous cycle continuance bit is unasserted) and
               not(resetting_sync_cycle)) {
        assert_synchronous_cycle_reset_bit()
        resetting_sync_cycle = 1
    }
    deassert_synchronous_cycle_continuance_bit()

    if (asynchronous cycle reset bit is asserted) {
        deassert_asynchronous_cycle_reset_bit()
        resetting_async_cycle = 0
    } else if ((asynchronous cycle continuance bit is unasserted) and
               not(resetting_async_cycle)) {
        assert_asynchronous_cycle_reset_bit()
        resetting_async_cycle = 1
    }
    deassert_asynchronous_cycle_continuance_bit()
}

Figure 10. Algorithm for controller to process slot

REFERENCES

1. ISO/IEC 8802-5 [ANSI/IEEE Std 802.5], Token Ring Access Method and Physical Layer Specifications (1995).

2. F. E. Ross, 'FDDI - a tutorial', IEEE Comm. Mag., 24, (5), 10-17 (1986).

3. K. Imai, T. Ito, H. Kasahara and N. Morita, 'ATMR: asynchronous transfer mode ring protocol', Computer Networks and ISDN Systems, 26, (6-8), 785-798 (1994).

4. R. M. Falconer and J. L. Adams, 'Orwell: a protocol for an integrated services local network', British Telecom Tech. J., 3, (4), 27-35 (1985).

5. J. L. Adams, 'Orwell', Computer Networks and ISDN Systems, 26, (6-8), 771-784 (1994).

6. A. K. Burston, 'An architecture for a broadband LAN', 2nd Int. Conf. Private Switching Systems and Networks, IEE Conference Publication No. 357, pp. 169-176, London, 1992.

7. D. S. Goodall and A. K. Burston, 'Medium access control for asynchronous traffic in the AMNET LAN', Int. J. Comm. Syst., 8, (3), 155-163 (1995).

8. D. S. Goodall and A. K. Burston, Medium Access Control for Synchronous Traffic in the AMNET LAN, Technical Report 9501, School of Computer Science and Engineering, University of New South Wales, Sydney 2052, Australia.

9. A. A. Lazar, A. T. Temple and R. Gidron, 'MAGNET II: a metropolitan area network based on asynchronous time sharing', IEEE J. Select. Areas in Comm., 8, (8), 1582-1594 (1990).

10. P. Heinzmann, H. R. Muller, D. A. Pitt and H. R. van As, 'Buffer-insertion cell-synchronized multiple access (BCMA) on a slotted ring', Proc. IFIP TC6/WG6.4 2nd Int. Conf. Local Comm. Syst.: LAN and PBX, pp. 219-243, Palma, Spain, 1991.

11. I. Rubin and H. Wu, 'FECCA - a new access algorithm for an ATM ring network with destination removal', IEEE Infocom '93, 1, (3D.3), pp. 368-375, San Francisco, USA, 1993.

12. W. E. Leland, M. S. Taqqu, W. Willinger and D. V. Wilson, 'On the self-similar nature of Ethernet traffic (extended version)', IEEE/ACM Trans. Networking, 2, (1), 1-15 (1994).

13. V. Paxson, 'Empirically derived analytic models of wide-area TCP connections', IEEE/ACM Trans. Networking, 2, (4), 316-336 (1994).

14. V. Paxson and S. Floyd, 'Wide-area traffic: the failure of Poisson modeling', Proc. Sigcomm '94, pp. 257-268, London, UK, 1994.

Author’s biography:

David Goodall received a first class honours degree in architectural science from the University of New South Wales (UNSW) in 1990. He completed the requirements for a Masters in computer science at the same university in 1993, and is currently working towards a PhD in computer science and engineering. His research interests include multimedia communication and computer I/O architecture. He is now with the Communications Unit at UNSW.