
Thesis for the Degree of Master

Two-differentiated Marking Strategies for TCP and UDP flows in a Differentiated

Services Network

by

Sunghyuck Lee

Department of Electronics Engineering Graduate School Korea University

December 2001

Advised by Professor Chulhee Kang

Thesis for the Master's Degree

Two-differentiated Marking Strategies for TCP

and UDP flows in a Differentiated Services Network

This thesis is submitted as a thesis for the degree of Master of Engineering.

December 2001

Graduate School, Korea University

Department of Electronics Engineering

Sunghyuck Lee

The examination of Sunghyuck Lee's thesis for the degree of Master of Engineering has been completed.

December 2001

Committee Chair (Seal)

Committee Member (Seal)

Committee Member (Seal)


ABSTRACT

Two-differentiated Marking Strategies for TCP and UDP flows in a Differentiated

Services Network

Sunghyuck Lee
Advised by Prof. Chulhee Kang

Dept. of Electronics Engineering, Graduate School, Korea University

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment the packet should receive inside the domain. However, because of the limited number of bits available in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks, called per-hop behaviors (PHBs), which routers use to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) and the Assured Forwarding (AF) PHBs.

The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. To ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, specifies a forwarding behavior in which packets see a very small amount of loss; it consists of four independently forwarded classes, each with two or three drop precedence levels. The idea behind AF is to preferentially drop best-effort packets, and packets that are outside of their contract, when congestion occurs.


Two crucial issues have been identified in differentiated services (diffserv) networks. One is TCP dynamics over Assured Services; the other is the interaction of TCP and UDP flows under the Assured Forwarding Per-Hop Behavior (PHB). First, TCP's complex response, primarily to packet losses, affects the Assured Services in a differentiated services network. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. In the diffserv network, however, this additive increase and multiplicative decrease make it hard to protect the reservation rate of Assured Services: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may never reach its reservation rate. We therefore argue for TCP-friendly building blocks (or modules) and fairness modules in the diffserv architecture, and propose the Two Markers System (TMS), which can properly mark packets and fairly share the bandwidth among flows according to their target sending rates. TMS has two marking modules, placed at the source and at the edge of a diffserv network. At the sources of the network, the virtual source marker plays the important roles of reducing TCP's impact on the Assured Services and of marking packets adequately. At the edge of the network, the edge-embedded marker enforces a new fairness policy based on the marking rates of flows from the sources, so-called marking rate-based fairness.

Second, the other issue identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows. The interaction of TCP and UDP under the AF PHB has become an important issue for fairness in the diffserv context. We therefore examine the unfair distribution of excess bandwidth in an over-provisioned network, as well as the unfair degradation in an under-provisioned network, for TCP and UDP flows.

Finally, we present simulation results to illustrate the effectiveness of the TMS scheme over several parameters: the Two Markers System reduces TCP's impact on the Assured Service and fairly shares the bottleneck link bandwidth of a network. We then also present simulation results to illustrate the effectiveness of the TMS scheme for the interaction of TCP and UDP: all flows in the Two Markers System share the excess bandwidth reasonably fairly and experience degradation in proportion to their target rates.


DEDICATION

To my parents, Jongwang Lee and Okhee Kim


ACKNOWLEDGEMENT

I am indebted to many individuals for the care and support given to me during my master's course studies. First of all, I would like to express my deep gratitude to Professor Chulhee Kang. As my thesis advisor, he has provided me with constant encouragement, insightful comments, and invaluable suggestions, which benefited not only the completion of this thesis but also my career for a long time to come. I also would like to thank Professors Jinwoo Park and Dooseop Eom for serving on my committee.

I am also grateful to the seniors and juniors of the Broadband Telecommunication Laboratory, especially Seungjoon Seok, who spent a lot of time with me during my thesis research. His advice and challenges have motivated me to do high-quality research. I warmly thank my colleagues in the ITRC project, Seungjin Lee and Sukmin Hong, for numerous conversations, continuous support, and encouragement. Special thanks go to Doctor Jihoon Lee, who spent a lot of time with me on improving my presentation and technical writing skills, and Sungkwan Youm, who helped me a great deal with his insights in networking and through numerous discussions. Thanks also go to my classmates Sungbum Joo and Jaehyung Ryu for their friendship and encouragement. I would also like to acknowledge Sungwoo Ryu, who made it possible for me to go through this difficult time. I would like to recognize the following colleagues: Deagun Kim, Kyengyong Kang, Kangwook Lee, Changwoo Lee, Chulwoo Kim, Joosang Youn, Junghyun Kim, Hyunsook Park, and Eunsun Park, for their friendship, enjoyable technical discussions, and advice. I am really pleased to be part of this group of talented colleagues. I am also thankful to my seniors Soohyun Cho, Coben Han, Taeho Jung, Inyong Ha, Dongkil Shin, Sukun Yoon, Yejin Song, Kisun Lee, Dongwook Seo, Kyungseop Moon, and Kiwon Hong for their advice and guidance.

This research was supported in part by Information and Telecommunication Research Center (ITRC) contracts. I am deeply grateful to the Asia-Pacific Advanced Network-Korea Consortium (APAN-KR) members for technical discussions on Differentiated Services related issues.


I cannot find words strong enough to express my gratitude to my parents, who have always loved me, believed in me, and encouraged me in my studies. This thesis could not have been completed without their presence beside me. I am also deeply grateful to my elder sister Jimin Lee, her husband Jin Kim, and my lovable niece Hyun Kim for their dedicated sacrifice, support, and encouragement.

I am especially indebted to my friend Moonhee Choi, who helped me in many ways during the final stage of this study. I am also thankful to my friends Taewan Kim, Byeongwook Bae, Yonghyun Jung, Sangmok Han, Byeongil Son, Hongsik Park, Hyoseop Lee, Sungchul Choi, Sungeun Ku, Changjoo Yang, Minsang Kim, Sukhoon Byeon, Dongchul Lee, Jaekyun Lee, Joonghyo Lee, Jepyo Na, Dukku Hwang, and Jangyeon Kim for their everlasting friendship, understanding, and encouragement. Last but not least, I would like to thank my valuable friend Jieun Ahn for her dedicated sacrifice, continuous support, understanding, and encouragement.


TABLE OF CONTENTS

ABSTRACT
DEDICATION
ACKNOWLEDGEMENT
LIST OF FIGURES
LIST OF TABLES
ABBREVIATIONS

CHAPTER 1 INTRODUCTION

CHAPTER 2 BACKGROUND
2.1 TCP and Active Queue Management
2.2 Differentiated Services Architecture
2.2.1 Concepts in diffserv
2.2.2 Diffserv network
2.2.3 Service Realizations
2.3 Related Work

CHAPTER 3 TWO MARKERS SYSTEM
3.1 The framework of Two Markers System
3.2 The crux of fairness strategy
3.3 Adequate-Marking strategy: Two Markers System-I algorithm
3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm
3.5 The issue of re-marking
3.6 The issue of interaction between TCP and UDP flows

CHAPTER 4 SIMULATION AND ANALYSIS
4.1 TCP dynamics over Assured Services in a diffserv network
4.2 Interaction of TCP and UDP flows

CHAPTER 5 CONCLUSION

REFERENCE


LIST OF FIGURES

Fig. 1 Throughput of a Single TCP Transfer
Fig. 2 TCP/IP Headers
Fig. 3 The marking/dropping behavior of RED
Fig. 4 Mapping of DSCP-marked streams to PHBs and aggregate routing
Fig. 5 A non-DS-capable and two DS-capable domains connected to a backbone network
Fig. 6 Type of Service field in IPv4
Fig. 7 A traffic conditioner and its functional components
Fig. 8 Usage of the DS field in the Class Selector PHB
Fig. 9 The "Two Markers System" framework
Fig. 10 The design of the Two Markers System architecture
Fig. 11 Program code 1: TMS_I algorithm (cwnd open)
Fig. 12 Program code 2: TMS_I algorithm (cwnd close)
Fig. 13 Program code 3: TMS_II algorithm
Fig. 14 Simulation topology I using the Two Markers System
Fig. 15 Adequate-marking algorithm
Fig. 16 Marking packets through the scaling factor
Fig. 17 Throughput for the marking rate-based fairness strategy
Fig. 18 Simulation topology II using the Two Markers System
Fig. 19 Sharing excess bandwidth according to the link utilization
Fig. 20 Throughput according to network conditions


LIST OF TABLES

Table 1 The four AF classes and the related three drop precedence values
Table 2 TCA specification for the BBE service
Table 3 TCA specification for the Leased Line service
Table 4 RED parameter settings in the TMS
Table 5 The average throughput of aggregate flows as RTT increases


ABBREVIATIONS

AF: Assured Forwarding, a PHB group defined in [4]
ATM: Asynchronous Transfer Mode
BA: Behaviour Aggregate; separate traffic flows receiving the same treatment in a router
DS: diffserv, Differentiated Services
DSCP: Differentiated Services Codepoint; value of a field in IP packets in DS-capable domains, based on which the forwarding behaviour is applied
EF: Expedited Forwarding, a PHB defined in [20]
IETF: Internet Engineering Task Force
IP: Internet Protocol
IPv4: Internet Protocol version 4, the current widely used IP version
IPv6: Internet Protocol version 6, the developed future version of IP
ISP: Internet Service Provider
MF: Multi-Field
MPLS: Multiprotocol Label Switching
NE: Network Element, e.g. a router
PHB: Per-Hop Behaviour; forwarding behaviour applied to each IP packet in routers in DS-capable domains
QoS: Quality of Service
RFC: Request For Comments
RSVP: Resource Reservation Protocol
SLA: Service Level Agreement; an agreement of a service between a provider and a customer
SNMP: Simple Network Management Protocol
TC: Traffic Conditioner
TCA: Traffic Conditioning Agreement; a part of the SLA that describes the agreed service in detail from the technical point of view
TCP: Transmission Control Protocol
TOS: Type of Service
UDP: User Datagram Protocol
VPN: Virtual Private Network


CHAPTER 1 INTRODUCTION

The current Internet is based on the best-effort service model. In this model, the service provider allocates bandwidth among all of the instantaneous customers as best it can; that is, all user packets compete equally for network resources. The service provider also attempts to serve all of them without making any explicit commitment as to rate or any other service quality. The best-effort model has been successful until now because the majority of the traffic on the Internet is TCP-based: the TCP end-to-end congestion control mechanism forces traffic sources to back off whenever congestion is detected in the network [8]. However, such dependence on the end systems' interaction is increasingly becoming unrealistic. Given the current best-effort model with FIFO queuing inside the network, it is relatively easy for non-adaptive sources to gain greater shares of network bandwidth and thereby starve other, well-behaved TCP sources. A greedy source, for example, may simply continue to send at the same rate when faced with congestion, while other TCP sources back off. The best-effort model is also unsuitable for applications such as real-time audio and video that require explicit bandwidth and bounded delay guarantees. Moreover, the best-effort model treats all packets equally once they have been injected into the network. Thus, it is difficult for Internet Service Providers (ISPs) to provide services that are commensurate with the expectations of consumers who are willing to pay more for a better class of service.

Over the last decade, a number of efforts have been made to provide Quality of Service in the Internet. Much of the work has focused on the end-to-end per-session reservation approach, where an application makes an end-to-end reservation before starting a session. While it makes sense to set up a reservation for a long-lasting session, this approach is not suitable for highly transaction-oriented applications such as browsing the World Wide Web (WWW): the overheads and latency involved in setting up a reservation for each session become prohibitively high. There are also concerns regarding the scalability of providing fine-grained guarantees at the per-session level in the core of the Internet.

The movement toward providing Quality-of-Service (QoS) assurance in the Internet has encouraged research in differentiated services (diffserv or DS) while the number of users keeps growing. (The journal model is Lecture Notes in Computer Science.) Diffserv is a suitable solution for Quality of Service provisioning because it scales well with an increasing number of network users and it does not alter the current Internet paradigm much [11]. In contrast to the per-session reservation approach, the differentiated services scheme allows service providers to offer different levels of service to a few classes of aggregated traffic flows. An ISP, for example, may offer two levels of service: a premium service for customers who are willing to pay more, and best-effort service at a lower price. The differentiated services approach attempts to allocate bandwidth for aggregated flows without per-session signaling or per-session state in the network core, and to offer better services to users willing to pay more under long-term service contracts.

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment the packet should receive inside the domain. However, because of the limited number of bits available in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks, called per-hop behaviors (PHBs), that routers use to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) [20] and the Assured Forwarding (AF) [4] PHBs. The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. To ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, specifies a forwarding behavior in which packets see a very small amount of loss; it consists of four independently forwarded classes, each with two or three drop precedence levels. The idea behind AF is to preferentially drop best-effort packets, and packets that are outside of their contract, when congestion occurs.

In this thesis we consider a form of better-than-best-effort service called the "Assured Service." The Assured Service follows expected capacity profiles that are statistically provisioned. Packets are treated preferentially according to the dropping probability applied to the best-effort queue. The assurance of service comes from the expectation that the traffic is unlikely to be dropped as long as it stays within the negotiated capacity profile. The building blocks of this service include a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in-profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise, the packet is marked OUT (out-of-profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or token bucket control module. A differentiated dropping algorithm such as RIO (Random Early Detection with IN/OUT) is provided in the core routers of the network. In particular, OUT packets are preferentially dropped upon evidence of congestion at the bottleneck, before IN packets: only after dropping all incoming OUT packets are IN packets discarded. With this dropping policy, the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.
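The token-bucket marking just described can be sketched in a few lines. This is a minimal illustrative model, not the thesis's marker: the class and parameter names are ours, and real markers must also handle clock granularity and byte/packet accounting.

```python
class TokenBucketMarker:
    """Marks packets IN (in-profile) or OUT (out-of-profile) against a
    contracted rate, in the spirit of the Assured Service edge marker
    described above. Names and units are illustrative."""

    def __init__(self, rate_bps, bucket_bytes):
        self.rate = rate_bps / 8.0    # token fill rate, bytes per second
        self.depth = bucket_bytes     # maximum bucket depth (burst size)
        self.tokens = bucket_bytes    # start with a full bucket
        self.last = 0.0               # time of the previous packet

    def mark(self, now, pkt_bytes):
        # Refill tokens for the elapsed interval, capped at the depth.
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes  # packet fits the profile
            return "IN"
        return "OUT"                  # out-of-profile: eligible for early drop
```

A flow sending faster than the contracted rate drains the bucket and starts seeing OUT marks; an idle period refills the bucket, allowing a burst of IN packets up to the bucket depth.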

In [11], the authors showed that the use of a simple token bucket marker for the above assured service results in TCP realizing only the minimum assured rate. They attributed the cause of this behaviour to TCP's complex response, primarily to packet losses. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. The multiplicative decrease (halving the congestion window) is required to avoid congestion collapse, and TCP treats a packet drop as an indication of congestion [7]. In the diffserv network, however, this additive increase and multiplicative decrease make it hard to protect the reservation rate: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may in the end never reach its reservation rate. The overall performance was therefore affected by the bursty losses experienced by the OUT packets. They concluded that it was unclear how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees. This is called TCP dynamics over the Assured Service.

In this thesis we consider the problem of TCP dynamics and the differentiated drop policies of the network in realizing the reserved throughputs, and we propose a newly modified scheme, called the Two Markers System, for improving the realization of target rates in a diffserv network. In addition, the issue of fairness among users is important in the diffserv context: since we expect that the differentiation in service will be based on some kind of pricing mechanism, it is necessary that the resource allocation be commensurate with how the service is priced. In this thesis we therefore also examine how fairly the proposed scheme allocates bandwidth to flows.

One of the cruxes identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows [9, 11, 14]. The interaction of TCP and UDP under the AF PHB has become an important issue for fairness in the diffserv context, because TCP flows are penalised more than UDP flows with respect to service degradation and fail to accomplish their target rates. We therefore consider the issue of interaction between TCP and UDP flows by regulating their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP's influence on the Assured Services, considers the issue of fairness between TCP and UDP at a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents results of the above algorithms in simulated environments where these traffic types coexist, and analyses the simulation results. Chapter 5 concludes our work.


CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases, this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point, the rate at which TCP sends its packets exceeds the network's capacity to deliver them; queues then build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage, called slow start, allows the source to double its congestion window, and thus its sending rate, every round-trip time. To prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion-avoidance phase, in which the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above network capacity, packets are eventually dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred, immediately cuts its sending rate in half by halving its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.
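The slow-start, congestion-avoidance, and loss-halving behavior described above can be sketched as a toy round-by-round model. This is a schematic of the window dynamics only (one step per round-trip time, windows in whole segments); the function and parameter names are ours, not real TCP code.

```python
def tcp_window_trace(ssthresh, rounds, loss_round=None):
    """Trace cwnd (in segments) over successive round-trip times:
    exponential growth below ssthresh (slow start), +1 segment per RTT
    above it (congestion avoidance), and on a loss, ssthresh is set to
    half the current window and cwnd resumes there (fast recovery)."""
    cwnd, trace = 1, []
    for rtt in range(rounds):
        trace.append(cwnd)                 # window used this round
        if loss_round is not None and rtt == loss_round:
            ssthresh = max(cwnd // 2, 2)   # multiplicative decrease
            cwnd = ssthresh                # continue from the new ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow start: double every RTT
        else:
            cwnd += 1                      # additive increase
    return trace
```

Running the model without a loss reproduces the doubling-then-linear shape of Fig. 1; injecting a loss shows the window halving and growing linearly again.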

When congestion is severe enough that packet loss cannot be inferred in this manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm: if the sender continually retransmits the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.
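The exponential back-off can be written down directly. A minimal sketch, assuming an upper cap on the timeout (a common implementation choice, not something the text specifies):

```python
def rto_backoff(initial_rto, failures, max_rto=60.0):
    """Successive retransmission timeout intervals under exponential
    back-off: the interval doubles after each failed retransmission of
    the same segment, capped at max_rto. Values are illustrative."""
    rto, intervals = initial_rto, []
    for _ in range(failures):
        intervals.append(rto)
        rto = min(rto * 2, max_rto)  # double, but never exceed the cap
    return intervals
```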

Fig. 1 Throughput of a Single TCP Transfer

Figure 1 shows a graphical picture of how TCP slow start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion, the TCP sender enters the congestion-avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP thus oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.
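Because the steady-state window oscillates between w/2 and w, its mean is 3w/4, which gives a back-of-the-envelope throughput estimate. This is our own illustrative arithmetic, not a formula from the thesis:

```python
def sawtooth_throughput(w_max, mss_bytes, rtt_s):
    """Average throughput (bytes/s) of the steady-state sawtooth described
    above: the window ramps linearly from w_max/2 to w_max segments, so the
    mean window is 0.75 * w_max segments sent per round-trip time."""
    return 0.75 * w_max * mss_bytes / rtt_s
```

For example, a peak window of 20 segments of 1000 bytes over a 100 ms round-trip time averages about 150 kB/s.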

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One such modification is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of the segments it is missing. With this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger TCP sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and the sender should thus be allowed to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The New Reno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern in which the window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion-avoidance phase so that it only performs its linear increase when the network is not congested: if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential to improve loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections multiplexed across a link.

Fig. 2 TCP/IP Headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem because a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss; in the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. To do so, the router first examines the ECN-Capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the Congestion Experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes its congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver in order to signal the receiver that it has in fact reduced its sending rate.
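The router-side decision just described, examine ECT, then either mark CE or drop, can be sketched as follows. The bit positions are illustrative of the original two-bit proposal and are our own assumption, not a statement of the exact header layout:

```python
# Hypothetical positions for the two ECN bits in the IP header's
# type-of-service/DS byte (illustrative only).
ECT = 0b10   # ECN-Capable Transport
CE  = 0b01   # Congestion Experienced

def router_congestion_action(ecn_field):
    """What a congested router does with an arriving packet: set CE and
    forward if the flow is ECN-capable, otherwise drop. A schematic of
    the behavior described in the text, not a real forwarding path."""
    if ecn_field & ECT:
        return "mark", ecn_field | CE   # signal congestion without a loss
    return "drop", ecn_field            # legacy flow: fall back to dropping
```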

In conjunction with ECN the IETF is also advocating the use of active queue management The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts in order to allow them to reduce their transmission rates before queue overflow and packet loss occurs One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27] RED maintains an exponentially weighted moving average (EWMA) of the queue length which it uses to detect congestion RED detects increases in the average queue length and uses it to determine whether or not to drop or ECN-mark a packet More specifically Figure 3 plots the markingdropping probability of RED as a function of the average queue length As the figure shows when the average queue length exceeds a minimum threshold (minth) packets are randomly dropped or marked with a given probability A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half as it would if it had detected a packet loss The probability that a packet arriving at the RED queue is either dropped or marked depends on among other things the average queue length and an initial probability parameter (maxp) As Figure 3 shows the calculated markingdropping probability is a linear function of the average queue length The probability is 0 when the average queue length is less than or equal to minth and linearly


increases to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.
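The piecewise-linear probability just described, together with the EWMA queue estimate, can be sketched as follows. The threshold and weight values here are illustrative parameters, not values taken from the thesis.

```python
# Sketch of RED's congestion estimate and marking/dropping probability:
# an EWMA of the queue length, and a probability that is 0 up to min_th,
# rises linearly to max_p at max_th, and is 1 beyond max_th.
def ewma(avg, sample, weight=0.002):
    """Exponentially weighted moving average of the queue length."""
    return (1 - weight) * avg + weight * sample

def red_probability(avg_q, min_th=5, max_th=15, max_p=0.1):
    if avg_q <= min_th:
        return 0.0               # below the minimum threshold: never mark
    if avg_q >= max_th:
        return 1.0               # above the maximum threshold: always drop/mark
    return max_p * (avg_q - min_th) / (max_th - min_th)

print(red_probability(5))    # 0.0
print(red_probability(10))   # 0.05 (halfway between the thresholds)
print(red_probability(20))   # 1.0
```

In a real RED queue the per-packet probability is further adjusted by a count of packets since the last mark, which is omitted here for brevity.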

Fig 3 The marking/dropping behavior of RED

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and in RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour wished for. The forwarding behaviours are applied hop-by-hop, based on DS codepoints


(DSCPs), and on a per-packet basis, to aggregates of traffic streams in a DS-capable network's nodes. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner, which specifies the service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. Different types of service will not be standardized within diffserv, but rather the building blocks behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig 4 Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB


definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The Service Level Agreements (SLAs) formed specify the forwarding service a customer's traffic should receive. A customer in this context refers to e.g. an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules, which are applied to traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. A traffic conditioner is an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs don't solely define services, but they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain; the latter handle traffic leaving the domain. In practice the


division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig 5 shows three domains which are interconnected with a backbone network. Two of the domains are DS-capable, i.e. the interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services but rather the best-effort service, and possibly other service provisioning approaches.

Fig 5 A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig 5, are supposed to employ uniform


sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the case where the network is non-DS-capable.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The backbone DS router presented here always polices incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported in the downstream domain.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2 and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view and they are technically realizable. When speaking of the future of


data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are the profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (the policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the


Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and the queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling considering Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

Bits 0-5 of the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
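The field layout of Fig 6 can be made concrete with a couple of bit operations. Following the figure's numbering, bit 0 is the most significant bit of the octet, so the precedence field is the top three bits and the DSCP the top six.

```python
# Extracting the TOS-octet fields as numbered in Fig 6 (bit 0 = most
# significant): precedence = bits 0-2, DSCP = bits 0-5; the remaining two
# bits are currently unused by diffserv and are ignored.
def precedence(tos):
    return (tos >> 5) & 0x07

def dscp(tos):
    return (tos >> 2) & 0x3F

tos = 0b10111000          # the recommended EF codepoint 101110, then two unused bits
print(format(dscp(tos), "06b"))  # 101110
print(precedence(tos))           # 5
```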

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but this is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is the part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e. service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets


based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
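The distinction between the two classifier types can be sketched as follows. The dict-based packet representation and the rule format are assumptions made for illustration; real classifiers operate on raw header fields.

```python
# Sketch of the two diffserv classifier types: BA matches on the DSCP only,
# MF matches on several header fields. First matching MF rule wins.
def ba_classify(packet, dscp_to_aggregate):
    return dscp_to_aggregate.get(packet["dscp"], "best-effort")

def mf_classify(packet, rules):
    """rules: list of (match-dict, aggregate-name) pairs."""
    for match, aggregate in rules:
        if all(packet.get(field) == value for field, value in match.items()):
            return aggregate
    return "best-effort"

pkt = {"dscp": 46, "src": "10.0.0.1", "dst": "10.0.0.2", "dst_port": 5060}
print(ba_classify(pkt, {46: "EF"}))                                           # EF
print(mf_classify(pkt, [({"src": "10.0.0.1", "dst_port": 5060}, "voice")]))   # voice
```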

Fig 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
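One common way to implement the meter that feeds the marker/dropper is a token bucket, sketched below. The token bucket is one possible realization, not something the thesis prescribes; the rate and bucket size are illustrative.

```python
# A token-bucket meter: tokens accumulate at the profile rate up to the
# bucket depth; a packet is in profile if enough tokens are available.
# The marker/dropper downstream acts on the in/out-of-profile verdict.
class TokenBucketMeter:
    def __init__(self, rate_bps, bucket_bytes):
        self.rate = rate_bps / 8.0       # token fill rate in bytes/second
        self.depth = bucket_bytes
        self.tokens = bucket_bytes       # bucket starts full
        self.last = 0.0

    def conforms(self, size_bytes, now):
        """Refill tokens for the elapsed time, then test the packet."""
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True                  # in profile
        return False                     # out of profile: re-mark, delay or drop

meter = TokenBucketMeter(rate_bps=8000, bucket_bytes=1500)  # 1 kB/s profile
print(meter.conforms(1000, now=0.0))   # True: the bucket starts full
print(meter.conforms(1000, now=0.1))   # False: only ~600 tokens remain
```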

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic into conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic can't be assumed to conform to the TCA; rather, policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore, a limited number of conditioning rules (i.e. some, but not all, of the TC components in Fig 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers receive the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioral characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are meant to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized ones, there may exist locally defined PHBs in a network. These PHBs may be e.g. experimental, or they may serve a local service, and traffic mapped to them is kept within the originating network. As the


available space in the TOS field (see Fig 6) is limited, and room is left for future usage, the codepoint mappings can be made freely by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on e.g. prioritizing one over another. PHB specifications themselves give behavioral characteristics instead of implementation guidelines. This leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented in this chapter. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when implemented, the specifications must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility, and the Default PHB exists for that reason: its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one that is used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that doesn't conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as receiving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to


ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig 8 Usage of DS-field in Class Selector PHB
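The eight Class Selector codepoints of Fig 8 follow directly from the xxx000 pattern: each is the three precedence bits followed by three zero bits.

```python
# Enumerating the eight Class Selector codepoints (bit patterns xxx000):
# the precedence value occupies the top three bits of the 6-bit DSCP.
class_selectors = [p << 3 for p in range(8)]
print([format(c, "06b") for c in class_selectors])
# ['000000', '001000', '010000', '011000', '100000', '101000', '110000', '111000']
```

Note that the first entry, 000000, coincides with the Default PHB codepoint, consistent with the compatibility requirement stated above.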

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets


that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets in each class are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(1) how much forwarding resources have been allocated to the AF class that the packet belongs to; packets in one class are forwarded independently from the others and within the service rate (bandwidth) that has been configured for the class,

(2) the current load of the AF class, and

(3) in case of congestion within the class, the drop precedence of the packet; packets with a higher drop precedence value are preferentially discarded.

Table 1 The four AF classes and the related three drop precedence values

                             Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)      001010    010010    011010    100010
Medium Drop Precedence (2)   001100    010100    011100    100100
High Drop Precedence (3)     001110    010110    011110    100110

AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
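The codepoints in Table 1 follow a regular pattern, which makes the AFnm naming easy to compute: the class number occupies the top three bits, the drop precedence the next two, and the last bit is zero.

```python
# Encoding the AF codepoints of Table 1: DSCP(AFnm) = (n << 3) | (m << 1),
# i.e. class number n in the upper three bits, drop precedence m in the
# next two, and a trailing zero bit.
def af_dscp(n, m):
    assert 1 <= n <= 4 and 1 <= m <= 3
    return (n << 3) | (m << 1)

print(format(af_dscp(1, 1), "06b"))  # 001010, matching AF11 in Table 1
print(format(af_dscp(4, 3), "06b"))  # 100110, matching AF43
```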

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are e.g. a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video, or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues that traffic experiences while transiting the


network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed a rate that has been specified for the service the packets belong to. It is also required in the EF PHB specification that a network administrator must be able to configure this rate in diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.
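The EF invariant just stated (arrival rate below departure rate at every node, hence no persistent queue) can be checked with a small backlog calculation. The rates and interval lengths below are illustrative.

```python
# A small check of the EF condition: if the aggregate's arrival rate stays
# below its configured departure rate at a node, no backlog (and hence no
# queueing delay or jitter) accumulates.
def queue_after(intervals, departure_bps):
    """intervals: list of (seconds, arrival_bps); returns backlog in bits."""
    backlog = 0.0
    for seconds, arrival_bps in intervals:
        backlog = max(0.0, backlog + seconds * (arrival_bps - departure_bps))
    return backlog

# Arrival always below the 500 kbps departure rate: backlog stays at zero.
print(queue_after([(1, 400_000), (1, 450_000)], 500_000))  # 0.0
# A burst above it leaves a backlog, which the boundary conditioners must prevent.
print(queue_after([(1, 600_000)], 500_000))                # 100000.0
```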

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

There are some things common to both examples. First of all, policers need to be

configured at traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus provides reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2 TCA specification for BBE service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark   1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritizing of AF13 over AF11 is done, is up to the provider.
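The conditioner of Table 2 can be sketched as a rate check followed by marking. Using a plain rate measurement over a window is an assumption for illustration; Table 2 does not prescribe the metering algorithm.

```python
# Sketch of the BBE conditioner of Table 2: traffic within the 1 Mbps
# profile keeps the AF13 mark, excess traffic is re-marked to AF11.
AF13, AF11 = 0b001110, 0b001010   # codepoints from Table 1
PROFILE_BPS = 1_000_000           # the 1 Mbps service rate of Table 2

def bbe_mark(bits_sent_in_window, window_seconds):
    rate = bits_sent_in_window / window_seconds
    return AF13 if rate <= PROFILE_BPS else AF11

print(format(bbe_mark(900_000, 1.0), "06b"))    # 001110: in profile, AF13
print(format(bbe_mark(1_500_000, 1.0), "06b"))  # 001010: excess, re-marked AF11
```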

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3 TCA specifications for Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and, respectively, point A to point C. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both of the egress points B and C. The policers are of MF type, as classification of packets is based on both the codepoint value and the destination. The routers along the paths from A to B and from A to C


need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.
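The ingress-point-A policers of Table 3 can be sketched as follows. The measured rate passed in is assumed to come from a meter; the verdict strings are illustrative.

```python
# Sketch of the MF policers of Table 3 at ingress point A: classification
# on (codepoint, egress point), with excess traffic discarded.
EF = 0b101110                          # recommended EF codepoint
TCAS = {"B": 500_000, "C": 250_000}    # egress point -> service rate in bps

def police(dscp, egress, measured_bps):
    if dscp != EF or egress not in TCAS:
        return "best-effort"           # no leased-line TCA applies
    return "forward" if measured_bps <= TCAS[egress] else "discard"

print(police(EF, "B", 480_000))   # forward
print(police(EF, "C", 300_000))   # discard: above the 250 Kbps TCA
print(sum(TCAS.values()))         # 750000: the worst-case shared-router load
```

The last line reproduces the provisioning figure given above: a router carrying both leased lines must handle up to 750 Kbps.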

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies, focusing on RED-based differentiated services approaches.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focusing on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses being experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and maintain optimal spacing between IN and OUT packets. Via simulation work, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to achieve fairness in sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on the aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth; in this case, "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays


different roles in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9. The "Two Markers System" framework.

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the component of the fairness strategy, viz. the marking rate, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I determines which window (iwnd or ownd) to reduce, according to whether the dropped packets were IN-marked or OUT-marked, and so TMS_I decreases the TCP impacts on the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) keeps up the marking rate more properly than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of the network. TMS_I also adequately marks packets to realise the reservation rate of the source integrated with it, and each host pours its packets into the edge of the network. Information about the marked packets is used to allocate the bandwidth (or excess bandwidth) to the aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to properly mark packets. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of the network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of the network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (the marking information) from the sources, and the marker then re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key point of the fairness strategy is the marking rate (i.e., the information about a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows toward the total target rate of all the flows, based on the marking probabilities from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig. 10. The design of the Two Markers System architecture.

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rates. Let $R_i$ denote the reservation rate of a flow $i$, $i = 1, 2, \ldots, n$, and $C_j$ represent the capacity of a bottleneck link $j$, $j = 1, 2, \ldots, m$, in the network. The target rate of each source is the reservation rate ($R_i$) of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate ($T_i$) at the edge of the network is the reservation rate ($R_i$) plus a variable value, namely the fairness component ($X_{mi}$) applied to the excess bandwidth ($E$). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability viewpoint of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point; (ii) all traffic from ingress point S to a set of egress points; (iii) all traffic between ingress point S and egress point D; (iv) all traffic between any ingress point and egress point D.

The capacity of the network in the first three cases of service scope above is $C = \sum_{j=1}^{m} C_j$, where $j$ denotes a bottleneck link of the network. Therefore the excess bandwidth at these bottleneck links is given by

$$E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i \qquad (1)$$

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to $m = 1$, and so the excess bandwidth is

$$E = C - R = C - \sum_{i=1}^{n} R_i \qquad (2)$$

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term ($R$): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes $R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}$, where $k$, $k = 1, 2, \ldots, l$, represents the $k$th ingress point. The excess bandwidth is then given by

$$E = C - R = C - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik} \qquad (3)$$


where the $l$th ingress point has to be aware of the reservation rates of as many as $l-1$ other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore the Two Markers System can be implemented with various SLA scopes, but we assume that the scope of a service in this paper includes the first three cases, excluding the last, for the reason above. A flow's target rate at the edge of the network is given by

$$T_i = R_i + X_{mi} \times E = R_i + X_{mi}\Bigl(C - \sum_{i=1}^{n} R_i\Bigr) \qquad (4)$$

where $X_{mi}$ is proportional to the probability of marking from a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of $X_{mi}$ is as follows. First, the marking probability of each flow $i$ is the number of marked packets ($N_{mi}$) divided by the throughput ($Thr_i$) or, equivalently, the In-profile window divided by the congestion window:

$$m_i = \frac{N_{mi}}{Thr_i} = \frac{iwnd}{cwnd} \qquad (5)$$

where $iwnd$ and $cwnd$ represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate $E[m_i]$ of all the flows at the edge of the network is

$$E[m_i] = \frac{1}{n}\sum_{i=1}^{n} m_i = \frac{1}{n}\sum_{i=1}^{n}\frac{iwnd_i}{cwnd_i} = \frac{1}{n}\sum_{i=1}^{n} Mark_{p,i} \qquad (6)$$

where $Mark_p$ also represents the probability of marking in the TMS_I algorithm of Section 3.3. Finally, $X_{mi}$, given in (7), is obtained as the ratio between (5) and (6); that is, $X_{mi}$ represents a relative value against the datum point $E[m_i]$, and it varies from 0 to 1 depending upon $m_i$ and $E[m_i]$.

Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
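The bookkeeping of expressions (4)-(7) can be sketched in a few lines of Python. This is a minimal illustration under a single-bottleneck scope, not the thesis implementation; all function and variable names here are ours.

```python
def edge_target_rates(iwnds, cwnds, reservations, capacity):
    """Split the excess bandwidth E = C - sum(Ri) among flows in
    proportion to their marking rates m_i = iwnd/cwnd (Eqs. 4-7)."""
    m = [i / c for i, c in zip(iwnds, cwnds)]      # Eq. (5): per-flow marking rate
    total_m = sum(m)                               # n * E[m_i] from Eq. (6)
    x = [mi / total_m for mi in m]                 # Eq. (7): fairness component
    excess = capacity - sum(reservations)          # Eq. (2): excess bandwidth E
    return [r + xi * excess                        # Eq. (4): edge target rate Ti
            for r, xi in zip(reservations, x)]

# Two flows reserve 3 Mbps each on an 8 Mbps link; the flow marking
# twice as aggressively is handed twice the share of the 2 Mbps excess.
targets = edge_target_rates([2.0, 4.0], [4.0, 4.0], [3.0, 3.0], 8.0)
```

Note that the target rates always sum to the link capacity, since the components $X_{mi}$ sum to 1.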

3.3 Adequate-Marking strategy: the Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We improve the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate as high as the bandwidth the source wants to have guaranteed.

$$X_{mi} = \frac{m_i}{n\,E[m_i]} = \frac{m_i}{\sum_{i=1}^{n} m_i} \qquad (7)$$


Fig. 11. Program code 1: TMS_I algorithm (cwnd open).

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking ($Mark_p(init)$)? It is generally determined by the ratio between the measured throughput ($Thr$) and the target throughput ($T_i = R_i$), as given in (8).

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)  iwnd += Markp
        else                   iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)  ownd += (1 - Markp)
        else                   ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)  iwnd -= (1 - Markp)
            else                   iwnd -= Markp
        } else {
            ownd = cwnd
        }
        if (ownd < ossthresh)  ownd += 1
        else                   ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd
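For readers who prefer a runnable form, the listing above can be transcribed into Python roughly as follows. This is our sketch of Program code 1, with window sizes kept as floats; it makes no claim beyond the listing itself.

```python
def tms1_on_ack(cwnd, markp, thr, r_i, issthresh, ossthresh):
    """One update of the two-window TCP of TMS_I when new data is ACKed
    (a sketch of Program code 1)."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if thr < r_i:                      # below the reservation: grow iwnd faster
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                              # reservation met: release the IN share
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd                 # the two windows still sum to cwnd
    markp = iwnd / cwnd                # marking probability = IN fraction
    return cwnd, markp
```

For example, a source at half its reservation with Markp = 0.5 grows both windows by 0.5 on one ACK, leaving the marking probability unchanged.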


Here $Mark_p(init)$, given in (8), represents the initial marking rate that the user needs in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I proportionally marks packets according to whether the throughput ($Thr$) measured at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed. Otherwise, it increases the ownd size so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window grows above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of Program code 1 describes the fairness strategy for sharing the excess bandwidth among the other sources' TCP traffic and best-effort traffic, and also considers a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of the network when the connection is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) keeps up the marking rate more properly than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so a source, mistaking this for an acute congestion

$$Mark_p(init) = 1 - \frac{Thr}{R_i} \qquad (8)$$


situation, frequently adjusts its sending rate. That is, if losses of IN-marked packets occur frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes a packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.
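As a small worked example of (8), under our reconstruction of the formula and with the result clamped to [0, 1] (an assumption the text does not state):

```python
def initial_markp(reservation, measured_thr):
    """Initial marking probability, Eq. (8): Markp(init) = 1 - Thr/Ri,
    clamped to [0, 1] as a safety assumption of this sketch."""
    return min(1.0, max(0.0, 1.0 - measured_thr / reservation))

# A source measuring half of its 3 Mbps reservation starts by
# IN-marking half of its window.
p = initial_markp(3.0, 1.5)
```

A source that measures no throughput yet thus starts with everything IN-marked, and one already past its reservation starts with nothing IN-marked.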

Fig. 12. Program code 2: TMS_I algorithm (cwnd close).

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd/2
        ownd = ossthresh
    } else {
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

    iwnd: In-profile window            ownd: Out-of-profile window
    cwnd: congestion window            Markp: the probability of marking
    Thr: throughput of a flow          Ri: reservation rate of the ith flow
    Ti: target rate of the ith flow    issthresh: threshold of iwnd
    ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT-marked packet loss occurs; this indicates temporary congestion of the network. A loss of IN-marked packets, on the other hand, indicates acute congestion in the network, and then the congestion window is halved. Therefore the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which lacks a mechanism for distinguishing how to reduce the congestion window.
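The reduction rules can likewise be sketched in Python. This is our transcription of Program code 2; leaving issthresh untouched on an OUT-marked loss is an assumption, since the listing does not set it in that branch.

```python
def tms1_on_triple_dupack(cwnd, markp, out_marked_loss, issthresh, ossthresh):
    """Window reduction on the third duplicate ACK (sketch of Program code 2)."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if out_marked_loss:                # temporary congestion: halve ownd only
        ossthresh = ownd / 2
        ownd = ossthresh
        cwnd = iwnd + ownd
    else:                              # IN-marked loss: acute, halve cwnd
        cwnd = cwnd / 2
        issthresh = markp * cwnd
        ossthresh = cwnd - issthresh
    return cwnd, issthresh, ossthresh
```

The contrast is visible directly: an OUT loss shrinks cwnd only by half of the OUT share, while an IN loss halves cwnd as plain TCP Reno would.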

The goals of TMS_I are to provide a modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking ($Mark_p$) that is needed to determine a new datum point ($X_{mi}$) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point in the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that carries per-user information is able to alleviate the unfairness introduced within aggregated flows by per-aggregation marking.

3.4 Marking rate-based fairness strategy: the Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources, in order to fairly share the bottleneck bandwidth. In this case it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II should measure the probability of marking and compute the expected value of the marking rates, which are used in the fairness component as stated in Section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates ($T_i$). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed, from the sources.

We assume that an edge router receives the information about the reservation rate of which each user wants to be assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall expression (4). As stated in Section 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the condition of the network is over-provisioned; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents the increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy that is elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, that is, the average marking rate $E[m_i]$, the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and the congestion situation into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, and the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size (or ownd size) at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of Program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After deciding about congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor. In this case the fairness factor α is computed as


$$\alpha = 1 - \frac{Thr_i}{T_i}$$

Otherwise, it decreases the marking rate by α to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm.

if (ownd(N) != ownd(N-1)/2) {
    for (i = 1 to n) {
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
    }
} else if (iwnd(N) != iwnd(N-1)/2) {
    exit
} else {                       /* iwnd(N) == iwnd(N-1)/2 */
    for (i = 1 to n) {
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit
    }
}

    Mi: marking rate of the ith flow     Ti: target rate of the ith flow
    Ri: reservation rate of the ith flow E: excess bandwidth
    ownd(N): newly measured OUT window   ownd(N-1): OUT window measured at time N-1
    iwnd(N): newly measured IN window    iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has happened in the network but is not serious. The else if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (the marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and the congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to achieve its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, a so-called complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor is $\beta = 1 - Thr_i/R_i$. Otherwise, the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then fairly allocates bandwidth to the flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
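The decision structure of Program code 3 can be condensed into a short Python sketch. All names are ours, and applying |α| in the decrease branch is our assumption, since the listing applies the same factor in both directions.

```python
def tms2_remark(marks, thrs, reservations, x, excess,
                ownd_halved, iwnd_halved):
    """One re-marking pass of TMS_II (a sketch of Program code 3).
    marks are per-flow marking rates Mi; x are the fairness components Xmi."""
    new = list(marks)
    if not ownd_halved:                          # (i) no congestion observed
        for i in range(len(marks)):
            t = reservations[i] + x[i] * excess  # edge target rate, Eq. (4)
            alpha = 1 - thrs[i] / t              # fairness factor
            if thrs[i] < t:
                new[i] += alpha * new[i]         # under target: mark more
            else:
                new[i] -= abs(alpha) * new[i]    # over target: release share
    elif not iwnd_halved:                        # (ii) soft congestion: keep marks
        pass
    else:                                        # (iii) acute congestion
        for i in range(len(marks)):
            if thrs[i] < reservations[i]:
                beta = 1 - thrs[i] / reservations[i]   # complement factor
                new[i] += beta * new[i]
    return new
```

A flow at half its target thus gets its marking rate scaled up by 1.5, while a flow exactly at target is left unchanged.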

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets turn into OUT. These re-marked packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking this for acute congestion, will halve their iwnd sizes; TMS_I then does not alleviate the TCP dynamics over the assured service well. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve only ownd, which retards relieving the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but as IN at the source, and an OUT-IN packet is the counterpart. We assume that the drop precedence the source decides must not change, and so the strategy that TMS_II elaborates has to be deliberate. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is removed, the TCP source halves its ownd size. Next, TMS_II turns packets that were IN-marked at the sources into OUT if the congestion window has not been halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of such OUT packets by one step, and so it allows other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, and OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size; it thus tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT packets by one step.
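The precedence assignment just described can be captured by a tiny lookup; the function name and string labels here are ours.

```python
def core_color(source_mark, edge_mark):
    """Map a packet's (source mark, edge mark) pair to the TMS drop
    precedence of Section 3.5; lower DP numbers are dropped later."""
    if source_mark == "IN" and edge_mark == "IN":
        return "DP0"          # green: IN at both source and edge
    if source_mark != edge_mark:
        return "DP1"          # yellow: an IN-OUT or OUT-IN re-marked packet
    return "DP2"              # red: OUT at both, dropped first
```

The point of the middle level is precisely that a re-marked packet is neither as protected as a pure IN packet nor as expendable as a pure OUT packet.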

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indications through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows will not change and will get the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be achieved.

UDP flows in the TMS are treated as out-of-profile, putting them at a disadvantage, while TCP flows adapt to network dynamics through marking rates adjusted by the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2), but lowers the drop precedence of TCP packets below them according to the scaling factors. Therefore it tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows had almost entirely dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous chapter. The simulation was done with the network simulator ns-2 (version 2.1b8a). For the sake of simulation we used a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System.

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or is fixed below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS.

         min_th    max_th    Pmax
DP0      100       400       0.02
DP1      100       400       0.05
DP2      50        400       0.1
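To make the table concrete, the standard linear RED drop curve with these settings looks as follows in Python; this sketch omits the exponential queue averaging (weight 0.002) that the simulator applies before the curve is consulted.

```python
# Table 4 settings: (min_th, max_th, Pmax) per drop precedence.
RED_PARAMS = {"DP0": (100, 400, 0.02),
              "DP1": (100, 400, 0.05),
              "DP2": (50, 400, 0.1)}

def drop_probability(avg_q, dp):
    """Linear RED drop probability for one precedence level."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_q < min_th:
        return 0.0            # below min_th: never drop
    if avg_q >= max_th:
        return 1.0            # at or above max_th: force drop
    return p_max * (avg_q - min_th) / (max_th - min_th)
```

At any average queue length between the thresholds, DP2 packets face a strictly higher drop probability than DP1, and DP1 higher than DP0, which realises the precedence ordering of Section 3.5.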

First of all, we investigate the simulation results with the Adequate-Marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, from S1-D11 to S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s, which stop at t = 350 s. First, the interval from t = 10 s through t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps; marking is performed from the start, but only temporary marking for sharing the excess bandwidth takes place. Second, at runtime from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps each; marking in this interval is practically carried out so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220 s to t = 350 s; all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the other periods. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to the congestion remaining after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is fewer than in the case of Figure 15 (a). The lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15: two panels of marking rate (Mbps) versus simulation time (s) for flow1 through flow10: a) Time Sliding Window Two Color Marker; b) Two Markers System.]

Fig. 15. Adequate-Marking algorithm: the vertical axis represents the marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic features, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind the changes in the network load: slowly rising in response to an increased traffic load and slowly falling in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load, to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, more packets are marked; however, packet marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in Section 3.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, namely an under-subscribed network. The experiment runs from t = 0 s to 500 s. Figure 17 presents the results showing that the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate, because it receives bandwidth without a marking rate adequate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. However, in Figure 17 (b), each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Figure 16: two panels of bandwidth (Mbps) versus simulation time (s), each plotting S1 (total throughput), S2 (total throughput), and S1 (marking rate): a) Time Sliding Window Two Color Marker; b) Two Markers System.]

Fig. 16. Marking packets through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time.


[Figure 17: two panels of bandwidth (Mbps) for flow1 through flow10 across aggregate flows 1 through 6: a) Time Sliding Window Two Color Marker; b) Two Markers System.]

Fig. 17. Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput, and the horizontal axis the aggregate flow index.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the marking module at the TCP source raises the issue of modifying TCP sources in a differentiated services network.

Table 5. The average throughput of aggregate flows as RTT increases

Aggregate flow (No.)        1       2       3       4       5       6
RTT (ms)                   80     100     130     170     220     280
E[Thr] of TSW2CM (Mbps)  4.8989  4.9703  4.9183  4.9391  4.9247  4.9098
E[Thr] of TMS (Mbps)     5.2908  5.2207  5.2373  5.1944  5.2763  5.2858
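The Table 5 numbers (with decimal points restored; the extraction dropped them, so e.g. "48989" reads as 4.8989 Mbps) can be summarized with Jain's fairness index. This computation is ours, added for illustration, not part of the thesis:

```python
# Table 5 values in Mbps, decimal points restored from the extraction
tsw2cm = [4.8989, 4.9703, 4.9183, 4.9391, 4.9247, 4.9098]
tms    = [5.2908, 5.2207, 5.2373, 5.1944, 5.2763, 5.2858]

def mean(xs):
    return sum(xs) / len(xs)

def jain(xs):
    # Jain's fairness index: 1.0 means all flows receive an identical share
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))
```

Both schemes are highly fair across the RTT range (Jain index above 0.999), but every TSW2CM aggregate lands just below the 5Mbps reservation while every TMS aggregate lands just above it.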

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams, generated by infinite file transfers, originate at sources 1 through 4 and are destined to nodes 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios, an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source stays fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT, without queuing delay, of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.

Fig. 18. Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increase the number of TCP flows, each with a target rate of 1Mbps, from sources 1 to 4, while the UDP rate is held constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ = (Σi Ti / C) × 100 (%)    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 (%) to 200 (%), as shown in Figure 19, represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW = (ExBW / T) × 100 (%)    (10)

Therefore, an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we examine TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
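Equations (9) and (10) translate directly into code. This is a straightforward transcription; the function and variable names are ours:

```python
def link_utilisation(target_rates, capacity):
    # Eq. (9): rho = (sum_i T_i / C) x 100 (%)
    return 100.0 * sum(target_rates) / capacity

def excess_bw_pct(achieved, target):
    # Eq. (10): excess bandwidth received, as a percentage of the target rate T
    return 100.0 * (achieved - target) / target

# the scenario above: six 1Mbps targets on the 8Mbps and 3Mbps bottlenecks
rho_over = link_utilisation([1.0] * 6, 8.0)   # over-provisioned
rho_under = link_utilisation([1.0] * 6, 3.0)  # under-provisioned
```

A flow exactly at its target rate scores 0 (%); the 8Mbps case gives ρ = 75 (%) and the 3Mbps case ρ = 200 (%).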

[Figure: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the average excess bandwidth (%) of TCP and UDP flows against link utilization of 30-190 (%).]

Fig. 19. Sharing excess bandwidth according to the link utilization: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis signifies the link utilisation as a percentage.

However, since UDP sources send traffic into the network without any penalty, they achieve their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation UDP flows experience is a diminution of the excess bandwidth they receive. In the Two Markers System we assume the following: (i) UDP flows in TMS are only given the OUT marking (DP2-red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contracts, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at a link utilisation of ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
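The edge rule assumed in (i)-(ii) above can be sketched as follows. The function names and the simple compare-to-average decision are illustrative assumptions on our part, not the exact TMS_II program code:

```python
def edge_precedence(kind, m_i, m_avg):
    """Sketch of the edge-embedded marking rule described above
    (illustrative): UDP is always OUT (DP2-red); a TCP flow whose
    marking rate m_i exceeds the average E[m] is judged to demand
    more bandwidth and is marked IN."""
    if kind == "udp":
        return "OUT"
    return "IN" if m_i > m_avg else "OUT"

def mark_aggregate(flows):
    # flows: list of (kind, marking_rate) pairs; the threshold is the
    # average marking rate E[m_i] over the TCP flows
    tcp_rates = [m for kind, m in flows if kind == "tcp"]
    m_avg = sum(tcp_rates) / len(tcp_rates)
    return [edge_precedence(kind, m, m_avg) for kind, m in flows]
```

For example, a TCP flow marking above the average is promoted while the UDP flow stays OUT regardless of its rate.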

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows that the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps, respectively. The setting of the RTT parameters follows the strategy of subsection 4.1, and the average throughputs of all the flows under both schemes are similar to each other. First, we consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, owing to the different marking strategy applied against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to the probability of their marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network, because the UDP flows transmit constantly regardless of congestion; that is, UDP gains unfairly at the expense of TCP flows. However, Figure 20 (b) shows that in the under-provisioned network the bandwidth is distributed more fairly (about 0.5Mbps each) than in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).
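For comparison, the TSW2CM baseline used throughout these experiments can be sketched as below, following the Time Sliding Window rate estimator with probabilistic two-color marking (after RFC 2859 and the ns-2 diffserv marker; the constants and method names here are illustrative assumptions, not the thesis's implementation):

```python
class TSW2CM:
    """Sketch of a Time Sliding Window Two Color Marker: a TSW
    estimate of the arrival rate, with packets probabilistically
    marked OUT once the estimate exceeds the committed target rate."""
    def __init__(self, target_rate, win_len=1.0):
        self.target = target_rate    # committed target rate (bytes/s)
        self.win = win_len           # averaging window length (s)
        self.avg_rate = target_rate  # initial rate estimate
        self.t_front = 0.0           # time of the last packet arrival

    def mark(self, now, pkt_size, rand):
        # sliding-window average of the arrival rate
        bytes_in_win = self.avg_rate * self.win + pkt_size
        self.avg_rate = bytes_in_win / (now - self.t_front + self.win)
        self.t_front = now
        if self.avg_rate <= self.target:
            return "IN"
        # mark OUT with probability proportional to the rate excess
        p_out = (self.avg_rate - self.target) / self.avg_rate
        return "OUT" if rand < p_out else "IN"
```

As the text notes, this marker sees only the aggregate profile: the OUT probability depends solely on the measured rate versus the contract, with no knowledge of individual flows.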

[Figure: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the bandwidth (Mbps) of flows TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases.]

Fig. 20. Throughput according to network conditions: the vertical axis represents the achieved throughput and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this context, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize the user's target rate, owing to weaknesses in the TCP congestion control algorithm, while the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the over-consumption of bandwidth by UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow with respect to its target rate. First, TMS_I, using the Adequate-Marking algorithm, properly adjusts the probability of marking according to the changes of the network, so that it marks packets in accordance with the reservation rates the flows desire to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy. Consequently, it mitigates both the problem of unfairness of per-aggregate marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduces TCP impacts over the assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP had almost dominated, and achieve its target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they fairly experienced degradation in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCE

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June (1999).

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. (1998).

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. (1998).

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999).

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May (1998).

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998).

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999).

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May (2001) 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999).

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. (1998).

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö, (1999).

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998).

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990).

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. (1997).

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. (1996).

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999).

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. (1994) 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. (1994) 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," ACM/IEEE Transactions on Networking, vol. 1, no. 4, Aug. (1993) 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48.


The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. In order to ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, specifies a forwarding behavior in which packets see a very small amount of loss, and consists of four independently forwarded classes that have two or three drop precedence levels. The idea behind AF is to preferentially drop best-effort packets, and packets which are outside of their contract, when congestion occurs.


There are two cruxes of issues identified in differentiated services (diffserv) networks. One is TCP dynamics over Assured Services, and the other is the interaction of TCP and UDP flows under the Assured Forwarding Per-Hop Behavior (PHB). First, TCP's complex response, primarily to packet losses, affects the Assured Services in a differentiated services network. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. In the diffserv network, however, this additive increase and multiplicative decrease make it hard to protect the reservation rate for Assured Services: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may not reach its reservation rate. Therefore we argue for the use of TCP-friendly building blocks (or modules) and fairness modules in the diffserv architecture regarding this issue, and propose the Two Markers System (TMS), which is able to properly mark packets and fairly share the bandwidth among flows with respect to their targeted sending rates. TMS has two marking modules, placed at the source and at the edge of a diffserv network. At the sources of the network, the virtual source marker plays the important roles of reducing TCP impacts on the assured services and of marking packets adequately. At the edge of the network, the edge-embedded marker conducts a new fairness policy based on the marking rates of flows from the sources, the so-called marking rate-based fairness.

Second, the other issue identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows. The interaction of TCP and UDP under the AF PHB has become an important issue of fairness in the diffserv context. Therefore we examine the unfair distribution of excess bandwidth in an over-provisioned network, as well as the unfair degradation in an under-provisioned network, for TCP and UDP traffic.

Finally, we present simulation results to illustrate the effectiveness of the TMS scheme over several parameters; that is, the Two Markers System reduces TCP impacts over the assured service and fairly shares the bottleneck link bandwidth of a network. We then also present simulation results to illustrate the effectiveness of the TMS scheme for the TCP and UDP interaction: all the flows in the Two Markers System share the excess bandwidth somewhat fairly and experience degradation in proportion to their target rates.


DEDICATION

To my parents Jongwang Lee and Okhee Kim


ACKNOWLEDGEMENT

I am indebted to many individuals for the care and support they have given me during my master's course studies. First of all, I would like to express my deep gratitude to Professor Chulhee Kang. As my thesis advisor, he has provided me with constant encouragement, insightful comments and invaluable suggestions, which have benefited not only the completion of this thesis but also my career for a long time to come. I would also like to thank Professors Jinwoo Park and Dooseop Eom for serving on my committee.

I am also grateful to the seniors and juniors of the Broadband Telecommunication Laboratory, especially Seungjoon Seok, who spent a lot of time with me during the stages of my thesis research; his advice and challenges have motivated me to do high-quality research. I warmly thank my colleagues in the ITRC project, Seungjin Lee and Sukmin Hong, for numerous conversations, continuous support and encouragement. Special thanks go to Doctor Jihoon Lee, who spent a lot of time with me on improving my presentation and technical writing skills, and Sungkwan Youm, who helped me a great deal through his insights in networking and numerous discussions. Thanks also go to my classmates Sungbum Joo and Jaehyung Ryu for their friendship and encouragement. I would also like to acknowledge Sungwoo Ryu, who made it possible for me to go through this difficult time. I would like to recognize the following colleagues for their friendship, enjoyable technical discussions and advice: Deagun Kim, Kyengyong Kang, Kangwook Lee, Changwoo Lee, Chulwoo Kim, Joosang Youn, Junghyun Kim, Hyunsook Park and Eunsun Park. I am really pleased to be part of this group of talented colleagues. I am also thankful to my seniors Soohyun Cho, Coben Han, Taeho Jung, Inyong Ha, Dongkil Shin, Sukun Yoon, Yejin Song, Kisun Lee, Dongwook Seo, Kyungseop Moon and Kiwon Hong for their advice and guidance.

This research was supported in part by Information and Telecommunication Research Center (ITRC) contracts I am deeply grateful to Asian Pacific Advanced Network-Korea Consortium (APAN-KR) members for technical discussion on Differentiated Services related issues


I cannot find words strong enough to express my gratitude to my parents, who have always loved me, believed in me and encouraged me in my study. This thesis could not have been completed without their presence beside me. I am also deeply grateful to my elder sister Jimin Lee, her husband Jin Kim, and my lovable niece Hyun Kim for their dedicated sacrifice, support and encouragement.

I am especially indebted to my friend Moonhee Choi, who helped me in many ways during the final stage of this study. I am also thankful to my friends Taewan Kim, Byeongwook Bae, Yonghyun Jung, Sangmok Han, Byeongil Son, Hongsik Park, Hyoseop Lee, Sungchul Choi, Sungeun Ku, Changjoo Yang, Minsang Kim, Sukhoon Byeon, Dongchul Lee, Jaekyun Lee, Joonghyo Lee, Jepyo Na, Dukku Hwang and Jangyeon Kim for their everlasting friendship, understanding and encouragement. Last but not least, I would like to thank my valuable friend Jieun Ahn for her dedicated sacrifice, continuous support, understanding and encouragement.


TABLE OF CONTENTS

ABSTRACT  i
DEDICATION  iii
ACKNOWLEDGEMENT  iv
LIST OF FIGURES  viii
LIST OF TABLES  ix
ABBREVIATIONS  x

CHAPTER 1 INTRODUCTION  1

CHAPTER 2 BACKGROUND  5
  2.1 TCP and Active Queue Management  5
  2.2 Differentiated Services Architecture  10
    2.2.1 Concepts in diffserv  12
    2.2.2 Diffserv network  13
    2.2.3 Service Realizations  14
  2.3 Related Work  24

CHAPTER 3 TWO MARKERS SYSTEM  25
  3.1 The framework of Two Markers System  25
  3.2 The crux of fairness strategy  27
  3.3 Adequate-Marking strategy: Two Markers System-I algorithm  30
  3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm  34
  3.5 The issue of re-marking  37
  3.6 The issue of interaction between TCP and UDP flows  38

CHAPTER 4 SIMULATION AND ANALYSIS  40
  4.1 TCP dynamics over Assured Services in a diffserv network  40
  4.2 Interaction of TCP and UDP flows  46

CHAPTER 5 CONCLUSION  51

REFERENCE  53


LIST OF FIGURES

Fig. 1. Throughput of Single TCP Transfer  6
Fig. 2. TCP/IP Headers  8
Fig. 3. The marking/dropping behavior of RED  10
Fig. 4. Mapping of DSCP-marked streams to PHBs and aggregate routing  11
Fig. 5. A non-DS-capable and two DS-capable domains connected to a backbone network  13
Fig. 6. Type of Service field in IPv4  15
Fig. 7. A traffic conditioner and its functional components  17
Fig. 8. Usage of DS-field in Class Selector PHB  20
Fig. 9. The "Two Markers System" framework  26
Fig. 10. The design of the Two Markers System architecture  27
Fig. 11. Program code 1: TMS_I algorithm (cwnd open)  31
Fig. 12. Program code 2: TMS_I algorithm (cwnd close)  33
Fig. 13. Program code 3: TMS_II algorithm  36
Fig. 14. Simulation topology I using the Two Markers System  40
Fig. 15. Adequate-marking algorithm  42
Fig. 16. Marking Packet through scaling factor  44
Fig. 17. Throughput for marking rate-based fairness strategy  45
Fig. 18. Simulation topology II using the Two Markers System  47
Fig. 19. Sharing excess bandwidth according to the link utilization  48
Fig. 20. Throughput according to network conditions  50


LIST OF TABLES

Table 1. The four AF classes and the related three drop precedence values  21
Table 2. TCA specification for BBE service  23
Table 3. TCA specifications for Leased Line service  23
Table 4. RED parameter settings in the TMS  41
Table 5. The average throughput of aggregate flow as increment of RTT  46


ABBREVIATIONS

AF    Assured Forwarding, a PHB group defined in [4]
ATM   Asynchronous Transfer Mode
BA    Behaviour Aggregate, separate traffic flows receiving the same treatment in a router
DS    diffserv, Differentiated Services
DSCP  Differentiated Services Codepoint, value of a field in IP packets in DS-capable domains based on which the forwarding behaviour is applied
EF    Expedited Forwarding, a PHB defined in [20]
IETF  Internet Engineering Task Force
IP    Internet Protocol
IPv4  Internet Protocol version 4, the current widely used IP version
IPv6  Internet Protocol version 6, the developed future version of IP
ISP   Internet Service Provider
MF    Multi-Field
MPLS  Multiprotocol Label Switching
NE    Network Element, e.g. a router
PHB   Per-Hop Behaviour, forwarding behaviour applied to each IP packet in routers in DS-capable domains
QoS   Quality of Service
RFC   Request For Comments
RSVP  Resource Reservation Protocol
SLA   Service Level Agreement, an agreement of a service between a provider and a customer
SNMP  Simple Network Management Protocol
TC    Traffic Conditioner
TCA   Traffic Conditioning Agreement, a part of the SLA that describes the agreed service in detail from the technical point of view
TCP   Transmission Control Protocol
TOS   Type of Service
UDP   User Datagram Protocol
VPN   Virtual Private Network


CHAPTER 1 INTRODUCTION

The current Internet is based on the best-effort service model. In this model the service provider allocates bandwidth among all of its instantaneous customers as best it can; that is, all user packets compete equally for network resources. The service provider also attempts to serve all of them without making any explicit commitment as to rate or any other service quality. The best-effort model has been successful until now because the majority of the traffic on the Internet is TCP-based: the TCP end-to-end congestion control mechanism forces traffic sources to back off whenever congestion is detected in the network [8]. However, such dependence on the end systems' interaction is becoming increasingly unrealistic. Given the current best-effort model with FIFO queuing inside the network, it is relatively easy for non-adaptive sources to gain greater shares of network bandwidth and thereby starve other, well-behaved, TCP sources. A greedy source, for example, may simply continue to send at the same rate when faced with congestion, while other TCP sources back off. The best-effort model is also ill-suited to applications such as real-time audio and video that require explicit bandwidth and bounded delay guarantees. Moreover, the best-effort model treats all packets equally once they have been injected into the network. Thus it is difficult for Internet Service Providers (ISPs) to provide services that are commensurate with the expectations of consumers who are willing to pay more for a better class of service.

Over the last decade, a number of efforts have been made to provide Quality of Service in the Internet. Much of the work has focused on the end-to-end per-session reservation approach, where an application makes an end-to-end reservation before starting a session. While it makes sense to set up a reservation for a long-lasting session, this approach is not suitable for highly transaction-oriented applications such as browsing the World Wide Web (WWW): the overheads and latency involved in setting up a reservation for each session become prohibitively high. There are also concerns regarding the scalability of providing fine-grained guarantees at the per-session level in the core of the Internet.

The movement toward providing Quality-of-Service (QoS) assurance in the Internet has encouraged research in differentiated services (diffserv or DS), while the number of users keeps growing.* Diffserv is a suitable solution to Quality of Service provisioning because it scales well with an increasing number of network users and does not alter the current Internet paradigm much [11]. In contrast to the per-session reservation approach, the differentiated services scheme allows service providers to offer different levels of service to a few classes of aggregated traffic flows. An ISP, for example, may offer two levels of service: a premium service for some customers who are willing to pay more, and best-effort service at a lower price. The differentiated services approach attempts to allocate bandwidth for aggregated flows without per-session signaling or per-session state in the network core, and to offer better services to users willing to pay more, with long-term service contracts.

* The journal model is Lecture Notes in Computer Science.

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment that should be supplied to the packet inside the domain. However, because of the limited number of bits available for use in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks, called per-hop behaviors (PHBs), that are used by routers to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) [20] and the Assured Forwarding (AF) [4] PHBs. The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay; in order to ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, specifies a forwarding behavior in which packets see a very small amount of loss, and consists of four independently forwarded classes that have two or three drop precedence levels. The idea behind AF is to preferentially drop best-effort packets, and packets which are outside of their contract, when congestion occurs.
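For reference, the AF codepoints standardized in RFC 2597 encode the class and the drop precedence directly in the DSCP bits; a small helper shows the mapping (the helper itself is ours, the codepoint values are from the RFC):

```python
def af_dscp(af_class, drop_prec):
    """DSCP value for AF class x, drop precedence y (RFC 2597):
    the class occupies DSCP bits 5-3 and the drop precedence bits 2-1."""
    if not (1 <= af_class <= 4 and 1 <= drop_prec <= 3):
        raise ValueError("AF defines classes 1-4 and drop precedences 1-3")
    return (af_class << 3) | (drop_prec << 1)
```

For example, AF11 maps to DSCP 10 (binary 001010) and AF43 to DSCP 38 (binary 100110).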

In this thesis we consider a form of a better-than-best-effort service called the "Assured Service". The Assured Service follows expected capacity profiles that are statistically provisioned; packets are treated preferentially according to the dropping probability applied to the best-effort queue. The assurance of service comes from the expectation that the traffic is unlikely to be dropped as long as it stays within the negotiated capacity profile. The building blocks of this service include a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in-profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise the packet is marked OUT (out-of-profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or token bucket control module. A differentiated dropping algorithm such as RIO (Random Early Detection with IN/OUT) is provided in the core routers of the network. In particular, the OUT packets are preferentially dropped, upon evidence of congestion at the bottleneck, before the IN packets; IN packets are discarded only after all incoming OUT packets have been dropped. With this dropping policy the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.
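A minimal sketch of the RIO dropping policy just described: IN packets are judged against the average queue of IN packets with mild thresholds, while OUT packets are judged against the total average queue with aggressive thresholds, so OUT traffic is discarded first under congestion. The threshold values here are illustrative assumptions, not the thesis's RED parameter settings (Table 4):

```python
import random

def red_drop_prob(avg_q, min_th, max_th, max_p):
    # standard RED drop probability as a function of the average queue size
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def rio_drop(tag, avg_in, avg_total, params):
    """Sketch of RIO (RED with IN/OUT); parameter values illustrative."""
    if tag == "IN":
        p = red_drop_prob(avg_in, *params["in"])      # e.g. (40, 70, 0.02)
    else:
        p = red_drop_prob(avg_total, *params["out"])  # e.g. (10, 30, 0.2)
    return random.random() < p
```

Because the OUT thresholds sit well below the IN thresholds, a congested queue drops all OUT arrivals while IN arrivals still pass.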

In [11] the authors showed that the use of a simple token bucket marker for the above assured service does not result in TCP realising its minimum assured rate. They attributed this behaviour to TCP's complex response to packet losses. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. The multiplicative decrease (halving the congestion window) is required to avoid congestion collapse, and TCP treats a packet drop as an indication of congestion [7]. However, in a diffserv network this additive-increase/multiplicative-decrease behaviour makes it hard to protect the reservation rate: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may in the end never reach its reservation rate. Overall performance is therefore affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees. This problem is referred to as TCP dynamics over the Assured Service.

In this thesis we consider the problem of TCP dynamics and the differentiated drop policies of the network in realizing the reserved throughputs, and propose a modified scheme, called the Two Markers System, for improving the realisation of target rates in a diffserv network. In addition, the issue of fairness among users is important in the diffserv context: since we expect that service differentiation will be based on some kind of pricing mechanism, it is necessary that the resource allocation be commensurate with how the service is priced. In this thesis we therefore also examine how fairly the proposed scheme allocates bandwidth to flows.

One of the cruxes identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows [9, 11, 14]. The interaction between TCP and UDP under the AF PHB has become an important issue for fairness in the diffserv context, because TCP flows are penalised more than UDP flows with respect to service degradation and fail to accomplish their target rates. We therefore address the interaction between TCP and UDP flows by regulating their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP influences over the Assured Service, considers the issue of fairness between TCP and UDP on a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents results of the above algorithms in simulated environments in which these traffic types coexist, and analyses the simulation results. Chapter 5 concludes our work.


CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point the rate at which TCP sends its packets exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow-start and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants of TCP. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion-avoidance phase, in which the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above network capacity, packets are eventually dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred, immediately halves its sending rate by halving its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.

When congestion is severe enough that packet loss cannot be inferred in this manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm: if the sender continually retransmits the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes normal sending.

Fig 1 Throughput of Single TCP Transfer

Figure 1 shows graphically how TCP slow-start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion the TCP sender enters the congestion-avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state TCP thus oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.
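The window dynamics just described (slow start, congestion avoidance, fast retransmit, and timeout with exponential back-off) can be sketched as a small event-driven model. This is an illustrative simplification of Reno behaviour, not a full state machine; units are segments and all names and parameter values are hypothetical.

```python
# Sketch of TCP window evolution: exponential growth in slow start, linear
# growth in congestion avoidance, halving on a fast retransmit, and a reset
# to one segment plus RTO doubling on a retransmission timeout.

class TcpWindow:
    def __init__(self, ssthresh=16.0):
        self.cwnd = 1.0                  # congestion window, in segments
        self.ssthresh = ssthresh
        self.rto = 1.0                   # retransmission timeout, in seconds

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0             # slow start: +1 per ACK (doubles per RTT)
        else:
            self.cwnd += 1.0 / self.cwnd # congestion avoidance: +1 per RTT
        self.rto = 1.0                   # new data acknowledged: reset back-off

    def on_dup_acks(self):               # fast retransmit / fast recovery
        self.ssthresh = max(2.0, self.cwnd / 2.0)
        self.cwnd = self.ssthresh        # halve the window

    def on_timeout(self):                # severe congestion
        self.ssthresh = max(2.0, self.cwnd / 2.0)
        self.cwnd = 1.0                  # back to one segment
        self.rto *= 2.0                  # exponential back-off of the timer
```

In steady state, repeated `on_ack`/`on_dup_acks` cycles produce exactly the sawtooth between w and w/2 described above.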

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One modification that has been proposed is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of the segments it is missing. With this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger TCP sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and the sender should thus be allowed to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The New Reno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern in which the window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion-avoidance phase so that it performs its linear increase only when the network is not congested. If the round-trip times indicate an increase in delay due to queues building up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential to improve loss rates in the Internet, it is unclear how well they perform when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections multiplexed across the link.

Fig 2 TCP/IP Headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem, since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-Capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the Congestion Experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver, in order to signal the receiver that it has in fact reduced its sending rate.
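The router-side decision just described can be sketched in a few lines. This is an illustration of the two-bit scheme only; the bit values assume ECT and CE occupy the two low-order bits of the TOS/DS octet, as in the original ECN proposal, and the function name is hypothetical.

```python
# Sketch of a router's ECN decision on congestion: set CE for an ECN-capable
# flow, otherwise fall back to dropping the packet.

ECT = 0b10   # ECN-Capable Transport bit (assumed position)
CE  = 0b01   # Congestion Experienced bit (assumed position)

def on_congestion(tos_octet):
    """Return the updated TOS octet, or None if the packet must be dropped."""
    if tos_octet & ECT:
        return tos_octet | CE    # mark instead of dropping
    return None                  # flow is not ECN-capable: drop as usual
```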

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, allowing them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion: RED detects increases in the average queue length and uses them to determine whether to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length: the probability is 0 when the average queue length is less than or equal to minth, and it increases linearly to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.
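The EWMA and the probability curve plotted in Figure 3 can be written down directly. This is a sketch of the curve as described above, not a full RED implementation (it omits, e.g., the count-based probability correction); the threshold and weight values are illustrative.

```python
# Sketch of RED's two core computations: the EWMA of the queue length, and
# the marking/dropping probability as a function of that average.

def ewma(avg, sample, weight=0.002):
    """Exponentially weighted moving average of the instantaneous queue length."""
    return (1.0 - weight) * avg + weight * sample

def red_probability(avg_qlen, minth=5.0, maxth=15.0, maxp=0.1):
    """Marking/dropping probability: 0 below minth, linear up to maxp at maxth,
    and 1 (drop/mark everything) beyond maxth."""
    if avg_qlen <= minth:
        return 0.0
    if avg_qlen >= maxth:
        return 1.0
    return maxp * (avg_qlen - minth) / (maxth - minth)
```

RIO, used later in this thesis, applies two such curves: an aggressive one (smaller thresholds) to OUT packets and a lenient one to IN packets.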

Fig 3 The marking/dropping behavior of RED

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour desired. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs) and on a per-packet basis, to aggregates of traffic streams in the nodes of a DS-capable network. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies the service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be provided, a leased-line emulation for example. The different types of service will not be standardized within diffserv, but rather the implementation building blocks behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig 4 Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs are externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. Service Level Agreements (SLAs) specify the forwarding service a customer's traffic should receive. A customer in this context refers to e.g. an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules, which are applied to the traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along the paths of IP datagrams.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. A traffic conditioner is an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs do not define services by themselves; they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes: the former handle traffic entering the domain, the latter handle traffic leaving the domain. In practice the division is only logical; DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e. their interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services but rather the best-effort service, and possibly other service provisioning approaches.

Fig 5 A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. When a host, say H5 in DS-domain 1 in Fig 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the non-DS-capable case.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to that value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream down to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS-domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The backbone DS router shown always polices incoming traffic, as it cannot trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS-domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they must be feasible from the business point of view, and they must be technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level the profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (a policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide which services are available in their networks; this in turn results in situations where the possible service level in some domain does not match the service level in the originating host's domain. This will be a major issue in the future development of diffserv.

Fig 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that the deployment of diffserv does not cause serious impacts on existing applications.

Fig 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another; their usage and the queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling with respect to Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.

What traffic conditioning basically means is treating traffic differentially so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but this is not required for a network to be DS-capable; when conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is the part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e. service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, the DS field and the protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
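The difference between the two classifier types can be sketched as follows. The dictionary-based packet representation, the rule format and the default aggregate name are all illustrative assumptions, not part of any specification.

```python
# Sketch of the two classifier types: a BA classifier keys on the DSCP alone,
# while an MF classifier matches on several header fields at once.

def ba_classify(packet, table):
    """Map a packet to an aggregate by its DS codepoint only."""
    return table.get(packet["dscp"], "best-effort")

def mf_classify(packet, rules):
    """Return the aggregate of the first rule whose fields all match."""
    for fields, aggregate in rules:
        if all(packet.get(k) == v for k, v in fields.items()):
            return aggregate
    return "best-effort"

pkt = {"dscp": 46, "src": "10.0.0.1", "dst_port": 80, "proto": 6}
rules = [({"src": "10.0.0.1", "proto": 6}, "gold")]
```

A BA classifier is cheap enough for core routers, whereas the multi-field lookup is typically confined to boundary nodes, consistent with the conditioning placement discussed above.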

Fig 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.

Traffic is conditioned on either end of a boundary link. Boundary nodes may refer to the negotiated SLA to determine whether it is their responsibility to condition the ingress/egress traffic into conformance with the appropriate TCA, or whether that responsibility lies with the node at the other end of the boundary link. However, ingress traffic cannot be assumed to conform to the TCA; policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited number of conditioning rules (i.e. some, but not all, of the TC components in Fig 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides which services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers receive the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioral characteristics mentioned above; a single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set on packets that are intended to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized PHBs, there may exist locally defined PHBs in a network. These PHBs may be e.g. experimental, or they may implement a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig 6) is limited, and room is left for future usage, the codepoint mappings can be made freely by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on e.g. prioritizing one over another. PHB specifications themselves give behavioral characteristics instead of implementation guidelines, which leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented earlier. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv cannot be taken into use if it does not provide backward compatibility; the Default PHB exists for that. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e., best-effort service) gets the lowest priority compared with all other PHBs; therefore any traffic that does not conform to its profile can easily be either re-marked to the Default PHB or discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to


ensure at least some bandwidth for hosts which do not employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS field are widely used in existing networks' equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector codepoints (see Fig. 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig. 8. Usage of the DS field in the Class Selector PHB
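The eight Class Selector codepoints can be derived mechanically from the IPv4 precedence bits: the three precedence bits are kept as the upper bits of the DS field and the remaining three bits are zeroed. A minimal sketch (the helper name is illustrative, not from the thesis):

```python
def class_selector_codepoint(precedence):
    """Map an IPv4 precedence value (0-7) to its Class Selector
    codepoint bit pattern 'xxx000'."""
    if not 0 <= precedence <= 7:
        raise ValueError("precedence is a 3-bit field")
    # precedence occupies the three most significant DS-field bits
    return format(precedence << 3, "06b")

# The eight Class Selector codepoints, 000000 ... 111000:
csc = [class_selector_codepoint(p) for p in range(8)]
```

Note that precedence 0 yields 000000, consistent with the Default PHB requirement above.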

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets


that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurance and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets in each of the classes are marked with one of three drop precedence values; the customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(i) how much forwarding resources have been allocated to the AF class that the packet belongs to (packets in one class are forwarded independently of the others, within the service rate (bandwidth) that has been configured for the class);

(ii) the current load of the AF class; and

(iii) in case of congestion within the class, the drop precedence of the packet (packets with a higher drop precedence value are discarded preferentially).

Table 1. The four AF classes and the related three drop precedence values

                             Class 1   Class 2   Class 3   Class 4
Low drop precedence (1)      001010    010010    011010    100010
Medium drop precedence (2)   001100    010100    011100    100100
High drop precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in a network's nodes.
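The codepoints in Table 1 follow a regular pattern: the class number occupies the upper three bits and the drop precedence the next two, with the last bit zero. A small sketch reproducing the table (the function name is illustrative):

```python
def af_codepoint(n, m):
    """Codepoint for AFnm: class n (1-4) in the upper three bits,
    drop precedence m (1-3) in the next two bits, last bit zero,
    matching the values in Table 1."""
    assert 1 <= n <= 4 and 1 <= m <= 3
    return format((n << 3) | (m << 1), "06b")

# e.g. af_codepoint(1, 1) reproduces the AF11 entry of Table 1
table = {(n, m): af_codepoint(n, m) for n in range(1, 5) for m in range(1, 4)}
```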

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video, or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues that traffic experiences while transiting the


network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that at every transit node the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.
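The EF condition that the arrival rate stay below the configured departure rate at every node is exactly what keeps queues near zero. A toy fluid-model simulation (illustrative only, not from the thesis) shows that the backlog stays at zero when the condition holds and grows without bound when it is violated:

```python
def max_backlog(arrival_rate, departure_rate, steps=100):
    """Track the queue backlog at a node served at a fixed departure
    rate while a constant-rate aggregate arrives (simple fluid model)."""
    backlog, worst = 0.0, 0.0
    for _ in range(steps):
        # per time-step fluid balance; backlog can never go negative
        backlog = max(0.0, backlog + arrival_rate - departure_rate)
        worst = max(worst, backlog)
    return worst

# arrival < departure: no queue ever builds; arrival > departure: backlog grows
no_queue = max_backlog(90, 100)
growing = max_backlog(110, 100)
```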

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are all for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between the interface A and some egress interface B

Both examples have some things in common. First of all, policers need to be configured at the traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus enjoys reduced latency. Quantitative performance measures cannot be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for the BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with the AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritizing of AF13 over AF11 is done, is up to the provider.

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3. TCA specifications for the Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps; excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both the egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.
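The provisioning arithmetic is simple but worth making explicit: any core router that both virtual leased lines may traverse has to carry their sum. A sketch with the rates of Table 3 (the dictionary layout is illustrative):

```python
# EF service rates contracted at ingress point A (Table 3), in Kbps
leased_lines = {"A->B": 500, "A->C": 250}

# Worst case: both leased lines cross the same core router, so that
# router must be provisioned for the sum of the EF rates.
required_ef_capacity_kbps = sum(leased_lines.values())
```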

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies focused on RED-based differentiated services approaches.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service; the overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. The authors, via simulation work, presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth: 'idle' flows waste their shares while preventing 'active' flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on the individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System I, TMS_I) and at the edge (Two Markers System II, TMS_II) of a differentiated services network, and each marking module plays


a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the 'Two Markers System' in a diffserv network.

Fig. 9. The 'Two Markers System' framework

First, TMS_I carries out two main roles. One is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the summation of iwnd and ownd. Whenever congestion occurs in the network, TMS_I determines the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked, and so TMS_I decreases TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) keeps up the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used to allocate the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of the network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (the marking information) from the sources, and the marker then re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore the key point of the fairness strategy is the marking rate (i.e., information about a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows toward the total target rate of all the flows, based on the probability of marking at the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig. 10. The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and Cj represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is its reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability point of view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = Σj=1..m Cj, where j denotes a bottleneck link of the network. The excess bandwidth at the bottleneck links is then given by

    E = C − R = Σj=1..m Cj − Σi=1..n Ri                (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

    E = C − R = C1 − Σi=1..n Ri                        (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates in order to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = Σk=1..l Σi=1..n Rik, where k, k = 1, 2, ..., l, denotes the k-th ingress point. The excess bandwidth is given by

    E = C − R = C − Σk=1..l Σi=1..n Rik                (3)


where the l-th ingress point has to be aware of the reservation rates of the other l - 1 ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System can thus be implemented for various SLA scopes, but for the reason above we assume in this paper that the scope of a service covers the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

    Ti = Ri + Xmi × E = Ri + Xmi (C − Σi Ri)           (4)

where Xmi is proportional to the probability of marking at the source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of Xmi is as follows. First, the probability of marking for each flow i is the number of marked packets (Nmi) divided by the throughput (Thri), or equivalently the In-profile window divided by the congestion window:

    mi = Nmi / Thri = iwnd / cwnd                      (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[mi] of all the flows at the edge of the network is

    E[mi] = (1/n) Σi=1..n mi = (1/n) Σi=1..n (iwndi / cwndi) = (1/n) Σi=1..n Markp,i    (6)

where Markp also represents the probability of marking in the TMS_I algorithm of Section 3.3. Finally, Xmi is developed by using the ratio between (5) and (6); that is,

    Xmi = mi / (n E[mi]) = mi / Σi=1..n mi             (7)

Xmi thus represents a relative value against the datum point E[mi]; its value varies from 0 to 1 depending on mi and E[mi].

Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate up to the bandwidth the source wants to have guaranteed.


Fig. 11. Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking, Markp(init)? It is generally determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as follows.

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri)
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)
                iwnd -= 1 - Markp
            else
                iwnd -= Markp
        else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd
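Program code 1 can be transliterated into executable form. The sketch below covers one ACK-driven update, assuming (as in the listing) that the 1/2cwnd terms denote 1/(2·cwnd); the function name and the flat parameter list are illustrative:

```python
def tms1_on_ack(cwnd, markp, thr, R, issthresh, ossthresh):
    """One ACK-driven update of the two-window scheme (Program code 1).
    Returns the new cwnd and the new marking probability Markp."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if thr < R:                           # below reservation: grow the IN share
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                                 # at/above reservation: favour ownd
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd                    # the two windows always sum to cwnd
    return cwnd, iwnd / cwnd

# A source at half its reservation rate grows both windows in slow start
new_cwnd, new_markp = tms1_on_ack(10.0, 0.5, 800.0, 1000.0, 8.0, 8.0)
```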


    Markp(init) = 1 − Thr/Ri                           (8)

where Markp(init) represents the initial marking rate that the user needs in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I marks packets proportionally, according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate requires; otherwise it increases the ownd size, so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e., iwnd and ownd, they act like the single window of the traditional congestion algorithm: the summation of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a decrement of ownd to the same degree. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among the other sources' TCP traffic and best-effort traffic, and also takes into account the second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network connection is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I decreases the iwnd size appropriately, according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being dropped immoderately at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate better than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking it for an acute congestion


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet losses and determines whether a lost packet was sent as a marked or an unmarked packet, it can regulate the marking rate properly according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.

Fig. 12. Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd/2
        ownd = ossthresh
    else
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the i-th flow
Ti: target rate of the i-th flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion in the network, and then the congestion window is halved. The modified TCP congestion algorithm therefore compensates for the loss rates that are caused by the conventional congestion algorithm, which has no distinctive mechanism for reducing the congestion window.

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Markp) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point by the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that includes the user's information is able to alleviate the problem of unfairness within aggregated flows in per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to share the bottleneck bandwidth fairly. To accomplish this fairness strategy, called the marking rate-based fairness strategy, it closely cooperates with the TCP-source marking module. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which is used in the fairness component as stated in Section 3.2. Notice that fairness is defined as how well the throughputs of all the individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants guaranteed.

We assume that an edge router receives information about the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall expression (4). As stated in Section 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking at the sources if the condition of the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents the increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy that is elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, namely the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate; therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and the congestion situation into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size (or ownd size) at time N-1. We consider three situations according to the condition of the network, as follows: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After the decision on congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor. In this case the fairness factor α is computed as follows:


    α = 1 − Thr_i/Ti

Otherwise, it decreases the marking rate by α, to provide the other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)              // no congestion
    for i = 1 to n
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += alpha * Mi
        else
            Mi -= alpha * Mi
else if (iwnd(N) != iwnd(N-1)/2)         // congestion, but not acute
    exit
else                                     // iwnd(N) == iwnd(N-1)/2: acute congestion
    for i = 1 to n
        Ti = Ri
        if (Thr_i < Ti)
            Mi += beta * Mi
        else
            exit

Mi: marking rate of the i-th flow
Ti: target rate of the i-th flow
Ri: reservation rate of the i-th flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1
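Program code 3's decision logic can be made runnable; the sketch below uses the fairness factor α = 1 − Thr_i/Ti and the complement factor β = 1 − Thr_i/Ri defined in this section. Flow state is kept in a plain dict, whose field names are illustrative:

```python
def tms2_remark(flows, C, ownd_halved, iwnd_halved):
    """One pass of the TMS_II re-marking policy (Program code 3).
    flows: list of dicts with marking rate M, reservation R,
    fairness share X (the Xmi of eq. 7) and measured throughput thr."""
    E = C - sum(f["R"] for f in flows)          # excess bandwidth
    if not ownd_halved:                          # case (i): no congestion
        for f in flows:
            T = f["R"] + f["X"] * E              # edge target rate, eq. (4)
            alpha = abs(1 - f["thr"] / T)        # fairness factor
            if f["thr"] < T:
                f["M"] += alpha * f["M"]
            else:
                f["M"] -= alpha * f["M"]
    elif not iwnd_halved:                        # case (ii): congestion, not acute
        pass                                     # marking already consistent
    else:                                        # case (iii): acute congestion
        for f in flows:
            if f["thr"] < f["R"]:
                beta = 1 - f["thr"] / f["R"]     # complement factor
                f["M"] += beta * f["M"]
    return flows

# One under-target flow on an over-provisioned link gets its rate raised
out1 = tms2_remark([{"M": 0.5, "R": 1.0, "X": 1.0, "thr": 1.5}], 3.0, False, False)
# Under acute congestion, only the reservation rate is chased
out2 = tms2_remark([{"M": 0.4, "R": 1.0, "X": 1.0, "thr": 0.5}], 3.0, True, True)
```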


Next, if the ownd size at time N is equal to half of the ownd size at time N-1, while the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else-if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (the marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and the congestion is acute.

Marking mechanism for the congestion situation appears in the else clause from the tenth to the sixteenth lines Note that if the total throughput is more than the capacity C the target rate of each flow becomes its reservation rate In the under-provisioned network each flow often fails to consummate its target rate owing to the repeated reduction of sending rates through congestion control mechanism Therefore TMS_II compares the measured throughput with reservation rate in order to complement the loss rate and increases marking rate as much as β so called a complement factor if the throughput doesnrsquot realise the reservation rate of user In such instance the complement factor β is

β = 1 - Thr_i / R_i. Otherwise, the re-marking operation is not performed.
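The complete adjustment rule of Fig 13 can be rendered as a short program. The following Python sketch is illustrative only, not the thesis implementation: each flow is modelled as a dict, and the exact forms of the scaling factor α and the complement factor β are assumptions of this sketch.

```python
def tms2_adjust(flows, ownd, ownd_prev, iwnd, iwnd_prev, excess_bw):
    """Illustrative sketch of the TMS_II re-marking rule (Fig. 13).

    Each flow is a dict with: marking rate 'M', reservation rate 'R',
    measured throughput 'thr', and marking-rate share 'm'.
    """
    if ownd == ownd_prev / 2:                     # OUT drops: mild congestion
        for f in flows:
            target = f['R'] + f['m'] * excess_bw  # reservation + excess share
            alpha = abs(1.0 - f['thr'] / target)  # scaling factor (assumed form)
            if f['thr'] < target:
                f['M'] += alpha * f['M']          # claim more bandwidth
            else:
                f['M'] -= alpha * f['M']          # yield excess to other flows
    elif iwnd != iwnd_prev / 2:                   # no congestion: no re-marking
        return
    else:                                         # IN drops: acute congestion
        for f in flows:
            if f['thr'] < f['R']:                 # target falls back to R
                beta = 1.0 - f['thr'] / f['R']    # complement factor
                f['M'] += beta * f['M']
            else:
                return
```

The three branches correspond to the three cases discussed in the text: OUT packets dropped (mild congestion), neither window halved (no re-marking), and IN packets dropped (acute congestion).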

In short, sources inform an edge-embedded marking module of an adequate marking rate, and the module then fairly allocates bandwidth to flows based on the marking rate of each source. Therefore, we can say that the marking rate-based fairness strategy alleviates the problems of inefficient utilisation of the reserved bandwidth under per-flow marking and of unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-marked packets are treated by the RIO queue at the core of the network unlike what the sources anticipate. If, for example, the re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, TCP sources, mistaking it for acute congestion, will halve the iwnd size; therefore TMS_I does not alleviate TCP dynamics over assured service well. On the other hand, if the re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, TCP sources, mistaking it for temporary congestion, will halve only the ownd, which retards getting rid of the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is marked OUT at the edge of the network but was IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides must not change, and the strategy that TMS_II elaborates has to respect this. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow and green respectively. The occasion of re-marking can be divided into three situations. First, if the congestion window is not halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is dropped, the TCP source halves its ownd size. Next, TMS_II turns packets that were IN at the sources into OUT if the congestion window is not halved and the flow has already accomplished its target rate; the drop precedences then consist of IN, IN-OUT and OUT. TMS_II thus lowers the drop precedence of OUT by one step, allowing other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and the TCP source halves its congestion window when an IN-OUT packet is removed. Finally, if the congestion window is halved and the flow has not achieved its target rate yet, TMS_II re-marks OUT packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network; when an OUT-IN packet is dropped, the TCP source halves its ownd size. In this way TMS_II tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT by one step.
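The precedence scheme above can be summarised in a small lookup table. The following Python sketch is illustrative (the Packet record and function names are not part of TMS) and shows the resulting drop order at the core:

```python
from collections import namedtuple

# Illustrative packet record: the mark chosen at the source and the
# (possibly re-marked) mark applied at the edge.
Packet = namedtuple('Packet', ['src_mark', 'edge_mark'])

# Drop precedence by (source mark, edge mark); a lower DP survives longer.
DP = {
    ('IN', 'IN'): 0,    # green: dropped last
    ('IN', 'OUT'): 1,   # yellow: IN-OUT
    ('OUT', 'IN'): 1,   # yellow: OUT-IN
    ('OUT', 'OUT'): 2,  # red: dropped first
}

def drop_order(packets):
    """Sort packets so that those dropped first under congestion come
    first: OUT, then OUT-IN / IN-OUT, then IN."""
    return sorted(packets, key=lambda p: DP[(p.src_mark, p.edge_mark)],
                  reverse=True)
```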

3.6 The issue of interaction between TCP and UDP flows

In this section we consider the interaction between TCP and UDP flows over the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows won't change and will get the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it is impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile and so taken at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the view of the core of the network, the RIO queue always preferentially drops UDP packets marked as OUT (red, DP2) but lowers the drop precedence of TCP packets by the scaling factors. Therefore TMS tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows almost dominated. Consequently, TMS will compensate for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms we described in the previous section. The simulation was done with the network simulator 2 (ns-2, version 2.1b8a). For the simulation we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or kept below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity so that the bottleneck link experiences congestion. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM the routers use RED with the values of 100 packets, 400 packets and 0.02 for min_in, max_in and Pmax_in,


and 50 packets, 400 packets and 0.1 for min_out, max_out and Pmax_out. The RED parameters of the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
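For reference, the standard RED dropping curve that RIO applies per drop precedence can be sketched as follows. This is a simplified illustration using the parameter triples of Table 4; a real RIO queue maintains separate average-queue estimates per precedence level.

```python
# RED parameters (min_th, max_th, Pmax) per drop precedence, as in Table 4.
RIO_PARAMS = {
    'DP0': (100, 400, 0.02),
    'DP1': (100, 400, 0.05),
    'DP2': (50, 400, 0.1),
}

def red_drop_prob(avg_q, min_th, max_th, p_max):
    """Simplified RED dropping curve: no drops below min_th, a linear
    ramp up to p_max between the thresholds, certain drop above max_th."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return p_max * (avg_q - min_th) / (max_th - min_th)
```

With these parameters, DP2 packets start being dropped at an average queue of 50 packets while DP0 packets are untouched until 100, which realises the drop-precedence ordering described above.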

First of all, we investigate the simulation results with the Adequate-marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections from S1-D11 to S10-D20, each with a target rate of 3Mbps, and the capacity of the bottleneck link set to 20Mbps. Five flows from S1 through S5 start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s), which sink at t = 350 (s). First, the interval of t = 10 (s) through t = 110 (s) represents an over-provisioned network, and the five flows compete for the excess bandwidth of 5Mbps; marking is performed initially, but only temporarily, for sharing the excess bandwidth. Second, in the run time from t = 120 (s) to 210 (s), with soft congestion, seven flows try to achieve the bandwidth of 3Mbps ideally; marking in this interval is practically carried out so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220 (s) to t = 350 (s); all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval of t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to congestion after flow8 through flow10 sink, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is fewer than in the case of Figure 15 (a). In both graphs below, the flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking: while all the flows in Figure 15 (b) mark each packet of a flow in compliance with the changing condition of the network, those in Figure 15 (a) do not.

(figure: bandwidth (Mbps) of flow1 to flow10 versus simulation time (s), 0 to 400 s)

a) Time Sliding Window Two Color Marker

(figure: bandwidth (Mbps) of flow1 to flow10 versus simulation time (s), 0 to 400 s)

b) Two Markers System

Fig 15 Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network dynamically changes as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic features through governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate behind the changes in network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) fast in response to an increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet-marking is quickly turned off when TMS detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).
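The sluggish response of TSW2CM noted above follows from its rate estimator, which averages arrivals over a sliding window. A minimal Python sketch of the Time Sliding Window update (after RFC 2859) illustrates the decay; variable names are illustrative:

```python
def tsw_update(avg_rate, t_front, win_length, pkt_size, now):
    """Time Sliding Window rate estimate (after RFC 2859).  The new
    average is the bytes 'remembered' over win_length plus the arriving
    packet, spread over the elapsed time plus the window; averaging over
    win_length is what makes TSW2CM react slowly to sudden load changes."""
    bytes_in_win = avg_rate * win_length
    new_bytes = bytes_in_win + pkt_size
    avg_rate = new_bytes / (now - t_front + win_length)
    return avg_rate, now   # 'now' becomes the new front of the window
```

A larger win_length smooths the estimate further but delays the marker's reaction to load changes, which is exactly the behaviour observed for S1 in Figure 16 (a).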

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 3.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment was performed over the run time from t = 0 (s) to 500 (s). Figure 17 represents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in the two schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without adequate marking rates for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. However, in Figure 17 (b) each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the marking probability from the sources, all the flows are able to adapt to network dynamics.

(figure: total throughput of S1 and S2 and marking rate of S1 (Mbps) versus simulation time (s))

a) Time Sliding Window Two Color Marker

(figure: total throughput of S1 and S2 and marking rate of S1 (Mbps) versus simulation time (s))

b) Two Markers System

Fig 16 Marking packets through the scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis signifies the simulation time.


(figure: bandwidth (Mbps) of flow1 to flow10 for aggregate flows 1 to 6)

a) Time Sliding Window Two Color Marker

(figure: bandwidth (Mbps) of flow1 to flow10 for aggregate flows 1 to 6)

b) Two Markers System

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis signifies the aggregate flow index.

The Two Markers System showed improved performance for the several parameters we used in simulation. However, an implementation of TCP-source marking modules at the source raises the issue of modifying TCP sources in a differentiated services network.

Table 5 The average throughput of aggregate flow as increment of RTT

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to destinations 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to destinations 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we make use of the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy by increasing the TCP flows with a target rate of 1Mbps from source 1 to 4, while each UDP flow is constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ = (ΣTi / C) × 100 (%)    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval of ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)

Therefore, excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
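Equations (9) and (10) amount to a small computation. The following Python sketch is illustrative; it takes ExBW to be the measured throughput minus the target rate, and the sample numbers are assumptions, not experimental values:

```python
def link_utilisation(target_rates, capacity):
    """Eq. (9): aggregate target rate as a percentage of link capacity."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_pct(throughput, target):
    """Eq. (10): bandwidth above (or below) the target rate as a
    percentage of the target; 0% means the flow exactly meets its target."""
    return 100.0 * (throughput - target) / target
```

For example, six flows with 1Mbps targets on an 8Mbps link give ρ = 75%, and a flow achieving 1.5Mbps against a 1Mbps target sits at +50% excess bandwidth.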


(figure: excess bandwidth (%) of TCP (average) and UDP (average) versus link utilisation (%))

a) Time Sliding Window Two Color Marker

(figure: excess bandwidth (%) of TCP (average) and UDP (average) versus link utilisation (%))

b) Two Markers System

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis signifies the link utilisation as a percentage.

However, since UDP sources send traffic into the network without any penalty, they achieve their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state. The degradation UDP flows experience is only a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than others in order to achieve its reservation rate; the flow is therefore marked more and has a higher target rate than others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
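The threshold test on the average marking rate E[mi] described above can be sketched in a few lines; this helper is illustrative, not part of the TMS implementation:

```python
def wants_more_bandwidth(m_i, marking_rates):
    """Edge-side test sketched in the text: a flow whose marking rate
    exceeds the average marking rate E[m] over all flows is judged to be
    demanding more bandwidth to reach its reservation rate."""
    e_m = sum(marking_rates) / len(marking_rates)  # average marking rate E[m]
    return m_i > e_m
```

A flow that passes this test is marked more by the edge-embedded module and so obtains a higher effective target rate than the others.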

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the results that the average throughputs of all individual flows of the aggregated traffic realise their target rates in over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. Also, the average throughput of all the flows in both schemes is similar to each other. First, we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while TCP flows only just accomplish their target rates of 1Mbps, UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, as TMS elaborates a different marking strategy against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probability in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because UDP flows are transmitted constantly regardless of congestion; that is, UDP gains unfairly at the expense of TCP flows. However, Figure 20 (b) shows that bandwidth in the under-provisioned network is distributed more fairly (about 0.5Mbps per flow) than in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the sensitive TCP flows, as shown in Figure 20 (b).

(figure: bandwidth (Mbps) of TCP1 to TCP4 and UDP1, UDP2 in over-provisioned and under-provisioned networks)

a) Time Sliding Window Two Color Marker

(figure: bandwidth (Mbps) of TCP1 to TCP4 and UDP1, UDP2 in over-provisioned and under-provisioned networks)

b) Two Markers System

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved throughput and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and shown how the Two Markers System can apply in a variable SLA scope. In this context two important issues arose, and much research has been performed on them until now. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows due to the overeating of UDP flows.

The strongest point of the Two Markers System proposed in this paper is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow for its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the probability of marking according to the changes of the network, so it marked packets in proportion to the reservation rates the users desire to guarantee. It also notified TMS_II, called the edge-embedded marking module, of the marking rates that would be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and, when resources in the diffserv network were over-provisioned, allocated the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy. Consequently, it mitigated the problems of unfairness under per-aggregation marking and of inefficient utilisation under per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP almost dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows couldn't achieve their target rates, they fairly experienced degradation in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W Feng, D Kandlur, D Saha and K Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc ICNP'98, Austin, TX, Oct (1998) 108-117

[2] W Feng, D Kandlur, S Saha and K Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc NOSSDAV'97, St Louis, MO, May (1997) 279-290

[3] S Floyd and V Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol 3, no 4, Aug (1995) 365-386

[4] J Heinanen, F Baker, W Weiss and J Wroclawski, Assured Forwarding PHB Group, RFC 2597, June (1999)

[5] K Nichols, S Blake, F Baker and D Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec (1998)

[6] S Blake, D Black, M Carlson, E Davies, Z Wang and W Weiss, An Architecture for Differentiated Services, RFC 2475, Dec (1998)

[7] V Jacobson and M Karels, Congestion Avoidance and Control, in Proc SIGCOMM'88, Stanford, CA, Aug (1988) 314-329

[8] I Yeom and A L N Reddy, Realizing Throughput Guarantees in a Differentiated Service Network, in Proc ICMCS'99, Florence, Italy, June (1999) 372-376

[9] N Seddigh, B Nandy and P Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc GLOBECOM'99, Rio De Janeiro, Dec (1999)

[10] C Gbaguidi, H J Einsiedler, P Hurley, W Almesberger and J Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech Report SSC/1998/020, http://sscwww.epfl.ch, May (1998)

[11] J Ibanez and K Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug (1998)

[12] S H Lee, S J Seok, S J Lee, S K Youm and C H Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc QofIS'01, LNCS 2156, Coimbra, Sept (2001) 207-221

[13] I B H de A Alves, J F de Rezende and L F M de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc GLOBECOM'00, vol 1, San Francisco, Nov (2000) 445-449

[14] N Seddigh, B Nandy and P Pieda, Study of TCP and UDP Interaction for AF PHB, IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug (1999)

[15] S H Lee, S J Seok, S J Lee and C H Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc MMNS'01, LNCS 2216, Chicago, Oct (2001) 198-203

[16] A Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc HPSR'01, Dallas, May (2001) 190-196

[17] Y Bernet, J Binder, S Blake, M Carlson, B E Carpenter, S Keshav, E Davies, B Ohlman, Z Wang and W Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb (1999)

[18] S Blake, D Black, M Carlson, E Davies, Z Wang and W Weiss, An Architecture for Differentiated Services, RFC 2475, Aug (1998)

[19] M Vanhala, Differentiated Services-architecture, S-38.128 Teletekniikan erikoistyö, (1999)

[20] V Jacobson, K Nichols and K Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov (1998)

[21] V Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30txt), April (1990)

[22] V Paxson, End-to-End Internet Packet Dynamics, in Proc ACM SIGCOMM, Sept (1997)

[23] M Mathis, J Mahdavi, S Floyd and A Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct (1996)

[24] S Floyd and T Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb (1999)

[25] L S Brakmo, S W O'Malley and L L Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proceedings of ACM SIGCOMM, Oct (1994) 24-35

[26] S Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, Vol 24, No 5, Oct (1994) 10-23

[27] S Floyd and V Jacobson, Random Early Detection Gateways for Congestion Avoidance, IEEE/ACM Transactions on Networking, Vol 1, No 4, Aug (1993) 397-413

[28] A Feroz, A Rao and S Kalyanaraman, TCP-friendly Traffic Marker for IP Differentiated Services, in Proc IWQoS'2000, Pittsburgh, PA, June (2000) 35-48





The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. In order to ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, consists of four independently forwarded classes that have two or three drop preference levels. The idea behind AF is to preferentially drop best-effort packets and packets which are outside of their contract when congestion occurs.


There are two cruxes of the issues identified in differentiated services (diffserv) networks. One is TCP dynamics over Assured Services, and the other is the interaction of TCP and UDP flows under the Assured Forwarding Per-Hop Behavior (PHB). First, TCP's complex response, primarily to packet losses, affects the Assured Services in a differentiated services network. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. However, in the diffserv network this additive-increase and multiplicative-decrease behaviour makes it hard to protect the reservation rate for Assured Services: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may not reach its reservation rate. Therefore we argue for the use of TCP-friendly building blocks (or modules) and fairness modules in the diffserv architecture regarding this issue, and propose the Two Markers System (TMS), which is able to properly mark packets and fairly share the bandwidth among flows for their targeted sending rates. TMS has two marking modules that are placed at the source and at the edge of a diffserv network. At the sources of the network, the virtual source marker plays the important roles of reducing TCP impacts on the assured services and of adequately marking packets. Next, at the edge of the network, the edge-embedded marker conducts a new fairness policy based on the marking rates of flows from the sources, so called marking rate-based fairness.

Second, the other issue identified in the diffserv network is the effect of congestion-insensitive flows such as UDP when they share the same AF class with TCP flows. TCP and UDP interaction under the AF PHB has become an important issue in the fairness of the diffserv context. Therefore we examine the unfair distribution of excess bandwidth in an over-provisioned network as well as the unfair degradation in an under-provisioned network for TCP and UDP traffic.

Finally, we present simulation results to illustrate the effectiveness of the TMS scheme over several parameters: the Two Markers System reduces TCP impacts over assured service and fairly shares the bottleneck link bandwidth of a network. We then also present simulation results to illustrate the effectiveness of the TMS scheme for TCP and UDP interaction: all the flows in the Two Markers System somewhat fairly share the excess bandwidth and experience degradation in proportion to their target rates.


DEDICATION

To my parents, Jongwang Lee and Okhee Kim


ACKNOWLEDGEMENT

I am indebted to many individuals for the care and support given to me during my master's course studies. First of all, I would like to express my deep gratitude to Professor Chulhee Kang. As my thesis advisor, he has provided me constant encouragement, insightful comments, and invaluable suggestions, which benefited not only the completion of this thesis but also my career in a long time to come. I also would like to thank Professors Jinwoo Park and Dooseop Eom for serving on my committee.

I am also grateful to the seniors and the juniors of the Broadband Telecommunication Laboratory, especially Seungjoon Seok, who spent a lot of time with me during my thesis research. His advice and challenges have motivated me to do high-quality research. I warmly thank my colleagues in the ITRC project, Seungjin Lee and Sukmin Hong, for numerous conversations, continuous support, and encouragement. Special thanks go to Doctor Jihoon Lee, who spent a lot of time with me on improving my presentation and technical writing skills, and Sungkwan Youm, who helped me a great deal with his insights in networking and through numerous discussions. Thanks also go to my classmates Sungbum Joo and Jaehyung Ryu for their friendship and encouragement. I would also like to acknowledge Sungwoo Ryu, who made it possible for me to go through this difficult time. I would like to recognize the following colleagues: Deagun Kim, Kyengyong Kang, Kangwook Lee, Changwoo Lee, Chulwoo Kim, Joosang Youn, Junghyun Kim, Hyunsook Park, and Eunsun Park, for their friendship, enjoyable technical discussion, and advice. I am really pleased to be part of all these talented colleagues. I am also thankful to my seniors Soohyun Cho, Coben Han, Taeho Jung, Inyong Ha, Dongkil Shin, Sukun Yoon, Yejin Song, Kisun Lee, Dongwook Seo, Kyungseop Moon, and Kiwon Hong for their advice and guidance.

This research was supported in part by Information and Telecommunication Research Center (ITRC) contracts. I am deeply grateful to the Asia-Pacific Advanced Network-Korea Consortium (APAN-KR) members for technical discussion on Differentiated Services related issues.


I cannot find words strong enough to express my gratitude to my parents, since they have always loved me, believed in me, and encouraged me in my study. This thesis could not have been completed without their presence beside me. I am also deeply grateful to my elder sister Jimin Lee, her husband Jin Kim, and my lovable niece Hyun Kim for their dedicated sacrifice, support, and encouragement.

Especially, I am indebted to my friend Moonhee Choi, who helped me in many ways during the final stage of this study. I am also thankful to my friends Taewan Kim, Byeongwook Bae, Yonghyun Jung, Sangmok Han, Byeongil Son, Hongsik Park, Hyoseop Lee, Sungchul Choi, Sungeun Ku, Changjoo Yang, Minsang Kim, Sukhoon Byeon, Dongchul Lee, Jaekyun Lee, Joonghyo Lee, Jepyo Na, Dukku Hwang, and Jangyeon Kim for their everlasting friendship, understanding, and encouragement. Last but not least, I would like to thank my valuable friend Jieun Ahn for her dedicated sacrifice, continuous support, understanding, and encouragement.


TABLE OF CONTENTS

ABSTRACT ..... i
DEDICATION ..... iii
ACKNOWLEDGEMENT ..... iv
LIST OF FIGURES ..... viii
LIST OF TABLES ..... ix
ABBREVIATIONS ..... x
CHAPTER 1 INTRODUCTION ..... 1
CHAPTER 2 BACKGROUND ..... 5
2.1 TCP and Active Queue Management ..... 5
2.2 Differentiated Services Architecture ..... 10
2.2.1 Concepts in diffserv ..... 12
2.2.2 Diffserv network ..... 13
2.2.3 Service Realizations ..... 14
2.3 Related Work ..... 24
CHAPTER 3 TWO MARKERS SYSTEM ..... 25
3.1 The framework of Two Markers System ..... 25
3.2 The crux of fairness strategy ..... 27
3.3 Adequate-Marking strategy: Two Markers System-I algorithm ..... 30
3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm ..... 34
3.5 The issue of re-marking ..... 37
3.6 The issue of interaction between TCP and UDP flows ..... 38
CHAPTER 4 SIMULATION AND ANALYSIS ..... 40
4.1 TCP dynamics over Assured Services in a diffserv network ..... 40
4.2 Interaction of TCP and UDP flows ..... 46
CHAPTER 5 CONCLUSION ..... 51
REFERENCE ..... 53


LIST OF FIGURES

Fig 1 Throughput of Single TCP Transfer ..... 6
Fig 2 TCP/IP Headers ..... 8
Fig 3 The marking/dropping behavior of RED ..... 10
Fig 4 Mapping of DSCP-marked streams to PHBs and aggregate routing ..... 11
Fig 5 A non-DS-capable and two DS-capable domains connected to backbone network ..... 13
Fig 6 Type of Service field in IPv4 ..... 15
Fig 7 A traffic conditioner and its functional components ..... 17
Fig 8 Usage of DS-field in Class Selector PHB ..... 20
Fig 9 The "Two Markers System" framework ..... 26
Fig 10 The design of the Two Markers System architecture ..... 27
Fig 11 Program code 1: TMS_I algorithm (cwnd open) ..... 31
Fig 12 Program code 2: TMS_I algorithm (cwnd close) ..... 33
Fig 13 Program code 3: TMS_II algorithm ..... 36
Fig 14 Simulation topology I using the Two Markers System ..... 40
Fig 15 Adequate-marking algorithm ..... 42
Fig 16 Marking packet through scaling factor ..... 44
Fig 17 Throughput for marking rate-based fairness strategy ..... 45
Fig 18 Simulation topology II using the Two Markers System ..... 47
Fig 19 Sharing excess bandwidth according to the link utilization ..... 48
Fig 20 Throughput according to network conditions ..... 50


LIST OF TABLES

Table 1 The four AF classes and the related three drop precedence values ..... 21
Table 2 TCA specification for BBE service ..... 23
Table 3 TCA specifications for Leased Line service ..... 23
Table 4 RED parameter settings in the TMS ..... 41
Table 5 The average throughput of aggregate flow as increment of RTT ..... 46


ABBREVIATIONS

AF: Assured Forwarding, a PHB group defined in [4]
ATM: Asynchronous Transfer Mode
BA: Behaviour Aggregate, separate traffic flows receiving the same treatment in a router
DS: diffserv, Differentiated Services
DSCP: Differentiated Services Codepoint, value of a field in IP packets in DS-capable domains based on which the forwarding behaviour is applied
EF: Expedited Forwarding, a PHB defined in [20]
IETF: Internet Engineering Task Force
IP: Internet Protocol
IPv4: Internet Protocol version 4, the current widely used IP version
IPv6: Internet Protocol version 6, the developed future version of IP
ISP: Internet Service Provider
MF: Multi-Field
MPLS: Multiprotocol Label Switching
NE: Network Element, e.g. a router
PHB: Per-Hop-Behaviour, forwarding behaviour applied to each IP packet in routers in DS-capable domains
QoS: Quality of Service
RFC: Request For Comments
RSVP: Resource Reservation Protocol
SLA: Service Level Agreement, an agreement of a service between a provider and a customer
SNMP: Simple Network Management Protocol
TC: Traffic Conditioner
TCA: Traffic Conditioning Agreement, a part of the SLA that describes in detail the agreed service from the technical point of view
TCP: Transmission Control Protocol
TOS: Type of Service
UDP: User Datagram Protocol
VPN: Virtual Private Network


CHAPTER 1 INTRODUCTION

The current Internet is based on the best-effort service model. In this model, the service provider allocates bandwidth among all of the instantaneous customers as best it can; that is, all user packets compete equally for network resources. The service provider also attempts to serve all of them without making any explicit commitment as to rate or any other service quality. The best-effort model has been successful until now because the majority of the traffic on the Internet is TCP-based: the TCP end-to-end congestion control mechanism forces traffic sources to back off whenever congestion is detected in the network [8]. However, such dependence on the end systems' interaction is increasingly becoming unrealistic. Given the current best-effort model with FIFO queuing inside the network, it is relatively easy for non-adaptive sources to gain greater shares of network bandwidth and thereby starve other, well-behaved TCP sources. A greedy source, for example, may simply continue to send at the same rate when faced with congestion while other TCP sources back off. The best-effort model is also unsuited for applications, such as real-time audio and video, that require explicit bandwidth and bounded delay guarantees. Moreover, the best-effort model treats all packets equally once they have been injected into the network. Thus, it is difficult for Internet Service Providers (ISPs) to provide services that are commensurate with the expectations of consumers who are willing to pay more for a better class of services.

Over the last decade, a number of efforts have been made to provide Quality of Service in the Internet. Much of the work has focused on the end-to-end per-session reservation approach, where an application makes an end-to-end reservation before starting a session. While it makes sense to set up a reservation for a long-lasting session, this approach is not suitable for highly transaction-oriented applications such as browsing the World Wide Web (WWW): the overheads and latency involved in setting up a reservation for each session become prohibitively high. There are also concerns regarding the scalability of providing fine-grained guarantees at the per-session level in the core of the Internet.

The movement toward providing Quality-of-Service (QoS) assurance in the Internet has encouraged the research in differentiated services (diffserv or DS), while the number of users keeps growing. Diffserv is a suitable solution for Quality of Service provisioning because it scales well with an increasing number of network users and it does not alter the current Internet paradigm much [11]. In contrast to the per-session reservation approach, the differentiated services scheme allows service providers to offer different levels of service to a few classes of aggregated traffic flows. An ISP, for example, may offer two levels of service: a premium service for some customers who are willing to pay more, and best-effort service at a lower price. The differentiated services approach attempts to allocate bandwidth for aggregated flows without per-session signaling or maintaining per-session state in the network core, and to offer better services for users willing to pay more with long-term service contracts.

The journal model is Lecture Notes in Computer Science.

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment that should be supplied to the packet inside the domain. However, because of the limited number of bits available for use in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks, called per-hop behaviors (PHBs), that are used by routers to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) [20] and the Assured Forwarding (AF) [4] PHBs. The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. In order to ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, consists of four independently forwarded classes that have two or three drop preference levels. The idea behind AF is to preferentially drop best-effort packets, and packets which are outside of their contract, when congestion occurs.

In this thesis, we consider a form of a better-than-best-effort service called the "Assured Service". The Assured Service follows expected capacity profiles that are statistically provisioned. Packets are treated preferentially according to the dropping probability applied to the best-effort queue. The assurance of service comes from the expectation that the traffic is unlikely to be dropped as long as it stays within the negotiated capacity profile. The building blocks of this service include a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in-profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise, the packets are marked OUT (out-of-profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or a token bucket control module. A differentiated dropping algorithm, such as RIO (Random Early Detection with IN/OUT), is provided in the core routers of the network. In particular, the OUT packets are preferentially dropped upon evidence of congestion at the bottleneck, before the IN packets; only after dropping all incoming OUT packets are IN packets discarded. With this dropping policy, the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.
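The token bucket marking described above can be sketched in a few lines. This is a minimal illustration: the function names, the byte-based units, and the parameter values are assumptions for the example, not the marker used in this thesis.

```python
def make_token_bucket_marker(rate, depth):
    """Marker for an Assured Service profile: a packet covered by tokens is
    marked IN (in-profile), otherwise OUT. 'rate' is the contracted rate in
    bytes/s and 'depth' the bucket size in bytes (illustrative units)."""
    state = {"tokens": depth, "last": 0.0}

    def mark(pkt_bytes, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        state["tokens"] = min(depth, state["tokens"] + (now - state["last"]) * rate)
        state["last"] = now
        if state["tokens"] >= pkt_bytes:
            state["tokens"] -= pkt_bytes  # conforming packet consumes tokens
            return "IN"
        return "OUT"                      # out-of-profile packet: no token debit

    return mark

# 1000 bytes/s committed rate, 1500-byte bucket (illustrative numbers).
mark = make_token_bucket_marker(rate=1000, depth=1500)
print(mark(1500, 0.0))  # full bucket -> IN
print(mark(1500, 0.5))  # only 500 bytes refilled -> OUT
```

A flow sending faster than the contracted rate therefore sees a growing share of its packets marked OUT, and those packets are the first to be dropped by RIO under congestion.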

In [11], the authors showed that the use of a simple token bucket marker for the above Assured Service does not result in TCP realising the minimum assured rate. They attributed the cause of such behaviour to TCP's complex response, primarily to packet losses. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. Exponential decrease (halving the congestion window) is required to avoid congestion collapse, and TCP treats a packet drop as an indication of congestion [7]. However, in the diffserv network, this additive increase and multiplicative decrease make it hard to protect the reservation rate. When TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may, in the end, not reach its reservation rate. Therefore, the overall performance was affected by the bursty losses being experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees. This problem is called TCP dynamics over Assured Service.

In this thesis, we take into consideration the problem of TCP dynamics and the differentiated drop policies of the network in realizing the reserved throughputs, and propose a newly modified scheme, called the Two Markers System, for improving the realisation of target rates in a diffserv network. In addition, the issue of fairness among users is important in this diffserv context: since we expect that the differentiation in service will be based on some kind of pricing mechanism, it is necessary that the resource allocation be commensurate with how the service is priced. In this thesis, we also examine how fairly the proposed scheme allocates bandwidth to flows.

One of the cruxes identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows [9, 11, 14]. TCP and UDP interaction under the AF PHB has become an important issue for fairness in the diffserv context, because TCP flows are penalised more than UDP flows with respect to service degradation and fail to accomplish their target rates. Therefore, we deliberate on the issue of interaction between TCP and UDP flows by regulating their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP influences over Assured Services, considers the issue of fairness between TCP and UDP for a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents results of the above algorithms in simulated environments where these traffic types coexist and analyzes the simulated results. Chapter 5 concludes our work.


CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases, this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point in time, the rate at which TCP sends its packets eventually exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow-start and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants of TCP. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion-avoidance phase, in which the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above network capacity, packets are eventually dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred and immediately reduces its sending rate in half by halving its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.
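The per-RTT window dynamics described above (slow start, congestion avoidance, and halving on loss) can be illustrated with a toy simulation. The function name and its parameters are hypothetical; this is a sketch of the idealized algorithm, not of any particular TCP stack.

```python
def tcp_window_trace(ssthresh, loss_rtts, rounds):
    """Per-RTT cwnd trace: slow start doubles cwnd, congestion avoidance
    adds one segment per RTT, and a loss halves cwnd (fast recovery)."""
    cwnd, trace = 1, []
    for rtt in range(rounds):
        trace.append(cwnd)
        if rtt in loss_rtts:
            ssthresh = max(cwnd // 2, 2)  # remember half the window
            cwnd = ssthresh               # fast recovery: resume at the half
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: exponential growth
        else:
            cwnd += 1                     # congestion avoidance: linear growth
    return trace

# Doubles to ssthresh, grows linearly, halves when the loss occurs.
print(tcp_window_trace(ssthresh=8, loss_rtts={5}, rounds=8))
```

Running the sketch shows the sawtooth shape of Figure 1 in miniature: exponential ramp-up, a linear climb, and a halving at the loss event.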

When congestion is severe enough that packet loss cannot be inferred in this manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm. In particular, if the sender continually sends the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.
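The exponential back-off of the retransmission timeout can be sketched as follows; the 64-second cap and the function name are illustrative assumptions, as real stacks clamp and compute the RTO differently.

```python
def rto_backoff(initial_rto, retries, max_rto=64.0):
    """Timeout schedule for successive unacknowledged retransmissions of
    the same segment: the RTO doubles each time, capped at max_rto."""
    rto, schedule = initial_rto, []
    for _ in range(retries):
        schedule.append(rto)
        rto = min(rto * 2, max_rto)  # exponential back-off, capped
    return schedule

print(rto_backoff(1.0, 5))  # [1.0, 2.0, 4.0, 8.0, 16.0]
```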

Fig 1 Throughput of Single TCP Transfer

Figure 1 shows a graphical picture of how TCP slow-start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion, the TCP sender enters the congestion-avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP then oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.
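Since the window oscillates between w/2 and w, the mean window is roughly 3w/4 segments per RTT, which gives a back-of-the-envelope throughput estimate. The function below is a hypothetical sketch that ignores timeouts, slow start, and delayed ACKs.

```python
def steady_state_throughput(w, mss_bytes, rtt_s):
    """Mean throughput when cwnd oscillates between w/2 and w:
    the average window is 3w/4 segments, sent once per RTT."""
    return 0.75 * w * mss_bytes / rtt_s

# e.g. w = 20 segments of 1000 bytes with a 0.5 s RTT -> 30000 bytes/s
print(steady_state_throughput(20, 1000, 0.5))
```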

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One modification that has been proposed is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of which segments it is missing. By specifying this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger TCP sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and this should thus allow the TCP sender to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The New Reno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern in which the congestion window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion-avoidance phase so that it only performs its linear increase when the network is not congested. In these algorithms, if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential for improving loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections that are multiplexed across the link.

Fig 2 TCP/IP Headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-Capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the Congestion Experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver in order to signal the receiver that it has, in fact, reduced its sending rate.
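The router-side decision (mark ECN-capable flows, drop the rest) can be sketched with bit masks on the TOS/DS octet. The masks below follow the original ECN proposal's layout of the two low-order bits; treat the exact bit positions and the function name as assumptions of this sketch.

```python
# Bit masks in the TOS/DS octet (layout per the original ECN proposal):
ECT = 0x02  # ECN-Capable Transport
CE = 0x01   # Congestion Experienced

def router_congestion_action(tos_byte):
    """What a congested router does with one packet: ECN-mark packets of
    capable flows by setting CE, and drop packets of all other flows."""
    if tos_byte & ECT:
        return "mark", tos_byte | CE  # signal congestion, keep the packet
    return "drop", None               # non-ECN flow: fall back to dropping

print(router_congestion_action(0x02))  # ECN-capable -> ('mark', 3)
print(router_congestion_action(0x00))  # not capable -> ('drop', None)
```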

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, in order to allow them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses them to determine whether or not to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and it linearly increases to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.
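The EWMA update and the piecewise-linear marking probability described above can be written directly. This is a sketch with illustrative parameter values; "gentle" variants of RED behave differently between maxth and 2*maxth.

```python
def red_avg(avg, queue_len, wq=0.002):
    """EWMA of the instantaneous queue length (wq = 0.002 is illustrative)."""
    return (1 - wq) * avg + wq * queue_len

def red_mark_probability(avg, min_th, max_th, max_p):
    """Initial RED marking/dropping probability as a piecewise-linear
    function of the average queue length."""
    if avg < min_th:
        return 0.0             # no congestion signal below min_th
    if avg >= max_th:
        return 1.0             # beyond max_th every packet is dropped/marked
    return max_p * (avg - min_th) / (max_th - min_th)

print(red_mark_probability(5, 10, 30, 0.1))   # below min_th -> 0.0
print(red_mark_probability(20, 10, 30, 0.1))  # halfway to max_th -> 0.05
```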

Fig 3 The marking/dropping behavior of RED

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and in RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour wished for. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs), on a per-packet basis to aggregates of traffic streams in a DS-capable network's nodes. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies a service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, for example a leased-line emulation. Different types of service will not be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig 4 Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop-Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e., to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The Service Level Agreements (SLAs) formed specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of the SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how a customer's traffic is policed in traffic conditioners to comply with the SLA. The TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules, which are applied to traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e., nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. By traffic conditioner is meant an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e., services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header it is the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs do not solely define services, but they are the building blocks.
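A DSCP-to-PHB lookup is essentially a small table keyed on the codepoint. The sketch below uses the standard recommended codepoints for EF and AF11, but the table itself is an illustrative assumption, not a real router configuration.

```python
# Illustrative DSCP -> PHB table using standard recommended codepoints
# (EF = 101110, AF11 = 001010, default best effort = 000000).
PHB_TABLE = {
    0b101110: "EF",
    0b001010: "AF11",
    0b000000: "BE",
}

def classify(dscp):
    # An unknown codepoint falls back to the default (best-effort) PHB.
    return PHB_TABLE.get(dscp, "BE")

print(classify(0b101110))  # EF
print(classify(0b011100))  # unmapped codepoint -> BE
```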

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain; the latter handle traffic leaving the domain. In practice, the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e., their interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g., low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service provisioning approaches.

Fig. 5 A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig. 5, are supposed to employ uniform sets of PHBs on which the services can be built. Note that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig. 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from a non-DS-capable network.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to a level justified for H5. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The backbone DS router shown always polices incoming traffic, as it cannot trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed between inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain does not match the service level in the originating host's domain. This will be a major issue in the future development of diffserv.

Fig. 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv does not cause serious impacts on existing applications.

Fig. 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and the queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, these widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling considering Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
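As a concrete illustration, the DSCP can be recovered from the TOS octet with simple bit operations. Note that Fig. 6 numbers bits MSB-first (bit 0 is the most significant), so the six DSCP bits are the upper six bits of the octet. A minimal sketch (function names are illustrative):

```python
def dscp_from_tos(tos: int) -> int:
    """DSCP is bits 0-5 of the TOS octet in the MSB-first numbering of
    Fig. 6, i.e. the upper six bits of the octet."""
    return (tos >> 2) & 0x3F

def precedence_from_tos(tos: int) -> int:
    """Legacy IPv4 precedence field: bits 0-2 (the upper three bits)."""
    return (tos >> 5) & 0x07
```

For example, for the EF codepoint 101110 the TOS octet is 0b10111000, from which `dscp_from_tos` returns 0b101110 and `precedence_from_tos` returns 0b101.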

What traffic conditioning basically means is treating traffic differentially so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but it is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e. service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig. 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.

Fig. 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
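The meter/marker/dropper interplay described above can be sketched with a token-bucket meter driving the marking decision. This is an illustrative sketch, not the thesis' implementation; the profile parameters and the dictionary-based packet representation are assumptions:

```python
class TokenBucketMeter:
    """Meters packets against a traffic profile: rate in bytes/s, burst in bytes."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def in_profile(self, size, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

def condition(packet, meter, now, in_dscp, out_dscp=None):
    """Marker/dropper driven by meter state: in-profile packets keep the
    negotiated codepoint; out-of-profile packets are re-marked if an
    alternative codepoint is configured, otherwise dropped."""
    if meter.in_profile(packet["size"], now):
        packet["dscp"] = in_dscp
        return packet
    if out_dscp is not None:
        packet["dscp"] = out_dscp   # marker: re-mark out-of-profile traffic
        return packet
    return None                     # dropper: discard
```

A shaper would instead queue the out-of-profile packet and release it once enough tokens have accumulated; the drop-versus-delay choice is exactly the dropper/shaper distinction above.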

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA as to whether it is their responsibility to condition the ingress/egress traffic into conformance with the appropriate TCA, or whether that is the responsibility of the node at the other end of the boundary link. However, ingress traffic cannot be assumed to conform to the TCA; rather, policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited number of conditioning rules (i.e. some, but not all, of the TC components in Fig. 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate, to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers receive the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized PHBs, there may exist locally defined PHBs in a network. These PHBs may be e.g. experimental, or they may serve a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig. 6) is limited, and room is left for future usage, the codepoint mappings can be made freely by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on e.g. prioritizing one over another. The PHB specifications themselves give behavioural characteristics instead of implementation guidelines. This leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented below. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv cannot be taken into use if it does not provide backward compatibility. The Default PHB exists for that purpose: it provides the best-effort behaviour that current routers perform. The Default PHB is used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore any traffic that does not conform to its profile can easily be either re-marked to the Default PHB or discarded.

Traffic that is subject to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which do not employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig. 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.
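The eight Class Selector codepoints of the form xxx000 can be enumerated directly from the three precedence bits; a small sketch:

```python
# Class Selector codepoints carry the precedence value in the upper three
# DSCP bits, with the lower three bits zero: the pattern xxx000.
class_selector_codepoints = [precedence << 3 for precedence in range(8)]

# The set spans 000000 (which coincides with the Default PHB codepoint)
# up to 111000, the highest legacy precedence.
assert class_selector_codepoints[0] == 0b000000
assert class_selector_codepoints[7] == 0b111000
```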

Fig. 8 Usage of the DS field in the Class Selector PHB

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurance and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets in each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(i) how much forwarding resources have been allocated to the AF class that the packet belongs to; packets in one class are forwarded independently of the others and within the service rate (bandwidth) that has been configured for the class;

(ii) the current load of the AF class; and

(iii) in case of congestion within the class, the drop precedence of the packet; packets with a higher drop precedence value are discarded preferentially.

Table 1 The four AF classes and the related three drop precedence values

                               Class 1   Class 2   Class 3   Class 4
  Low Drop Precedence (1)      001010    010010    011010    100010
  Medium Drop Precedence (2)   001100    010100    011100    100100
  High Drop Precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (currently 1-4) and 'm' the precedence value (currently 1-3). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
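The codepoints in Table 1 follow a regular pattern: the class number occupies the upper three DSCP bits and the drop precedence the next two. A sketch of the AFnm-to-DSCP mapping (the function name is illustrative):

```python
def af_dscp(n: int, m: int) -> int:
    """DSCP for AF class n (1..4) with drop precedence m (1..3):
    class in DSCP bits 0-2 (MSB-first), precedence in bits 3-4, bit 5 zero."""
    if not (1 <= n <= 4 and 1 <= m <= 3):
        raise ValueError("AF classes are 1-4, drop precedences 1-3")
    return (n << 3) | (m << 1)

# Reproduces Table 1, e.g. AF11 = 001010 and AF43 = 100110.
assert af_dscp(1, 1) == 0b001010
assert af_dscp(4, 3) == 0b100110
```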

One major group of services is that requiring assured bandwidth with low loss, low latency and low jitter on an end-to-end connection. Such services are e.g. a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video, or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues that traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed a rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate into diffserv nodes. Codepoint 101110 is recommended for the EF PHB.

In the following, two example services are presented, one built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

There are some common features of both examples. First of all, policers need to be configured at the traffic ingress points. Secondly, the used PHBs need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than competing best-effort traffic and thus provides reduced latency. Quantitative performance measures cannot be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2 TCA specification for the BBE service

  Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
  AF13 mark   1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.
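The BBE TCA of Table 2 amounts to an ingress policer that keeps the AF13 mark for traffic within 1 Mbps and re-marks the excess to AF11. A self-contained sketch, assuming byte-based token-bucket policing and an illustrative burst size:

```python
AF13, AF11 = 0b001110, 0b001010   # codepoints from Table 1

def make_bbe_policer(rate_bytes_per_s=1_000_000 / 8, burst_bytes=1500.0):
    """Returns a policing function per Table 2: in-profile traffic keeps
    the AF13 mark, excess traffic is re-marked AF11. The 1 Mbps rate is
    from the TCA; the burst size is an assumed parameter."""
    state = {"tokens": burst_bytes, "last": 0.0}

    def police(size, now):
        elapsed = now - state["last"]
        state["tokens"] = min(burst_bytes,
                              state["tokens"] + elapsed * rate_bytes_per_s)
        state["last"] = now
        if size <= state["tokens"]:
            state["tokens"] -= size
            return AF13   # within the 1 Mbps profile
        return AF11       # excess: re-marked per Table 2

    return police
```

Because both marks map into the same AF class 1 queue, this re-marking changes only the drop precedence, preserving packet order as noted above.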

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications for this service.

Table 3 TCA specifications for the Leased Line service

  Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
  EF mark     500 Kbps       Egress point B         Discard
  EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies focused on a RED-based differentiated services approach.

In [1] [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed scheme decreased TCP impacts on the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Yeom and Reddy proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation work, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the state used for the marking [13]. A marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on the aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults, as follows. First, since the marking module in per-flow marking gives each flow a fixed fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth: "idle" flows waste their shares while preventing "active" flows from increasing theirs. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on the individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9 The "Two Markers System" framework

First, TMS_I carries out two main roles. One is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I determines the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked, and so TMS_I decreases TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) maintains the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of the network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of the network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of the network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks the aggregated flows for the profile if the total throughput is less than the capacity of the network. For this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore the key point of the fairness strategy is the marking rate (i.e. the information on a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows for the total target rate of all the flows, based on the marking probability from the sources. That is, TMS_II adjusts the marking probability so as to allocate to each flow as much of the resources as it requires.

Fig. 10 The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and let Cj represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is a reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of the network is the reservation rate (Ri) plus a variable value, the fairness component (Xmi) of the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = \sum_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. Therefore the excess bandwidth at a bottleneck link is given by

E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i    (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C - R = C_1 - \sum_{i=1}^{n} R_i    (2)

The last case of service scope, tied to the receiver, however requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}, where k, k = 1, 2, ..., l, represents the k-th ingress point. The excess bandwidth is given by

E = C - R = C_1 - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}    (3)

Where lth ingress point has to be aware of reservation rates as many as l-1 ingress points While such a service is theoretically possible it appears to be a mismatch with the more usual specification of Differentiated Services which governs the quality with which traffic is sent rather than received Therefore Two Markers System is able to be implement in variable SLA scope but we assume that the scope of a service in the paper includes the first three cases except the last with the reason above A flowrsquos target rate in the edge of the network is given by

Where Xmi is proportional to the probability of marking from a source That is it is determined depending upon fraction between the marking rate of each flow and the average marking rate (4) Development for Xmi is as follows First the probability of marking for each flow i is the number of the marked packets (Nmi) by throughput Thri or In-profile window by congestion window

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] over all the flows at the edge of the network is given by (6),

where Mark_p also represents the probability of marking in the TMS_I algorithm of section 3.3. Finally, X_mi is derived from the ratio between (5) and (6), as in (7).

T_i = R_i + X_{mi} \times E = R_i + X_{mi} \left( C - \sum_i R_i \right) \qquad (4)

m_i = \frac{N_{mi}}{Thr_i} = \frac{iwnd}{cwnd} \qquad (5)

E[m_i] = \frac{1}{n} \sum_{i=1}^{n} m_i = \frac{1}{n} \sum_{i}^{n} \left( \frac{iwnd}{cwnd} \right)_i = \frac{1}{n} \sum_{i}^{n} (Mark_p)_i \qquad (6)


X_mi in (7) represents a relative value against the datum point E[m_i]; the fraction X_mi varies from 0 to 1 depending upon m_i and E[m_i].

Consequently, this value stands for the excess bandwidth that each flow requires to complete its target rate.

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies differ from the one studied here. We improve the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate at just the bandwidth the source wants guaranteed.

X_{mi} = \frac{m_i}{n E[m_i]} = \frac{m_i}{\sum_{i}^{n} m_i} \qquad (7)
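Putting Eqs. (4), (6), and (7) together, a flow's edge target rate can be sketched as below; the reservation rates, marking rates, and link capacity are hypothetical illustration values.

```python
def target_rates(reservations, marking_rates, capacity):
    """Per-flow target rates T_i = R_i + X_mi * E, with X_mi from Eq. (7)."""
    n = len(marking_rates)
    e_mi = sum(marking_rates) / n             # Eq. (6): average marking rate
    excess = capacity - sum(reservations)     # E = C - sum(R_i)
    targets = []
    for r, m in zip(reservations, marking_rates):
        x_mi = m / (n * e_mi)                 # Eq. (7): m_i / sum(m_i)
        targets.append(r + x_mi * excess)     # Eq. (4)
    return targets

# Two flows reserving 3 Mb/s each on a 10 Mb/s link; flow 1 marks twice as much,
# so it receives two thirds of the 4 Mb/s excess bandwidth.
print(target_rates([3.0, 3.0], [0.6, 0.3], 10.0))
```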


Fig. 11 Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking, Mark_p(init)? It is determined by the ratio between the measured throughput (Thr) and the target throughput (T_i = R_i), as in (8).

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)  iwnd += Markp
        else                   iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)  ownd += (1 - Markp)
        else                   ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)  iwnd -= (1 - Markp)
            else                   iwnd -= Markp
        } else {
            ownd = cwnd
        }
        if (ownd < ossthresh)  ownd += 1
        else                   ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


where R_i represents the initial rate that the user wants guaranteed in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I proportionally marks packets according to whether or not the measured throughput (Thr) at the TCP source reaches its target rate. If the measured throughput doesn't reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed. Otherwise, it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.
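The per-ACK update of program code 1 can be sketched in Python as follows. The structure mirrors the reconstructed listing; the window values, thresholds, and rates passed in are hypothetical.

```python
def on_ack(cwnd, mark_p, thr, r_i, issthresh, ossthresh):
    """One TMS_I update on a new ACK; returns (new cwnd, new Markp)."""
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if thr < r_i:                      # below reservation: grow both windows
        iwnd += mark_p if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - mark_p) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                              # reservation met: shrink iwnd, grow ownd
        if iwnd > 0:
            iwnd -= (1 - mark_p) if iwnd < issthresh else mark_p
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd           # new Markp = iwnd / cwnd

# A flow below its 3 Mb/s reservation grows both windows in slow start
cwnd, mark_p = on_ack(10.0, 0.5, thr=2.0, r_i=3.0, issthresh=8, ossthresh=8)
print(cwnd, mark_p)   # 11.0 0.5
```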

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also counters the secondary TCP dynamics (the so-called boomerang effect), which is caused by excessively dropped IN-marked packets at the core of a network when the network connection is congested. If the throughput measured at a source, for instance, exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to its threshold in order to allocate the excess bandwidth to the other flows and keep IN-marked packets from being immoderately dropped at the core of the network. In fact, the TCP-source marking module (TMS_I) maintains the marking rate more accurately than established marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so a source, mistaking this for an acute congestion

Mark_p(init) = 1 - \frac{Thr}{R_i} \qquad (8)


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.

Fig. 12 Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd / 2
        ownd = ossthresh
    } else {
        cwnd = cwnd / 2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the ith flow
Ti: target rate of the ith flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. The loss of IN-marked packets, on the other hand, indicates acute congestion in the network, and then the congestion window is halved. Therefore the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which has no such distinct mechanism for reducing the congestion window.
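A minimal sketch of this loss handling (program code 2), with hypothetical window values: OUT loss halves only ownd, while IN loss halves the whole cwnd.

```python
def on_triple_dup_ack(cwnd, mark_p, in_packet_lost):
    """Window size after the third duplicate ACK in a row (program code 2)."""
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if not in_packet_lost:        # OUT-marked loss: temporary congestion
        ossthresh = ownd / 2
        ownd = ossthresh          # only the Out-of-profile window is halved
        return iwnd + ownd
    return cwnd / 2               # IN-marked loss: acute congestion, halve cwnd

print(on_triple_dup_ack(10.0, 0.5, in_packet_lost=False))  # 7.5
print(on_triple_dup_ack(10.0, 0.5, in_packet_lost=True))   # 5.0
```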

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services and to inform TMS_II of the probability of marking (Mark_p) that is needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point by the edge-embedded marking module in subsection 3.4. Consequently, a strategy that provides a datum point carrying per-user information is able to alleviate the unfairness introduced within aggregated flows by per-aggregation marking.

3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of sources in order to share the bottleneck bandwidth fairly. To do so, it must cooperate closely with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which are used in the fairness component as stated in subsection 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants guaranteed from the sources.

We assume that an edge router receives information on the reservation rate that each user wants assured and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of a network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall expression (4). As stated in subsection 3.2, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking from the sources if the network is over-provisioned; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source, for example, exceeds the threshold value, that is, the average marking rate E[m_i], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. It is therefore marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and the congestion situation into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to detect a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with that at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case and shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After deciding on congestion, it again measures whether or not the throughput of a flow i (Thr_i) realises the target rate (T_i), viz. the crux of the fairness strategy. If the throughput doesn't reach the target rate, the marker increases the marking rate by α, called the fairness factor. In this case the fairness factor α is computed as


\alpha = 1 - \frac{Thr_i}{T_i}

Otherwise, it decreases the marking rate by α to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13 Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2) {
    for (i = 1 to n) {
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)  Mi += α * Mi
        else             Mi -= α * Mi
    }
} else if (iwnd(N) != iwnd(N-1)/2) {
    exit
} else {    // iwnd(N) == iwnd(N-1)/2
    for (i = 1 to n) {
        Ti = Ri
        if (Thr_i < Ti)  Mi += β * Mi
        else             exit
    }
}

Mi: marking rate of the ith flow
Ti: target rate of the ith flow
Ri: reservation rate of the ith flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else-if clause in the eighth and ninth lines represents the instance in which the edge's marking rate is equal to the marking probability of the sources (the marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to reach its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput doesn't realise the reservation rate of the user. In that instance the complement factor β is

given by

\beta = 1 - \frac{Thr_i}{R_i}

Otherwise, the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
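One pass of the TMS_II re-marking described in this section can be sketched as follows. The congestion test on ownd/iwnd is abstracted into a `state` argument, the factors α and β are taken as magnitudes so that the same factor scales both increase and decrease, and all rates are hypothetical.

```python
def remark(state, thr, r_i, t_i, m_i):
    """Return the updated marking rate M_i for one flow (program code 3)."""
    if state == "no-congestion":          # ownd(N) != ownd(N-1)/2
        alpha = abs(1 - thr / t_i)        # fairness factor
        return m_i + alpha * m_i if thr < t_i else m_i - alpha * m_i
    if state == "soft-congestion":        # edge marking already consistent
        return m_i                        # no re-marking required
    beta = abs(1 - thr / r_i)             # complement factor, acute case
    return m_i + beta * m_i if thr < r_i else m_i

# Flow below its target in an uncongested network: marking rate is raised
print(remark("no-congestion", thr=2.0, r_i=3.0, t_i=4.0, m_i=0.5))  # 0.75
# Flow below its reservation under acute congestion: raised by beta instead
print(remark("acute", thr=2.0, r_i=3.0, t_i=3.0, m_i=0.5))
```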

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets turn into OUT. These re-marked packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If re-marked OUT packets (IN-OUT) that were IN at the sources, for example, are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size; TMS_I then cannot properly alleviate the TCP dynamics over the assured service. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will only halve ownd, which delays the relief of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is marked OUT at the edge of the network but IN at the source, and OUT-IN is the opposite. We assume that the drop precedence the source decides must not change, and the strategy that TMS_II elaborates has to respect it. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors, these correspond to red, yellow, and green respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is removed, the TCP source halves the ownd size. Next, TMS_II turns packets that were IN at the sources into OUT if the congestion window has not been halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT, and OUT. TMS_II thereby lowers the drop precedence of such packets one step below OUT, allowing other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and a flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, and OUT at the core of the network; when an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT packets one step.
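The mapping from the (source, edge) marking pair to the three drop precedences can be sketched as below; the function name and string encoding are illustrative, not from the thesis.

```python
# RIO drop order described above: OUT dropped first, then the re-marked
# yellow class, then IN (red, yellow, green).
DROP_ORDER = ["OUT", "OUT-IN/IN-OUT", "IN"]

def precedence(source_mark, edge_mark):
    """Map a (source, edge) marking pair to DP0/DP1/DP2 so that the
    source's marking decision is never silently overwritten."""
    if source_mark == edge_mark:
        return "DP0" if source_mark == "IN" else "DP2"
    return "DP1"                     # re-marked in either direction

print(precedence("IN", "IN"))        # DP0 (green)
print(precedence("IN", "OUT"))       # DP1 (IN-OUT, yellow)
print(precedence("OUT", "IN"))       # DP1 (OUT-IN, yellow)
print(precedence("OUT", "OUT"))      # DP2 (red)
```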

3.6 The issue of interaction between TCP and UDP flows

In this section we consider the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows won't change and will capture the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be met.

UDP flows in TMS are treated as out-of-profile and thus put at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP packets, marked OUT (red, DP2), and lowers the drop precedence of TCP packets below them by the scaling factors. It thereby tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows used to dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous chapter. The simulation was done with the network simulator 2 (NS-2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig. 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters for TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th    max_th    Pmax
DP0     100       400       0.02
DP1     100       400       0.05
DP2     50        400       0.1
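The effect of the per-precedence RED parameters in Table 4 can be sketched with the standard RED linear drop ramp (drop probability rising from 0 at min_th to Pmax at max_th); the queue lengths below are hypothetical.

```python
# (min_th, max_th, Pmax) per drop precedence, from Table 4
RED = {"DP0": (100, 400, 0.02), "DP1": (100, 400, 0.05), "DP2": (50, 400, 0.1)}

def drop_prob(dp, avg_queue):
    """RED drop probability for an average queue length, per precedence."""
    min_th, max_th, p_max = RED[dp]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return p_max * (avg_queue - min_th) / (max_th - min_th)

print(drop_prob("DP2", 225))   # 0.05: OUT packets already being dropped
print(drop_prob("DP0", 225))   # ~0.0083: IN packets far less likely to drop
```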

First of all, we investigate the simulation results for the Adequate-Marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3Mbps; the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10s, followed by flow6 and flow7 at t = 120s, and the last three flows, flow8 through flow10, at t = 220s, which stop at t = 350s. First, the interval from t = 10s to t = 110s represents an over-provisioned network, in which the five flows compete for the excess bandwidth of 5Mbps; marking is initially performed only temporarily, for sharing the excess bandwidth. Second, at runtime from t = 120s to 210s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps each; marking in this interval is practically carried out so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220s to t = 350s; all the flows do their best to meet their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330s to 400s, TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15: a) Time Sliding Window Two Color Marker; b) Two Markers System. Per-flow marking-rate curves for flow1 through flow10; vertical axis 0-6 Mbps, horizontal axis simulation time 0-400s.]

Fig. 15 Adequate-Marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, then serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate and lags behind the changes in network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of resources by the amount of the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 3.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, i.e. an under-subscribed network. The experiment ran from t = 0s to 500s. Figure 17 presents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in the two schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking when the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Figure 16: a) Time Sliding Window Two Color Marker; b) Two Markers System. Curves: S1 (total throughput), S2 (total throughput), S1 (marking rate); vertical axis 0-8 Mbps, horizontal axis simulation time 0-400s.]

Fig. 16 Marking packets through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time.


[Figure 17: a) Time Sliding Window Two Color Marker; b) Two Markers System. Bars for flow1 through flow10 per aggregate-flow index 1-6; vertical axis 0-7 Mbps.]

Fig. 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput, and the horizontal axis the index of the simulation performed.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing a TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of aggregate flows as RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858
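Averaging Table 5 across the six RTT settings summarizes the comparison; the numbers below are the Mb/s values from the table, and the computed relative gain is an illustration, not a figure stated in the thesis.

```python
tsw2cm = [4.8989, 4.9703, 4.9183, 4.9391, 4.9247, 4.9098]
tms    = [5.2908, 5.2207, 5.2373, 5.1944, 5.2763, 5.2858]

mean = lambda xs: sum(xs) / len(xs)
# Relative improvement of TMS over TSW2CM in average per-flow throughput
gain = (mean(tms) - mean(tsw2cm)) / mean(tsw2cm) * 100
print(f"TSW2CM {mean(tsw2cm):.3f} Mb/s, TMS {mean(tms):.3f} Mb/s, gain {gain:.1f}%")
```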

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous chapter. For the simulations we used a network with the configuration shown in Figure 18. In this experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for nodes 7 through 10; constant-rate UDP packet streams originate at sources 5 and 6 and are destined for nodes 11 and 12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rates of the sources stay fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps and the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig. 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on sharing the excess bandwidth according to the link utilisation. For this experiment we increased the TCP flows, sources 1 to 4, each with a target rate of 1Mbps, while the UDP rate stayed constant at 1Mbps, and the capacity of the bottleneck link was changed from 8Mbps to 3Mbps in order to take account of network dynamics. We define the link utilisation (ρ) as follows [16]:

\rho\,(\%) = \frac{\sum_i T_i}{C} \times 100 \qquad (9)

where T_i and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100% through 200%, as shown in Figure 19, reflects the change in the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

\text{Excess BW}\,(\%) = \frac{ExBW}{T} \times 100 \qquad (10)

Therefore an excess bandwidth of 0% denotes the target rate of each flow. First, as shown in Figure 19 (a), we examine TSW2CM under a changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of the network load.
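Eqs. (9) and (10) can be sketched with this scenario's numbers: four 1 Mb/s TCP targets plus two 1 Mb/s UDP streams on first an 8 Mb/s, then a 3 Mb/s, bottleneck. The example throughput passed to the excess-bandwidth helper is hypothetical.

```python
def link_utilisation(targets, capacity):
    """Eq. (9): link utilisation as a percentage."""
    return 100 * sum(targets) / capacity

def excess_bw_pct(throughput, target):
    """Eq. (10): excess bandwidth as a percentage of the target rate;
    0% means the flow exactly met its target."""
    return 100 * (throughput - target) / target

targets = [1.0] * 6                      # 4 TCP + 2 UDP flows at 1 Mb/s each
print(link_utilisation(targets, 8.0))    # 75.0  -> over-provisioned
print(link_utilisation(targets, 3.0))    # 200.0 -> under-provisioned
print(excess_bw_pct(1.5, 1.0))           # 50.0: half the target rate again
```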


[Figure 19: a) Time Sliding Window Two Color Marker; b) Two Markers System. Curves: TCP (average) and UDP (average); vertical axis excess bandwidth -100% to 200%, horizontal axis link utilisation 30% to 190%.]

Fig. 19 Sharing the excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis the link utilisation as a percentage.

However, since the UDP sources push traffic into the network without any penalty, they meet their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation the UDP flows experience is a diminished share of excess bandwidth. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[m_i], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside their contracts, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisations of ρ = 60% and over: both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next we present the simulation results considering of marking rate-based fairness strategy with UDP and TCP flows We set that reservation rate of each flow is 1Mbps and compare two marking schemes One is the TSW2CM the other is marking strategy of TMS Figure 11 shows the results that the average throughputs of all individual flows of aggregated traffic realise their target rates in over-provisioned and under-provisioned networks with the capacity of bottleneck link of 8Mbps and 3Mbps respectively Setting of RTT parameters follows the strategy of subsection 41 Also the average throughput of all the flows in both schemes is similar to each other First we take into consideration the results of the experiment in an over-provisioned network In the figure 20 (a) while TCP flows only try to accomplish their target rates of 1Mbps UDP flows captured all excess bandwidth of about 05Mbps On the other hand TCP flows in the figure 20 (b) fairly share the excess bandwidth of about 03Mbps as elaborating the different marking strategy against UDP flows All the flows also satisfy its reservation rate and share the excess bandwidth of the bottleneck link according to the probability of his marking in over-provisioned network Next Each TCP flow in Figure 20 (a) often fails to achieve their target rates in under-provisioned networks because UDP flows are transmitted constantly irrelative to congestion That is UDP gains unfairly at the advantage of TCP

50

flows. However, Figure 20 (b) shows that the bandwidth in the under-provisioned network is distributed more fairly (about 0.5 Mbps per flow) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20: two panels plotting bandwidth (Mbps, 0 to 2 in steps of 0.25) against aggregate flows 1-4, with curves for TCP1-TCP4 and UDP1-UDP2 under over-provisioned and under-provisioned conditions; a) Time Sliding Window Two Color Marker, b) Two Markers System]

Fig. 20 Throughput according to network conditions: the vertical axis represents the achieved bandwidth (Mbps) and the horizontal axis signifies the aggregate flows

51

CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under variable SLA scope. Two important issues have come to the fore in this context, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize a user's target rate, owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the overeating of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow with respect to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to changes in the network, so it marks packets in proportion to the reservation rates the sources desire to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy. Consequently, it mitigates the problems of unfairness in per-aggregation marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduces TCP impacts over the assured service and fairly allocates the bottleneck bandwidth among TCP flows.
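The proportional-share idea behind the marking rate-based fairness strategy can be illustrated with a small sketch (hypothetical names; we assume only that the edge marker knows each source's reported marking rate and the aggregate contracted profile):

```python
def edge_in_profile_share(source_rates, aggregate_profile):
    """Marking rate-based fairness sketch: the edge marker divides the
    aggregate in-profile (contracted) rate among flows in proportion to
    the marking rates reported by the source-side markers."""
    total = sum(source_rates)
    return [aggregate_profile * r / total for r in source_rates]

# three flows reporting marking rates 0.5, 1.0 and 1.5 share a 6 Mbps profile
shares = edge_in_profile_share([0.5, 1.0, 1.5], aggregate_profile=6.0)
print(shares)   # [1.0, 2.0, 3.0]
```

A flow that reports a higher marking rate (i.e. a greater demand to reach its reservation) thus receives a proportionally larger in-profile share at the edge.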

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP had almost dominated, and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they fairly experienced degradation in proportion to their target bandwidth.

52

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under varying RIO parameters.

53

REFERENCE

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290

[3] S. Floyd and V. Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, Assured Forwarding PHB Group, RFC 2597, June (1999)

[5] K. Nichols, S. Blake, F. Baker and D. Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec. (1998)

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Dec. (1998)

[7] V. Jacobson and M. Karels, Congestion Avoidance and Control, in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329

[8] I. Yeom and A. L. N. Reddy, Realizing Throughput Guarantees in a Differentiated Service Network, in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376

[9] N. Seddigh, B. Nandy, P. Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc. GLOBECOM'99, Rio De Janeiro, Dec. (1999)

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May (1998)

[11] J. Ibanez, K. Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998)

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221

54

[13] I. B. H. de A. Alves, J. F. de Rezende, L. F. M. de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449

[14] N. Seddigh, B. Nandy, P. Pieda, Study of TCP and UDP Interaction for AF PHB, IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999)

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203

[16] A. Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc. HPSR'01, Dallas, May (2001) 190-196

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, W. Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999)

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Aug. (1998)

[19] M. Vanhala, Differentiated Services architecture, S-38.128 Teletekniikan erikoistyo (1999)

[20] V. Jacobson, K. Nichols and K. Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998)

[21] V. Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990)

[22] V. Paxson, End-to-End Internet Packet Dynamics, in Proc. of ACM SIGCOMM, Sept. (1997)

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct. (1996)

[24] S. Floyd and T. Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999)

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proceedings of ACM SIGCOMM, Oct. (1994) 24-35

[26] S. Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, Vol. 24, No. 5, Oct. (1994) 10-23

55

[27] S. Floyd and V. Jacobson, Random Early Detection Gateways for Congestion Avoidance, ACM/IEEE Transactions on Networking, Vol. 1, No. 4, Aug. (1993) 397-413

[28] A. Feroz, A. Rao and S. Kalyanaraman, TCP-friendly Traffic Marker for IP Differentiated Services, in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48


i

ABSTRACT

Two-differentiated Marking Strategies for TCP and UDP flows in a Differentiated

Services Network

Sunghyuck Lee Advised by Prof Chulhee Kang

Dept of Electronics Engineering Graduate School Korea University

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment that should be supplied to the packet inside the domain. However, because of the limited number of bits available for use in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks called per-hop behaviors (PHBs), which are used by routers to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) and the Assured Forwarding (AF) PHBs.

The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. In order to ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, specifies a forwarding behavior in which packets see a very small amount of loss, and consists of four independently forwarded classes that have two or three drop precedence levels. The idea behind AF is to preferentially drop best-effort packets, and packets which are outside of their contract, when congestion occurs.

ii

There are two cruxes of issues identified in differentiated services (diffserv) networks. One is TCP dynamics over Assured Services, and the other is the interaction of TCP and UDP flows for the Assured Forwarding Per-Hop Behavior (PHB). First, TCP's complex response, primarily to packet losses, affects the Assured Services in a differentiated services network. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. However, in the diffserv network this additive increase and multiplicative decrease make it hard to protect the reservation rate for Assured Services: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may not reach its reservation rate. Therefore we argue for the use of TCP-friendly building blocks (or modules) and fairness modules in the diffserv architecture regarding this issue, and propose the Two Markers System (TMS), which is able to properly mark packets and fairly share the bandwidth among flows with respect to their targeted sending rates. TMS has two marking modules, placed at the source and at the edge of a diffserv network. At the sources of the network, the virtual source marker plays the important roles of reducing TCP impacts on the assured services and of adequately marking packets. Next, at the edge of the network, the edge-embedded marker conducts a new fairness policy based on the marking rates of flows from the sources, the so-called marking rate-based fairness.

Second, the other issue identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows. TCP and UDP interaction for the AF PHB has become an important issue for fairness in the diffserv context. Therefore we examine the unfair distribution of excess bandwidth in an over-provisioned network, as well as the unfair degradation in an under-provisioned network, for TCP and UDP traffic.

Finally, we present simulation results to illustrate the effectiveness of the TMS scheme over several parameters; that is, the Two Markers System reduces TCP impacts over the assured service and fairly shares the bottleneck link bandwidth of a network. We then also present simulation results to illustrate the effectiveness of the TMS scheme for the TCP and UDP interaction: all the flows in the Two Markers System fairly share the excess bandwidth and experience degradation in proportion to their target rates.

iii

DEDICATION

To my parents Jongwang Lee and Okhee Kim

iv

ACKNOWLEDGEMENT

I am indebted to many individuals for the care and support given to me during my master's course studies. First of all, I would like to express my deep gratitude to Professor Chulhee Kang. As my thesis advisor, he has provided me constant encouragement, insightful comments and invaluable suggestions, which benefited not only the completion of this thesis but also my career for a long time to come. I also would like to thank Professors Jinwoo Park and Dooseop Eom for serving on my committee.

I am also grateful to the seniors and the juniors of the Broadband Telecommunication Laboratory, especially Seungjoon Seok, who spent a lot of time with me during my thesis research. His advice and challenges have motivated me to do high-quality research. I warmly thank my colleagues in the ITRC project, Seungjin Lee and Sukmin Hong, for numerous conversations, continuous support and encouragement. Special thanks go to Doctor Jihoon Lee, who spent a lot of time with me on improving my presentation and technical writing skills, and to Sungkwan Youm, who helped me a great deal with his insights in networking and through numerous discussions. Thanks also go to my classmates Sungbum Joo and Jaehyung Ryu for their friendship and encouragement. I would also like to acknowledge Sungwoo Ryu, who made it possible for me to go through this difficult time. I would like to recognize the following colleagues: Deagun Kim, Kyengyong Kang, Kangwook Lee, Changwoo Lee, Chulwoo Kim, Joosang Youn, Junghyun Kim, Hyunsook Park and Eunsun Park, for their friendship, enjoyable technical discussions and advice. I am really pleased to be part of this group of talented colleagues. I am also thankful to my seniors Soohyun Cho, Coben Han, Taeho Jung, Inyong Ha, Dongkil Shin, Sukun Yoon, Yejin Song, Kisun Lee, Dongwook Seo, Kyungseop Moon and Kiwon Hong for their advice and guidance.

This research was supported in part by Information and Telecommunication Research Center (ITRC) contracts. I am deeply grateful to Asian Pacific Advanced Network-Korea Consortium (APAN-KR) members for technical discussions on Differentiated Services related issues.

v

I cannot find words strong enough to express my gratitude to my parents, since they have always loved me, believed in me and encouraged me in my studies. This thesis could not have been completed without their presence beside me. I am also deeply grateful to my elder sister Jimin Lee, her husband Jin Kim and my lovable niece Hyun Kim for their dedicated sacrifice, support and encouragement.

Especially, I am indebted to my friend Moonhee Choi, who helped me in many ways during the final stage of this study. I am also thankful to my friends Taewan Kim, Byeongwook Bae, Yonghyun Jung, Sangmok Han, Byeongil Son, Hongsik Park, Hyoseop Lee, Sungchul Choi, Sungeun Ku, Changjoo Yang, Minsang Kim, Sukhoon Byeon, Dongchul Lee, Jaekyun Lee, Joonghyo Lee, Jepyo Na, Dukku Hwang and Jangyeon Kim for their everlasting friendship, understanding and encouragement. Last but not least, I would like to thank my valuable friend Jieun Ahn for her dedicated sacrifice, continuous support, understanding and encouragement.

vi

TABLE OF CONTENTS

ABSTRACT i
DEDICATION iii
ACKNOWLEDGEMENT iv
LIST OF FIGURES viii
LIST OF TABLES ix
ABBREVIATIONS x

CHAPTER 1 INTRODUCTION 1

CHAPTER 2 BACKGROUND 5
2.1 TCP and Active Queue Management 5
2.2 Differentiated Services Architecture 10
2.2.1 Concepts in diffserv 12
2.2.2 Diffserv network 13
2.2.3 Service Realizations 14
2.3 Related Work 24

CHAPTER 3 TWO MARKERS SYSTEM 25
3.1 The framework of Two Markers System 25
3.2 The crux of fairness strategy 27
3.3 Adequate-Marking strategy: Two Markers System-I algorithm 30
3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm 34
3.5 The issue of re-marking 37
3.6 The issue of interaction between TCP and UDP flows 38

CHAPTER 4 SIMULATION AND ANALYSIS 40
4.1 TCP dynamics over Assured Services in a diffserv network 40
4.2 Interaction of TCP and UDP flows 46

vii

CHAPTER 5 CONCLUSION 51

REFERENCE 53

viii

LIST OF FIGURES

Fig. 1 Throughput of Single TCP Transfer 6
Fig. 2 TCP/IP Headers 8
Fig. 3 The marking/dropping behavior of RED 10
Fig. 4 Mapping of DSCP-marked streams to PHBs and aggregate routing 11
Fig. 5 A non-DS-capable and two DS-capable domains connected to a backbone network 13
Fig. 6 Type of Service field in IPv4 15
Fig. 7 A traffic conditioner and its functional components 17
Fig. 8 Usage of DS-field in Class Selector PHB 20
Fig. 9 The "Two Markers System" framework 26
Fig. 10 The design of the Two Markers System architecture 27
Fig. 11 Program code 1: TMS_I algorithm (cwnd open) 31
Fig. 12 Program code 2: TMS_I algorithm (cwnd close) 33
Fig. 13 Program code 3: TMS_II algorithm 36
Fig. 14 Simulation topology I using the Two Markers System 40
Fig. 15 Adequate-marking algorithm 42
Fig. 16 Marking Packet through scaling factor 44
Fig. 17 Throughput for marking rate-based fairness strategy 45
Fig. 18 Simulation topology II using the Two Markers System 47
Fig. 19 Sharing excess bandwidth according to the link utilization 48
Fig. 20 Throughput according to network conditions 50

ix

LIST OF TABLES

Table 1 The four AF classes and the related three drop precedence values 21
Table 2 TCA specification for BBE service 23
Table 3 TCA specifications for Leased Line service 23
Table 4 RED parameter settings in the TMS 41
Table 5 The average throughput of aggregate flow as increment of RTT 46

x

ABBREVIATIONS

AF Assured Forwarding, a PHB group defined in [4]
ATM Asynchronous Transfer Mode
BA Behaviour Aggregate, separate traffic flows receiving the same treatment in a router
DS, diffserv Differentiated Services
DSCP Differentiated Services Codepoint, the value of a field in IP packets in DS-capable domains based on which the forwarding behaviour is applied
EF Expedited Forwarding, a PHB defined in [20]
IETF Internet Engineering Task Force
IP Internet Protocol
IPv4 Internet Protocol version 4, the current widely used IP version
IPv6 Internet Protocol version 6, the developed future version of IP
ISP Internet Service Provider
MF Multi-Field
MPLS Multiprotocol Label Switching
NE Network Element, e.g. a router
PHB Per-Hop Behaviour, forwarding behaviour applied to each IP packet in routers in DS-capable domains
QoS Quality of Service
RFC Request For Comments
RSVP Resource Reservation Protocol
SLA Service Level Agreement, an agreement on a service between a provider and a customer
SNMP Simple Network Management Protocol
TC Traffic Conditioner
TCA Traffic Conditioning Agreement, a part of an SLA that describes the agreed service in detail from the technical point of view
TCP Transmission Control Protocol

xi

TOS Type of Service
UDP User Datagram Protocol
VPN Virtual Private Network

1

CHAPTER 1 INTRODUCTION

The current Internet is based on the best-effort service model. In this model the service provider allocates bandwidth among all of the instantaneous customers as best it can; that is, all user packets compete equally for network resources. The service provider also attempts to serve all of them without making any explicit commitment as to rate or any other service quality. The best-effort model has been successful until now because the majority of the traffic on the Internet is TCP-based: the TCP end-to-end congestion control mechanism forces traffic sources to back off whenever congestion is detected in the network [8]. However, such dependence on the end systems' interaction is increasingly becoming unrealistic. Given the current best-effort model with FIFO queuing inside the network, it is relatively easy for non-adaptive sources to gain greater shares of network bandwidth and thereby starve other, well-behaved TCP sources. A greedy source, for example, may simply continue to send at the same rate when faced with congestion, while other TCP sources back off. The best-effort model is also ill-suited for applications such as real-time audio and video that require explicit bandwidth and bounded delay guarantees. Moreover, the best-effort model treats all packets equally once they have been injected into the network. Thus it is difficult for Internet Service Providers (ISPs) to provide services that are commensurate with the expectations of consumers who are willing to pay more for a better class of services.

Over the last decade, a number of efforts have been made to provide Quality of Service in the Internet. Much of the work has focused on the end-to-end per-session reservation approach, where an application makes an end-to-end reservation before starting a session. While it makes sense to set up a reservation for a long-lasting session, this approach is not suitable for highly transaction-oriented applications such as browsing the World Wide Web (WWW): the overheads and latency involved in setting up a reservation for each session become prohibitively high. There are also concerns regarding the scalability of providing fine-grained guarantees at the per-session level in the core of the Internet.

The movement toward providing Quality-of-Service (QoS) assurance in the Internet has encouraged research in differentiated services (diffserv or DS) while the number of users keeps growing. Diffserv is a suitable solution for Quality of Service provisioning because it scales well with an increasing number of network users and does not alter the current Internet paradigm much [11]. In contrast to the per-session reservation approach, the differentiated services scheme allows service providers to offer different levels of service to a few classes of aggregated traffic flows. An ISP, for example, may offer two levels of service: a premium service for some customers who are willing to pay more, and best-effort service at a lower price. The differentiated services approach attempts to allocate bandwidth for aggregated flows without per-session signaling or maintaining per-session state in the network core, and to offer better services for users willing to pay more with long-term service contracts.

The journal model is Lecture Notes in Computer Science.

2

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment that should be supplied to the packet inside the domain. However, because of the limited number of bits available for use in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks called per-hop behaviors (PHBs) that are used by routers to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) [20] and the Assured Forwarding (AF) PHBs [4]. The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. In order to ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, specifies a forwarding behavior in which packets see a very small amount of loss, and consists of four independently forwarded classes that have two or three drop precedence levels. The idea behind AF is to preferentially drop best-effort packets, and packets which are outside of their contract, when congestion occurs.
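The class/drop-precedence structure of the AF PHB group maps directly onto DSCP values: the codepoint for AFxy has the binary layout xxxyy0, i.e. 8 times the class plus 2 times the drop precedence (per RFC 2597). A quick sketch (the helper name is ours, not from the thesis):

```python
def af_dscp(af_class, drop_prec):
    """Codepoint for AFxy per RFC 2597: binary xxxyy0 = 8*x + 2*y."""
    assert 1 <= af_class <= 4 and 1 <= drop_prec <= 3
    return (af_class << 3) | (drop_prec << 1)

# AF11 -> 10 (001010), AF23 -> 22 (010110), AF41 -> 34 (100010)
print(af_dscp(1, 1), af_dscp(2, 3), af_dscp(4, 1))
```

Within one class, a higher drop precedence (y) means the packet is discarded earlier under congestion, which is exactly the IN/OUT distinction used throughout this thesis.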

In this thesis we consider a form of a better-than-best-effort service called the "Assured Service". The Assured Service follows expected capacity profiles that are statistically provisioned. Packets are treated preferentially according to the dropping probability applied to the best-effort queue. The assurance of service comes from the expectation that the traffic is unlikely to be dropped as long as it stays within the negotiated capacity profile. The building blocks of this service include a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in-profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise the packet is marked OUT (out-of-profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or token bucket control module. A differentiated dropping algorithm such as RIO (Random Early Detection with IN/OUT) is provided in the core routers of the network. In particular, the OUT packets are preferentially dropped upon evidence of congestion at the bottleneck, before the IN packets; IN packets are discarded only after all incoming OUT packets have been dropped. With this dropping policy the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.

3
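The time sliding window marker mentioned above can be sketched as follows (a simplified two-color variant after the TSW rate estimator of RFC 2859; the class and parameter names are illustrative, not the thesis' simulation code):

```python
class TSW2CM:
    """Sketch of a Time Sliding Window two-color marker: packets are
    marked IN while the smoothed sending rate stays within the
    contracted target rate, and OUT otherwise."""
    def __init__(self, target_rate, win_len):
        self.target = target_rate     # contracted profile (bytes/s)
        self.win = win_len            # averaging window (s)
        self.avg = 0.0                # smoothed sending rate
        self.t_front = 0.0            # arrival time of previous packet

    def mark(self, now, size):
        # sliding-window rate estimate (RFC 2859 style)
        self.avg = (self.avg * self.win + size) / (self.win + (now - self.t_front))
        self.t_front = now
        return "IN" if self.avg <= self.target else "OUT"

m = TSW2CM(target_rate=1000.0, win_len=1.0)
# 500-byte packets every 0.1 s = 5000 B/s offered, five times the profile
colors = [m.mark(t * 0.1, 500) for t in range(1, 11)]
print(colors)
```

Because the offered load is well above the profile, the estimator crosses the target after a few packets and the remainder of the burst is marked OUT, which RIO then drops preferentially under congestion.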

In [11] the authors showed that the use of a simple token bucket marker for the above assured service results in TCP failing to realise the minimum assured rate. They attributed the cause of this behaviour to TCP's complex response, primarily to packet losses. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. The multiplicative decrease (halving the congestion window) is required to avoid congestion collapse, since TCP treats a packet drop as an indication of congestion [7]. However, in the diffserv network this additive-increase and multiplicative-decrease behaviour makes it hard to protect the reservation rate: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may in the end not reach its reservation rate. The overall performance was therefore affected by the bursty losses being experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees. This is called TCP dynamics over the Assured Service.
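The effect described in [11] can be illustrated numerically with a toy model (not the thesis' simulation: we simply assume a drop occurs whenever the window exceeds a fixed value, so the window saw-tooths between that value and half of it):

```python
def aimd_trace(drop_window, rounds):
    """Additive-increase/multiplicative-decrease sketch: cwnd grows by
    one segment per RTT and is halved whenever it exceeds drop_window
    (modelling the drop of an OUT packet)."""
    cwnd, trace = 1, []
    for _ in range(rounds):
        if cwnd > drop_window:
            cwnd = max(1, cwnd // 2)   # multiplicative decrease on loss
        trace.append(cwnd)
        cwnd += 1                      # additive increase per RTT
    return trace

trace = aimd_trace(drop_window=20, rounds=200)
steady = trace[50:]                    # skip the initial ramp-up
avg = sum(steady) / len(steady)
# the window saw-tooths between 10 and 20 segments, so the average sits
# near 0.75 * drop_window -- below a reservation that needs the full 20
print(min(steady), max(steady), avg)
```

Even though the network would allow a 20-segment window, the long-run average is only about 15, which is why a flow whose reservation requires the full window persistently undershoots its target rate.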

In this thesis we take into consideration the problem of TCP dynamics and the differentiated drop policies of the network in realizing the reserved throughputs, and propose a newly modified scheme, called the Two Markers System, for improving the realisation of target rates in a diffserv network. In addition, the issue of fairness among users is important in this diffserv context: since we expect that the differentiation in service will be based on some kind of pricing mechanism, it is necessary that the resource allocation be commensurate with how the service is priced. In this thesis we also argue how fairly the proposed scheme allocates bandwidth to flows.

4

One of the cruxes identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows [9, 11, 14]. The interaction of TCP and UDP for the AF PHB has become an important issue for fairness in the diffserv context, because TCP flows are penalised more than UDP flows with respect to service degradation and fail to accomplish their target rates. Therefore we deliberate on the issue of the interaction between TCP and UDP flows as we regulate their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP influences over Assured Services, considers the issue of fairness between TCP and UDP for a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents results of the above algorithms in simulated environments in which these traffic types coexist, and analyses the simulated results. Chapter 5 concludes our work.

5

CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point, the rate at which TCP sends its packets eventually exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow-start and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants of TCP. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion avoidance phase, in which the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above the network capacity, packets are eventually

6

dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred, immediately reduces its sending rate in half by halving its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.
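The loss-free part of this behaviour can be sketched in a few lines (an illustrative model of the slow-start and congestion-avoidance growth described above; no losses are modelled and the names are ours):

```python
def cwnd_growth(ssthresh, rtts):
    """Slow start doubles cwnd each RTT until it reaches ssthresh; after
    that, congestion avoidance adds one segment per RTT."""
    cwnd, trace = 1, [1]
    for _ in range(rtts):
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
        trace.append(cwnd)
    return trace

print(cwnd_growth(ssthresh=16, rtts=8))
# -> [1, 2, 4, 8, 16, 17, 18, 19, 20]: doubling, then linear growth
```

The trace makes the two phases visible: exponential growth up to ssthresh, then the much slower one-segment-per-RTT increase.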

When congestion is severe enough that packet loss cannot be inferred in such a manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm. In particular, if the sender continually sends the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.
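The exponential back-off of the retransmission timeout amounts to a simple doubling sequence (a sketch; real TCP stacks also clamp the RTO at a maximum, which we omit here):

```python
def backoff_intervals(initial_rto, max_retries):
    """Exponential back-off sketch: the retransmission timeout doubles
    on every successive timeout of the same unacknowledged segment."""
    return [initial_rto * (2 ** i) for i in range(max_retries)]

print(backoff_intervals(1.0, 5))   # [1.0, 2.0, 4.0, 8.0, 16.0]
```

Five consecutive timeouts of the same segment thus space the retransmissions 1, 2, 4, 8 and 16 seconds apart, keeping a sender from hammering a congested path.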

Fig 1 Throughput of Single TCP Transfer

Figure 1 shows a graphical picture of how TCP slow-start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion the TCP sender enters the congestion-avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP then oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.
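Since the window ramps linearly between w/2 and w, its long-run average is 3w/4, which gives a rough throughput estimate (an illustrative back-of-the-envelope calculation, not a formula from the text):

```python
def steady_state_throughput(w_segments, mss_bytes=1460, rtt_s=0.1):
    """Mean throughput of the steady-state sawtooth in bits per second,
    using the average window of 0.75 * w segments."""
    return 0.75 * w_segments * mss_bytes * 8 / rtt_s
```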

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One proposed modification is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of the segments it is missing. By specifying this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders.

Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger TCP sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and the sender should thus be allowed to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The New Reno [24] modifications use this observation to improve TCP performance.

Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern in which the congestion window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion avoidance phase so that it performs its linear increase only when the network is not congested. In such delay-based algorithms, if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential for improving loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections that are multiplexed across the link.
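A minimal sketch of the delay-based adjustment Vegas applies during congestion avoidance (the alpha/beta thresholds and their values are the commonly cited ones, not parameters given in the text):

```python
def vegas_adjust(cwnd, base_rtt, rtt, alpha=1, beta=3):
    """Compare expected rate (cwnd/base_rtt) with actual rate (cwnd/rtt);
    the difference, expressed in segments queued in the network, decides
    whether the window grows, holds, or shrinks."""
    diff = cwnd * (1 - base_rtt / rtt)
    if diff < alpha:
        return cwnd + 1    # path underused: linear increase
    if diff > beta:
        return cwnd - 1    # delay rising, queues building: back off
    return cwnd            # within target range: hold the window
```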

Fig. 2 TCP/IP Headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem, since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the congestion experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver in order to signal the receiver that it has in fact reduced its sending rate.
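The router-side decision just described can be sketched as follows (a minimal model of the one-bit ECT/CE scheme in the text, not code from any real router):

```python
def router_ecn(ect, ce, congested):
    """Decide what a congested router does with one arriving packet.
    Returns (ect, ce, dropped)."""
    if not congested:
        return ect, ce, False      # forward unchanged
    if not ect:
        return ect, ce, True       # flow is not ECN-capable: drop
    return ect, True, False        # ECN-capable: set CE instead of dropping
```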

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, allowing them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses them to determine whether to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and it increases linearly to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.

Fig. 3 The marking/dropping behavior of RED
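The piecewise-linear curve of Fig. 3, together with the EWMA queue estimate, can be sketched directly (the threshold and weight values are illustrative defaults, not parameters from the text):

```python
def red_probability(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    """RED marking/dropping probability vs. average queue length."""
    if avg_q <= min_th:
        return 0.0                 # below min_th: never mark or drop
    if avg_q >= max_th:
        return 1.0                 # above max_th: always mark or drop
    return max_p * (avg_q - min_th) / (max_th - min_th)

def ewma_queue(avg, sample, w=0.002):
    """Exponentially weighted moving average of the queue length."""
    return (1 - w) * avg + w * sample
```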

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and in RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss are proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour wished for. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs), and on a per-packet basis to aggregates of traffic streams in a DS-capable network's nodes. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies the service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. Different types of service will not be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig. 4 Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e., to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig. 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The formed Service Level Agreements (SLAs) specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules which are applied to the traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e., nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. A traffic conditioner is an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e., services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs don't by themselves define services; they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain; the latter handle traffic leaving the domain. In practice, the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig. 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e., their interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g., low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service provisioning approaches.

Fig. 5 A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g., domain 1 in Fig. 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. When some host, say H5 in DS domain 1 in Fig. 5, starts generating traffic, there are certain implications for the functioning of the network that differ from the case of a non-DS-capable network.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The presented backbone DS router always polices incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level the profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies, and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide which services are available in their networks; this in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig. 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig. 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and the queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values, or PHBs, for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling with respect to Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
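As a sketch, extracting the DSCP and the legacy precedence from a TOS octet amounts to simple bit operations (here bits 0-5 are the six most significant bits of the octet, as in Fig. 6):

```python
def dscp_from_tos(tos_octet):
    """Return (dscp, precedence) from the old IPv4 TOS octet:
    the DSCP is the top six bits, the precedence the top three."""
    dscp = (tos_octet >> 2) & 0x3F
    precedence = (tos_octet >> 5) & 0x07
    return dscp, precedence
```

For example, a TOS octet of 0xB8 carries DSCP 101110 (decimal 46) and precedence 5.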

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but it is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is the part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e., service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function. How they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig. 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, the DS field and the protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
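The two classifier types can be sketched as follows (the packet is modeled as a dict; field names such as "dscp" and "dport" are illustrative, not from the text):

```python
def ba_classify(pkt, table):
    """Behaviour Aggregate classifier: keys on the DS codepoint only."""
    return table.get(pkt["dscp"], "default")

def mf_classify(pkt, rules):
    """Multi-Field classifier: the first rule whose header fields all
    match the packet selects the aggregate."""
    for match, aggregate in rules:
        if all(pkt.get(field) == value for field, value in match.items()):
            return aggregate
    return "default"
```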

Fig. 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
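A meter of this kind is often realized as a token bucket. The sketch below (rate in bytes per second, bucket depth in bytes, both illustrative) only decides whether a packet is in profile, leaving the re-mark/shape/drop decision to the other components:

```python
class TokenBucket:
    """Minimal token-bucket meter for a traffic conditioner."""
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth
        self.tokens, self.last = float(depth), 0.0

    def conforms(self, pkt_bytes, now):
        """True if a packet of pkt_bytes arriving at time `now` (seconds)
        is within the traffic profile."""
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True        # in profile
        return False           # out of profile: re-mark, delay, or drop
```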

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic to conformance with the appropriate TCA, or whether that responsibility lies with the node at the other end of the boundary link. However, ingress traffic can't be assumed to conform to the TCA; policing must be prepared to enforce it.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore, a limited number of conditioning rules (i.e., some, but not all, of the TC components in Fig. 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides which services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioral characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized ones, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may serve a local purpose, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig. 6) is limited, and room is left for future usage, the codepoint mappings can be made freely by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. PHB specifications themselves give behavioral characteristics instead of implementation guidelines. This leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity, and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented below. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when implemented, the specifications must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility; the Default PHB exists for that. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e., best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that doesn't conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subject to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig. 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig. 8 Usage of the DS field in the Class Selector PHB
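The mapping from a legacy precedence value to its Class Selector codepoint 'xxx000' is a single bit shift (a sketch of the rule just described):

```python
def class_selector_dscp(precedence):
    """Return the Class Selector codepoint for a precedence value 0-7:
    the three precedence bits followed by 000."""
    if not 0 <= precedence <= 7:
        raise ValueError("precedence must be in 0..7")
    return precedence << 3
```

For example, precedence 5 yields the codepoint 101000.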

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets in each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

a) how much forwarding resources have been allocated to the AF class that the packet belongs to (packets in one class are forwarded independently from the others and within the service rate (bandwidth) that has been configured for the class),
b) the current load of the AF class, and
c) in case of congestion within the class, the drop precedence of the packet (packets with a higher drop precedence value are preferentially discarded).

Table 1 The four AF classes and the related three drop precedence values

                               Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)        001010    010010    011010    100010
Medium Drop Precedence (2)     001100    010100    011100    100100
High Drop Precedence (3)       001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
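Table 1's codepoints follow a regular pattern, three class bits followed by two drop-precedence bits and a trailing zero, which the following sketch reproduces:

```python
def af_dscp(af_class, drop_prec):
    """Return the AFnm codepoint: 3 class bits, 2 drop-precedence bits,
    and a trailing zero, matching Table 1."""
    if not (1 <= af_class <= 4 and 1 <= drop_prec <= 3):
        raise ValueError("AF class is 1-4, drop precedence is 1-3")
    return (af_class << 3) | (drop_prec << 1)
```

For example, af_dscp(1, 1) gives 001010 (AF11) and af_dscp(2, 3) gives 010110 (AF23), as in Table 1.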

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data: speech, video, or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate. In other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator must be able to configure this rate into diffserv nodes. The codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface,
b) from ingress interface A to a set of egress interfaces, or
c) between interface A and some egress interface B.

There are some things common to both examples. First of all, policers need to be configured at traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus sees reduced latency. Quantitative performance measures cannot be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for the BBE service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark   1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.
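A single-rate marker of the kind Table 2 describes can be sketched with a token bucket: traffic within the contracted 1 Mbps keeps the AF13 mark, excess traffic is re-marked AF11. The bucket depth and byte-based accounting are illustrative assumptions, not parameters from the thesis:

```python
class BBEMarker:
    """Sketch of the Table 2 marker: in-profile traffic (within 1 Mbps)
    keeps AF13, excess traffic is re-marked AF11. Bucket depth is an
    assumed value for illustration."""

    def __init__(self, rate_bps=1_000_000, bucket_bytes=10_000):
        self.rate = rate_bps / 8.0      # token refill rate, bytes/second
        self.depth = bucket_bytes       # maximum bucket size, bytes
        self.tokens = bucket_bytes      # bucket starts full
        self.last = 0.0                 # time of last packet

    def mark(self, now, pkt_bytes):
        # refill tokens for the elapsed time, capped at the bucket depth
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return "AF13"               # in-profile
        return "AF11"                   # excess traffic, re-marked
```

With a full 10,000-byte bucket, a burst of eleven 1,000-byte packets at the same instant yields ten AF13 marks and then AF11 for the eleventh, which is the re-marking behavior of Table 2.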

The second example is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications for this service.

Table 3. TCA specifications for the Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. Customer attachment points are represented as A, B, and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C, respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps; excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.
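The MF policing of Table 3, classification by destination with excess traffic discarded, can be sketched as follows. The per-interval budget accounting is a simplifying assumption; a real policer would use a token bucket per TCA:

```python
# MF-type policers at ingress A for the Table 3 TCAs: classification is by
# egress point (destination), and traffic beyond the EF service rate is
# discarded. Rates in Kbps; interval accounting is an illustrative simplification.
TCAS = {"B": 500, "C": 250}   # egress point -> EF service rate (Kbps)

def police(packets, interval_s=1.0):
    """packets: list of (egress_point, size_kbits).
    Returns the packets kept; excess traffic is discarded (Table 3)."""
    budget = {egress: rate * interval_s for egress, rate in TCAS.items()}
    kept = []
    for egress, kbits in packets:
        if budget.get(egress, 0) >= kbits:
            budget[egress] -= kbits
            kept.append((egress, kbits))
        # else: discard, per the "handling of excess traffic" column
    return kept
```

For a one-second interval, 600 Kbits offered toward B keeps only the first 500 Kbits, and anything over 250 Kbits toward C is dropped, matching the two TCAs.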

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies focused on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. Their results showed that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service; the overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standard used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to an inefficient utilization of the reserved bandwidth; in this case, "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The ldquoTwo Markers Systemrdquo framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the component of the fairness strategy, viz. the marking rate, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I determines which window (iwnd or ownd) to reduce according to whether the dropped packets were IN-marked or OUT-marked, and so TMS_I decreases TCP impacts in the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) keeps up the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore the key input of the fairness strategy is the marking rate (i.e., the information about a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) aggregated flows toward the total target rate of all the flows, based on the marking probabilities from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10 The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and let Cj represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is a

reservation rate (Ri) of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of the network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability point of view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point;
(ii) all traffic from ingress point S to a set of egress points;
(iii) all traffic between ingress point S and egress point D;
(iv) all traffic between any ingress point and egress point D.

The capacity of the network in the first three cases of service scope above is C = \sum_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at the bottleneck links is then given by

E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i        (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C - R = C_1 - \sum_{i=1}^{n} R_i        (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}, where k, k = 1, 2, ..., l, denotes the kth ingress point. The excess bandwidth is then given by

E = C - R = C - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}        (3)


where the lth ingress point has to be aware of the reservation rates of as many as l-1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System could therefore be implemented under various SLA scopes, but in this thesis we assume, for the above reason, that the scope of a service covers the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

T_i = R_i + X_{mi} \times E = R_i + X_{mi}(C - \sum_i R_i)        (4)

where Xmi is proportional to the probability of marking at a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of Xmi is as follows. First, the marking probability of each flow i is the number of marked packets (Nmi) divided by the throughput (Thri), or equivalently the In-profile window divided by the congestion window:

m_i = N_{mi} / Thr_i = iwnd / cwnd        (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[mi] of all the flows at the edge of the network is

E[m_i] = \frac{1}{n} \sum_{i=1}^{n} m_i = \frac{1}{n} \sum_{i} \left( \frac{iwnd}{cwnd} \right)_i = \frac{1}{n} \sum_{i} Mark_{p,i}        (6)

where Markp also denotes the probability of marking in the TMS_I algorithm of Section 3.3. Finally, Xmi is obtained from the ratio between (5) and (6); that is, Xmi in (7) represents a relative value against the datum point E[mi], and varies between 0 and 1 depending on mi and E[mi].

Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
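The chain from excess bandwidth (1)-(3) through the fairness component (7) to the edge target rate (4) can be condensed into a short sketch. The numeric values are illustrative, not from the thesis:

```python
def excess_bandwidth(capacities, reservations):
    """E = sum_j C_j - sum_i R_i, eq. (1); with a single bottleneck
    (m = 1) this reduces to eq. (2)."""
    return sum(capacities) - sum(reservations)

def fairness_components(marking_rates):
    """Eq. (7): X_mi = m_i / (n * E[m_i]) = m_i / sum_i m_i, where
    m_i = iwnd_i / cwnd_i is each flow's marking probability, eq. (5)."""
    total = sum(marking_rates)
    return [m / total for m in marking_rates]

def edge_target_rates(reservations, marking_rates, capacity):
    """Eq. (4): T_i = R_i + X_mi * E."""
    e = excess_bandwidth([capacity], reservations)
    return [r + x * e
            for r, x in zip(reservations, fairness_components(marking_rates))]

# Two flows reserving 3 Mbps each on a 10 Mbps bottleneck; flow 1 marks
# three times as intensively as flow 2 and so claims more of the 4 Mbps excess.
print(edge_target_rates([3.0, 3.0], [0.75, 0.25], 10.0))  # [6.0, 4.0]
```

The flow with the higher marking rate receives a proportionally larger share of the excess, which is exactly what the datum point E[mi] arbitrates.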

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate up to the bandwidth the source wants to have guaranteed.

X_{mi} = \frac{m_i}{n \, E[m_i]} = \frac{m_i}{\sum_{i=1}^{n} m_i}        (7)


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking (Markp(init))? It is generally determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as given in (8).

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)  iwnd += Markp
        else                   iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)  ownd += (1 - Markp)
        else                   ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)  iwnd -= (1 - Markp)
            else                   iwnd -= Markp
        } else {
            ownd = cwnd
        }
        if (ownd < ossthresh)  ownd += 1
        else                   ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


where Markp(init) represents the initial marking rate with which the user wants to guarantee the realisation of its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I marks packets proportionally, according to whether the throughput (Thr) measured at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate requires; otherwise, it increases the ownd size so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e., iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a decrement of ownd to the same degree. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to grope for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while digging out resources is 1/cwnd.
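The window adjustments described above can be transcribed into a runnable sketch. This is a direct reading of Program code 1 under the assumption that the trailing ownd growth applies to the whole over-target branch; the numeric values in the usage line are illustrative:

```python
def tms1_ack_update(cwnd, markp, thr, r_i, issthresh, ossthresh):
    """One TMS_I update on a new ACK, a Python transcription of Program
    code 1. Returns the new (cwnd, Markp). Units: windows in packets per
    RTT, rates in the same units as the reservation rate r_i."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if thr < r_i:
        # below the reservation rate: grow both windows, iwnd by the
        # marking rate, with 1/(2*cwnd) congestion-avoidance growth above threshold
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:
        # at or above the reservation rate: shrink iwnd to release excess
        # bandwidth to other flows, and let ownd probe for it
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd

# Below target (thr 1.0 < R 2.0): both windows grow, marking rate holds.
print(tms1_ack_update(10.0, 0.5, 1.0, 2.0, 20.0, 20.0))  # (11.0, 0.5)
```

Above target, the same call with thr = 3.0 shrinks iwnd and grows ownd, lowering Markp, which is the release of excess bandwidth the text describes.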

The else clause in the twelfth line of Program code 1 describes the fairness strategy for sharing the excess bandwidth among the other sources' TCP traffic and best-effort traffic, and also considers the second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the connection is congested. If the throughput measured at a source, for instance, exceeds its reservation rate and iwnd exists, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) keeps the marking rate up better than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion

Mark_p(init) = 1 - Thr / R_i        (8)


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet losses and determines whether a lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd/2
        ownd = ossthresh
    } else {
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

Variables:
    iwnd       In-profile window
    ownd       Out-of-profile window
    cwnd       congestion window
    Markp      the probability of marking
    Thr        throughput of a flow
    Ri         reservation rate of the ith flow
    Ti         target rate of the ith flow
    issthresh  threshold of iwnd
    ossthresh  threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half its size whenever an OUT packet loss occurs; this indicates temporary congestion in the network. The loss of IN-marked packets, on the other hand, indicates acute congestion in the network, and then the congestion window is reduced to half its size. The modified TCP congestion algorithm therefore compensates for the loss rates that are caused by the conventional congestion algorithm, which lacks such a distinctive mechanism for reducing the congestion window.

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (MarkP) that is needed to determine the new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point in the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that carries the user's information is able to alleviate the problem of unfairness within aggregated flows introduced by per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources, in order to fairly share the bottleneck bandwidth. It should closely cooperate with the TCP-source marking module to accomplish this fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which are used in the fairness component as stated in Section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represent the target rates the sources want to have guaranteed.

We assume that an edge router receives the information about the reservation rate each user wants to be assured of, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from that of the source; recall expression (4). As stated in Section 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the network is over-provisioned; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source, for example, exceeds the threshold value, i.e., the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate; it is therefore marked more as IN and gets a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to take account of a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of Program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After the decision on congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by as much as α, called the fairness factor. In this case the fairness factor α is computed as follows.


α = 1 - Thr_i / T_i. Otherwise, it decreases the marking rate by as much as α, to provide the other flows with the chance of sharing the excess bandwidth.
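The two scaling factors and the uncongested-case update can be sketched directly from the definitions; the numeric values are illustrative:

```python
def fairness_factor(thr_i, t_i):
    """alpha = 1 - Thr_i / T_i: the flow's relative shortfall against
    its edge target rate."""
    return 1.0 - thr_i / t_i

def complement_factor(thr_i, r_i):
    """beta = 1 - Thr_i / R_i: the shortfall against the reservation
    rate, used in the acute-congestion branch of Program code 3."""
    return 1.0 - thr_i / r_i

def adjust_marking_rate(m_i, thr_i, t_i):
    """One TMS_II step in the uncongested case: M_i += alpha*M_i below
    target, M_i -= alpha*M_i above it (a sketch of the if/else in
    Program code 3)."""
    a = abs(fairness_factor(thr_i, t_i))
    return m_i + a * m_i if thr_i < t_i else m_i - a * m_i

# A flow at half its target (2.0 of 4.0) has alpha = 0.5, so its marking
# rate 0.5 is raised to 0.75.
print(adjust_marking_rate(0.5, 2.0, 4.0))  # 0.75
```

The further a flow falls below its target, the larger α becomes and the more aggressively its marking rate is raised, which is the proportional behavior the fairness strategy aims for.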

Fig 13 Program code 3 TMS_II algorithm

if (ownd(N) != ownd(N-1)/2) {              // no congestion
    for (i = 1 to n) {
        Ti = Ri + Xmi * E
        if (Thr_i < Ti)  Mi += α * Mi
        else             Mi -= α * Mi
    }
} else if (iwnd(N) != iwnd(N-1)/2) {       // congestion, but not acute
    exit
} else {                                   // iwnd(N) == iwnd(N-1)/2: acute congestion
    for (i = 1 to n) {
        Ti = Ri
        if (Thr_i < Ti)  Mi += β * Mi
        else             exit
    }
}

Variables:
    Mi         marking rate of the ith flow
    Ti         target rate of the ith flow
    Ri         reservation rate of the ith flow
    E          excess bandwidth
    ownd(N)    newly measured OUT window
    ownd(N-1)  OUT window measured at time N-1
    iwnd(N)    newly measured IN window
    iwnd(N-1)  IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1, and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (or the marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and the congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to consummate its target rate owing to the repeated reduction of sending rates through the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by as much as β, the so-called complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor β is

β = 1 - Thr_i / R_i. Otherwise, the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to the flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN, and IN-marked packets change into OUT. The re-born packets are treated by the RIO queue at the core of the network unlike what the sources anticipate. If the re-born OUT-marked packets (IN-OUT), which were IN at the sources, are, for example, dropped at the core in a time of temporary congestion, the TCP sources, mistaking it for acute congestion, will reduce iwnd to half its size; TMS_I then does not alleviate the TCP dynamics over the assured service well. On the other hand, if the re-born IN-marked packets (OUT-IN), sent as OUT by the sources, are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve ownd, which retards the removal of the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but as IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides must not change, and the strategy TMS_II elaborates has to respect this. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been reduced to half its size and the throughput measured at the edge of the network does not complete the target rate, TMS_II re-marks packets OUT-marked at the sources into IN, so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is removed, the TCP source halves the ownd size. Next, TMS_II turns packets IN-marked at the sources into OUT if the congestion window has not been reduced to half its size and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by a step, and so allows other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been reduced to half its size and the flow has not achieved its target rate yet, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, and OUT at the core of the network; when an OUT-IN packet is dropped, TMS_I halves the ownd size. It thereby tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT by a step.
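The mapping from the (source mark, edge mark) pair to the three drop precedences can be sketched as a small lookup. The function name is illustrative; the mapping itself follows the IN/IN-OUT/OUT-IN/OUT scheme described above:

```python
# The three TMS drop precedences of Section 3.5: packets whose source and
# edge marks agree sit at the extremes (DP0 for IN, DP2 for OUT); any
# re-marked packet (IN-OUT or OUT-IN) takes the middle precedence DP1.
# Under congestion, RIO drops DP2 first, then DP1, then DP0.
def drop_precedence(source_mark, edge_mark):
    """Map (mark at source, mark at edge) to a drop precedence."""
    if source_mark == edge_mark:
        return "DP0" if source_mark == "IN" else "DP2"
    return "DP1"   # IN-OUT or OUT-IN

assert drop_precedence("IN", "IN") == "DP0"     # green
assert drop_precedence("IN", "OUT") == "DP1"    # IN-OUT, yellow
assert drop_precedence("OUT", "IN") == "DP1"    # OUT-IN, yellow
assert drop_precedence("OUT", "OUT") == "DP2"   # red
```

Keeping re-marked packets in the middle precedence is what lets TMS_I distinguish a dropped re-marked packet from a genuinely IN- or OUT-marked one, as the three re-marking situations above rely on.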

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP flows are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not change and will capture the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than the non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be completed.

UDP flows in the TMS are dealt with as out-of-profile, and are thus put at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2), but lowers the drop precedence of TCP packets by as much as the scaling factors relative to them. It therefore tries to guarantee the target rates of the TCP flows by allocating to them the excess bandwidth that the UDP flows had almost dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results with the two marking algorithms we have described in the previous chapter. The simulation was done with the network simulator 2 (NS-2.1b8a). For the simulation we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters for the TMS are shown in Table 4, and we set a queue weight of 0.002 for both schemes.

Table 4. RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
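As a reminder of how the parameters in Table 4 act, here is a minimal sketch of the RED early-drop probability curve that RIO keeps per drop precedence (illustrative only; the actual queues also maintain an EWMA average queue length with the 0.002 weight mentioned above):

```python
# Minimal RED drop-probability curve, one instance per drop precedence.
# avg is the (averaged) queue length in packets; parameters follow Table 4.
def red_drop_prob(avg, min_th, max_th, p_max):
    if avg < min_th:
        return 0.0                       # no early drops below min_th
    if avg >= max_th:
        return 1.0                       # force-drop at or above max_th
    # linear ramp from 0 to p_max between the two thresholds
    return p_max * (avg - min_th) / (max_th - min_th)

# Table 4 settings for the three drop precedences: (min_th, max_th, Pmax)
RED_PARAMS = {"DP0": (100, 400, 0.02), "DP1": (100, 400, 0.05), "DP2": (50, 400, 0.1)}
```

Because DP2 has the lowest min_th and the highest Pmax, red packets start being dropped earlier and more aggressively than yellow or green ones.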

First of all, we investigate the simulation results with the Adequate-marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows (flow6 and flow7) at t = 120 s, and the last three flows (flow8 through flow10) at t = 220 s, which stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, where the five flows compete for the excess bandwidth of 5 Mbps; marking initially takes place, but only temporary marking for sharing the excess bandwidth. Second, from t = 120 s to 210 s, under soft congestion, seven flows try to achieve a bandwidth of 3 Mbps each; marking in this interval is carried out essentially to realise the reservation rate of each user. Next, grave congestion appears from t = 220 s to t = 350 s: all the flows do their best to reach their target rates but fail, and the marking rate in this period is higher than in the other intervals. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to the congestion remaining after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the conventional edge marker continuously over-marks its packets, as shown in Figure 15 (a), which exhibits somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in accordance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15 shows two panels, (a) Time Sliding Window Two Color Marker and (b) Two Markers System: bandwidth (Mbps, 0 to 6) of flow1 through flow10 versus simulation time (0 to 400 s).]

Fig. 15. Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The network state changes dynamically as the number of flows increases, as follows: over-provisioned, temporary congestion, and serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic characteristics by governing the marking rate. S1 is a TCP flow sending indefinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate and lags behind changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bottleneck bandwidth with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; packet marking is, however, quickly turned off when TMS detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bottleneck bandwidth more fairly than in Figure 16 (a).
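The fast rise and fall of the marking rate described above can be sketched schematically as follows; the update rule and the default values of α and β are assumptions for illustration, not the thesis's actual algorithm (which is given in Chapter 3):

```python
# Schematic update of a source's marking rate using scaling factors.
# alpha scales how aggressively marking rises under a shortfall; beta how
# quickly it decays once the target rate is met (values are illustrative).
def update_marking_rate(m, achieved, target, alpha=0.5, beta=0.5):
    if achieved < target:
        shortfall = (target - achieved) / target
        m = min(1.0, m + alpha * shortfall)   # mark more to claim bandwidth
    else:
        surplus = (achieved - target) / target
        m = max(0.0, m - beta * surplus)      # back off: turn marking off quickly
    return m
```

The point of the sketch is only that the adjustment is proportional to how far the flow is from its target, which is what lets S1 react within one update interval when S2 toggles.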

Finally, we present the simulation results for the marking rate-based fairness algorithm described in Section 3.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, that is, an under-subscribed network. The experiment ran from t = 0 s to 500 s. Figure 17 shows that the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs of the two schemes, shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability whenever the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the marking probability of the sources, all the flows are able to adapt to network dynamics.
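A toy illustration of the marking rate-based sharing idea (hypothetical code, not the TMS_II algorithm of Section 3.4): when the network is over-provisioned, each flow's share of the excess bandwidth is taken proportional to its marking rate.

```python
# Toy proportional-share rule: excess bandwidth is divided among flows in
# proportion to their marking rates (illustrative, not the thesis's code).
def share_excess(excess_bw, marking_rates):
    total = sum(marking_rates)
    if total == 0:
        return [excess_bw / len(marking_rates)] * len(marking_rates)
    return [excess_bw * m / total for m in marking_rates]
```

A flow that marks twice as intensively, i.e. signals twice the demand, receives twice the excess share; when no flow marks at all, the excess is split evenly.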

[Figure 16 shows two panels, (a) Time Sliding Window Two Color Marker and (b) Two Markers System: bandwidth (Mbps, 0 to 8) versus simulation time (0 to 400 s), plotting S1 (total throughput), S2 (total throughput), and S1 (marking rate).]

Fig. 16. Marking packets through the scaling factor: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis signifies the simulation time.


[Figure 17 shows two panels, (a) Time Sliding Window Two Color Marker and (b) Two Markers System: bandwidth (Mbps, 0 to 7) of flow1 through flow10 versus aggregate flow number (1 to 6).]

Fig. 17. Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis signifies the aggregate flows.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5. The average throughput of each aggregate flow as RTT increases

Aggregate flow (No.)        1        2        3        4        5        6
RTT (ms)                   80      100      130      170      220      280
E[Thr] of TSW2CM (Mbps)  4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)     5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. We used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams, generated by infinite file transfers, originate at sources 1 through 4 and are destined to nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios, an over-provisioned and an under-provisioned network; that is, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at a rate of 1 Mbps. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of Section 4.1.


Fig. 18. Simulation topology II using the Two Markers System.

First of all, we investigate the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increase the number of TCP flows with a target rate of 1 Mbps (sources 1 to 4), while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ = (Σi Ti / C) × 100 (%)    (9)

where Ti and C represent the target rate of flow i and the capacity of the bottleneck link, respectively. The interval from ρ = 100 % to 200 % shown in Figure 19 corresponds to the reduced capacity of the bottleneck link, i.e., an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW = (ExBW / T) × 100 (%)    (10)

Therefore an excess bandwidth of 0 % denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.
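For concreteness, the two percentages of Eqs. (9) and (10) can be computed with a couple of one-liners (a sketch; the variable names are ours, not the thesis's):

```python
# Link utilisation (Eq. 9) and per-flow excess bandwidth (Eq. 10), in percent.
def link_utilisation(targets, capacity):
    return 100.0 * sum(targets) / capacity

def excess_bw_percent(throughput, target):
    # negative values mean the flow fell short of its target rate
    return 100.0 * (throughput - target) / target
```

With the six 1 Mbps reservations above, the 8 Mbps bottleneck gives ρ = 75 % (over-provisioned), and shrinking the link to 3 Mbps gives ρ = 200 % (under-provisioned).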


[Figure 19 shows two panels, (a) Time Sliding Window Two Color Marker and (b) Two Markers System: excess bandwidth (%, -100 to 200) of TCP (average) and UDP (average) versus link utilisation (%, 30 to 190).]

Fig. 19. Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis signifies the link utilisation in percent.

However, since UDP sources inject traffic into the network without any penalty, they achieve their target rates under all conditions and consume all of the excess bandwidth in the under-provisioned state; the only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only marked OUT (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If the number of marked packets of a flow exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and obtains a higher rate than the others. The distribution of excess bandwidth in the over-provisioned network, shown in Figure 19 (b), is fairer than that in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP packets and TCP packets marked OUT, which are outside of their contract, when congestion occurs, before packets with lower drop precedence. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows for link utilisation ρ = 60 % and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
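Assumption (ii) above amounts to a simple threshold test at the edge; a sketch (hypothetical names, with E[m] taken as the mean marking rate over the flows seen at the edge):

```python
# Sketch of the edge-embedded module's demand test: a flow whose marking
# rate exceeds the average marking rate E[m] is treated as demanding more
# bandwidth and is marked more aggressively (illustrative only).
def wants_more_bandwidth(marking_rates, i):
    avg = sum(marking_rates) / len(marking_rates)   # E[m]
    return marking_rates[i] > avg
```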

Next, we present the simulation results considering the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows the extent to which the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck capacities of 8 Mbps and 3 Mbps respectively. The RTT parameters follow Section 4.1, and the average throughput of all the flows is again similar in the two schemes. First, we consider the results in the over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. In Figure 20 (b), on the other hand, the TCP flows fairly share the excess bandwidth of about 0.3 Mbps, owing to the different marking strategy applied against UDP flows; all the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. Figure 20 (b), however, shows that in the under-provisioned network bandwidth is distributed fairly (about 0.5 Mbps per flow), unlike in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20 shows two panels, (a) Time Sliding Window Two Color Marker and (b) Two Markers System: bandwidth (Mbps, 0 to 2) versus aggregate flow number (1 to 4), plotting TCP1 through TCP4 and UDP1-UDP2 in both the over-provisioned and the under-provisioned case.]

Fig. 20. Throughput according to network conditions: the vertical axis represents the achieved bandwidth relative to the target rate, and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can apply under a variable SLA scope. Two important issues have arisen in this area, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate, owing to weaknesses in the TCP congestion control algorithm; the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the overeating of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and to allocate the bandwidth of a bottleneck link fairly to each flow with respect to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to changes in the network, so that it marks packets only as much as the reservation rates they are meant to guarantee. It also notifies TMS_II, the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy. Consequently, it mitigates the unfairness of per-aggregate marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduces TCP's impact on the assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also analysed the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP would otherwise dominate and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117.

[2] W. Feng, D. Kandlur, S. Saha, and K. Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290.

[3] S. Floyd and V. Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386.

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, Assured Forwarding PHB Group, RFC 2597, June (1999).

[5] K. Nichols, S. Blake, F. Baker, and D. Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec. (1998).

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Dec. (1998).

[7] V. Jacobson and M. Karels, Congestion Avoidance and Control, in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329.

[8] I. Yeom and A. L. N. Reddy, Realizing Throughput Guarantees in a Differentiated Services Network, in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376.

[9] N. Seddigh, B. Nandy, and P. Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999).

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May (1998).

[11] J. Ibanez and K. Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998).

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221.


[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449.

[14] N. Seddigh, B. Nandy, and P. Pieda, Study of TCP and UDP Interaction for AF PHB, IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999).

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203.

[16] A. Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc. HPSR'01, Dallas, May (2001) 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999).

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Aug. (1998).

[19] M. Vanhala, Differentiated Services Architecture, S-38.128 Teletekniikan erikoistyö (1999).

[20] V. Jacobson, K. Nichols, and K. Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998).

[21] V. Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990).

[22] V. Paxson, End-to-End Internet Packet Dynamics, in Proc. of ACM SIGCOMM, Sept. (1997).

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct. (1996).

[24] S. Floyd and T. Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999).

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proceedings of ACM SIGCOMM, Oct. (1994) 24-35.

[26] S. Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, Vol. 24, No. 5, Oct. (1994) 10-23.

[27] S. Floyd and V. Jacobson, Random Early Detection Gateways for Congestion Avoidance, ACM/IEEE Transactions on Networking, Vol. 1, No. 4, Aug. (1993) 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, TCP-friendly Traffic Marker for IP Differentiated Services, in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48.



There are two cruxes of the issues identified in differentiated services (diffserv) networks. One is TCP dynamics over Assured Services, and the other is the interaction of TCP and UDP flows under the Assured Forwarding Per-Hop Behavior (PHB). First, TCP's complex response, primarily to packet losses, affects Assured Services in a differentiated services network. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. In the diffserv network, however, this additive increase and multiplicative decrease make it hard to protect the reservation rate of Assured Services: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may never reach its reservation rate. We therefore argue for the use of TCP-friendly building blocks (or modules) and fairness modules in the diffserv architecture with regard to this issue, and propose the Two Markers System (TMS), which is able to properly mark packets and fairly share bandwidth among flows with respect to their targeted sending rates. TMS has two marking modules, placed at the source and at the edge of a diffserv network. At the sources of the network, the virtual source marker plays the important roles of reducing TCP's impact on the assured services and of marking packets adequately. Next, at the edge of the network, the edge-embedded marker conducts a new fairness policy based on the marking rates of the flows from the sources, so-called marking rate-based fairness.

Second, the other issue identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows. The interaction of TCP and UDP under the AF PHB has become an important fairness issue in the diffserv context. We therefore examine the unfair distribution of excess bandwidth in an over-provisioned network, as well as the unfair degradation in an under-provisioned network, for TCP and UDP traffic.

Finally, we present simulation results to illustrate the effectiveness of the TMS scheme over several parameters: the Two Markers System reduces TCP's impact on the assured service and fairly shares the bottleneck link bandwidth of the network. We then also present simulation results to illustrate the effectiveness of the TMS scheme on the TCP and UDP interaction: all the flows in the Two Markers System fairly share the excess bandwidth and experience degradation in proportion to their target rates.


DEDICATION

To my parents Jongwang Lee and Okhee Kim


ACKNOWLEDGEMENT

I am indebted to many individuals for the care and support given to me during my master's course studies. First of all, I would like to express my deep gratitude to Professor Chulhee Kang. As my thesis advisor, he has provided me constant encouragement, insightful comments, and invaluable suggestions, which benefited not only the completion of this thesis but also my career in a long time to come. I also would like to thank Professors Jinwoo Park and Dooseop Eom for serving on my committee.

I am also grateful to the seniors and juniors of the Broadband Telecommunication Laboratory, especially Seungjoon Seok, who spent a lot of time with me during my thesis research; his advice and challenges have motivated me to do high-quality research. I warmly thank my colleagues in the ITRC project, Seungjin Lee and Sukmin Hong, for numerous conversations, continuous support, and encouragement. Special thanks go to Doctor Jihoon Lee, who spent a lot of time with me improving my presentation and technical writing skills, and Sungkwan Youm, who helped me a great deal with his insights into networking and through numerous discussions. Thanks also go to my classmates Sungbum Joo and Jaehyung Ryu for their friendship and encouragement. I would also like to acknowledge Sungwoo Ryu, who made it possible for me to go through this difficult time. I would like to recognize the following colleagues: Deagun Kim, Kyengyong Kang, Kangwook Lee, Changwoo Lee, Chulwoo Kim, Joosang Youn, Junghyun Kim, Hyunsook Park, and Eunsun Park, for their friendship, enjoyable technical discussions, and advice. I am really pleased to be part of this group of talented colleagues. I am also thankful to my seniors Soohyun Cho, Coben Han, Taeho Jung, Inyong Ha, Dongkil Shin, Sukun Yoon, Yejin Song, Kisun Lee, Dongwook Seo, Kyungseop Moon, and Kiwon Hong for their advice and guidance.

This research was supported in part by Information and Telecommunication Research Center (ITRC) contracts. I am deeply grateful to the Asia-Pacific Advanced Network-Korea Consortium (APAN-KR) members for technical discussions on Differentiated Services related issues.


I cannot find words strong enough to express my gratitude to my parents, since they have always loved me, believed in me, and encouraged me in my studies. This thesis could not have been completed without their presence beside me. I am also deeply grateful to my elder sister Jimin Lee, her husband Jin Kim, and my lovable niece Hyun Kim for their dedicated sacrifice, support, and encouragement.

Especially, I am indebted to my friend Moonhee Choi, who helped me in many ways during the final stage of this study. I am also thankful to my friends Taewan Kim, Byeongwook Bae, Yonghyun Jung, Sangmok Han, Byeongil Son, Hongsik Park, Hyoseop Lee, Sungchul Choi, Sungeun Ku, Changjoo Yang, Minsang Kim, Sukhoon Byeon, Dongchul Lee, Jaekyun Lee, Joonghyo Lee, Jepyo Na, Dukku Hwang, and Jangyeon Kim for their everlasting friendship, understanding, and encouragement. Last but not least, I would like to thank my valuable friend Jieun Ahn for her dedicated sacrifice, continuous support, understanding, and encouragement.


TABLE OF CONTENTS

ABSTRACT
DEDICATION
ACKNOWLEDGEMENT
LIST OF FIGURES
LIST OF TABLES
ABBREVIATIONS

CHAPTER 1 INTRODUCTION

CHAPTER 2 BACKGROUND
2.1 TCP and Active Queue Management
2.2 Differentiated Services Architecture
2.2.1 Concepts in diffserv
2.2.2 Diffserv network
2.2.3 Service Realizations
2.3 Related Work

CHAPTER 3 TWO MARKERS SYSTEM
3.1 The framework of Two Markers System
3.2 The crux of fairness strategy
3.3 Adequate-Marking strategy: Two Markers System-I algorithm
3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm
3.5 The issue of re-marking
3.6 The issue of interaction between TCP and UDP flows

CHAPTER 4 SIMULATION AND ANALYSIS
4.1 TCP dynamics over Assured Services in a diffserv network
4.2 Interaction of TCP and UDP flows

CHAPTER 5 CONCLUSION

REFERENCES


LIST OF FIGURES

Fig. 1. Throughput of Single TCP Transfer
Fig. 2. TCP/IP Headers
Fig. 3. The marking/dropping behavior of RED
Fig. 4. Mapping of DSCP-marked streams to PHBs and aggregate routing
Fig. 5. A non-DS-capable and two DS-capable domains connected to a backbone network
Fig. 6. Type of Service field in IPv4
Fig. 7. A traffic conditioner and its functional components
Fig. 8. Usage of DS-field in Class Selector PHB
Fig. 9. The "Two Markers System" framework
Fig. 10. The design of the Two Markers System architecture
Fig. 11. Program code 1: TMS_I algorithm (cwnd open)
Fig. 12. Program code 2: TMS_I algorithm (cwnd close)
Fig. 13. Program code 3: TMS_II algorithm
Fig. 14. Simulation topology I using the Two Markers System
Fig. 15. Adequate-marking algorithm
Fig. 16. Marking packets through the scaling factor
Fig. 17. Throughput for the marking rate-based fairness strategy
Fig. 18. Simulation topology II using the Two Markers System
Fig. 19. Sharing excess bandwidth according to the link utilization
Fig. 20. Throughput according to network conditions


LIST OF TABLES

Table 1. The four AF classes and the related three drop precedence values
Table 2. TCA specification for BBE service
Table 3. TCA specifications for Leased Line service
Table 4. RED parameter settings in the TMS
Table 5. The average throughput of each aggregate flow as RTT increases


ABBREVIATIONS

AF: Assured Forwarding, a PHB group defined in [4]
ATM: Asynchronous Transfer Mode
BA: Behaviour Aggregate, separate traffic flows receiving the same treatment in a router
DS: diffserv, Differentiated Services
DSCP: Differentiated Services Codepoint, value of a field in IP packets in DS-capable domains, based on which the forwarding behaviour is applied
EF: Expedited Forwarding, a PHB defined in [20]
IETF: Internet Engineering Task Force
IP: Internet Protocol
IPv4: Internet Protocol version 4, the current widely used IP version
IPv6: Internet Protocol version 6, the developed future version of IP
ISP: Internet Service Provider
MF: Multi-Field
MPLS: Multiprotocol Label Switching
NE: Network Element, e.g. a router
PHB: Per-Hop Behaviour, forwarding behaviour applied to each IP packet in routers in DS-capable domains
QoS: Quality of Service
RFC: Request For Comments
RSVP: Resource Reservation Protocol
SLA: Service Level Agreement, an agreement on a service between a provider and a customer
SNMP: Simple Network Management Protocol
TC: Traffic Conditioner
TCA: Traffic Conditioning Agreement, a part of the SLA that describes the agreed service in detail from the technical point of view
TCP: Transmission Control Protocol
TOS: Type of Service
UDP: User Datagram Protocol
VPN: Virtual Private Network


CHAPTER 1 INTRODUCTION

The current Internet is based on the best-effort service model. In this model the service provider allocates bandwidth among all of the instantaneous customers as best it can; that is, all user packets compete equally for network resources. The service provider also attempts to serve all of them without making any explicit commitment as to rate or any other service quality. The best-effort model has been successful until now because the majority of the traffic on the Internet is TCP-based: the TCP end-to-end congestion control mechanism forces traffic sources to back off whenever congestion is detected in the network [8]. However, such dependence on the end systems' interaction is increasingly becoming unrealistic. Given the current best-effort model with FIFO queuing inside the network, it is relatively easy for non-adaptive sources to gain greater shares of network bandwidth and thereby starve other, well-behaved, TCP sources. A greedy source, for example, may simply continue to send at the same rate when faced with congestion, while other TCP sources back off. The best-effort model is also unsuitable for applications such as real-time audio and video that require explicit bandwidth and bounded delay guarantees. Moreover, the best-effort model treats all packets equally once they have been injected into the network. Thus it is difficult for Internet Service Providers (ISPs) to provide services that are commensurate with the expectations of customers who are willing to pay more for a better class of service.

Over the last decade a number of efforts have been made to provide Quality of Service in the Internet. Much of the work has focused on the end-to-end per-session reservation approach, where an application makes an end-to-end reservation before starting a session. While it makes sense to set up a reservation for a long-lasting session, this approach is not suitable for highly transaction-oriented applications such as browsing the World Wide Web (WWW): the overhead and latency involved in setting up a reservation for each session become prohibitively high. There are also concerns regarding the scalability of providing fine-grained guarantees at the per-session level in the core of the Internet.

The movement toward providing Quality-of-Service (QoS) assurance in the Internet has encouraged research in differentiated services (diffserv or DS) as the number of users keeps growing. Diffserv is a suitable solution for QoS provisioning because it scales well with an increasing number of network users and it does not alter the current Internet paradigm much [11]. In contrast to the per-session reservation approach, the differentiated services scheme allows service providers to offer different levels of service to a few classes of aggregated traffic flows. An ISP, for example, may offer two levels of service: a premium service for some customers who are willing to pay more, and best-effort service at a lower price. The differentiated services approach attempts to allocate bandwidth for aggregated flows without per-session signaling or per-session state in the network core, and to offer better services for users willing to pay more under long-term service contracts.

The journal model is Lecture Notes in Computer Science.

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment the packet should receive inside the domain. However, because of the limited number of bits available in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks called per-hop behaviors (PHBs), which are used by routers to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) [20] and the Assured Forwarding (AF) PHBs [4]. The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. To ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, consists of four independently forwarded classes, each with two or three drop-precedence levels. The idea behind AF is to preferentially drop best-effort packets and packets that are outside of their contract when congestion occurs.
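For concreteness, the recommended codepoints for these PHBs (as standardized for AF and EF in the IETF RFCs) can be tabulated in a short sketch; the helper `dscp_to_tos_byte` is our illustrative name, not part of any standard API:

```python
# Recommended DSCP values for the initial PHBs.
# AF class i with drop precedence j has codepoint 8*i + 2*j (RFC 2597);
# EF is 101110b = 46 (RFC 2598).
AF = {f"AF{i}{j}": (8 * i + 2 * j) for i in range(1, 5) for j in range(1, 4)}
EF = 0b101110  # 46

def dscp_to_tos_byte(dscp):
    """The 6-bit DSCP occupies the six most significant bits of the old TOS octet."""
    return dscp << 2

print(AF["AF11"], AF["AF23"], EF)        # 10 22 46
print(dscp_to_tos_byte(EF))              # 184
```

Note the regular structure: the class selects the queue, while the low-order pair of bits encodes the drop precedence within the class.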

In this thesis we consider a form of better-than-best-effort service called the "Assured Service". The Assured Service follows expected capacity profiles that are statistically provisioned. Packets are treated preferentially according to the dropping probability applied to the best-effort queue. The assurance of service comes from the expectation that the traffic is unlikely to be dropped as long as it stays within the negotiated capacity profile. The building blocks of this service are a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in-profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise the packet is marked OUT (out-of-profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or token bucket control module. A differentiated dropping algorithm such as RIO (RED with IN/OUT) is provided in the core routers of the network. In particular, OUT packets are preferentially dropped at the bottleneck upon evidence of congestion, before the IN packets; IN packets are discarded only after all incoming OUT packets have been dropped. With this dropping policy, the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.
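A minimal sketch of such an edge marker, using a token bucket (the class name and interface are illustrative, not the thesis's actual implementation):

```python
class TokenBucketMarker:
    """Sketch of an edge marker: packets within the contracted profile
    (rate in bytes/s, burst in bytes) are marked IN, all others OUT."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst   # bucket starts full
        self.last = 0.0       # time of the previous packet

    def mark(self, now, pkt_size):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_size:
            self.tokens -= pkt_size
            return "IN"
        return "OUT"

m = TokenBucketMarker(rate=1000, burst=1500)
print(m.mark(0.0, 1000), m.mark(0.1, 1000))  # IN OUT
```

The second packet is marked OUT because only 100 bytes' worth of tokens accumulate in 0.1 s at 1000 bytes/s, leaving the bucket below the packet size.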

In [11] the authors showed that using a simple token bucket marker for the above Assured Service does not guarantee TCP the minimum assured rate. They attributed the cause of this behaviour to TCP's complex response, primarily to packet losses. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. The multiplicative decrease (halving the congestion window) is required to avoid congestion collapse, and TCP treats a packet drop as an indication of congestion [7]. In the diffserv network, however, this additive-increase/multiplicative-decrease behaviour makes it hard to protect the reservation rate: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may in the end never reach its reservation rate. The overall performance was therefore affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications and that an Assured Service could not provide consistent rate guarantees. This problem is called TCP dynamics over the Assured Service.
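The difficulty can be illustrated with a back-of-the-envelope calculation: after a single OUT drop at the reserved rate, cwnd is halved and regains only one segment per RTT, so the flow spends on the order of W/2 round trips below its reservation. This is a simplified model of our own (ignoring slow start, timeouts, and delayed ACKs), not a result from [11]:

```python
def rtts_to_recover(target_rate, mss, rtt):
    """After one loss at the reserved rate, TCP halves cwnd and regains
    one MSS per RTT; returns the RTTs spent below the reserved rate.
    target_rate in bytes/s, mss in bytes, rtt in seconds."""
    w_target = target_rate * rtt / mss   # window (in segments) at the reserved rate
    return w_target / 2                  # segments to regain == RTTs needed

# e.g. a 2 Mb/s (250,000 B/s) reservation, 1000-byte segments, 100 ms RTT
print(rtts_to_recover(250_000, 1000, 0.1))  # 12.5
```

With repeated bursty OUT drops these recovery periods overlap, which is why the flow may never average its contracted rate.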

In this thesis we take into consideration the problem of TCP dynamics and the differentiated drop policies of the network in realizing the reserved throughputs, and propose a newly modified scheme, called the Two Markers System, for improving the realisation of target rates in a diffserv network. In addition, the issue of fairness among users is important in this diffserv context: since we expect that the differentiation in service will be based on some kind of pricing mechanism, it is necessary that the resource allocation be commensurate with how the service is priced. In this thesis we also examine how fairly the proposed scheme allocates bandwidth to flows.

One of the key problems identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows [9, 11, 14]. The interaction between TCP and UDP under the AF PHB has become an important fairness issue in the diffserv context, because TCP flows are penalised more than UDP flows with respect to service degradation and fail to achieve their target rates. We therefore address the interaction between TCP and UDP flows by regulating their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP's influence over Assured Services, considers the issue of fairness between TCP and UDP at a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents results of the above algorithms in simulated environments in which these traffic types coexist, and analyses the simulation results. Chapter 5 concludes our work.


CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point the rate at which TCP sends its packets exceeds the network's capacity to deliver them; queues then build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow start and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants of TCP. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion-avoidance phase, in which the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above network capacity, packets are eventually dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred, immediately halves its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.

When congestion is severe enough that packet loss cannot be inferred in such a manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm: if the sender continually sends the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.
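The back-off schedule can be sketched as follows (the 64-second cap is a common implementation choice for illustration, not something mandated by the text):

```python
def backoff_rtos(initial_rto, retries, cap=64.0):
    """Exponential back-off: the retransmission timeout doubles on each
    successive timeout for the same segment, up to a cap (in seconds)."""
    rto, schedule = initial_rto, []
    for _ in range(retries):
        schedule.append(rto)
        rto = min(rto * 2, cap)   # double the RTO, bounded by the cap
    return schedule

print(backoff_rtos(1.0, 5))  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

A successful acknowledgment of new data resets the interval back to its normal, RTT-estimated value.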

Fig. 1. Throughput of a single TCP transfer

Figure 1 shows graphically how TCP slow start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion the TCP sender enters the congestion-avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP then oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.
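The two growth regimes described above can be sketched as a per-RTT trace (an idealized model with no losses and no delayed ACKs):

```python
def cwnd_trace(ssthresh, rounds):
    """Congestion window per RTT: doubles each RTT while below ssthresh
    (slow start), then grows by one segment per RTT (congestion avoidance)."""
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return trace

print(cwnd_trace(ssthresh=16, rounds=8))  # [1, 2, 4, 8, 16, 17, 18, 19]
```

The transition at cwnd = 16 reproduces the knee visible in Fig. 1, where exponential growth gives way to linear growth.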

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One proposed modification is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of the segments it is missing. With this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger further sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and the TCP sender should thus be allowed to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The New Reno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern: the congestion window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion-avoidance phase so that it performs its linear increase only when the network is not congested: if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential for improving loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections that are multiplexed across the link.

Fig. 2. TCP/IP headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem because a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss; in the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. To do so, the router first examines the ECN-Capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the Congestion Experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the returning acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes its congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver in order to signal that it has in fact reduced its sending rate.
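The router-side logic can be sketched with the bit layout of the original ECN proposal (RFC 2481), where ECT and CE occupy the two least significant bits of the TOS/DS octet. This is a simplified sketch; a real router combines this decision with probabilistic marking such as RED's:

```python
ECT = 0b10  # ECN-Capable Transport bit of the TOS/DS octet (RFC 2481)
CE  = 0b01  # Congestion Experienced bit

def router_mark(tos_byte):
    """A congested router's decision: mark ECN-capable packets,
    drop the rest. Returns the new TOS byte, or None for a drop."""
    if tos_byte & ECT:
        return tos_byte | CE   # signal congestion instead of dropping
    return None                # not ECN-capable: the packet is dropped

print(router_mark(0b10))  # 3  (ECT set, so CE is marked)
print(router_mark(0b00))  # None  (dropped)
```

The receiver then echoes the CE indication back in the TCP header's ECN-echo bit, as described above.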

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, allowing them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses them to determine whether to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and linearly increases to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.

Fig. 3. The marking/dropping behavior of RED
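The piecewise-linear curve of Fig. 3 translates directly into code. This sketch covers the probability calculation only, omitting RED's EWMA update and the inter-packet count correction used in the full algorithm:

```python
def red_mark_prob(avg, minth, maxth, maxp):
    """RED marking/dropping probability as a function of the EWMA queue
    length `avg` (the piecewise-linear curve of Fig. 3)."""
    if avg <= minth:
        return 0.0                                  # below minth: never mark
    if avg >= maxth:
        return 1.0                                  # above maxth: always mark/drop
    return maxp * (avg - minth) / (maxth - minth)   # linear ramp up to maxp

print(red_mark_prob(5, 10, 30, 0.1))   # 0.0
print(red_mark_prob(20, 10, 30, 0.1))  # 0.05
print(red_mark_prob(35, 10, 30, 0.1))  # 1.0
```

The average itself would be updated on each arrival as avg = (1 - w) * avg + w * q for a small weight w, which is what makes RED respond to persistent rather than transient queue buildup.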

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and in RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour wished for. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs), on a per-packet basis to aggregates of traffic streams in the nodes of DS-capable networks. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies a service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. Different types of service will not be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig. 4. Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig. 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The Service Level Agreements (SLAs) formed specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules which are applied to traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along the paths of IP datagrams.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. By traffic conditioner is meant an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs do not solely define services, but they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes; the former handle traffic entering the domain, the latter handle traffic leaving the domain. In practice the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig. 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e. the interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services but rather the best-effort service and possibly other service provisioning approaches.

Fig. 5. A non-DS-capable domain and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig. 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig. 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from those in a non-DS-capable network.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to that value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The backbone DS router shown always polices incoming traffic, as it cannot trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported in the downstream domain.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels: on the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (the policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain does not match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig. 6. Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that the deployment of diffserv does not cause serious impacts on existing applications.

Fig. 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a manner similar to the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another; their usage and the queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used precedence values as PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling with respect to Delay, Throughput or Reliability; backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
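As a sketch, the octet of Fig. 6 can be split with bit 0 taken as the most significant bit (the function name is illustrative):

```python
def parse_tos(tos):
    """Split the old TOS octet per Fig. 6: bits 0-2 are the precedence,
    bits 0-5 together form the DSCP under diffserv, bits 6-7 are unused."""
    return {
        "precedence": tos >> 5,   # top three bits
        "dscp": tos >> 2,         # top six bits
        "unused": tos & 0b11,     # bottom two bits (ignored by diffserv)
    }

print(parse_tos(0b10111000))  # {'precedence': 5, 'dscp': 46, 'unused': 0}
```

The example byte shows how a legacy node would read precedence 5 from an octet that a diffserv node reads as the EF codepoint 46.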

What traffic conditioning basically means is treating traffic differentially so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but it is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e. service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig. 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
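The two classifier types can be sketched as follows; the packet representation (a plain dict) and the rule format are our illustrative assumptions, not a standard interface:

```python
def ba_classify(pkt):
    """BA classifier: the aggregate is determined by the DSCP alone."""
    return pkt["dscp"]

def mf_classify(pkt, rules):
    """MF classifier: rules is a list of (field->value dict, aggregate name)
    pairs matched against several header fields; first match wins."""
    for match, aggregate in rules:
        if all(pkt.get(field) == value for field, value in match.items()):
            return aggregate
    return "best-effort"   # no rule matched

pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 17, "dport": 53, "dscp": 46}
rules = [({"proto": 17, "dport": 53}, "dns-aggregate")]
print(ba_classify(pkt), mf_classify(pkt, rules))  # 46 dns-aggregate
```

The contrast shows why BA classification scales in the core (one table lookup on six bits) while MF classification is confined to boundary nodes.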

Fig. 7. A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
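The classifier-meter-marker chain described above can be sketched with a toy token-bucket two-color marker. This is an illustration only, under our own assumptions; the class and method names are ours, and this is not the TSW2CM marker discussed later in the thesis.

```python
class TokenBucketMarker:
    """Toy two-color marker: packets within the (rate, burst) profile
    are marked IN; excess packets are marked OUT."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0          # token refill rate in bytes/s
        self.burst = burst_bytes            # bucket depth in bytes
        self.tokens = burst_bytes           # bucket starts full
        self.last = 0.0                     # virtual clock, seconds

    def mark(self, size_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:       # in profile -> IN
            self.tokens -= size_bytes
            return "IN"
        return "OUT"                        # out of profile -> OUT

marker = TokenBucketMarker(rate_bps=8000, burst_bytes=2000)   # 1000 bytes/s
marks = [marker.mark(1000, t) for t in (0.0, 0.1, 0.2, 2.0)]
print(marks)  # the third packet exceeds the profile; the bucket refills by t=2.0
```

A real conditioner would feed such per-packet state to a shaper or dropper, as in Fig. 7.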

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic to conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic cannot be assumed to conform to the TCA, and policing must be prepared to enforce the TCA.

In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore, a limited number of conditioning rules (i.e., some but not all of the TC components in Fig. 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay, and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above; a single PHB is a special case of a PHB group. The relationship between PHBs in a group may be expressed in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized ones, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may serve a local purpose, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig. 6) is limited and room is left for future usage, codepoint mappings can be made freely by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. The PHB specifications themselves give behavioural characteristics instead of implementation guidelines, which leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity, and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented later in this section. As explained above, none of the PHBs is mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv cannot be taken into use if it does not provide backward compatibility; the Default PHB exists for that. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists, and it (i.e., best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that does not conform to its profile can easily be either re-marked to the Default PHB or discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which do not employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as the set of Class Selector Codepoints (see Fig. 8). Compatibility for the 'D', 'T', and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.
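Since the Class Selector codepoints have the form xxx000, with the old IPv4 precedence value in the top three bits of the DSCP, the mapping can be written down directly (a small sketch; the function name is ours):

```python
def class_selector_dscp(precedence):
    """Return the 6-bit Class Selector DSCP for an IPv4 precedence value 0-7.

    The precedence occupies the top three DSCP bits; the low three bits
    are zero, giving the eight 'xxx000' patterns of Fig. 8.
    """
    assert 0 <= precedence <= 7
    return precedence << 3

# The eight Class Selector codepoints, as 6-bit binary strings.
codepoints = [format(class_selector_dscp(p), "06b") for p in range(8)]
print(codepoints)  # eight patterns, all ending in 000
```

Note that precedence 0 maps to 000000, consistent with the Default PHB requirement above.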

Fig 8 Usage of DS-field in Class Selector PHB

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurance and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets of each class are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(i) how much forwarding resources have been allocated to the AF class that the packet belongs to; packets in one class are forwarded independently of the others and within the service rate (bandwidth) that has been configured for the class;

(ii) the current load of the AF class; and

(iii) in case of congestion within the class, the drop precedence of the packet; packets with a higher drop precedence value are discarded preferentially.

Table 1 The four AF classes and the related three drop precedence values

                             Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)      001010    010010    011010    100010
Medium Drop Precedence (2)   001100    010100    011100    100100
High Drop Precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.

One major group of services is that requiring assured bandwidth with low loss, low latency, and low jitter on an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video, or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency, and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than its minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

Some things are common to both examples. First of all, policers need to be configured at the traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus enjoys reduced latency. Quantitative performance measures cannot be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2 TCA specification for the BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications for this service.

Table 3 TCA specifications for the Leased Line service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
EF mark      500 Kbps       Egress point B         Discard
EF mark      250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are denoted A, B, and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C, respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic is discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as the classification of packets is based on both the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both leased lines cross the same router.

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies focused on RED-based differentiated services approaches.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. Their results showed that the proposed scheme decreased TCP impacts on the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets so as to take TCP dynamics into account and adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on the aggregation state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have faults. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth: "idle" flows waste their shares while preventing "active" flows from increasing. Second, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, located at the source (Two Markers System-I, TMS_I) and at the edge (Two Markers System-II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I determines the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked, and so TMS_I decreases TCP impacts on the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) keeps up the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours its packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key point of the fairness strategy is the marking rate (i.e., information on the demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows toward the total target rate of all the flows, based on the marking probabilities from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10 The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates differ from the edge's target rates. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and Cj the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is a reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of the network is the reservation rate (Ri) plus a variable amount, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement Scope, SLA Scope) from the scalability viewpoint of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = \sum_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at the bottleneck links is then given by

E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i    (1)

In the edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to the case m = 1, and so the excess bandwidth is

E = C - R = C_1 - \sum_{i=1}^{n} R_i    (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates in order to determine the excess bandwidth of the network. The reservation-rate information received at an ingress point thus becomes R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}, where k, k = 1, 2, ..., l, represents the kth ingress point. The excess bandwidth is given by

E = C - R = C - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}    (3)


Here the lth ingress point has to be aware of the reservation rates of as many as l-1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System can therefore be implemented for various SLA scopes, but for the reason above we assume that the scope of a service in this paper covers the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

T_i = R_i + X_mi \times E = R_i + X_mi (C - \sum_i R_i)    (4)

where X_mi is proportional to the probability of marking at a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of X_mi is as follows. First, the probability of marking for each flow i is the number of marked packets (N_mi) divided by the throughput (Thr_i), or equivalently the In-profile window divided by the congestion window:

m_i = N_mi / Thr_i = iwnd / cwnd    (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] over all the flows at the edge of the network is

E[m_i] = (1/n) \sum_{i=1}^{n} m_i = (1/n) \sum_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) \sum_{i=1}^{n} Mark_{p,i}    (6)

where Mark_p also denotes the probability of marking in the TMS_I algorithm of Section 3.3. Finally, X_mi is obtained as the ratio between (5) and (6):

X_mi = m_i / (n E[m_i]) = m_i / \sum_{i=1}^{n} m_i    (7)

X_mi thus represents a relative value against the datum point E[m_i]; the fraction varies from 0 to 1 depending upon m_i and E[m_i]. Consequently, this value stands for the share of the excess bandwidth that each flow requires to achieve its target rate.

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, which is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker; the Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some earlier work [2, 8] has focused on similar issues in networks where the marking policies differ from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate up to the bandwidth the source wants to have guaranteed.
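Equations (4) and (7) together distribute the excess bandwidth E in proportion to each flow's marking rate. A small sketch with illustrative numbers of our own (rates in Mbps; not a trace of the thesis simulations):

```python
def fairness_targets(reservations, marking_rates, capacity):
    """Edge target rates T_i = R_i + X_mi * E, with the fairness
    component X_mi = m_i / sum_i m_i as in Eqs. (4) and (7)."""
    excess = capacity - sum(reservations)            # E, per Eq. (1)
    total_m = sum(marking_rates)
    x = [m / total_m for m in marking_rates]         # X_mi, Eq. (7)
    return [r + xi * excess for r, xi in zip(reservations, x)]

# Two flows on a 10 Mbps bottleneck; flow 1 marks twice as heavily,
# so it receives two thirds of the 4 Mbps excess.
targets = fairness_targets([2.0, 4.0], [0.8, 0.4], 10.0)
print(targets)
```

Note that the X_mi sum to 1, so the edge targets always sum to the bottleneck capacity C in the over-provisioned case.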


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking, Markp(init)? It is generally determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as follows.

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd

    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp              /* slow start */
        else
            iwnd += 1/(2*cwnd)         /* congestion avoidance */
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        } else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    }

    cwnd = iwnd + ownd
    Markp = iwnd / cwnd
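Program code 1 can be transcribed into runnable form roughly as follows. This is a sketch under our own conventions (per-ACK updates, window sizes in segments); the function and variable names are ours, not the thesis's.

```python
def tms1_on_ack(iwnd, ownd, markp, thr, ri, issthresh, ossthresh):
    """One ACK-driven update of TMS_I's two windows, after program code 1."""
    cwnd = iwnd + ownd
    if thr < ri:                       # below reservation: grow IN marking
        iwnd += markp if iwnd < issthresh else 1.0 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1.0 / (2 * cwnd)
    else:                              # reservation met: yield excess to others
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1.0 / cwnd
    cwnd = iwnd + ownd
    return iwnd, ownd, iwnd / cwnd     # updated windows and marking rate

# Below the reservation, both windows grow in slow start and Markp holds.
iwnd, ownd, markp = tms1_on_ack(4.0, 4.0, 0.5, thr=1.0, ri=2.0,
                                issthresh=8.0, ossthresh=8.0)
print(iwnd, ownd, markp)
```

Note how, in congestion avoidance, the two windows' increments of 1/(2*cwnd) each add up to the classic 1/cwnd growth per ACK mentioned in the text below.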


Mark_p(init) = 1 - R_i / Thr    (8)

Here Mark_p(init) represents the initial marking rate with which the user wants to realise its target rate.

The window size increases or decreases according to the measured throughput relative to the reservation rate. That is, TMS_I marks packets in proportion to whether the throughput (Thr) measured at the TCP source reaches its target rate. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed; otherwise, it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd, so an increment of iwnd leads to a corresponding decrement of ownd. When either window grows above its threshold size (issthresh or ossthresh), it increases linearly; congestion avoidance then takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause of program code 1 (taken when Thr >= Ri) describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic. It also addresses a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate more properly than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While such an edge marker is fairly effective in keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes a packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the existing edge marker.

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd

    if (OUT-marked packet loss) {
        ossthresh = ownd / 2
        ownd = ossthresh
    } else {
        cwnd = cwnd / 2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: probability of marking
Thr: throughput of a flow
Ri: reservation rate of the ith flow
Ti: target rate of the ith flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half its size whenever an OUT-marked packet loss occurs; this indicates temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion in the network, and then the congestion window is reduced to half its size. The modified TCP congestion algorithm thus compensates for the loss rates caused by the conventional congestion algorithm, which lacks such a distinctive mechanism for reducing the congestion window.

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Markp), which is needed to determine the new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of this datum point by the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that carries the users' information is able to alleviate the unfairness introduced within aggregated flows by per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to share the bottleneck bandwidth fairly. In this case it must closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which are used in the fairness component as stated in Section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed from the sources.

We assume that an edge router receives information on the reservation rate that each user wants assured and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of a network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall the numerical expression (4). As stated in subsection 2.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the condition of the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source exceeds the threshold value, that is, the average marking rate E[mi], it means that the flow wants more bandwidth than others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.
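The target-rate rule above can be sketched as follows; the helper name target_rates and the use of the reported marking probabilities directly as the share weights Xmi are illustrative assumptions, not the thesis's exact code:

```python
def target_rates(reservations, mark_probs, capacity):
    """Edge target rates under the rule above (sketch).

    reservations: per-flow reservation rates R_i (bps)
    mark_probs:   per-flow marking probabilities reported by the
                  sources (used here as the share weights Xm_i)
    capacity:     bottleneck link bandwidth C (bps)
    """
    total_reserved = sum(reservations)
    excess = capacity - total_reserved
    if excess <= 0:
        # under-provisioned: each target rate falls back to R_i
        return list(reservations)
    weight_sum = sum(mark_probs)
    # over-provisioned: each flow receives a share of the excess
    # bandwidth proportional to its marking probability
    return [r + (p / weight_sum) * excess
            for r, p in zip(reservations, mark_probs)]
```

For two flows reserving 1 Mbps each on a 4 Mbps link, marking weights of 0.25 and 0.75 split the 2 Mbps excess into targets of 1.5 Mbps and 2.5 Mbps; when the reservations exceed the capacity, the targets collapse to the reservations.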

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, and the other is to ensure the coherence of the Adequate-marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, this represents that congestion does not occur in the network. The if clause in the first through seventh lines of Program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After the decision on congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor. In this case the fairness factor is computed as α = 1 - Thr_i/Ti. Otherwise it decreases the marking rate by α to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13 Program code 3: TMS_II algorithm

    if (ownd(N) != ownd(N-1)/2)
        for (i = 1 to n)
            Ti = Ri + (Xmi * E)
            if (Thr_i < Ti)
                Mi += α * Mi
            else
                Mi -= α * Mi
    else if (iwnd(N) != iwnd(N-1)/2)
        exit
    else    /* iwnd(N) == iwnd(N-1)/2 */
        for (i = 1 to n)
            Ti = Ri
            if (Thr_i < Ti)
                Mi += β * Mi
            else
                exit

    Mi: marking rate of the i-th flow
    Ti: target rate of the i-th flow
    Ri: reservation rate of the i-th flow
    E:  excess bandwidth
    ownd(N):   newly measured OUT window
    ownd(N-1): OUT window measured at time N-1
    iwnd(N):   newly measured IN window
    iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion happens in the network but is not serious. The else-if clause in the eighth and ninth lines represents the instance in which the edge's marking rate is equal to the marking probability of the sources (or marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, this means that IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to achieve its target rate owing to the repeated reduction of sending rates through the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, a so-called complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor is β = 1 - Thr_i/Ri. Otherwise, the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and then it allocates bandwidth fairly to the flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates the problems of inefficient utilisation of the reserved bandwidth under per-flow marking and of unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-marked packets are treated by the RIO queue at the core of the network unlike what the sources anticipate. If re-marked OUT packets (IN-OUT) that were IN at the sources, for example, are dropped at the core during temporary congestion, the TCP sources, mistaking this for acute congestion, will halve the iwnd size. Therefore TMS_I does not alleviate the TCP dynamics over the assured service well. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will only halve the ownd, which retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but IN at the source, and an OUT-IN packet is the counterpart. We assume that the drop precedence the source decides must not change, and the strategy TMS_II elaborates has to respect this. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window is not halved and the throughput measured at the edge of the network does not meet the target rate, TMS_II re-marks packets that were OUT at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves its ownd size. Next, TMS_II turns packets that were IN at the sources into OUT if the congestion window is not halved and the flow already accomplishes its target rate, so the drop precedences consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by a step, and so it allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window equals half of its former size and the flow does not yet achieve its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, and OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. It thus tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT by a step.
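A minimal sketch of this precedence assignment (the function name and the string marks are hypothetical; the thesis does not give this code):

```python
def drop_precedence(source_mark, edge_mark):
    """Map (source mark, edge re-mark) to one of three RIO drop
    precedences: DP0 = IN, DP1 = IN-OUT or OUT-IN, DP2 = OUT.
    Marks are the strings "IN" or "OUT".
    """
    if source_mark == edge_mark:
        # source and edge agree: plain IN (green) or OUT (red)
        return 0 if source_mark == "IN" else 2
    # source and edge disagree (IN-OUT or OUT-IN): the packet keeps
    # an intermediate precedence (yellow) so the source's decision
    # is not silently overridden by the edge's re-marking
    return 1
```

The core then drops DP2 first, DP1 next, and DP0 last, which is exactly the OUT, OUT-IN/IN-OUT, IN ordering described above.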

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not change and will seize the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile, putting them at a disadvantage, while TCP flows adapt to network dynamics as their marking rates are adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always drops UDP flows marked as OUT (red, DP2) preferentially, but lowers the drop precedence of TCP packets by the scaling factor relative to them. Therefore it tries to guarantee the target rates of the TCP flows by allocating to them the excess bandwidth that the UDP flows had almost dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results with the two marking algorithms described in the previous section. The simulation was done with the network simulator ns-2 (v2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or is fixed below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

           min_th   max_th   Pmax
    DP0    100      400      0.02
    DP1    100      400      0.05
    DP2    50       400      0.10
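Under the standard linear RED drop curve, the settings in Table 4 would give per-precedence drop probabilities as sketched below (a simplification: the real RIO queue also maintains separate average-queue estimates per precedence):

```python
# Table 4 parameters: (min_th, max_th, Pmax) per drop precedence
RED_PARAMS = {
    0: (100, 400, 0.02),  # DP0: IN
    1: (100, 400, 0.05),  # DP1: OUT-IN / IN-OUT
    2: (50,  400, 0.10),  # DP2: OUT
}

def drop_probability(avg_queue, dp):
    """Linear RED drop probability for drop precedence dp."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0          # below min_th: never drop
    if avg_queue >= max_th:
        return 1.0          # above max_th: force drop
    # between the thresholds: probability ramps linearly up to Pmax
    return p_max * (avg_queue - min_th) / (max_th - min_th)
```

Because DP2 has the lowest min_th and the highest Pmax, OUT packets start being dropped earlier and more aggressively than DP1 and DP0 packets at the same average queue length, which realises the dropping order used throughout this chapter.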

First of all, we investigate the simulation results with the Adequate-marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, from S1-D11 to S10-D20, each with a target rate of 3Mbps, and the capacity of the bottleneck link set to 20Mbps. Five flows, S1 through S5, start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s), which stop at t = 350 (s). First, the interval from t = 10 (s) to t = 110 (s) represents an over-provisioned network, and five flows compete for the excess bandwidth of 5Mbps; marking initially takes place, but only temporary marking for sharing the excess bandwidth. Second, at runtime from t = 120 (s) to 210 (s), with soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps; marking in this interval is practically carried out so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220 (s) to t = 350 (s); all the flows do their best to meet their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.

42

While all the flows in Figure 15 (b) mark each packet of a flow in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15: marking rate (Mbps) of flow1 through flow10 versus simulation time (s); panel a) Time Sliding Window Two Color Marker, panel b) Two Markers System]

Fig. 15 Adequate-marking algorithm: the vertical axis represents the marking rates and the horizontal axis the simulation time. The network condition changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic characteristics by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind the changes in the network load: slowly rising in response to an increased traffic load and slowly falling in response to a decreased traffic load. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) fast in response to an increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet-marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results with the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment was performed at run time from t = 0 (s) to 500 (s). Figure 17 represents the results in which the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without adequate marking rates for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. However, in Figure 17 (b), each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking if the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Figure 16: throughput and marking rate (Mbps) of S1 and S2 versus simulation time (s); panel a) Time Sliding Window Two Color Marker, panel b) Two Markers System]

Fig. 16 Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis signifies the simulation time.


[Figure 17: achieved bandwidth (Mbps) of flow1 through flow10 for each aggregate flow; panel a) Time Sliding Window Two Color Marker, panel b) Two Markers System]

Fig. 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput, and the horizontal axis signifies the aggregate flows.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, an implementation of the TCP-source marking module at the source raises the issue of modifying TCP sources in a differentiated services network.

Table 5 The average throughput of the aggregate flows as RTT increases

    Aggregate flow (No.)       1       2       3       4       5       6
    RTT (ms)                   80      100     130     170     220     280
    E[Thr] of TSW2CM (Mbps)    4.8989  4.9703  4.9183  4.9391  4.9247  4.9098
    E[Thr] of TMS (Mbps)       5.2908  5.2207  5.2373  5.1944  5.2763  5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulations we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for destinations 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined for destinations 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at a rate of 1Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing the excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy of increasing the TCP flows with target rates of 1Mbps from sources 1 to 4 while the UDP rate is constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σi Ti / C) × 100    (9)

where Ti and C represent the target rate of a flow and the capacity of a bottleneck link in the network, respectively. The interval from ρ = 100 (%) to 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we examine TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
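Expressions (9) and (10) can be sketched as follows (the function names are illustrative):

```python
def link_utilisation(target_rates, capacity):
    """rho (%) = (sum of target rates / capacity) * 100  -- Eq. (9)."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(throughput, target):
    """Excess bandwidth as a percentage of the target rate -- Eq. (10).

    0% means the flow exactly meets its target; negative values mean
    the flow falls short of it.
    """
    return 100.0 * (throughput - target) / target
```

For example, six flows with 1 Mbps targets on an 8 Mbps link give ρ = 75%, and a flow achieving 1.5 Mbps against a 1 Mbps target receives an excess bandwidth of 50%.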


[Figure 19: average excess bandwidth (%) of TCP and UDP flows versus link utilization (%); panel a) Time Sliding Window Two Color Marker, panel b) Two Markers System]

Fig. 19 Sharing excess bandwidth according to the link utilization: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis signifies the link utilisation as a percentage.

However, since the UDP sources send traffic into the network without any penalty, they meet their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than others in order to achieve its reservation rate; therefore the flow is marked more and has a higher target rate than others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisations of ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results considering the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the results in which the average throughputs of all individual flows of the aggregated traffic realise their target rates in over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1, and the average throughputs of all the flows in both schemes are similar to each other. First, we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth, about 0.5Mbps each. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth, about 0.3Mbps each, as the system elaborates a different marking strategy against the UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their probability of marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, regardless of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that in the under-provisioned network the bandwidth is distributed fairly to each flow (about 0.5Mbps), unlike in Figure 20 (a), by dint of treating the UDP packets as out-of-profile. Consequently, the UDP flows are penalised more than the responsive TCP flows, as shown in Figure 20 (b).

[Figure 20: achieved bandwidth (Mbps) of TCP1-TCP4 and UDP1-UDP2 in over-provisioned and under-provisioned networks; panel a) Time Sliding Window Two Color Marker, panel b) Two Markers System]

Fig. 20 Throughput according to network conditions: the vertical axis represents the achieved throughput, and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under variable SLA scope. In this setting two important issues have arisen, and much research has been performed on them to date: one is TCP dynamics over the AF PHB in a diffserv network, and the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that the two marking modules are able to mark packets properly and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the probability of marking according to the changes in the network, so it marked packets according to the reservation rates the users desire to guarantee. It also notified TMS_II, called the edge-embedded marking module, of the marking rates that are used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and allocated the bandwidth of the bottleneck link to the flows in view of proportional fairness, called the marking rate-based fairness strategy, if resources in the diffserv network are over-provisioned. Consequently, it mitigated the problems of unfairness under per-aggregation marking and of inefficient utilisation under per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP had almost dominated and achieve its target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June (1999).

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. (1998).

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. (1998).

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio De Janeiro, Dec. (1999).

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, May (1998).

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998).

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999).

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May (2001) 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999).

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. (1998).

[19] M. Vanhala, "Differentiated Services-architecture," S-38.128 special assignment (Teletekniikan erikoistyö), (1999).

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998).

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list, April (1990).

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. (1997).

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. (1996).

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999).

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proceedings of ACM SIGCOMM, Oct. (1994) 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. (1994) 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," ACM/IEEE Transactions on Networking, vol. 1, no. 4, Aug. (1993) 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48.



DEDICATION

To my parents Jongwang Lee and Okhee Kim


ACKNOWLEDGEMENT

I am indebted to many individuals for their care and support given to me during my master's course studies. First of all, I would like to express my deep gratitude to Professor Chulhee Kang. As my thesis advisor, he has provided me constant encouragement, insightful comments and invaluable suggestions, which benefited not only the completion of this thesis but also my career for a long time to come. I also would like to thank Professors Jinwoo Park and Dooseop Eom for serving on my committee.

I am also grateful to the seniors and the juniors of the Broadband Telecommunication Laboratory, especially Seungjoon Seok, who spent a lot of time with me during my thesis research. His advice and challenges have motivated me to do high-quality research. I warmly thank my colleagues in the ITRC project, Seungjin Lee and Sukmin Hong, for numerous conversations, continuous support and encouragement. Special thanks go to Doctor Jihoon Lee, who spent a lot of time with me on improving my presentation and technical writing skills, and Sungkwan Youm, who helped me a great deal with his insights in networking and through numerous discussions. Thanks also go to my classmates Sungbum Joo and Jaehyung Ryu for their friendship and encouragement. I would also like to acknowledge Sungwoo Ryu, who made it possible for me to go through this difficult time. I would like to recognize the following colleagues: Deagun Kim, Kyengyong Kang, Kangwook Lee, Changwoo Lee, Chulwoo Kim, Joosang Youn, Junghyun Kim, Hyunsook Park and Eunsun Park, for their friendship, enjoyable technical discussions and advice. I am really pleased to be part of all these talented colleagues. I am also thankful to my seniors Soohyun Cho, Coben Han, Taeho Jung, Inyong Ha, Dongkil Shin, Sukun Yoon, Yejin Song, Kisun Lee, Dongwook Seo, Kyungseop Moon and Kiwon Hong for their advice and guidance.

This research was supported in part by Information and Telecommunication Research Center (ITRC) contracts. I am deeply grateful to the Asian Pacific Advanced Network-Korea Consortium (APAN-KR) members for technical discussions on Differentiated Services related issues.


I cannot find words strong enough to express my gratitude to my parents, since they have always loved me, believed in me and encouraged me in my study. This thesis could not have been completed without their presence beside me. I am also deeply grateful to my elder sister Jimin Lee, her husband Jin Kim and my lovable niece Hyun Kim for their dedicated sacrifice, support and encouragement.

Especially, I am indebted to my friend Moonhee Choi, who helped me in many ways during the final stage of this study. I am also thankful to my friends Taewan Kim, Byeongwook Bae, Yonghyun Jung, Sangmok Han, Byeongil Son, Hongsik Park, Hyoseop Lee, Sungchul Choi, Sungeun Ku, Changjoo Yang, Minsang Kim, Sukhoon Byeon, Dongchul Lee, Jaekyun Lee, Joonghyo Lee, Jepyo Na, Dukku Hwang and Jangyeon Kim for their everlasting friendship, understanding and encouragement. Last but not least, I would like to thank my valuable friend Jieun Ahn for her dedicated sacrifice, continuous support, understanding and encouragement.


TABLE OF CONTENTS

ABSTRACT i

DEDICATION iii

ACKNOWLEDGEMENT iv

LIST OF FIGURES viii

LIST OF TABLES ix

ABBREVIATIONS x

CHAPTER 1 INTRODUCTION 1

CHAPTER 2 BACKGROUND 5

2.1 TCP and Active Queue Management 5

2.2 Differentiated Services Architecture 10

2.2.1 Concepts in diffserv 12

2.2.2 Diffserv network 13

2.2.3 Service Realizations 14

2.3 Related Work 24

CHAPTER 3 TWO MARKERS SYSTEM 25

3.1 The framework of Two Markers System 25

3.2 The crux of fairness strategy 27

3.3 Adequate-Marking strategy: Two Markers System-I algorithm 30

3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm 34

3.5 The issue of re-marking 37

3.6 The issue of interaction between TCP and UDP flows 38

CHAPTER 4 SIMULATION AND ANALYSIS 40

4.1 TCP dynamics over Assured Services in a diffserv network 40

4.2 Interaction of TCP and UDP flows 46


CHAPTER 5 CONCLUSION 51

REFERENCE 53


LIST OF FIGURES

Fig 1 Throughput of Single TCP Transfer 6

Fig 2 TCP/IP Headers 8

Fig 3 The marking/dropping behavior of RED 10

Fig 4 Mapping of DSCP-marked streams to PHBs and aggregate routing 11

Fig 5 A non-DS-capable and two DS-capable domains connected to a backbone network 13

Fig 6 Type of Service field in IPv4 15

Fig 7 A traffic conditioner and its functional components 17

Fig 8 Usage of DS-field in Class Selector PHB 20

Fig 9 The "Two Markers System" framework 26

Fig 10 The design of the Two Markers System architecture 27

Fig 11 Program code 1 TMS_I algorithm (cwnd open) 31

Fig 12 Program code 2 TMS_I algorithm (cwnd close) 33

Fig 13 Program code 3 TMS_II algorithm 36

Fig 14 Simulation topology I using the Two Markers System 40

Fig 15 Adequate-marking algorithm 42

Fig 16 Marking Packet through scaling factor 44

Fig 17 Throughput for marking rate-based fairness strategy 45

Fig 18 Simulation topology II using the Two Markers System 47

Fig 19 Sharing excess bandwidth according to the link utilization 48

Fig 20 Throughput according to network conditions 50


LIST OF TABLES

Table 1 The four AF classes and the related three drop precedence values 21

Table 2 TCA specification for BBE service 23

Table 3 TCA specifications for Leased Line service 23

Table 4 RED parameter settings in the TMS 41

Table 5 The average throughput of aggregate flow as increment of RTT 46


ABBREVIATIONS

AF    Assured Forwarding, a PHB group defined in [4]
ATM   Asynchronous Transfer Mode
BA    Behaviour Aggregate; separate traffic flows receiving the same treatment in a router
DS    diffserv, Differentiated Services
DSCP  Differentiated Services Codepoint; the value of a field in IP packets in DS-capable domains, based on which the forwarding behaviour is applied
EF    Expedited Forwarding, a PHB defined in [20]
IETF  Internet Engineering Task Force
IP    Internet Protocol
IPv4  Internet Protocol version 4, the current widely used IP version
IPv6  Internet Protocol version 6, the developed future version of IP
ISP   Internet Service Provider
MF    Multi-Field
MPLS  Multiprotocol Label Switching
NE    Network Element, e.g. a router
PHB   Per-Hop-Behaviour; forwarding behaviour applied to each IP packet in routers in DS-capable domains
QoS   Quality of Service
RFC   Request For Comments
RSVP  Resource Reservation Protocol
SLA   Service Level Agreement; an agreement of a service between a provider and a customer
SNMP  Simple Network Management Protocol
TC    Traffic Conditioner
TCA   Traffic Conditioning Agreement; a part of the SLA that describes the agreed service in detail from the technical point of view
TCP   Transmission Control Protocol
TOS   Type of Service
UDP   User Datagram Protocol
VPN   Virtual Private Network


CHAPTER 1 INTRODUCTION

The current Internet is based on the best-effort service model. In this model, the service provider allocates bandwidth among all of the instantaneous customers as best it can; that is, all user packets compete equally for network resources. The service provider also attempts to serve all of them without making any explicit commitment as to rate or any other service quality. The best-effort model has been successful till now because the majority of the traffic on the Internet is TCP-based: the TCP end-to-end congestion control mechanism forces traffic sources to back off whenever congestion is detected in the network [8]. However, such dependence on the end systems' interaction is increasingly becoming unrealistic. Given the current best-effort model with FIFO queuing inside the network, it is relatively easy for non-adaptive sources to gain greater shares of network bandwidth and thereby starve other, well-behaved TCP sources. A greedy source, for example, may simply continue to send at the same rate when faced with congestion, while other TCP sources back off. The best-effort model is also ill-suited for applications such as real-time audio and video that require explicit bandwidth and bounded delay guarantees. Moreover, the best-effort model treats all packets equally once they have been injected into the network. Thus, it is difficult for Internet Service Providers (ISPs) to provide services that are commensurate with the expectations of consumers who are willing to pay more for a better class of services.

Over the last decade, a number of efforts have been made to provide Quality of Service in the Internet. Much of the work has focused on the end-to-end per-session reservation approach, where an application makes an end-to-end reservation before starting a session. While it makes sense to set up a reservation for a long-lasting session, this approach is not suitable for highly transaction-oriented applications such as browsing the World Wide Web (WWW): the overheads and latency involved in setting up a reservation for each session become prohibitively high. There are also concerns regarding the scalability of providing fine-grained guarantees at the per-session level in the core of the Internet.

The movement toward providing Quality-of-Service (QoS) assurance in the Internet has encouraged research in differentiated services (diffserv or DS) while the number of users keeps growing. Diffserv is a suitable solution to Quality of Service provisioning because it scales well with an increasing number of network users and it does not alter the current Internet paradigm much [11]. In contrast to the per-session reservation approach, the differentiated services scheme allows service providers to offer different levels of services to a few classes of aggregated traffic flows. An ISP, for example, may offer two levels of services: a premium service for some customers who are willing to pay more, and best-effort service at a lower price. The differentiated services approach attempts to allocate bandwidth for aggregated flows without per-session signaling or maintaining per-session state in the network core, and to offer better services for users willing to pay more with long-term service contracts.

(The journal model is Lecture Notes in Computer Science.)

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment that should be supplied to the packet inside the domain. However, because of the limited number of bits available for use in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks, called per-hop behaviors (PHBs), that are used by routers to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) [20] and the Assured Forwarding (AF) PHBs [4]. The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. In order to ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, specifies a forwarding behavior in which packets see a very small amount of loss, and consists of four independently forwarded classes that have two or three drop preference levels. The idea behind AF is to preferentially drop best-effort packets, and packets which are outside of their contract, when congestion occurs.

In this thesis we consider a form of a better-than-best-effort service called the "Assured Service". The Assured Service follows expected capacity profiles that are statistically provisioned. Packets are treated preferentially according to the dropping probability applied to the best-effort queue. The assurance of service comes from the expectation that the traffic is unlikely to be dropped as long as it stays within the negotiated capacity profile. The building blocks of this service include a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in-profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise the packet is marked OUT (out-of-profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or token bucket control module. A differentiated dropping algorithm such as RIO (Random Early Detection with IN/OUT) is provided in the core routers of the network. In particular, the OUT packets are preferentially dropped upon evidence of congestion at the bottleneck, before the IN packets; only after dropping all incoming OUT packets are IN packets discarded. With this dropping policy, the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.
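As an illustration of the edge marking just described, the token bucket variant can be sketched as follows. This is a toy sketch added for exposition, not code from the thesis; the class and parameter names (TokenBucketMarker, rate_bps, bucket_bytes) are illustrative assumptions.

```python
class TokenBucketMarker:
    """Toy IN/OUT marker: tokens refill at the contract rate, up to a burst depth."""

    def __init__(self, rate_bps, bucket_bytes):
        self.rate = rate_bps / 8.0        # contract rate in bytes per second
        self.depth = bucket_bytes         # bucket depth = burst allowance
        self.tokens = bucket_bytes
        self.last = 0.0

    def mark(self, now, pkt_bytes):
        # Refill tokens for the elapsed interval, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return "IN"                   # within the capacity profile
        return "OUT"                      # out-of-profile; dropped first by RIO

# A flow sending 1000-byte packets exactly at its 80 kbit/s contract stays IN ...
m = TokenBucketMarker(rate_bps=80_000, bucket_bytes=3_000)
marks = [m.mark(t * 0.1, 1_000) for t in range(10)]

# ... while one sending at twice the contract rate sees OUT markings.
fast = TokenBucketMarker(rate_bps=80_000, bucket_bytes=3_000)
marks2 = [fast.mark(t * 0.05, 1_000) for t in range(12)]
```

A RIO queue in the core would then consult these markings, discarding OUT packets at a lower queue threshold than IN packets.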

In [11] the authors showed that the use of a simple token bucket marker for the above Assured Service fails to guarantee TCP the minimum assured rate. They attributed the cause of such behaviour to TCP's complex response, primarily to packet losses. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. Exponential decrease (halving the congestion window) is required to avoid congestion collapse, and TCP treats a packet drop as an indication of congestion [7]. However, in the diffserv network this additive-increase, multiplicative-decrease behaviour makes it hard to protect the reservation rate: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may in the end never reach its reservation rate. Therefore, the overall performance was affected by the bursty losses being experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees. This is called TCP dynamics over Assured Service.

In this thesis we take into consideration the problem of TCP dynamics and the differentiated drop policies of the network in realizing the reserved throughputs, and propose a newly modified scheme, called Two Markers System, for improving the realisation of target rates in a diffserv network. In addition, the issue of fairness among users is important in this diffserv context, since we expect that the differentiation in service will be based on some kind of pricing mechanism; thus it is necessary that the resource allocation be commensurate with how the service is priced. In this thesis we also examine how fairly the proposed scheme allocates bandwidth to flows.

One of the cruxes identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows [9, 11, 14]. TCP and UDP interaction for the AF PHB has become an important issue in the fairness of the diffserv context, because TCP flows are penalised more than UDP flows with respect to service degradation and fail to accomplish their target rates. Therefore, we consider the issue of interaction between TCP and UDP flows by regulating their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP influences over Assured Services, considers the issue of fairness between TCP and UDP for a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents results of the above algorithms in simulated environments in which these traffic types coexist, and analyzes the simulated results. Chapter 5 concludes our work.


CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases, this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point in time, the rate at which TCP sends its packets eventually exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow-start and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants of TCP. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion-avoidance phase, in which the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above network capacity, packets are eventually dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred, immediately reduces its sending rate in half by halving its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.

When congestion is severe enough that packet loss cannot be inferred in such a manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm: if the sender continually sends the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.
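The window dynamics described above (exponential slow-start up to ssthresh, linear congestion avoidance, and halving on a fast-retransmit loss) can be condensed into a toy RTT-granularity simulation. This sketch is added for illustration only; the ssthresh and capacity values are arbitrary assumptions, not parameters from the thesis.

```python
def tcp_cwnd_trace(rtts, ssthresh, capacity):
    """Return cwnd (in segments) sampled once per round-trip time.

    A loss (duplicate ACKs) is assumed whenever cwnd exceeds the path capacity.
    """
    cwnd, trace = 1.0, []
    for _ in range(rtts):
        trace.append(int(cwnd))
        if cwnd > capacity:          # queue overflow -> fast retransmit/recovery
            ssthresh = cwnd / 2
            cwnd = ssthresh          # halve the window, as Reno does
        elif cwnd < ssthresh:
            cwnd *= 2                # slow-start: double every round-trip time
        else:
            cwnd += 1                # congestion avoidance: one segment per RTT
    return trace

# Reproduces the sawtooth of Fig 1: 1, 2, 4, 8, then linear growth to a peak,
# a halving, and steady-state oscillation between roughly w/2 and w.
trace = tcp_cwnd_trace(rtts=20, ssthresh=8, capacity=12)
```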

Fig 1 Throughput of Single TCP Transfer

Figure 1 shows a graphical picture of how TCP slow-start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion the TCP sender enters the congestion-avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP then oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One modification that has been proposed is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of which segments it is missing. By specifying this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger TCP sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and it should thus allow the TCP sender to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The NewReno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern in which the congestion window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion-avoidance phase so that it only performs its linear increase when the network is not congested. In these algorithms, if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential for improving loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections that are multiplexed across the link.

Fig 2 TCP/IP Headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem, since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-Capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the Congestion Experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes its congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver in order to signal the receiver that it has in fact reduced its sending rate.
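The forwarding decision at an ECN-capable router reduces to a small amount of bit manipulation on the TOS/DS octet. The sketch below is an illustration added here, not code from the thesis; the two bit values follow the ECN proposal of that period (ECT and CE as the low two bits of the octet), and the function name is an assumption.

```python
ECT = 0x02   # ECN-Capable Transport bit in the TOS/DS octet
CE  = 0x01   # Congestion Experienced bit

def router_forward(tos, congested):
    """Return (new_tos, dropped) for one packet at a congested router."""
    if not congested:
        return tos, False
    if tos & ECT:
        return tos | CE, False   # ECN-capable flow: mark instead of dropping
    return tos, True             # legacy flow: fall back to a packet drop

# An ECN-capable packet is marked, not dropped, under congestion.
tos, dropped = router_forward(ECT, congested=True)
```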

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, in order to allow them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion: RED detects increases in the average queue length and uses them to determine whether to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and linearly increases to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.

Fig 3 The marking/dropping behavior of RED
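The EWMA and the piecewise-linear marking probability just described can be written down directly. The sketch below is added for illustration; the threshold and weight values are arbitrary placeholders, not the RED settings used later in the simulations.

```python
def red_update_avg(avg, qlen, wq=0.002):
    """EWMA of the instantaneous queue length, updated on each packet arrival."""
    return (1 - wq) * avg + wq * qlen

def red_mark_prob(avg, min_th=5, max_th=15, max_p=0.1):
    """Marking/dropping probability as a function of the average queue length."""
    if avg < min_th:
        return 0.0               # below min_th: never mark or drop
    if avg >= max_th:
        return 1.0               # above max_th: mark or drop every packet
    # Between the thresholds the probability rises linearly from 0 to max_p.
    return max_p * (avg - min_th) / (max_th - min_th)

# Halfway between the thresholds, the probability is half of max_p.
p = red_mark_prob(10)
```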

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour wished for. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs) and on a per-packet basis, to aggregates of traffic streams in the nodes of DS-capable networks. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies a service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. Different types of service will not be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig 4 Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop-Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. Formed Service Level Agreements (SLAs) specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules, which are applied to the traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. By traffic conditioner is meant an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs do not solely define services, but they are the building blocks.
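Since each DSCP maps to a PHB, a node's classification step amounts to reading the top six bits of the octet and looking them up. The sketch below is illustrative, not from the thesis; the codepoints shown are the standardized EF and AF values of that era, and the lookup table is a minimal assumed subset.

```python
def dscp_of(tos_octet):
    """The DSCP occupies the six most significant bits of the TOS/Traffic Class octet."""
    return tos_octet >> 2

# A minimal PHB lookup table (illustrative subset, not a full configuration).
PHB_TABLE = {
    0b000000: "best-effort (default)",
    0b101110: "EF",     # Expedited Forwarding codepoint
    0b001010: "AF11",   # AF class 1, low drop precedence
    0b001100: "AF12",   # AF class 1, medium drop precedence
}

def phb_for(tos_octet):
    return PHB_TABLE.get(dscp_of(tos_octet), "unknown: treat as default")

phb = phb_for(0xB8)   # TOS octet 0xB8 carries DSCP 46, the EF codepoint
```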

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes: the former handle traffic entering the domain, the latter handle traffic leaving the domain. In practice the division is only logical; DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

In the following it is discussed how a network that supports diffserv functions.

Fig 5 shows three domains which are interconnected with a backbone network. Two of the domains are DS-capable, i.e. the interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service provisioning approaches.

Fig 5 A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS-domain 1 in Fig 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the case where the network is non-DS-capable.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to a justified level for H5. The destination of the traffic from H5 is, say, host H2 in DS-domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The presented backbone DS router always performs policing on incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS-domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as the providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that the deployment of diffserv doesn't cause serious impacts on existing applications.

Fig 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling considering Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 of the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
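As an illustration of this bit layout, the following sketch extracts the six-bit DSCP and the legacy precedence bits from a TOS octet, assuming the layout of Fig 6; the function names are ours, not from any specification.

```python
# Sketch: reading the DSCP and legacy precedence bits out of the IPv4 TOS
# octet (bits 0-2 precedence, bits 0-5 DSCP, per Fig 6).
def dscp(tos: int) -> int:
    """The upper six bits of the TOS octet form the DS codepoint."""
    return (tos >> 2) & 0x3F

def precedence(tos: int) -> int:
    """The three most significant bits are the legacy precedence."""
    return (tos >> 5) & 0x07

# Example: the recommended EF codepoint 101110 placed in a TOS octet.
tos_ef = 0b101110 << 2
print(bin(dscp(tos_ef)))   # 0b101110
print(precedence(tos_ef))  # 5
```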

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but it is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e. service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
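The distinction between the two classifier types can be sketched as follows; the `Packet` fields and the rule format are hypothetical, chosen only to illustrate BA versus MF matching.

```python
# Sketch of BA and MF classification (illustrative data structures only).
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    sport: int
    dport: int
    proto: int
    dscp: int

def ba_classify(pkt, table):
    """Behaviour Aggregate classifier: selects on the DS codepoint only."""
    return table.get(pkt.dscp, "default")

def mf_classify(pkt, rules):
    """Multi-Field classifier: first rule matching (src, dst, proto) wins."""
    for (src, dst, proto), aggregate in rules:
        if (pkt.src, pkt.dst, pkt.proto) == (src, dst, proto):
            return aggregate
    return "default"

p = Packet("10.0.0.1", "10.0.1.2", 5000, 80, 6, 0b101110)
print(ba_classify(p, {0b101110: "EF"}))                        # EF
print(mf_classify(p, [(("10.0.0.1", "10.0.1.2", 6), "AF1")]))  # AF1
```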

Fig 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
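A meter/marker of this kind is often realized with a token bucket; the sketch below is one minimal possibility (diffserv does not mandate a particular metering algorithm, and the parameter names are ours). In-profile packets keep the IN codepoint, while excess packets are re-marked OUT or dropped, depending on the configured policy.

```python
# Minimal token-bucket meter/marker sketch (assumed parameters).
class TokenBucketMarker:
    def __init__(self, rate_bps, bucket_bytes):
        self.rate = rate_bps / 8.0   # refill rate in bytes per second
        self.depth = bucket_bytes    # maximum burst size
        self.tokens = bucket_bytes   # bucket starts full
        self.last = 0.0              # time of last update

    def mark(self, size_bytes, now, policy="remark"):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return "IN"
        return "OUT" if policy == "remark" else "DROP"

tb = TokenBucketMarker(rate_bps=8000, bucket_bytes=1500)  # 1 kB/s, 1500 B burst
print(tb.mark(1000, now=0.0))  # IN  (bucket starts full)
print(tb.mark(1000, now=0.0))  # OUT (bucket exhausted at the same instant)
```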

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic to conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic can't be assumed to conform to the TCA; policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited number of conditioning rules (i.e. some, but not all, of the TC components in Fig 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on their similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are wished to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment of packets without a specified local policy is either mapping to the Default PHB (best-effort service) or discarding. In addition to standardized ones, there may exist locally defined PHBs in a network. These PHBs may be e.g. experimental, or they may be used for a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig 6) is limited, and room is left for future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impacts of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on e.g. prioritizing one over another. PHB specifications themselves give behavioural characteristics instead of implementation guidelines. This leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented later in this chapter. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when they are implemented, the specifications must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility. The Default PHB exists for that purpose: it provides the best-effort behaviour that current routers perform. The Default PHB is the one that is used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore any traffic that doesn't conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as receiving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or locally used PHB, it should be mapped to the Default PHB.

The mere Default PHB is not enough to provide sufficient backward compatibility. The precedence bits (see Fig 6) of the IPv4 TOS field are widely used in existing networks' equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHBs, so that the bit patterns xxx000, eight in all, are reserved as the set of Class Selector codepoints (see Fig 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig 8 Usage of DS-field in Class Selector PHB

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets in each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(1) how much forwarding resources have been allocated to the AF class that the packet belongs to; packets in one class are forwarded independently from the others, and within the service rate (bandwidth) that has been configured for the class,

(2) the current load of the AF class, and

(3) in case of congestion within the class, the drop precedence of the packet; packets with a higher drop precedence value are preferably discarded.

Table 1 The four AF classes and the related three drop precedence values

                               Class 1   Class 2   Class 3   Class 4
  Low Drop Precedence (1)      001010    010010    011010    100010
  Medium Drop Precedence (2)   001100    010100    011100    100100
  High Drop Precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
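The codepoints of Table 1 follow a regular bit layout (the class number in the upper three bits, the drop precedence in the next two), which the following sketch derives; it is an illustration of the table, not implementation guidance.

```python
# Sketch: deriving the AFnm codepoints of Table 1, assuming the class n
# occupies the upper three bits and drop precedence m the next two bits
# of the 6-bit DSCP.
def af_dscp(n: int, m: int) -> str:
    assert 1 <= n <= 4 and 1 <= m <= 3
    return format((n << 3) | (m << 1), "06b")

print(af_dscp(1, 1))  # 001010  (AF11)
print(af_dscp(4, 3))  # 100110  (AF43)
```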

One major group of services is that requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are e.g. a point-to-point connection or a virtual leased line carrying time-sensitive data: speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. It is also required in the EF PHB specification that a network administrator must be able to configure the rate into diffserv nodes. The codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are all for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between the interface A and some egress interface B

There are some common features of both examples. First of all, policers need to be configured at traffic ingress points. Secondly, the used PHBs need to be implemented in core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus experiences reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2 TCA specification for BBE service

  Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
  AF13 mark   1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritizing of AF13 over AF11 is done, is up to the provider.

This is a quantitative service which emulates the traditional leased line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3 TCA specifications for Leased Line service

  Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
  EF mark     500 Kbps       Egress point B         Discard
  EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as the classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.
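The ingress policing of Table 3 can be sketched as a pair of per-egress rate budgets; the accounting interval and names below are assumptions for illustration, not part of the TCA.

```python
# Sketch of the ingress-A policing of Table 3: per-destination rate budgets
# over a hypothetical one-second accounting interval; excess EF traffic is
# discarded, as the TCA specifies.
RATES = {"B": 500_000, "C": 250_000}  # bits per second per egress point

class Policer:
    def __init__(self, rates):
        self.budget = dict(rates)     # bits remaining in the current interval

    def admit(self, egress, size_bits):
        if self.budget.get(egress, 0) >= size_bits:
            self.budget[egress] -= size_bits
            return "forward as EF"
        return "discard"              # excess traffic dropped

p = Policer(RATES)
print(p.admit("B", 400_000))  # forward as EF
print(p.admit("B", 200_000))  # discard (only 100 kbit left this interval)
```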

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies that focus on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed scheme decreased TCP impacts over the Assured Service. They, however, didn't consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP didn't realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses being experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. The authors, via simulation work, presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into consideration and to adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on the aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults, as follows. First, since the marking module in per-flow marking gives each flow a fixed fraction of the target rate of the aggregated traffic, it can lead to an inefficient utilization of the reserved bandwidth: "idle" flows waste their shares while preventing "active" flows from increasing theirs. Next, per-aggregation marking can introduce unfairness among the aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets that it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the summation of iwnd and ownd. TMS_I selects the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked whenever congestion occurs in the network, and so TMS_I decreases TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) kept up the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of the network. TMS_I also adequately marks packets to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. The information on the marked packets is made use of in allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to properly mark packets. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or the marking information) from the sources, and then the marker re-marks the aggregated flows to the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore the key point of the fairness strategy is the marking rate (i.e. the information on a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows to the total target rate of all the flows, based on the probability of marking from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10 The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and Cj represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is a reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = sum_{j=1}^{m} Cj, where j denotes a bottleneck link of the network. The excess bandwidth at the bottleneck links is then given by

    E = C - R = sum_{j=1}^{m} Cj - sum_{i=1}^{n} Ri        (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

    E = C - R = C1 - sum_{i=1}^{n} Ri                      (2)

The last case of service scope, tied to the receiver, however requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. The information on reservation rates received at an ingress point thus becomes R = sum_{k=1}^{l} sum_{i=1}^{n} Rik, where k, k = 1, 2, ..., l, represents the k-th ingress point. The excess bandwidth is then given by

    E = C - R = C1 - sum_{k=1}^{l} sum_{i=1}^{n} Rik       (3)

where the l-th ingress point has to be aware of the reservation rates of the other l - 1 ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore the Two Markers System can be implemented for variable SLA scopes, but, for the reason above, we assume that the scope of a service in this paper includes the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

Ti = Ri + Xmi × E = Ri + Xmi (C − Σi Ri)        (4)

where Xmi is proportional to the probability of marking at the source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of Xmi is as follows. First, the probability of marking for each flow i is the number of marked packets (Nmi) divided by the throughput (Thri), or equivalently the In-profile window divided by the congestion window:

mi = Nmi / Thri = iwnd / cwnd        (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window respectively. Next, the average marking rate E[mi] of all the flows at the edge of the network is

E[mi] = (1/n) Σ(i=1..n) mi = (1/n) Σ(i=1..n) (iwnd/cwnd)i = (1/n) Σ(i=1..n) Markp,i        (6)

where Markp also represents the probability of marking in the TMS_I algorithm of section 3.3. Finally, Xmi is developed from the ratio between (5) and (6), as given in expression (7): Xmi = mi / (n E[mi]) = mi / Σ(i=1..n) mi, a relative value against the datum point E[mi]. The ratio Xmi varies from 0 to 1 depending upon mi and E[mi]. Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
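As a concrete check of the development above, the sketch below (hypothetical capacities and marking rates, not values from the thesis) computes the excess bandwidth E of expression (1), the fairness components Xmi of expression (7), and the edge target rates Ti of expression (4); since the Xmi sum to 1, the target rates exactly exhaust the bottleneck capacity.

```python
def edge_target_rates(C, R, m):
    """C: bottleneck capacity, R: reservation rates, m: marking rates mi."""
    E = C - sum(R)                        # excess bandwidth, expression (1)
    X = [mi / sum(m) for mi in m]         # fairness component, expression (7)
    return [Ri + Xi * E for Ri, Xi in zip(R, X)]   # expression (4)

# Three flows reserving 3 Mbps each on a 20 Mbps bottleneck; the flow
# marking the most packets receives the largest share of the 11 Mbps excess.
T = edge_target_rates(C=20.0, R=[3.0, 3.0, 3.0], m=[0.2, 0.3, 0.5])
print([round(t, 2) for t in T])   # [5.2, 6.3, 8.5]
print(round(sum(T), 6))           # 20.0
```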

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate guaranteed by a contract with the Bandwidth Broker; the Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. The two windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some earlier work [2, 8] has focused on similar issues in networks where the marking policies differ from the one studied here. We improve the TCP congestion control algorithm of [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate at the level of the bandwidth the source wants guaranteed.

Xmi = mi / (n E[mi]) = mi / Σ(i=1..n) mi        (7)


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking (Markp(init))? It is determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as given in expression (8) below.

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        } else {
            ownd = cwnd
            if (ownd < ossthresh)
                ownd += 1
            else
                ownd += 1/cwnd
        }
    }
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


Here Markp(init) is given by Markp(init) = 1 − Thr/Ri (8), where Ri is the rate that the user initially wants guaranteed in order to realise its target rate.

The window size increases or decreases according to the measured throughput relative to the reservation rate. That is, TMS_I marks packets proportionally, according to whether the throughput (Thr) measured at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate requires; otherwise it increases the ownd size, so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly grows each window to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.
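The per-ACK bookkeeping described above can be sketched as follows. This is a simplified, hypothetical reading of program code 1 (the state values, thresholds, and measured throughput are invented for illustration), not the thesis implementation:

```python
def on_ack(state, thr, r_i):
    """One ACK-clocked update of (iwnd, ownd, Markp), after program code 1."""
    cwnd = state["cwnd"]
    iwnd = state["markp"] * cwnd
    ownd = cwnd - iwnd
    if thr < r_i:                  # reservation not yet realised: grow both
        iwnd += state["markp"] if iwnd < state["issthresh"] else 1 / (2 * cwnd)
        ownd += (1 - state["markp"]) if ownd < state["ossthresh"] else 1 / (2 * cwnd)
    elif iwnd > 0:                 # reservation met: release the IN share
        iwnd -= (1 - state["markp"]) if iwnd < state["issthresh"] else state["markp"]
    else:                          # no IN share left: all traffic is OUT
        ownd = cwnd
        ownd += 1 if ownd < state["ossthresh"] else 1 / cwnd
    state["cwnd"] = iwnd + ownd
    state["markp"] = iwnd / state["cwnd"]
    return state

s = {"cwnd": 10.0, "markp": 0.4, "issthresh": 8.0, "ossthresh": 5.0}
s = on_ack(s, thr=2.0, r_i=3.0)   # under-achieving flow: the IN share grows
print(round(s["markp"], 3))       # 0.421
```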

The else branch of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also addresses a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of the network when the network is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I decreases the iwnd size appropriately, according to its threshold, in order to allocate the excess bandwidth to the other flows and keep IN-marked packets from being immoderately dropped at the core of the network. In fact, the TCP-source marking module (TMS_I) maintains the marking rate more appropriately than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective at maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently reduces its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognises packet losses and determines whether the lost packet was sent as a marked or an unmarked packet, it can regulate the marking rate properly according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window is not decreased as often as with the established edge marker.

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd/2
        ownd = ossthresh
    } else {
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the i-th flow
Ti: target rate of the i-th flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion in the network, and then the whole congestion window is halved. The modified TCP congestion algorithm thus compensates for the loss rates caused by the conventional congestion algorithm, which has no distinct mechanism for reducing the congestion window.
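A minimal sketch of this loss reaction (hypothetical window values; a simplified reading of program code 2):

```python
def on_triple_dupack(cwnd, markp, in_marked_loss):
    """Return (new cwnd, new Markp) after a third duplicate ACK."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if not in_marked_loss:        # OUT loss: temporary congestion, halve ownd
        ownd = ownd / 2           # ossthresh = ownd/2 in the thesis pseudocode
        return iwnd + ownd, iwnd / (iwnd + ownd)
    # IN loss: acute congestion, halve the whole congestion window;
    # issthresh = Markp*cwnd keeps the IN/OUT split ratio unchanged.
    return cwnd / 2, markp

print(on_triple_dupack(10.0, 0.4, in_marked_loss=False))  # OUT loss: (7.0, ~0.571)
print(on_triple_dupack(10.0, 0.4, in_marked_loss=True))   # IN loss: (5.0, 0.4)
```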

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Markp) needed to determine the new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the use of this datum point by the edge-embedded marking module in section 3.4. Consequently, a strategy that provides a datum point incorporating the user's information is able to alleviate the unfairness introduced within aggregated flows by per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy, based on the marking rates of the sources, in order to share the bottleneck bandwidth fairly. It must cooperate closely with the TCP-source marking module to accomplish this strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which are used in the fairness component as stated in section 3.2. Notice that fairness is defined here as how well the throughputs of all the individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each source flow wants guaranteed.

We assume that an edge router receives the information about the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall expression (4). As stated in section 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking at the sources if the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, namely the average marking rate E[mi], this means that the flow wants more bandwidth than the others in order to achieve its reservation rate. It is therefore marked more as IN and given a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes both the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to detect congestion, the other is to ensure the coherence of the Adequate-Marking strategy. A congestion situation is in fact judged by comparing the iwnd (or ownd) size at measurement N with the corresponding size at measurement N−1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at measurement N is not equal to half of the ownd size at measurement N−1, congestion has not occurred in the network. The first if clause of program code 3 describes this case, and shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After deciding that there is no congestion, it measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy. If the throughput does not reach the target rate, the marker increases the marking rate by α, called the fairness factor, which is computed as follows:


α = 1 − Thr_i/Ti. Otherwise it decreases the marking rate by α, to provide the other flows with the chance of sharing the excess bandwidth.

Fig 13 Program code 3 TMS_II algorithm

if (ownd(N) != ownd(N-1)/2) {          /* no congestion */
    for (i = 1; i <= n; i++) {
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
    }
} else if (iwnd(N) != iwnd(N-1)/2) {   /* congestion, but not acute */
    exit
} else {                               /* iwnd(N) == iwnd(N-1)/2: acute congestion */
    for (i = 1; i <= n; i++) {
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit
    }
}

Mi: marking rate of the i-th flow
Ti: target rate of the i-th flow
Ri: reservation rate of the i-th flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at measurement N is equal to half of the ownd size at measurement N−1, while the iwnd size at measurement N is not equal to half of the iwnd size at measurement N−1, congestion has occurred in the network but is not serious. The else-if clause of program code 3 represents this instance: the edge's marking rate is already consistent with the marking probability of the sources (the marking information), and so no re-marking is required.

Finally, if the iwnd size at measurement N is equal to half of the iwnd size at measurement N−1, IN-marked packets are being dropped in the core of the network and the congestion is acute.

The marking mechanism for this congestion situation appears in the final else clause. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to reach its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput does not realise the user's reservation rate. In this instance the complement factor β is

given by β = 1 − Thr_i/Ri. Otherwise the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to the flows based on the marking rate of each source. We can therefore say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth caused by per-flow marking and the unfairness within aggregated flows caused by per-aggregation marking.
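The decision structure of program code 3, together with the fairness factor α = 1 − Thr_i/Ti and the complement factor β = 1 − Thr_i/Ri, can be sketched as below. The inputs are hypothetical; halving of ownd or iwnd between measurements serves as the congestion signal, as in the thesis:

```python
def tms2_adjust(M, thr, R, T, ownd_now, ownd_prev, iwnd_now, iwnd_prev):
    """Return the adjusted marking rate of one flow (simplified reading)."""
    if ownd_now != ownd_prev / 2:          # no congestion: fairness policy
        alpha = 1 - thr / T                # negative when the target is met
        return M + alpha * M if thr < T else M - abs(alpha) * M
    if iwnd_now != iwnd_prev / 2:          # congestion, but not acute
        return M                           # marking already consistent
    if thr < R:                            # acute congestion: compensate loss
        beta = 1 - thr / R
        return M + beta * M
    return M

# Under-achieving flow in an uncongested network: the marking rate is raised.
print(tms2_adjust(M=0.5, thr=2.0, R=3.0, T=4.0,
                  ownd_now=6.0, ownd_prev=10.0,
                  iwnd_now=4.0, iwnd_prev=4.0))   # 0.75
```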

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. These re-born packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-born OUT-marked packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will halve their iwnd size; TMS_I then fails to alleviate the TCP dynamics over the assured service. On the other hand, if re-born IN-marked packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will only halve ownd, which delays the relief of the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is marked OUT at the edge of the network but IN at the source; an OUT-IN packet is the counterpart. We assume that the drop precedence the source decides must not be overturned outright, so the strategy TMS_II elaborates has to be deliberate. The order of dropping packets is: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colours these correspond to red, yellow, and green respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN, so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves its ownd size. Next, TMS_II turns packets IN-marked at the sources into OUT if the congestion window has not halved and the flow has already accomplished its target rate; the drop precedences then consist of IN, IN-OUT, and OUT. Since TMS_II lowers the drop precedence of such packets one step below plain OUT, other flows are allowed to share the excess bandwidth; RIO at the core drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, and OUT at the core of the network; when an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT packets one step.
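The three drop precedences can be summarised by a small, hypothetical helper: agreement between the source's mark and the edge's mark yields DP0 (IN, green) or DP2 (OUT, red), while any disagreement (IN-OUT or OUT-IN) yields the middle precedence DP1 (yellow):

```python
def drop_precedence(source_mark, edge_mark):
    """Map (source mark, edge mark) to the drop precedence the packet carries."""
    if source_mark == edge_mark:
        return "DP0" if source_mark == "IN" else "DP2"
    return "DP1"   # re-marked packet: IN-OUT or OUT-IN

# RIO drops DP2 first, DP1 next, DP0 last.
for s, e in [("IN", "IN"), ("IN", "OUT"), ("OUT", "IN"), ("OUT", "OUT")]:
    print(s, "->", e, ":", drop_precedence(s, e))
```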

3.6 The issue of interaction between TCP and UDP flows

In this section we consider the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not, and will capture the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile, putting them at a disadvantage, while TCP flows adapt to network dynamics through marking rates adjusted by the scaling factors α and β. From the point of view of the core of the network, the RIO queue always drops UDP flows marked OUT (red, DP2) preferentially, but lowers the drop precedence of TCP packets below them according to the scaling factors. It thereby tries to guarantee the target rates of the TCP flows by allocating to them the excess bandwidth that UDP flows would otherwise dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present simulation results for the two marking algorithms described in the previous chapter. The simulations were done with the network simulator ns-2 (version 2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that each communicate with one of ten different destinations, and we conducted two scenarios: a non-oversubscribed and an oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or kept below the link capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM the routers use RED with the values 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters of the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

          min_th   max_th   Pmax
    DP0     100      400    0.02
    DP1     100      400    0.05
    DP2      50      400    0.1
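As an illustration of how the Table 4 settings shape the dropping order, the sketch below applies the standard linear RED profile to each drop precedence. This is the textbook RED formula, not the exact RIO code used in ns-2:

```python
RED = {  # drop precedence -> (min_th, max_th, Pmax), thresholds in packets
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50, 400, 0.10),
}

def drop_prob(dp, avg_queue):
    """Linear RED drop probability for an average queue length in packets."""
    min_th, max_th, pmax = RED[dp]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return pmax * (avg_queue - min_th) / (max_th - min_th)

# At an average queue of 225 packets, OUT packets are dropped most eagerly.
for dp in ("DP0", "DP1", "DP2"):
    print(dp, round(drop_prob(dp, 225), 4))
```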

First of all we investigate the simulation results for the Adequate-Marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 to S10-D20, each with a target rate of 3 Mbps, and the capacity of the bottleneck link set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s; the last three stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps; marking is performed from the start, but only temporarily, for sharing the excess bandwidth. Second, in the run time from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps each; marking in this interval is carried out principally to realise the reservation rate of each user. Next, grave congestion appears from t = 220 s to t = 350 s; all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to the congestion around the time flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and adequate marking.


While all the flows in Figure 15 (b) mark each packet in accordance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15: two panels, (a) Time Sliding Window Two Color Marker and (b) Two Markers System, each plotting the bandwidth (Mbps) of flow1 through flow10 against simulation time (s).]

Fig 15 Adequate-Marking algorithm: the vertical axis represents the marking rate and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate and lags behind changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of resources by the amount of S2's increase, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load, to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when TMS detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in section 3.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, i.e. an under-subscribed network. The experiment ran from t = 0 s to 500 s. Figure 17 presents how well the average throughputs of all the individual flows of the aggregated traffic realise their target rates with these parameters. Although the RTTs of the aggregate flows are set differently, the average throughputs under the two schemes, shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate, because it receives bandwidth without a marking rate adequate to the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about the individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the probability of marking at the sources, all the flows are able to adapt to network dynamics.

[Figure 16: two panels, (a) Time Sliding Window Two Color Marker and (b) Two Markers System, each plotting the total throughput of S1 and S2 and the marking rate of S1 (Mbps) against simulation time (s).]

Fig 16 Marking packets through the scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.


[Figure 17: two panels, (a) Time Sliding Window Two Color Marker and (b) Two Markers System, each plotting the bandwidth (Mbps) achieved by flow1 through flow10 in aggregate-flow experiments 1 through 6.]

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the number of the simulation performed.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying TCP sources in a differentiated services network.

Table 5. The average throughput of the aggregate flows as RTT increases

    Aggregate flow (No.)          1       2       3       4       5       6
    RTT (ms)                     80     100     130     170     220     280
    E[Thr] of TSW2CM (Mbps)  4.8989  4.9703  4.9183  4.9391  4.9247  4.9098
    E[Thr] of TMS (Mbps)     5.2908  5.2207  5.2373  5.1944  5.2763  5.2858

4.2 Interaction of TCP and UDP flows

In this section we present simulation results for TCP and UDP traffic in the Two Markers System described in the previous chapter. For the simulations we used a network with the configuration shown in Figure 18. In this experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10; constant-rate UDP packet streams originate at sources 5 and 6 and are destined to nodes 11 and 12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rates of the sources are fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at a rate of 1 Mbps. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increased the number of TCP flows with a target rate of 1 Mbps from sources 1 to 4, while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take account of network dynamics. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σi Ti / C) × 100        (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network respectively. The interval from ρ = 100 % to 200 % shown in Figure 19 reflects the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100        (10)

Therefore an excess bandwidth of 0 % denotes exactly the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows, and fall under their target rates at the saturation point (80 %) of network load shown in the figure.
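The two evaluation metrics can be sketched as follows (hypothetical throughput and target values, not results from the thesis):

```python
def link_utilisation(targets, capacity):
    """Expression (9): rho = (sum_i T_i / C) * 100 (%)."""
    return 100.0 * sum(targets) / capacity

def excess_bw_percent(throughput, target):
    """Expression (10): Excess BW = (ExBW / T) * 100 (%), ExBW = Thr - T."""
    return 100.0 * (throughput - target) / target

print(link_utilisation([1.0] * 6, 8.0))   # 75.0 (%): over-provisioned
print(excess_bw_percent(1.25, 1.0))       # 25.0 (%) above the target rate
```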


[Figure 19: two panels, (a) Time Sliding Window Two Color Marker and (b) Two Markers System, each plotting the average excess bandwidth (%) of the TCP and UDP flows against link utilisation (%) from 30 to 190.]

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation in percent.

However, since the UDP sources send traffic into the network without any penalty, they complete their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume the following: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and given a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside their contracts, when congestion occurs, rather than the OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between the TCP and UDP flows at link utilisations of ρ = 60 % and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is the TSW2CM, the other is the marking strategy of TMS. Figure 20 shows how the average throughputs of all individual flows of the aggregated traffic realise their target rates in over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. The average throughput of all the flows is also similar in the two schemes. First we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while TCP flows barely accomplish their target rates of 1 Mbps, UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3 Mbps, thanks to the different marking strategy applied against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probability in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network, because UDP flows are transmitted constantly, irrespective of congestion. That is, UDP gains unfairly at the expense of TCP flows. However, Figure 20 (b) shows that bandwidth in the under-provisioned network is distributed more fairly (about 0.5 Mbps per flow) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting bandwidth (Mbps) against aggregate flows for TCP1-TCP4 and UDP1-UDP2 in over-provisioned and under-provisioned networks.]

Fig. 20. Throughput according to network conditions: the vertical axis represents the target rate and the horizontal axis signifies aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this context, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize a user's target rate, owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the overeating of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow with respect to its target rate. First, TMS_I, using the Adequate-Marking algorithm, properly adjusts the probability of marking according to changes in the network, so it marks packets in proportion to the reservation rates the sources desire to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy. Consequently, it mitigates the problem of unfairness in per-aggregation marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP had almost dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidth.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, S. Saha, and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy, and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. 1998.

[19] M. Vanhala, "Differentiated Services architecture," S-38.128 Teletekniikan erikoistyö (special assignment in telecommunications technology), 1999.

[20] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.



ACKNOWLEDGEMENT

I am indebted to many individuals for the care and support given to me during my master's course studies. First of all, I would like to express my deep gratitude to Professor Chulhee Kang. As my thesis advisor, he has provided me constant encouragement, insightful comments, and invaluable suggestions, which benefited not only the completion of this thesis but also my career for a long time to come. I also would like to thank Professors Jinwoo Park and Dooseop Eom for serving on my committee.

I am also grateful to the seniors and juniors of the Broadband Telecommunication Laboratory, especially Seungjoon Seok, who spent a lot of time with me during my thesis research. His advice and challenges have motivated me to do high-quality research. I warmly thank my colleagues in the ITRC project, Seungjin Lee and Sukmin Hong, for numerous conversations, continuous support, and encouragement. Special thanks go to Doctor Jihoon Lee, who spent a lot of time with me improving my presentation and technical writing skills, and Sungkwan Youm, who helped me a great deal with his insights in networking and through numerous discussions. Thanks also go to my classmates Sungbum Joo and Jaehyung Ryu for their friendship and encouragement. I would also like to acknowledge Sungwoo Ryu, who made it possible for me to go through this difficult time. I would like to recognize the following colleagues: Deagun Kim, Kyengyong Kang, Kangwook Lee, Changwoo Lee, Chulwoo Kim, Joosang Youn, Junghyun Kim, Hyunsook Park, and Eunsun Park, for their friendship, enjoyable technical discussions, and advice. I am really pleased to be part of this group of talented colleagues. I am also thankful to my seniors Soohyun Cho, Coben Han, Taeho Jung, Inyong Ha, Dongkil Shin, Sukun Yoon, Yejin Song, Kisun Lee, Dongwook Seo, Kyungseop Moon, and Kiwon Hong for their advice and guidance.

This research was supported in part by Information and Telecommunication Research Center (ITRC) contracts. I am deeply grateful to the Asia-Pacific Advanced Network-Korea Consortium (APAN-KR) members for technical discussion of Differentiated Services related issues.


I cannot find words strong enough to express my gratitude to my parents, since they have always loved me, believed in me, and encouraged me in my study. This thesis could not have been completed without their presence beside me. I am also deeply grateful to my elder sister Jimin Lee, her husband Jin Kim, and my lovable niece Hyun Kim for their dedicated sacrifice, support, and encouragement.

Especially, I am indebted to my friend Moonhee Choi, who helped me in many ways during the final stage of this study. I am also thankful to my friends Taewan Kim, Byeongwook Bae, Yonghyun Jung, Sangmok Han, Byeongil Son, Hongsik Park, Hyoseop Lee, Sungchul Choi, Sungeun Ku, Changjoo Yang, Minsang Kim, Sukhoon Byeon, Dongchul Lee, Jaekyun Lee, Joonghyo Lee, Jepyo Na, Dukku Hwang, and Jangyeon Kim for their everlasting friendship, understanding, and encouragement. Last but not least, I would like to thank my valuable friend Jieun Ahn for her dedicated sacrifice, continuous support, understanding, and encouragement.


TABLE OF CONTENTS

ABSTRACT ... i
DEDICATION ... iii
ACKNOWLEDGEMENT ... iv
LIST OF FIGURES ... viii
LIST OF TABLES ... ix
ABBREVIATIONS ... x

CHAPTER 1 INTRODUCTION ... 1

CHAPTER 2 BACKGROUND ... 5
2.1 TCP and Active Queue Management ... 5
2.2 Differentiated Services Architecture ... 10
2.2.1 Concepts in diffserv ... 12
2.2.2 Diffserv network ... 13
2.2.3 Service Realizations ... 14
2.3 Related Work ... 24

CHAPTER 3 TWO MARKERS SYSTEM ... 25
3.1 The framework of Two Markers System ... 25
3.2 The crux of fairness strategy ... 27
3.3 Adequate-Marking strategy: Two Markers System-I algorithm ... 30
3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm ... 34
3.5 The issue of re-marking ... 37
3.6 The issue of interaction between TCP and UDP flows ... 38

CHAPTER 4 SIMULATION AND ANALYSIS ... 40
4.1 TCP dynamics over Assured Services in a diffserv network ... 40
4.2 Interaction of TCP and UDP flows ... 46

CHAPTER 5 CONCLUSION ... 51

REFERENCES ... 53


LIST OF FIGURES

Fig. 1. Throughput of a Single TCP Transfer ... 6
Fig. 2. TCP/IP Headers ... 8
Fig. 3. The marking/dropping behavior of RED ... 10
Fig. 4. Mapping of DSCP-marked streams to PHBs and aggregate routing ... 11
Fig. 5. A non-DS-capable and two DS-capable domains connected to a backbone network ... 13
Fig. 6. Type of Service field in IPv4 ... 15
Fig. 7. A traffic conditioner and its functional components ... 17
Fig. 8. Usage of DS-field in Class Selector PHB ... 20
Fig. 9. The "Two Markers System" framework ... 26
Fig. 10. The design of the Two Markers System architecture ... 27
Fig. 11. Program code 1: TMS_I algorithm (cwnd open) ... 31
Fig. 12. Program code 2: TMS_I algorithm (cwnd close) ... 33
Fig. 13. Program code 3: TMS_II algorithm ... 36
Fig. 14. Simulation topology I using the Two Markers System ... 40
Fig. 15. Adequate-marking algorithm ... 42
Fig. 16. Marking Packet through scaling factor ... 44
Fig. 17. Throughput for marking rate-based fairness strategy ... 45
Fig. 18. Simulation topology II using the Two Markers System ... 47
Fig. 19. Sharing excess bandwidth according to the link utilization ... 48
Fig. 20. Throughput according to network conditions ... 50


LIST OF TABLES

Table 1. The four AF classes and the related three drop precedence values ... 21
Table 2. TCA specification for BBE service ... 23
Table 3. TCA specifications for Leased Line service ... 23
Table 4. RED parameter settings in the TMS ... 41
Table 5. The average throughput of aggregate flow as increment of RTT ... 46


ABBREVIATIONS

AF: Assured Forwarding, a PHB group defined in [4]
ATM: Asynchronous Transfer Mode
BA: Behaviour Aggregate, separate traffic flows receiving the same treatment in a router
DS: diffserv, Differentiated Services
DSCP: Differentiated Services Codepoint, the value of a field in IP packets in DS-capable domains, based on which the forwarding behaviour is applied
EF: Expedited Forwarding, a PHB defined in [20]
IETF: Internet Engineering Task Force
IP: Internet Protocol
IPv4: Internet Protocol version 4, the current widely used IP version
IPv6: Internet Protocol version 6, the developed future version of IP
ISP: Internet Service Provider
MF: Multi-Field
MPLS: Multiprotocol Label Switching
NE: Network Element, e.g. a router
PHB: Per-Hop-Behaviour, the forwarding behaviour applied to each IP packet in routers in DS-capable domains
QoS: Quality of Service
RFC: Request For Comments
RSVP: Resource Reservation Protocol
SLA: Service Level Agreement, an agreement of a service between a provider and a customer
SNMP: Simple Network Management Protocol
TC: Traffic Conditioner
TCA: Traffic Conditioning Agreement, a part of the SLA that describes the agreed service in detail from the technical point of view
TCP: Transmission Control Protocol
TOS: Type of Service
UDP: User Datagram Protocol
VPN: Virtual Private Network


CHAPTER 1 INTRODUCTION

The current Internet is based on the best-effort service model. In this model, the service provider allocates bandwidth among all of the instantaneous customers as best it can; that is, all user packets compete equally for network resources. The service provider also attempts to serve all of them without making any explicit commitment as to rate or any other service quality. The best-effort model has been successful until now because the majority of the traffic on the Internet is TCP-based: the TCP end-to-end congestion control mechanism forces traffic sources to back off whenever congestion is detected in the network [8]. However, such dependence on the end systems' interaction is increasingly becoming unrealistic. Given the current best-effort model with FIFO queuing inside the network, it is relatively easy for non-adaptive sources to gain greater shares of network bandwidth and thereby starve other, well-behaved, TCP sources. A greedy source, for example, may simply continue to send at the same rate when faced with congestion, while other TCP sources back off. The best-effort model is also inadequate for applications such as real-time audio and video that require explicit bandwidth and bounded-delay guarantees. Moreover, the best-effort model treats all packets equally once they have been injected into the network. Thus, it is difficult for Internet Service Providers (ISPs) to provide services that are commensurate with the expectations of consumers who are willing to pay more for a better class of service.

Over the last decade, a number of efforts have been made to provide Quality of Service in the Internet. Much of the work has focused on the end-to-end per-session reservation approach, where an application makes an end-to-end reservation before starting a session. While it makes sense to set up a reservation for a long-lasting session, this approach is not suitable for highly transaction-oriented applications such as browsing the World Wide Web (WWW): the overheads and latency involved in setting up a reservation for each session become prohibitively high. There are also concerns regarding the scalability of providing fine-grained guarantees at the per-session level in the core of the Internet.

The movement toward providing Quality-of-Service (QoS) assurance in the Internet has encouraged research in differentiated services (diffserv or DS) while the number of users keeps growing. Diffserv is a suitable solution for Quality of Service provisioning because it scales well with an increasing number of network users and it does not alter the current Internet paradigm much [11]. In contrast to the per-session reservation approach, the differentiated services scheme allows service providers to offer different levels of service to a few classes of aggregated traffic flows. An ISP, for example, may offer two levels of service: a premium service for some customers who are willing to pay more, and best-effort service at a lower price. The differentiated services approach attempts to allocate bandwidth for aggregated flows without per-session signaling or maintaining per-session state in the network core, and to offer better services for users willing to pay more under long-term service contracts.

The journal model is Lecture Notes in Computer Science.

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment that should be supplied to the packet inside the domain. However, because of the limited number of bits available for use in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks, called per-hop behaviors (PHBs), that are used by routers to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) [20] and the Assured Forwarding (AF) [4] PHBs. The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. In order to ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, consists of four independently forwarded classes, each of which has two or three drop-preference levels. The idea behind AF is to preferentially drop best-effort packets, and packets that are outside of their contract, when congestion occurs.
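For reference, the codepoints behind these PHBs are standardized: EF uses a single codepoint, while the AF group defines twelve (RFC 2597 [4]). A minimal lookup table:

```python
# Standard DSCP codepoints for the PHBs discussed above.
EF_DSCP = 0b101110  # Expedited Forwarding (decimal 46)

# AF "xy": x = class 1..4, y = drop precedence 1 (lowest) .. 3 (highest)
AF_DSCP = {
    "AF11": 0b001010, "AF12": 0b001100, "AF13": 0b001110,
    "AF21": 0b010010, "AF22": 0b010100, "AF23": 0b010110,
    "AF31": 0b011010, "AF32": 0b011100, "AF33": 0b011110,
    "AF41": 0b100010, "AF42": 0b100100, "AF43": 0b100110,
}
```

The top three bits select the AF class and the next two bits encode the drop precedence, which is why four classes with three precedence levels each fit into the six-bit DS field.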

In this thesis, we consider a form of better-than-best-effort service called the "Assured Service". The Assured Service follows expected capacity profiles that are statistically provisioned. Packets are treated preferentially according to the dropping probability applied to the best-effort queue. The assurance of service comes from the expectation that the traffic is unlikely to be dropped as long as it stays within the negotiated capacity profile. The building blocks of this service include a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in-profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise, the packets are marked OUT (out-of-profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or token bucket control module. A differentiated dropping algorithm such as RIO (Random Early Detection with IN/OUT) is provided in the core routers of the network. In particular, the OUT packets are preferentially dropped upon evidence of congestion at the bottleneck, before the IN packets; only after dropping all incoming OUT packets are IN packets discarded. With this dropping policy, the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.
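A minimal token-bucket marker of the kind described above can be sketched as follows. This is a sketch under assumed parameter names, not the marker implementation used in the thesis:

```python
# Sketch of an edge token-bucket marker: packets within the contracted
# profile are marked IN, excess packets are marked OUT.

class TokenBucketMarker:
    def __init__(self, rate_bps, depth_bytes):
        self.rate = rate_bps / 8.0      # contracted rate, in bytes/second
        self.depth = depth_bytes        # bucket depth (burst allowance)
        self.tokens = depth_bytes       # bucket starts full
        self.last = 0.0                 # time of the previous packet

    def mark(self, now, pkt_bytes):
        # refill tokens for the elapsed time, capped at the bucket depth
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return "IN"                 # in-profile
        return "OUT"                    # out-of-profile
```

A burst larger than the bucket depth is thus marked OUT even when the long-term rate is within the contract, which is exactly what exposes bursty TCP traffic to OUT drops in a RIO core.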

In [11], the authors showed that the use of a simple token bucket marker for the above assured service leaves TCP struggling to realise even the minimum assured rate. They attributed the cause of this behaviour to TCP's complex response, primarily to packet losses. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. The exponential decrease (halving the congestion window) is required to avoid congestion collapse, and TCP treats a packet drop as an indication of congestion [7]. However, in the diffserv network this additive-increase and multiplicative-decrease behaviour makes it hard to protect the reservation rate: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may in the end never reach its reservation rate. Therefore, the overall performance is affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees. This is called TCP dynamics over the Assured Service.

In this thesis, we take into consideration the problem of TCP dynamics and the differentiated drop policies of the network in realizing the reserved throughputs, and propose a newly modified scheme called the Two Markers System for improving the realisation of target rates in a diffserv network. In addition, the issue of fairness among users is important in this diffserv context: since we expect that the differentiation in service will be based on some kind of pricing mechanism, it is necessary that the resource allocation be commensurate with how the service is priced. In this thesis, we therefore also examine how fairly the proposed scheme allocates bandwidth to flows.

One of the cruxes identified in the diffserv network is the effect of congestion-insensitive flows such as UDP when they share the same AF class with TCP flows [9, 11, 14]. TCP and UDP interaction in the AF PHB has become an important issue for fairness in the diffserv context, because TCP flows are penalised more than UDP flows with respect to service degradation and fail to accomplish their target rates. Therefore, we consider the issue of interaction between TCP and UDP flows while regulating their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP influences over Assured Services, considers the issue of fairness between TCP and UDP for a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents results of the above algorithms in simulated environments where these traffic types coexist and analyses the simulation results. Chapter 5 concludes our work.


CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases, this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point in time, the rate at which TCP sends its packets exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow-start and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants of TCP. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion avoidance phase. In this phase, the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above the network capacity, packets are eventually dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred, immediately reduces its sending rate in half by halving its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.

When congestion is severe enough that packet loss cannot be inferred in such a manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm. In particular, if the sender continually sends the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.
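The window dynamics described in the last two paragraphs can be illustrated with a toy round-trip-time loop. This is an illustrative sketch, with `evolve_cwnd` and its simplified Reno-style recovery being assumptions rather than an exact TCP implementation:

```python
# Illustrative sketch of TCP window dynamics: slow-start doubles cwnd each
# RTT up to ssthresh, congestion avoidance then adds one segment per RTT,
# and a fast-retransmit loss halves cwnd and resets ssthresh to that value.

def evolve_cwnd(rtts_with_loss, ssthresh=16):
    cwnd, trace = 1, []
    for loss in rtts_with_loss:          # one bool per round-trip time
        if loss:
            ssthresh = max(2, cwnd // 2) # multiplicative decrease
            cwnd = ssthresh              # Reno-style fast recovery
        elif cwnd < ssthresh:
            cwnd *= 2                    # slow-start: exponential growth
        else:
            cwnd += 1                    # congestion avoidance: linear growth
        trace.append(cwnd)
    return trace
```

Running this with occasional losses reproduces the sawtooth oscillation between w and w/2 discussed below Figure 1.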

Fig. 1. Throughput of a Single TCP Transfer

Figure 1 shows graphically how TCP slow-start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion, the TCP sender enters the congestion-avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP then oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One modification that has been proposed is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of which segments it is missing. With this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger TCP sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and the sender should thus be allowed to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The New Reno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern in which it is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion avoidance phase so that it only performs its linear increase when the network is not congested. In both algorithms, if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential to improve loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections that are multiplexed across the link.
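The Vegas-style test described above compares expected and actual throughput to estimate how many packets are queued in the network. The following is a rough sketch in the spirit of [25], with the parameter names (alpha, beta) and thresholds being assumptions:

```python
# Rough sketch of a Vegas-style congestion-avoidance decision: growing
# queueing delay freezes or shrinks the window instead of increasing it.

def vegas_adjust(cwnd, base_rtt, rtt, alpha=1, beta=3):
    expected = cwnd / base_rtt              # rate achievable with empty queues
    actual = cwnd / rtt                     # rate measured over this RTT
    diff = (expected - actual) * base_rtt   # estimated packets queued
    if diff < alpha:
        return cwnd + 1                     # little queueing: increase linearly
    if diff > beta:
        return cwnd - 1                     # queues building: back off
    return cwnd                             # in between: hold the window
```

When the measured RTT equals the base RTT the estimate is zero and the window grows; a doubled RTT signals substantial queueing and shrinks it.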

Fig. 2 TCP/IP Headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem, since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the Congestion Experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver in order to signal the receiver that it has in fact reduced its sending rate.
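The router-side half of this signalling can be sketched as follows. The bit positions are illustrative stand-ins for the two IP-header bits of Figure 2, not an exact encoding.

```python
# Hypothetical bit positions within the two ECN bits of the IP header.
ECT = 0b10   # ECN-capable Transport bit
CE  = 0b01   # Congestion Experienced bit

def router_on_congestion(ip_ecn_bits):
    """Decide what a congested router does with one arriving packet.

    Returns (forward_packet, new_ecn_bits): an ECN-capable packet is
    forwarded with CE set; a non-ECN-capable packet is simply dropped.
    """
    if ip_ecn_bits & ECT:
        return True, ip_ecn_bits | CE   # mark instead of dropping
    return False, ip_ecn_bits           # not ECN-capable: drop
```

The receiver-side reaction (echoing congestion in the ECN-echo flag of the acknowledgment, and the sender answering with CWR) would follow the same pattern on the two TCP flag bits.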

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, allowing them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses them to determine whether to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and it linearly increases to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.
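The curve of Figure 3 and the EWMA update can be written down directly from this description. This is a minimal sketch; the default weight of 0.002 is only an example value.

```python
def ewma(avg_q, q_sample, weight=0.002):
    """Exponentially weighted moving average of the instantaneous queue length."""
    return (1.0 - weight) * avg_q + weight * q_sample

def red_probability(avg_q, min_th, max_th, max_p):
    """RED marking/dropping probability as a function of the average queue length."""
    if avg_q <= min_th:
        return 0.0                   # below minth: never mark or drop
    if avg_q >= max_th:
        return 1.0                   # at or above maxth: mark or drop everything
    # between the thresholds: linear ramp from 0 up to max_p
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

For example, with minth = 20 packets, maxth = 80 packets and maxp = 0.1, an average queue length of 50 packets sits halfway up the ramp and yields a probability of 0.05.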

Fig. 3 The marking/dropping behavior of RED

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and in RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour desired. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs) and on a per-packet basis, to aggregates of traffic streams in DS-capable network nodes. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner according to the service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. The different types of service will not be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig. 4 Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e., to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig. 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The Service Level Agreements (SLAs) formed specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules, which are applied to the traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e., nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. A traffic conditioner is an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e., services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs do not solely define services, but they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain; the latter handle traffic leaving the domain. In practice, the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig. 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e., the interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node into a DS-capable domain may result in service degradation, and the required service characteristics (e.g., low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service provisioning approaches.

Fig. 5 A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g., domain 1 in Fig. 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between the parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig. 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the case where the network is non-DS-capable.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The backbone DS router presented here always polices incoming traffic, as it cannot trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (the policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as the providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain does not match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig. 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv does not cause serious impacts on existing applications.

Fig. 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a manner similar to the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and the queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling with respect to Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 of the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
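Assuming the conventional layout in which bits 0-5 are the most significant bits of the octet, the DSCP and the legacy precedence field can be extracted with simple shifts. This is an illustrative sketch, not thesis code.

```python
def dscp_of(tos_octet):
    """DS codepoint: the six most significant bits of the TOS/Traffic Class octet."""
    return (tos_octet >> 2) & 0x3F

def precedence_of(tos_octet):
    """Legacy IPv4 precedence: bits 0-2 (the top three bits) of the octet."""
    return (tos_octet >> 5) & 0x07
```

For example, a TOS octet of 0xB8 (1011 1000) carries the DSCP 101110, the codepoint later recommended for the EF PHB, and the legacy precedence value 101.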

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but it is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e., service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig. 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.

Fig. 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
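A minimal token-bucket meter feeding such a mark/shape/drop decision might look like the following sketch. The class name, the single-bucket profile and the boolean in/out-of-profile result are illustrative simplifications, not components specified by diffserv.

```python
class TokenBucketMeter:
    """Toy single-rate token bucket: in profile while tokens cover the packet."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # token refill rate in bytes per second
        self.depth = burst_bytes        # maximum bucket depth (burst tolerance)
        self.tokens = burst_bytes       # start with a full bucket
        self.last = 0.0                 # time of the previous measurement

    def conforms(self, now, pkt_bytes):
        """True if a packet of pkt_bytes arriving at time `now` is in profile."""
        elapsed = now - self.last
        self.tokens = min(self.depth, self.tokens + elapsed * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True                 # in profile: forward unchanged
        return False                    # out of profile: re-mark, delay or drop
```

A marker would re-mark packets for which `conforms` returns False, a shaper would hold them until enough tokens accumulate, and a dropper would discard them outright.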

Traffic is conditioned on either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic into conformance with the appropriate TCA, or whether that is the responsibility of the node at the other end of the boundary link. In any case, ingress traffic cannot be assumed to conform to the TCA, and policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore, a limited number of conditioning rules (i.e., some, but not all, of the TC components in Fig. 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers receive the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioral characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to the standardized ones, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may serve a local purpose, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig. 6) is limited and room is left for future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. The PHB specifications themselves give behavioral characteristics instead of implementation guidelines. This leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation X of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented later. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv cannot be taken into use if it does not provide backward compatibility; the Default PHB exists for that purpose. Its purpose is to provide the best-effort behaviour that current routers already perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e., best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that does not conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which do not employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for the Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. The same method is applied in the Class Selector PHBs, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig. 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.
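The eight xxx000 patterns can be enumerated directly from the three precedence bits, as this small sketch shows:

```python
def class_selector_codepoints():
    """The eight Class Selector codepoints: bit patterns of the form xxx000."""
    return [precedence << 3 for precedence in range(8)]  # 000000, 001000, ..., 111000
```

Codepoint 000000 (the Default PHB) is the lowest of these, and 111000 corresponds to the highest legacy precedence.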

Fig. 8 Usage of the DS field in the Class Selector PHB

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

- How much forwarding resources have been allocated to the AF class that the packet belongs to. Packets in one class are forwarded independently of the others, within the service rate (bandwidth) that has been configured for the class.
- What the current load of the AF class is.
- In case of congestion within the class, what the drop precedence of the packet is. Packets with a higher drop precedence value are preferentially discarded.

Table 1 The four AF classes and the related three drop precedence values

                               Class 1   Class 2   Class 3   Class 4
 Low Drop Precedence (1)       001010    010010    011010    100010
 Medium Drop Precedence (2)    001100    010100    011100    100100
 High Drop Precedence (3)      001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
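The codepoints of Table 1 follow a regular pattern, with the class number in the top three bits and the drop precedence in the next two, which a small helper (hypothetical, for illustration) makes explicit:

```python
def af_codepoint(n, m):
    """DSCP for AFnm: class n in 1..4, drop precedence m in 1..3 (see Table 1)."""
    if not (1 <= n <= 4 and 1 <= m <= 3):
        raise ValueError("AF class must be 1-4 and drop precedence 1-3")
    return (n << 3) | (m << 1)   # class in bits 0-2, precedence in bits 3-4
```

For example, AF11 yields 001010 and AF43 yields 100110, matching the corners of Table 1.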

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed a rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

There are some things common to both examples. First of all, policers need to be configured at the traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus provides reduced latency. Quantitative performance measures cannot be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2 TCA specification for BBE service

 Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
 AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritizing of AF13 over AF11 is done, is up to the provider.
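The Table 2 policing decision reduces to a simple re-marking rule. The sketch below is illustrative only; the function name and the reduction of the profile check to a single measured aggregate rate are my simplifications.

```python
AF11 = 0b001010   # low drop precedence, class 1 (Table 1)
AF13 = 0b001110   # high drop precedence, class 1 (Table 1)

def bbe_mark(agg_rate_bps, profile_bps=1_000_000):
    """Codepoint a BBE aggregate currently sending at agg_rate_bps receives.

    In-profile traffic keeps the AF13 mark; excess traffic is re-marked
    with the AF11 mark, per the TCA of Table 2. Nothing is discarded.
    """
    return AF13 if agg_rate_bps <= profile_bps else AF11
```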

This is a quantitative service which emulates a traditional leased line service. It promises to deliver traffic with very low latency and very low drop probability, up to a negotiated rate. Above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3 TCA specifications for Leased Line service

 Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
 EF mark     500 Kbps       Egress point B         Discard
 EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and, respectively, point A to point C. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both the egress points B and C. The policers are of the MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.

2.3 Related Work

Recently, there have been a number of simulation studies on marking and dropping policies focused on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. Their results showed that the proposed scheme decreased TCP impacts on the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation work, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to account for TCP dynamics and to provide fairness in sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standard used for the marking [13]. A marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have drawbacks. First, since the marking module in per-flow marking gives each flow a fixed fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth; in this case "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.
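The per-flow waste problem described above can be seen in a toy numerical sketch; the flow names, rates, and helper functions below are hypothetical illustrations, not part of the thesis.

```python
def per_flow_shares(target_rate, flows):
    """Per-flow marking: each flow gets a fixed fraction of the aggregate
    target rate, so an idle flow's share is simply wasted."""
    share = target_rate / len(flows)
    return {f: share for f in flows}

def aggregate_in_rate(target_rate, demands):
    """Per-aggregation marking: IN tokens cover aggregate demand up to the
    target rate, with no per-flow information (hence no per-flow fairness)."""
    return min(target_rate, sum(demands.values()))

demands = {"active1": 600, "active2": 600, "idle": 0}   # Kbps, hypothetical
shares = per_flow_shares(900, demands)                   # 300 Kbps each
# the idle flow's 300 Kbps is wasted while the active flows are capped at 300,
# whereas per-aggregation marking would hand out the full 900 Kbps of IN tokens
```

The sketch shows the trade-off: per-flow marking strands the idle flow's share, while per-aggregation marking uses the full target rate but cannot say which flow deserves it.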

Two Markers System takes the above problems into consideration

3.1 The framework of Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I determines which window (iwnd or ownd) to reduce according to whether the dropped packets are IN-marked or OUT-marked, and so TMS_I decreases TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) maintains the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used in allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner as shown in Figure 10, monitors traffic flows incoming from users at the edge of the network against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore the key point of the fairness strategy is the marking rate (i.e., information about a flow's demand for bandwidth) of traffic flows from users. TMS_II re-marks (or marks) aggregated flows toward the total target rate of all the flows based on the probability of marking from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10 The design of the Two Markers System architecture

3.2 The crux of fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, …, n, and let Cj represent the capacity of a bottleneck link j, j = 1, 2, …, m, in the network. The target rate of each source is a reservation rate (Ri) of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement Scope, SLA Scope) from the scalability view of diffserv as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of a network in the first three cases of service scope above is C = \sum_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. Therefore the excess bandwidth at this bottleneck link is given by

E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i    (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C - R = C_1 - \sum_{i=1}^{n} R_i    (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the information on reservation rates received at an ingress point becomes R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}, where k, k = 1, 2, …, l, represents the kth ingress point. The excess bandwidth is then given by

E = C - R = C_1 - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}    (3)
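The excess-bandwidth expressions (1)-(3) above can be sketched directly; the function names below are illustrative, and the receiver-scoped variant simply makes explicit that every ingress point's reservations must be known.

```python
def excess_bandwidth(link_capacities, reservations):
    """E = sum_j C_j - sum_i R_i, expression (1); with a single bottleneck
    (m = 1) this reduces to expression (2)."""
    return sum(link_capacities) - sum(reservations)

def excess_bandwidth_receiver_scoped(capacity, reservations_per_ingress):
    """Expression (3): an ingress must know every ingress point's
    reservations R_ik before the excess toward egress D can be computed."""
    return capacity - sum(sum(r) for r in reservations_per_ingress)
```

For example, two bottleneck links of 10 and 8 Mbps with reservations of 3 and 4 Mbps leave 11 Mbps of excess; in the receiver-scoped case the double sum runs over all ingress points.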


Here the lth ingress point has to be aware of the reservation rates of as many as l-1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore the Two Markers System can be implemented over variable SLA scopes, but for the above reason we assume that the scope of a service in this paper includes the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

T_i = R_i + X_{mi} \times E = R_i + X_{mi} (C - \sum_{i=1}^{n} R_i)    (4)

where X_{mi} is proportional to the probability of marking from a source; that is, it is determined by the fraction between the marking rate of each flow and the average marking rate. The development of X_{mi} is as follows. First, the probability of marking for each flow i is the number of marked packets (N_{mi}) divided by the throughput (Thr_i), or equivalently the In-profile window divided by the congestion window:

m_i = N_{mi} / Thr_i = iwnd / cwnd    (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] of all the flows at the edge of a network is

E[m_i] = (1/n) \sum_{i=1}^{n} m_i = (1/n) \sum_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) \sum_{i=1}^{n} Mark_{p,i}    (6)

where Mark_p also denotes the probability of marking in the TMS_I algorithm of section 3.3. Finally, X_{mi} is developed using the fraction between (5) and (6); that is, X_{mi} of (7) represents a relative value against the datum point E[m_i]. The fraction X_{mi} varies from 0 to 1 depending upon m_i and E[m_i].

Consequently this value stands for the share of excess bandwidth that each flow requires in order to complete its target rate.
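Expressions (4)-(7) can be sketched together as follows; the helper names are illustrative, and the reconstruction assumes the equation forms recovered above.

```python
def fairness_components(marking_rates):
    """X_mi = m_i / (n * E[m_i]) = m_i / sum_i m_i, expression (7).
    The components are relative to the datum point E[m_i] and sum to 1."""
    total = sum(marking_rates)
    return [m / total for m in marking_rates]

def edge_target_rates(reservations, marking_rates, capacity):
    """T_i = R_i + X_mi * E with E = C - sum_i R_i, expression (4):
    each flow's edge target rate is its reservation plus its marked-demand
    share of the excess bandwidth."""
    excess = capacity - sum(reservations)
    x = fairness_components(marking_rates)
    return [r + xi * excess for r, xi in zip(reservations, x)]
```

For instance, two flows reserving 1 Mbps each on a 4 Mbps bottleneck split the 2 Mbps excess in proportion to their marking rates.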

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate in line with the bandwidth the source wants to have guaranteed.

X_{mi} = m_i / (n E[m_i]) = m_i / \sum_{i=1}^{n} m_i    (7)


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking (Mark_p(init))? It is generally determined by the fraction between the measured throughput (Thr) and the target throughput (T_i = R_i), as given in (8).

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri)
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)
                iwnd -= 1 - Markp
            else
                iwnd -= Markp
        else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd


Here Mark_p(init) represents the initial marking rate that the user needs in order to realise its target rate.

The window size increases or decreases according to the measured throughput relative to the reservation rate. That is, TMS_I marks packets proportionally according to whether the throughput (Thr) measured at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed; otherwise it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.
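The acknowledgement-time update above can be transliterated into Python; this is a sketch under our reading of the reconstructed Program code 1 (the 1/(2*cwnd) congestion-avoidance increments follow the stated 1/cwnd total), not a definitive implementation.

```python
def tms1_on_ack(cwnd, mark_p, thr, r_i, issthresh, ossthresh):
    """One TMS_I update when new data is acknowledged (cf. Program code 1).
    Returns the new cwnd and the new marking probability Markp."""
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if thr < r_i:
        # below the reservation rate: grow iwnd aggressively (slow-start-like),
        # or linearly once past its threshold
        iwnd += mark_p if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - mark_p) if ownd < ossthresh else 1 / (2 * cwnd)
    else:
        # reservation met: release IN share to other flows / best-effort traffic
        if iwnd > 0:
            iwnd -= (1 - mark_p) if iwnd < issthresh else mark_p
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd
```

With cwnd = 10, Markp = 0.5 and throughput below the reservation, both windows grow by the marking-rate shares and Markp is recomputed as iwnd/cwnd.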

Mark_p(init) = 1 - Thr / R_i    (8)

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the connection is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to the threshold in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate better than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet losses and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd/2
        ownd = ossthresh
    else
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the ith flow
Ti: target rate of the ith flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After a third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT-marked packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of an IN-marked packet indicates acute congestion in the network, and then the congestion window is halved. Therefore the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which has no such distinctive mechanism for reducing the congestion window.
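The loss reaction above can likewise be sketched in Python; this follows our reading of the reconstructed Program code 2 (in particular, that only ownd and ossthresh change on an OUT loss), and the return tuple is an illustrative convention.

```python
def tms1_on_triple_dupack(cwnd, mark_p, out_marked_loss, ossthresh):
    """TMS_I reaction to a third duplicate ACK (cf. Program code 2):
    an OUT-marked loss halves only ownd (temporary congestion), while an
    IN-marked loss halves the whole cwnd (acute congestion).
    Returns (cwnd, issthresh, ossthresh); issthresh is None when unchanged."""
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if out_marked_loss:
        ossthresh = ownd / 2          # back off only the OUT share
        ownd = ossthresh
        cwnd = iwnd + ownd
        issthresh = None              # untouched in this branch
    else:
        cwnd = cwnd / 2               # classic multiplicative decrease
        issthresh = mark_p * cwnd
        ossthresh = cwnd - issthresh
    return cwnd, issthresh, ossthresh
```

With cwnd = 10 and Markp = 0.5, an OUT loss shrinks cwnd only to 7.5 (ownd halved), while an IN loss halves cwnd to 5.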

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point in the edge-embedded marking module in section 3.4. Consequently, the strategy of providing a datum point that includes the user's information is able to alleviate the problem of unfairness within aggregated flows under per-aggregation marking.

3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to fairly share bottleneck bandwidth. In this case it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which are used in the fairness component as stated in section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed from the sources.

We assume that an edge router receives information about the reservation rate of which each user wants to be assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall expression (4). As stated in section 3.2, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking from the sources if the condition of the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, i.e. the average marking rate E[m_i], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate; therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider the congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, this indicates that congestion does not occur in the network. The if clause in the first through seventh lines of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After the decision on congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (T_i), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor; in this case the fairness factor is computed as α = 1 - Thr_i / T_i. Otherwise it decreases the marking rate by α to provide other flows with the chance of sharing the excess bandwidth.

Fig 13 Program code 3 TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)
    for i = 1 to n
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)
    exit
else    (iwnd(N) == iwnd(N-1)/2)
    for i = 1 to n
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi: marking rate of the ith flow
Ti: target rate of the ith flow
Ri: reservation rate of the ith flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion happens in the network but is not serious. The else if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (or marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, this means that IN-marked packets are being dropped in the core of the network and congestion is acute. The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to consummate its target rate owing to the repeated reduction of sending rates through the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, a so-called complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor is computed as β = 1 - Thr_i / R_i. Otherwise the re-marking operation is not performed.
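One pass of the TMS_II decision logic can be sketched as follows; the flow-record layout is an illustrative assumption, and the α and β formulas follow our reconstruction of the fairness and complement factors as relative shortfalls.

```python
def tms2_remark(flows, excess, ownd_halved, iwnd_halved):
    """One TMS_II pass (cf. Program code 3). Each flow is a dict with keys
    'M' (marking rate), 'R' (reservation rate), 'thr' (measured throughput)
    and 'x' (fairness component X_mi); this record layout is hypothetical."""
    if not ownd_halved:
        # no congestion: share excess bandwidth toward T_i = R_i + X_mi * E
        for f in flows:
            target = f["R"] + f["x"] * excess
            alpha = abs(1 - f["thr"] / target)      # reconstructed fairness factor
            if f["thr"] < target:
                f["M"] += alpha * f["M"]
            else:
                f["M"] -= alpha * f["M"]
    elif not iwnd_halved:
        pass                                        # mild congestion: no re-marking
    else:
        # acute congestion: fall back to T_i = R_i and complement losses
        for f in flows:
            if f["thr"] < f["R"]:
                beta = 1 - f["thr"] / f["R"]        # reconstructed complement factor
                f["M"] += beta * f["M"]
    return flows
```

A flow halfway to its target has its marking rate scaled up by 50 percent; a flow already at its target is scaled down symmetrically.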

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then fairly allocates bandwidth to the flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. These re-born packets are treated by the RIO queue at the core of the network in a way the sources do not anticipate. If, for example, re-born OUT-marked packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size; TMS_I then no longer alleviates TCP dynamics over the assured service well. On the other hand, if re-born IN-marked packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will only halve ownd, which retards the relief of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but as IN at the source, and an OUT-IN packet is its counterpart. We assume that the drop precedence the source decides must not change, and that the strategy TMS_II elaborates has to respect it. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not complete the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is removed, the TCP source halves its ownd size. Next, TMS_II turns packets IN-marked at the sources into OUT if the congestion window has not been halved and the flow already accomplishes its target rate, so that the drop precedences consist of IN, IN-OUT and OUT. TMS_II lowers the drop precedence of OUT by one step, and so it allows other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow does not yet achieve its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN and OUT at the core of the network; when an OUT-IN packet is dropped, TMS_I halves the ownd size. It thus tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT by one step.

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows will not change and will take the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be completed.

UDP flows in the TMS are dealt with as out-of-profile, putting them at a disadvantage, while TCP flows adapt to network dynamics through marking rates adjusted by the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2), but lowers the drop precedence of TCP packets relative to them by the scaling factors. Therefore it tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows would otherwise almost entirely dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present simulation results for the two marking algorithms described in the previous chapter. The simulation was done with the network simulator 2 (ns-2.1b8a). For the simulation we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or is fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values 100 packets, 400 packets and 0.02 for min_in, max_in and Pmax_in, and 50 packets, 400 packets and 0.1 for min_out, max_out and Pmax_out. The RED parameters for the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
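The per-precedence settings of Table 4 feed the standard linear RED drop profile that RIO applies per drop precedence; the sketch below is illustrative (decimal points restored in the parameter values), not the ns-2 implementation itself.

```python
def red_drop_prob(avg_q, min_th, max_th, p_max):
    """Linear RED drop profile: no drops below min_th, forced drop at max_th,
    and a linear ramp up to p_max in between."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return p_max * (avg_q - min_th) / (max_th - min_th)

# Table 4 settings: DP0/DP1 share thresholds but differ in Pmax; DP2 (OUT)
# starts dropping earliest and most aggressively
RIO = {"DP0": (100, 400, 0.02), "DP1": (100, 400, 0.05), "DP2": (50, 400, 0.1)}
```

At an average queue of 250 packets, DP0 packets face a 1 percent drop probability while DP2 packets face roughly 5.7 percent, which is how the three-color ordering of section 3.5 is enforced.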

First of all, we investigate the simulation results of the Adequate-Marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s, which stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, and five flows compete for the excess bandwidth of 5 Mbps; marking initially takes place, but only temporary marking for sharing the excess bandwidth. Second, at runtime from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the bandwidth of 3 Mbps ideally; marking in this interval is practically carried out so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220 s to t = 350 s; all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which exhibits somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is fewer than in the case of Figure 15 (a). The flows in the two graphs below, (a) and (b), demonstrate a striking contrast between excess marking and Adequate-Marking: while all the flows in Figure 15 (b) mark each packet of a flow in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15 appears here: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. Both panels plot the bandwidth (Mbps) of flow1 through flow10 against simulation time (s), 0 to 400.]

Fig 15 Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic features, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind changes in the network load: slowly rising in response to an increased traffic load and slowly falling in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in Subsection 3.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment ran from t = 0 s to t = 500 s. Figure 17 presents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without the marking rate being adequately adapted to the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As


stated before, since TMS regulates drop precedence in proportion to the marking probability from the sources, all the flows are able to adapt to network dynamics.
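The marking rate-based sharing just described can be sketched as follows. This is a hypothetical reading of the strategy (reservations satisfied first, excess split in proportion to each flow's marking rate; proportional scale-down when under-provisioned); the function name and numbers are illustrative only.

```python
def allocate_bandwidth(capacity, reservations, marking_rates):
    """Split a bottleneck link among flows.

    Each flow first receives its reservation; any excess capacity is then
    divided in proportion to the flows' marking rates. If the link is
    under-provisioned, all reservations are scaled down proportionally.
    """
    total_res = sum(reservations)
    excess = capacity - total_res
    if excess <= 0:
        # Under-provisioned: everyone degrades in proportion to its target.
        scale = capacity / total_res
        return [r * scale for r in reservations]
    total_m = sum(marking_rates)
    return [r + excess * m / total_m
            for r, m in zip(reservations, marking_rates)]
```

A flow that marks more (signalling higher demand) receives a larger share of the excess, while under-provisioning degrades all flows in proportion to their reservations, matching the fairness behaviour reported for TMS.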

[Plot: bandwidth (Mbps) versus simulation time (s); curves: S1 (total throughput), S2 (total throughput), S1 (marking rate). Graphic not recoverable.]
a) Time Sliding Window Two Color Marker

[Plot: bandwidth (Mbps) versus simulation time (s); curves: S1 (total throughput), S2 (total throughput), S1 (marking rate). Graphic not recoverable.]
b) Two Markers System

Fig 16 Marking packets through scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis signifies the simulation time.


[Bar chart: bandwidth (Mbps) of flow1–flow10 for each of the six aggregate flows. Graphic not recoverable.]
a) Time Sliding Window Two Color Marker

[Bar chart: bandwidth (Mbps) of flow1–flow10 for each of the six aggregate flows. Graphic not recoverable.]
b) Two Markers System

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput, and the horizontal axis signifies the aggregate flow index.

The Two Markers System showed improved performance for the several parameters we simulated. However, an implementation of TCP source-marking


modules at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of each aggregate flow as the RTT increases

Aggregate flow (No.)       1       2       3       4       5       6
RTT (ms)                  80     100     130     170     220     280
E[Thr] of TSW2CM (Mbps)   4.8989  4.9703  4.9183  4.9391  4.9247  4.9098
E[Thr] of TMS (Mbps)      5.2908  5.2207  5.2373  5.1944  5.2763  5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have six sources (1 through 6, counting downwards), each communicating with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for nodes 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined for nodes 11-12. We carried out two scenarios, an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source stays fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at a rate of 1Mbps. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of Section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy by increasing the number of TCP flows with a target rate of 1Mbps from sources 1 to 4 while each UDP flow is constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take account of network dynamics. We define the link utilisation (ρ) as follows [16]:

ρ = (Σi Ti / C) × 100 (%)    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
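Equations (9) and (10) can be computed directly; the helper names below are our own.

```python
def link_utilisation(targets, capacity):
    # Eq. (9): sum of the flows' target rates over the bottleneck
    # capacity, expressed in percent.
    return 100.0 * sum(targets) / capacity

def excess_bandwidth_pct(throughput, target):
    # Eq. (10): bandwidth achieved beyond the target rate, expressed
    # as a percentage of that target (0% means exactly the target).
    return 100.0 * (throughput - target) / target
```

For instance, six 1Mbps targets over an 8Mbps bottleneck give ρ = 75%, and a flow achieving 1.5Mbps against a 1Mbps target sits at +50% excess bandwidth, matching the axes of Figure 19.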


[Plot: excess bandwidth (%) versus link utilization (%); curves: TCP (average), UDP (average). Graphic not recoverable.]
a) Time Sliding Window Two Color Marker

[Plot: excess bandwidth (%) versus link utilization (%); curves: TCP (average), UDP (average). Graphic not recoverable.]
b) Two Markers System

Fig 19 Sharing excess bandwidth according to the link utilization: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis signifies the link utilisation as a percentage.

However, since UDP sources inject traffic into the network without any penalty, they achieve their target rates under all conditions and consume all the excess bandwidth in the


under-provisioned state. The only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are dealt with only by OUT marking (DP2, red); (ii) the magnitude of the marking rate represents the increase or decrease in demand for bandwidth. If the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and attains a higher rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP packets and the TCP packets marked OUT, which are outside of their contracts, when congestion occurs. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisations of ρ = 60 (%) and above. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
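Assumptions (i) and (ii) above suggest an edge-marking rule of roughly the following shape; the DP0/DP1 labels for the TCP branches and the exact comparison against E[mi] are our assumptions for illustration, not the thesis's TMS_II rule.

```python
def drop_precedence(flow_is_udp, marking_rate, avg_marking_rate):
    """Assign a RIO drop precedence at the edge (illustrative sketch).

    UDP is always marked OUT (DP2, red), per assumption (i); a TCP flow
    whose marking rate exceeds the average E[m_i] is treated as demanding
    bandwidth for its reservation and gets the most protected level.
    """
    if flow_is_udp:
        return "DP2"   # always out-of-profile: dropped first on congestion
    if marking_rate > avg_marking_rate:
        return "DP0"   # high demand: in-profile, dropped last
    return "DP1"       # intermediate precedence
```

Under this rule, congestion-insensitive UDP traffic absorbs the early drops, while TCP flows that are struggling toward their reservations are shielded, which is the behaviour Figure 19 (b) exhibits.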

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps respectively. The setting of the RTT parameters follows the strategy of Subsection 4.1, and the average throughput of all the flows in both schemes is similar. First we consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1Mbps, the UDP flows capture all of the excess bandwidth, about 0.5Mbps each. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth, about 0.3Mbps each, owing to the different marking strategy applied against the UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly, regardless of congestion; that is, UDP gains unfairly at the expense of the TCP


flows. However, Figure 20 (b) shows that the bandwidth in the under-provisioned network is distributed fairly (about 0.5Mbps per flow), in contrast with Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, the UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Bar chart: bandwidth (Mbps) of TCP1–TCP4 and UDP1–UDP2 under over-provisioned and under-provisioned conditions. Graphic not recoverable.]
a) Time Sliding Window Two Color Marker

[Bar chart: bandwidth (Mbps) of TCP1–TCP4 and UDP1–UDP2 under over-provisioned and under-provisioned conditions. Graphic not recoverable.]
b) Two Markers System

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved throughput, and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this context, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that the two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the marking probability according to the changes in the network, so it marks packets in proportion to the reservation rates to be guaranteed. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy, when resources in the diffserv network are over-provisioned. Consequently, it mitigates the problems of unfairness in per-aggregation marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduces TCP impacts over the Assured Service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly


share the excess bandwidth that UDP had almost dominated and achieve its target rates. Next, in an under-provisioned network, although all the flows couldn't achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCE

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list, ftp://ftp.ee.lbl.gov/email/vanj.90apr30txt, April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.



I cannot find words strong enough to express my gratitude to my parents, since they have always loved me, believed in me, and encouraged me in my study. This thesis could not have been completed without their presence beside me. I am also deeply grateful to my elder sister Jimin Lee, her husband Jin Kim, and my lovable niece Hyun Kim for their dedicated sacrifice, support, and encouragement.

Especially, I am indebted to my friend Moonhee Choi, who helped me in many ways during the final stage of this study. I am also thankful to my friends Taewan Kim, Byeongwook Bae, Yonghyun Jung, Sangmok Han, Byeongil Son, Hongsik Park, Hyoseop Lee, Sungchul Choi, Sungeun Ku, Changjoo Yang, Minsang Kim, Sukhoon Byeon, Dongchul Lee, Jaekyun Lee, Joonghyo Lee, Jepyo Na, Dukku Hwang, and Jangyeon Kim for their everlasting friendship, understanding, and encouragement. Last but not least, I would like to thank my valuable friend Jieun Ahn for her dedicated sacrifice, continuous support, understanding, and encouragement.


TABLE OF CONTENTS

ABSTRACT .......................................................... i
DEDICATION ........................................................ iii
ACKNOWLEDGEMENT ................................................... iv
LIST OF FIGURES ................................................... viii
LIST OF TABLES .................................................... ix
ABBREVIATIONS ..................................................... x

CHAPTER 1 INTRODUCTION ............................................ 1

CHAPTER 2 BACKGROUND .............................................. 5
  2.1 TCP and Active Queue Management ............................. 5
  2.2 Differentiated Services Architecture ........................ 10
    2.2.1 Concepts in diffserv .................................... 12
    2.2.2 Diffserv network ........................................ 13
    2.2.3 Service Realizations .................................... 14
  2.3 Related Work ................................................ 24

CHAPTER 3 TWO MARKERS SYSTEM ...................................... 25
  3.1 The framework of Two Markers System ......................... 25
  3.2 The crux of fairness strategy ............................... 27
  3.3 Adequate-Marking strategy: Two Markers System-I algorithm ... 30
  3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm ... 34
  3.5 The issue of re-marking ..................................... 37
  3.6 The issue of interaction between TCP and UDP flows .......... 38

CHAPTER 4 SIMULATION AND ANALYSIS ................................. 40
  4.1 TCP dynamics over Assured Services in a diffserv network .... 40
  4.2 Interaction of TCP and UDP flows ............................ 46

CHAPTER 5 CONCLUSION .............................................. 51

REFERENCE ......................................................... 53


LIST OF FIGURES

Fig 1  Throughput of Single TCP Transfer ........................... 6
Fig 2  TCP/IP Headers .............................................. 8
Fig 3  The marking/dropping behavior of RED ........................ 10
Fig 4  Mapping of DSCP-marked streams to PHBs and aggregate routing ... 11
Fig 5  A non-DS-capable and two DS-capable domains connected to a backbone network ... 13
Fig 6  Type of Service field in IPv4 ............................... 15
Fig 7  A traffic conditioner and its functional components ......... 17
Fig 8  Usage of DS-field in Class Selector PHB ..................... 20
Fig 9  The "Two Markers System" framework .......................... 26
Fig 10 The design of the Two Markers System architecture ........... 27
Fig 11 Program code 1: TMS_I algorithm (cwnd open) ................. 31
Fig 12 Program code 2: TMS_I algorithm (cwnd close) ................ 33
Fig 13 Program code 3: TMS_II algorithm ............................ 36
Fig 14 Simulation topology I using the Two Markers System .......... 40
Fig 15 Adequate-marking algorithm .................................. 42
Fig 16 Marking Packet through scaling factor ....................... 44
Fig 17 Throughput for marking rate-based fairness strategy ......... 45
Fig 18 Simulation topology II using the Two Markers System ......... 47
Fig 19 Sharing excess bandwidth according to the link utilization ... 48
Fig 20 Throughput according to network conditions .................. 50


LIST OF TABLES

Table 1 The four AF classes and the related three drop precedence values ... 21
Table 2 TCA specification for BBE service .......................... 23
Table 3 TCA specifications for Leased Line service ................. 23
Table 4 RED parameter settings in the TMS .......................... 41
Table 5 The average throughput of aggregate flow as increment of RTT ... 46


ABBREVIATIONS

AF      Assured Forwarding, a PHB group defined in [4]
ATM     Asynchronous Transfer Mode
BA      Behaviour Aggregate, separate traffic flows receiving the same treatment in a router
DS, diffserv  Differentiated Services
DSCP    Differentiated Services Codepoint, the value of a field in IP packets in DS-capable domains based on which the forwarding behaviour is applied
EF      Expedited Forwarding, a PHB defined in [20]
IETF    Internet Engineering Task Force
IP      Internet Protocol
IPv4    Internet Protocol version 4, the current widely used IP version
IPv6    Internet Protocol version 6, the developed future version of IP
ISP     Internet Service Provider
MF      Multi-Field
MPLS    Multiprotocol Label Switching
NE      Network Element, e.g. a router
PHB     Per-Hop Behaviour, the forwarding behaviour applied to each IP packet in routers in DS-capable domains
QoS     Quality of Service
RFC     Request For Comments
RSVP    Resource Reservation Protocol
SLA     Service Level Agreement, an agreement of a service between a provider and a customer
SNMP    Simple Network Management Protocol
TC      Traffic Conditioner
TCA     Traffic Conditioning Agreement, a part of the SLA that describes the agreed service in detail from the technical point of view
TCP     Transmission Control Protocol
TOS     Type of Service
UDP     User Datagram Protocol
VPN     Virtual Private Network


CHAPTER 1 INTRODUCTION

The current Internet is based on the best-effort service model. In this model the service provider allocates bandwidth among all of the instantaneous customers as best it can; that is, all user packets compete equally for network resources. The service provider also attempts to serve all of them without making any explicit commitment as to rate or any other service quality. The best-effort model has been successful until now because the majority of the traffic on the Internet is TCP-based: the TCP end-to-end congestion control mechanism forces traffic sources to back off whenever congestion is detected in the network [8]. However, such dependence on the end systems' interaction is increasingly becoming unrealistic. Given the current best-effort model with FIFO queuing inside the network, it is relatively easy for non-adaptive sources to gain greater shares of network bandwidth and thereby starve other, well-behaved TCP sources. A greedy source, for example, may simply continue to send at the same rate when faced with congestion while other TCP sources back off. The best-effort model is also ill-suited to applications such as real-time audio and video that require explicit bandwidth and bounded delay guarantees. Moreover, the best-effort model treats all packets equally once they have been injected into the network. Thus it is difficult for Internet Service Providers (ISPs) to provide services that are commensurate with the expectations of consumers who are willing to pay more for a better class of service.

Over the last decade, a number of efforts have been made to provide Quality of Service in the Internet. Much of the work has focused on the end-to-end per-session reservation approach, where an application makes an end-to-end reservation before starting a session. While it makes sense to set up a reservation for a long-lasting session, this approach is not suitable for highly transaction-oriented applications such as browsing the World Wide Web (WWW): the overheads and latency involved in setting up a reservation for each session become prohibitively high. There are also concerns regarding the scalability of providing fine-grained guarantees at the per-session level in the core of the Internet.

The movement toward providing Quality-of-Service (QoS) assurance in the Internet has encouraged research in differentiated services (diffserv or DS) while the number of

The journal model is Lecture Notes in Computer Science


users keeps growing. Diffserv is a suitable solution for Quality of Service provisioning because it scales well with an increasing number of network users and does not much alter the current Internet paradigm [11]. In contrast to the per-session reservation approach, the differentiated services scheme allows service providers to offer different levels of service to a few classes of aggregated traffic flows. An ISP, for example, may offer two levels of service: a premium service for some customers who are willing to pay more, and best-effort service at a lower price. The differentiated services approach attempts to allocate bandwidth for aggregated flows without per-session signaling or per-session state in the network core, and to offer better services, under long-term service contracts, to users willing to pay more.

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment that should be supplied to the packet inside the domain. However, because of the limited number of bits available for use in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks, called per-hop behaviors (PHBs), that are used by routers to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) [20] and the Assured Forwarding (AF) [4] PHBs. The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. In order to ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, consists of four independently forwarded classes that have two or three drop-precedence levels. The idea behind AF is to preferentially drop best-effort packets and packets that are outside of their contract when congestion occurs.

In this thesis we consider a form of better-than-best-effort service called the "Assured Service". The Assured Service follows expected capacity profiles that are statistically provisioned. Packets are treated preferentially according to the dropping probability applied to the best-effort queue. The assurance of service comes from the expectation that the traffic is unlikely to be dropped as long as it stays within the


negotiated capacity profile. The building blocks of this service include a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise, the packets are marked OUT (out of profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or token bucket control module. A differentiated dropping algorithm such as RIO (Random Early Detection with IN/OUT) is provided in the core routers of the network. In particular, the OUT packets are preferentially dropped, upon evidence of congestion at the bottleneck, before the IN packets; only after dropping all incoming OUT packets are IN packets discarded. With this dropping policy, the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.
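A minimal sketch of a RIO-style dropper follows: two RED drop curves, a lenient one for IN packets judged on the average IN queue and an aggressive one for OUT packets judged on the average total queue. The threshold and probability values are illustrative assumptions, not the thesis's RIO settings.

```python
def red_drop_prob(avg_q, min_th, max_th, max_p):
    # Standard RED curve: no drops below min_th, a linear ramp up to
    # max_p between the thresholds, and certain drop above max_th.
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def rio_drop_prob(packet_is_in, avg_in_q, avg_total_q):
    """RIO: IN packets see lenient parameters driven by the IN queue;
    OUT packets see aggressive parameters driven by the total queue,
    so OUT traffic is discarded first when congestion builds."""
    if packet_is_in:
        return red_drop_prob(avg_in_q, min_th=40, max_th=70, max_p=0.02)
    return red_drop_prob(avg_total_q, min_th=10, max_th=30, max_p=0.5)
```

Because the OUT curve saturates while the IN curve is still at zero, a congested queue sheds out-of-profile traffic long before any in-profile packet is at risk, which is exactly the preference RIO is meant to provide.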

In [11] the authors presented evidence that the use of a simple token bucket marker for the above Assured Service results in TCP failing to realise the minimum assured rate. They attributed the cause of such behaviour to TCP's complex response to packet losses. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. Exponential decrease (halving the congestion window) is required to avoid congestion collapse, and TCP treats a packet drop as an indication of congestion [7]. However, in the diffserv network this additive increase and multiplicative decrease make it hard to protect the reservation rate: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may in the end never reach its reservation rate. The overall performance was therefore affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications and that an Assured Service couldn't provide consistent rate guarantees. This is called TCP dynamics over the Assured Service.

In this thesis we take into consideration the problem of TCP dynamics and the differentiated drop policies of the network in realizing the reserved throughputs, and we propose a newly modified scheme, called the Two Markers System, for improving the realisation of target rates in a diffserv network. In addition, the issue of fairness among users is important in this diffserv context: since we expect that the differentiation in service will be based on some kind of pricing mechanism, it is necessary that the


resource allocation be commensurate with how the service is priced. In this thesis we also examine how fairly the proposed scheme allocates bandwidth to flows.

One of the cruxes identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows [9, 11, 14]. TCP and UDP interaction over the AF PHB has become an important issue in the fairness of the diffserv context because TCP flows are penalised more than UDP flows with respect to service degradation and fail to accomplish their target rates. Therefore we deliberate the issue of interaction between TCP and UDP flows by regulating their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System, to reduce TCP influences over Assured Services and to consider the issue of fairness between TCP and UDP at a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents results for the above algorithms in simulated environments where these kinds of traffic coexist and analyses the simulation results. Chapter 5 concludes our work.


CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point in time, the rate at which TCP sends its packets eventually exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow start and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants of TCP. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion-avoidance phase, in which the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above network capacity, packets are eventually

6

dropped One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21] Upon receiving a given number of duplicate acknowledgments TCP infers that a packet loss has occurred and immediately reduces its sending rate in half by halving its congestion window and sets ssthresh to the new value of the congestion window These mechanisms are called fast retransmit and fast recovery

When congestion is severe enough that packet loss cannot be inferred in this manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm. In particular, if the sender continually sends the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.
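The slow-start, congestion-avoidance and loss-reaction rules described above can be sketched as a small simulation. This is illustrative only: the function name and RTT-granularity model are assumptions, and timeouts and delayed ACKs are ignored.

```python
# Illustrative sketch (not from the thesis): evolution of TCP's congestion
# window per round-trip time under slow-start, congestion avoidance and
# fast retransmit/fast recovery. Timeouts and delayed ACKs are ignored.

def simulate_cwnd(rtts, ssthresh=16, loss_at=None):
    """Return the congestion window (in segments) after each RTT."""
    loss_at = loss_at or set()
    cwnd, trace = 1, []
    for rtt in range(rtts):
        if rtt in loss_at:                # loss detected via duplicate ACKs:
            ssthresh = max(cwnd // 2, 2)  # fast retransmit / fast recovery
            cwnd = ssthresh               # sending rate is halved
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow-start: double every RTT
        else:
            cwnd += 1                     # congestion avoidance: +1 segment/RTT
        trace.append(cwnd)
    return trace
```

With no losses the trace shows the exponential-then-linear growth of Fig 1; with periodic losses it shows the sawtooth oscillation between w and w/2.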

Fig 1 Throughput of Single TCP Transfer

Figure 1 shows a graphical picture of how TCP slow-start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion the TCP sender enters the congestion-avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP thus oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One proposed modification is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of which segments it is missing. With this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders.

Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger TCP sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and it should thus allow the TCP sender to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The New Reno [24] modifications use this observation to improve TCP performance.

Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern in which the congestion window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion-avoidance phase so that it only performs its linear increase when the network is not congested. In these algorithms, if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential for improving loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections that are multiplexed across the link.

Fig 2 TCP/IP Headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the congestion experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver in order to signal the receiver that it has in fact reduced its sending rate.
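A minimal sketch of this signalling path, with dictionaries standing in for packet headers. All field and function names are illustrative, not a real packet API.

```python
# Illustrative sketch of ECN signalling: router marking, receiver echo,
# sender reaction. Dictionaries stand in for IP/TCP headers.

def router_mark(pkt, congested):
    """At a congested router: mark ECN-capable packets, drop the rest."""
    if not congested:
        return pkt
    if not pkt["ECT"]:       # flow is not ECN-capable
        return None          # packet is simply dropped
    pkt["CE"] = 1            # congestion experienced, signalled without loss
    return pkt

def receiver_ack(pkt):
    """Receiver echoes congestion back to the sender in the ACK."""
    return {"ECN_echo": 1 if pkt.get("CE") else 0}

def sender_react(state, ack):
    """Sender halves cwnd on ECN-echo, as if a loss had been detected."""
    if ack["ECN_echo"]:
        state["cwnd"] = max(state["cwnd"] // 2, 1)
        state["CWR"] = 1     # Congestion Window Reduced flag on next packet
    return state
```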

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, in order to allow them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses them to determine whether or not to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and it linearly increases to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.

Fig 3 The marking/dropping behavior of RED
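The EWMA update and the linear ramp of Fig 3 can be written out directly. The parameter values below are illustrative defaults, not values from the thesis.

```python
# Illustrative sketch of RED: the EWMA of the queue length and the linear
# marking/dropping probability of Fig 3. Parameter values are made up.

def ewma(avg, qlen, weight=0.002):
    """Update the exponentially weighted moving average of the queue length."""
    return (1 - weight) * avg + weight * qlen

def red_probability(avg, minth=5.0, maxth=15.0, maxp=0.1):
    """Initial probability of marking/dropping one arriving packet."""
    if avg < minth:
        return 0.0                       # no congestion detected
    if avg >= maxth:
        return 1.0                       # every packet dropped or marked
    # linear ramp from 0 at minth up to maxp approaching maxth
    return maxp * (avg - minth) / (maxth - minth)
```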

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and in RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss are proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour wished for. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs) and on a per-packet basis, to aggregates of traffic streams in a DS-capable network's nodes. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies a service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. Different types of service will not be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig 4 Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The Service Level Agreements (SLAs) formed specify the forwarding service a customer's traffic should receive. A customer in this context refers to e.g. an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules, which are applied to traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. By traffic conditioner is meant an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs don't solely define services, but they are the building blocks.
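A node's DSCP-to-PHB mapping can be pictured as a simple configured lookup table. The entries below are illustrative, apart from the recommended codepoints 000000 (Default) and 101110 (EF) discussed later in this chapter.

```python
# Illustrative sketch: an administrator-configured table mapping DSCPs to
# PHBs. Multiple codepoints may map to one PHB; codepoints without a
# configured mapping fall back to the Default (best-effort) PHB.

DSCP_TO_PHB = {
    0b000000: "Default",   # best-effort, required mapping
    0b101110: "EF",        # recommended EF codepoint
    0b001010: "AF11",      # one AF codepoint, as an example
}

def phb_for(dscp):
    return DSCP_TO_PHB.get(dscp, "Default")
```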

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain, the latter handle traffic leaving the domain. In practice the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e. the interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services but rather the best-effort service, and possibly other service provisioning approaches.

Fig 5 A non-DS-capable and two DS-capable domains connected to backbone network

Nodes in a DS domain, e.g. domain 1 in Fig 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the case of a non-DS-capable network.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The backbone DS router shown always polices incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported in the downstream domain.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2 and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level the profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as the providers can freely decide which services are available in their networks, which in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and queuing requirements in IPv4 routers are specified in [5]. For backward compatibility, diffserv supports these widely used values, or PHBs, with limitations. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling considering Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but it is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is the part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e. service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.

Fig 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
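The meter/marker pair of Fig 7 can be sketched with a token bucket deciding whether a packet is in or out of profile. The token bucket is a common metering technique, not one mandated by the text, and the class name, rates and codepoints below are illustrative assumptions.

```python
# Illustrative sketch of a meter/marker pair from Fig 7: a token-bucket
# meter classifies packets as in or out of profile, and the marker re-marks
# out-of-profile packets with another codepoint.

class TokenBucketMeter:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # token fill rate, bytes/second
        self.burst = burst_bytes          # bucket depth
        self.tokens = burst_bytes
        self.last = 0.0

    def in_profile(self, size_bytes, now):
        elapsed = now - self.last
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True
        return False

def mark(pkt, meter, now, conform_dscp, exceed_dscp):
    """Marker: keep the conforming codepoint, re-mark excess traffic."""
    pkt["dscp"] = conform_dscp if meter.in_profile(pkt["size"], now) else exceed_dscp
    return pkt
```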

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic to conformance with the appropriate TCA, or whether that is the responsibility of the node at the other end of the boundary link. However, ingress traffic can't be assumed to conform to the TCA; policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore, a limited set of conditioning rules (i.e. some, but not all, of the TC components in Fig 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides which services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, as well as negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioral characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are wished to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized ones, there may exist locally defined PHBs in a network. These PHBs may be e.g. experimental, or they may serve a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig 6) is limited, and room is left for future usage, the codepoint mappings can be made freely by network administrators within independent networks. However, this may require re-marking at network boundaries.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impacts of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on e.g. prioritizing one over another. PHB specifications themselves give behavioral characteristics instead of implementation guidelines. This leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented later. As explained above, none of the PHBs is mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility; the Default PHB exists for that reason. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that doesn't conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or locally used PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig 6) of the IPv4 TOS field are widely used in existing networks' equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHBs, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig 8 Usage of DS-field in Class Selector PHB
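The eight Class Selector codepoints and their ordering can be written out directly; this sketch simply encodes the rule stated above that greater precedence bits mean higher priority.

```python
# Sketch: the eight Class Selector codepoints xxx000 and their relative
# priority, derived from the precedence bits as described above.

CLASS_SELECTOR_CODEPOINTS = [p << 3 for p in range(8)]  # 000000 .. 111000

def cs_priority(dscp):
    """Greater precedence bits (the xxx part) mean higher forwarding priority."""
    return dscp >> 3
```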

The motivation behind the AF PHB is the need for fixed-bandwidth lines, which especially companies use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets in each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

a) how much forwarding resources have been allocated to the AF class that the packet belongs to (packets in one class are forwarded independently from the others, within the service rate (bandwidth) that has been configured for the class),

b) the current load of the AF class, and

c) in case of congestion within the class, the drop precedence of the packet (packets with a higher drop precedence value are discarded preferentially).

Table 1 The four AF classes and the related three drop precedence values

                             Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)      001010    010010    011010    100010
Medium Drop Precedence (2)   001100    010100    011100    100100
High Drop Precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in a network's nodes.
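The codepoints in Table 1 follow a fixed bit layout: the class number occupies the first three bits and the drop precedence the next two, with the last bit zero. A small sketch reproducing the table (illustrative, not an API):

```python
# Sketch: deriving the Table 1 DSCPs for AFnm from the class number n
# (first three bits) and drop precedence m (next two bits, last bit zero).

def af_codepoint(n, m):
    """6-bit DSCP for AF class n (1-4) with drop precedence m (1-3)."""
    assert 1 <= n <= 4 and 1 <= m <= 3
    return (n << 3) | (m << 1)

def af_bits(n, m):
    """The codepoint as a 6-character bit string, as printed in Table 1."""
    return format(af_codepoint(n, m), "06b")
```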

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter on an end-to-end connection. Such services are e.g. a point-to-point connection or a virtual leased line carrying time-sensitive data: speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate in diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and for traffic aggregates rather than individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

Both examples have some things in common. First of all, policers need to be configured at the traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than competing best-effort traffic and thus experiences reduced latency. Quantitative performance measures cannot be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.
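The metering and re-marking described above can be sketched as a token bucket. This is an illustrative sketch with assumed parameters (burst size, class name `BBEMarker`), not the provider's actual implementation:

```python
class BBEMarker:
    """Token-bucket meter for the BBE service of Table 2 (sketch):
    traffic within the 1 Mbps profile keeps the AF13 mark, excess
    traffic is re-marked to AF11."""

    def __init__(self, rate_bps=1_000_000, burst_bits=100_000):
        self.rate = rate_bps        # profile rate (1 Mbps, from Table 2)
        self.depth = burst_bits     # bucket depth: assumed burst tolerance
        self.tokens = burst_bits
        self.last = 0.0

    def mark(self, now, packet_bits):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:   # in profile: keep AF13
            self.tokens -= packet_bits
            return "AF13"
        return "AF11"                    # excess traffic is re-marked

marker = BBEMarker()
print(marker.mark(0.0, 8000))  # first packet is within profile: AF13
```

A back-to-back burst exceeding the bucket depth sees its later packets re-marked AF11, which is exactly the "handling of excess traffic" column of Table 2.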

This is a quantitative service which emulates a traditional leased line. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications for this service.

Table 3. TCA specifications for Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps; excess traffic is discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C


need to be provisioned to carry up to 750 Kbps of traffic in case both leased lines cross the same router.
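The 750 Kbps figure follows from summing the TCA service rates over every core link the two leased-line paths share. A small sketch of that worst-case provisioning check (the function name and link topology are assumptions for illustration):

```python
def provisioned_rate(tcas, paths):
    """Worst-case EF load per core link: sum the service rates of all
    leased-line TCAs whose path crosses that link.

    `tcas` maps an egress point to its service rate in kbps;
    `paths` maps an egress point to the set of links on its path.
    """
    load = {}
    for egress, rate in tcas.items():
        for link in paths[egress]:
            load[link] = load.get(link, 0) + rate
    return load

# Table 3: 500 kbps to B and 250 kbps to C, sharing the first hop.
tcas = {"B": 500, "C": 250}
paths = {"B": {"A-core", "core-B"}, "C": {"A-core", "core-C"}}
print(provisioned_rate(tcas, paths))  # the shared "A-core" link carries 750 kbps
```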

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies focused on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. Their results showed that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Yeom and Reddy proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets so as to take TCP dynamics into account and to adapt fairness in sharing the bandwidth of a network's bottleneck link, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of the network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standard used for the marking [13]. A marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have drawbacks. First, since the marking module in per-flow marking gives each flow a fixed fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth: "idle" flows waste their shares while preventing "active" flows from increasing theirs. Second, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays


different roles in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The "Two Markers System" framework

First, TMS_I carries out two main roles. One is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I determines which window (iwnd or ownd) to reduce, according to whether the dropped packets were IN-marked or OUT-marked, and so TMS_I decreases TCP impacts in the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) maintains the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I can be not only a marker used to mark packets in order to


classify the service in the core of the network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of the network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where an increase or decrease of the marking rate represents the level of demand for resources. Therefore the key element of the fairness strategy is the marking rate (i.e., information about the demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) aggregated flows toward the total target rate of all the flows, based on the marking probabilities from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10 The design of the Two Markers System architecture

3.2 The crux of fairness strategy

Note that in this system the sources' target rates are different from the edge's target rates. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and Cj represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is its


reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of the network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability viewpoint of diffserv as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = Σ_{j=1}^{m} Cj, where j denotes a bottleneck link of the network. The excess bandwidth at the bottleneck links is then given by

    E = C - R = Σ_{j=1}^{m} Cj - Σ_{i=1}^{n} Ri    (1)

In the edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

    E = C - R = C1 - Σ_{i=1}^{n} Ri    (2)

The last case of service scope, tied to the receiver, however requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = Σ_{k=1}^{l} Σ_{i=1}^{n} Rik, where k, k = 1, 2, ..., l, represents the kth ingress point. The excess bandwidth is given by

    E = C - R = C1 - Σ_{k=1}^{l} Σ_{i=1}^{n} Rik    (3)


where the lth ingress point has to be aware of the reservation rates of the other l - 1 ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System can therefore be implemented under various SLA scopes, but for the reason above we assume in this thesis that the scope of a service covers the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

    Ti = Ri + Xmi * E = Ri + Xmi * (C - Σ_{i=1}^{n} Ri)    (4)

where Xmi is proportional to the probability of marking at a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. Xmi is developed as follows. First, the probability of marking for each flow i is the number of marked packets (Nmi) divided by the throughput (Thri), or equivalently the In-profile window divided by the congestion window:

    mi = Nmi / Thri = iwnd / cwnd    (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[mi] over all the flows at the edge of the network is

    E[mi] = (1/n) Σ_{i=1}^{n} mi = (1/n) Σ_{i=1}^{n} (iwnd/cwnd)_i = (1/n) Σ_{i=1}^{n} Markp,i    (6)

where Markp also represents the probability of marking in the TMS_I algorithm of Section 3.3. Finally, Xmi is obtained from the ratio between (5) and (6):

    Xmi = mi / (n * E[mi]) = mi / Σ_{i=1}^{n} mi    (7)

Xmi thus represents a relative value against the datum point E[mi]; the fraction varies from 0 to 1 depending on mi and E[mi]. Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
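Equations (4)-(7) can be exercised numerically. The sketch below, with made-up reservation and marking rates, distributes the excess bandwidth E in proportion to the normalised marking rates Xmi:

```python
def edge_target_rates(reservations, marking_rates, capacity):
    """Distribute the excess bandwidth E = C - sum(Ri) (eq. (1)) in
    proportion to the fairness component Xmi = mi / sum(mi) (eq. (7)),
    giving each flow the edge target rate Ti = Ri + Xmi * E (eq. (4))."""
    excess = capacity - sum(reservations)          # E
    total_marking = sum(marking_rates)             # n * E[mi], eq. (6)
    return [r_i + (m_i / total_marking) * excess   # eq. (4) with eq. (7)
            for r_i, m_i in zip(reservations, marking_rates)]

# Three flows reserving 1, 2 and 3 Mbps on a 10 Mbps bottleneck; the
# marking rates mi are illustrative values, not simulation output.
targets = edge_target_rates([1.0, 2.0, 3.0], [0.2, 0.3, 0.5], 10.0)
print(targets)  # the targets always sum to the full capacity C
```

Because the Xmi sum to 1, the targets always exhaust C exactly, which is the point of using the marking rate as the datum of the fairness strategy.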

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies differ from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate as high as the bandwidth the source wants to have guaranteed.



Fig 11 Program code 1 TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking, Markp(init)? It is determined from the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri) as follows:

    Markp(init) = 1 - Thr / Ri    (8)

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)          /* slow start */
            iwnd += Markp
        else                           /* congestion avoidance */
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        } else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


Here Markp(init) represents the initial marking rate that the user wants in order to realise its target rate.

The window sizes increase or decrease according to the measured throughput relative to the reservation rate. That is, TMS_I marks packets proportionally, according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate requires; otherwise it increases the ownd size, so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.
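The per-ACK bookkeeping of program code 1 can be transcribed as follows (a sketch under the same variable names; the function form and signature are ours):

```python
def on_ack(cwnd, markp, thr, r_i, issthresh, ossthresh):
    """One ACK step of program code 1. cwnd is split into an In-profile
    part (iwnd) and an Out-of-profile part (ownd); marking shifts toward
    iwnd while the measured throughput Thr is below the reservation Ri."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if thr < r_i:
        # Below the reservation: grow iwnd (slow start below the
        # threshold, congestion avoidance above it).
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:
        # Reservation met: hand the iwnd share back to ownd.
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd  # new cwnd and new marking probability
```

In slow start the two increments sum to one packet per ACK, split Markp against (1 - Markp); in congestion avoidance they sum to 1/cwnd, matching the note above.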

The else clause of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also addresses a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of the network when the connection is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I decreases the iwnd size appropriately according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being dropped immoderately at the core of the network. In fact, the TCP-source marking module (TMS_I) maintains the marking rate better than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion



situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size is kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can regulate the marking rate properly according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window is not decreased as often as with the established edge marker.

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd / 2
        ownd = ossthresh
    } else {
        cwnd = cwnd / 2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

    iwnd: In-profile window
    ownd: Out-of-profile window
    cwnd: congestion window
    Markp: the probability of marking
    Thr: throughput of a flow
    Ri: reservation rate of the ith flow
    Ti: target rate of the ith flow
    issthresh: threshold of iwnd
    ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion, and then the whole congestion window is halved. The modified TCP congestion algorithm therefore compensates for the loss rates caused by the conventional congestion algorithm, which has no such distinct mechanism for reducing the congestion window.
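The loss handling of program code 2 can be sketched likewise (the function name and return convention are ours):

```python
def on_triple_dup_ack(cwnd, markp, out_loss):
    """Loss handling of program code 2 (sketch): an OUT-marked loss only
    halves the Out-of-profile window (temporary congestion), while an
    IN-marked loss halves the whole congestion window (acute congestion)."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if out_loss:
        # Temporary congestion: shrink only the OUT-of-profile share.
        ossthresh = ownd / 2
        ownd = ossthresh
    else:
        # Acute congestion (IN-marked loss): halve the whole window and
        # re-split it according to the current marking probability.
        cwnd = cwnd / 2
        issthresh = markp * cwnd
        ossthresh = cwnd - issthresh
        iwnd, ownd = issthresh, ossthresh
    return iwnd + ownd  # the new congestion window
```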

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Markp), which is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of this datum point in the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that carries user information is able to alleviate the unfairness introduced within aggregated flows under per-aggregation marking.

3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to share the bottleneck bandwidth fairly. It should closely cooperate with the TCP-source marking module to accomplish this strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which is used in the fairness component as stated in Section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed from the sources.

We assume that an edge router receives information about the reservation rate of which each user wants to be assured, and contracts with the service provider (or bandwidth


broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall expression (4). As stated in Section 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the network is over-provisioned; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents the increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, namely the average marking rate E[mi], this means that the flow wants more bandwidth than the others in order to achieve its reservation rate; it is therefore marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to account for congestion situations, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the ownd (or iwnd) size at time N with the corresponding size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding that there is no congestion, it measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy. If the throughput does not reach the target rate, the marker increases the marking rate by α, called the fairness factor, computed as

    α = 1 - Thr_i / Ti

Otherwise it decreases the marking rate by α, to provide other flows with the chance of sharing the excess bandwidth.

Fig 13 Program code 3 TMS_II algorithm

if (ownd(N) != ownd(N-1)/2) {              /* no congestion */
    for (i = 1 to n) {
        Ti = Ri + Xmi * E
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
    }
} else if (iwnd(N) != iwnd(N-1)/2) {       /* congestion, but not acute */
    exit
} else {                                   /* iwnd(N) == iwnd(N-1)/2: acute congestion */
    for (i = 1 to n) {
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit
    }
}

    Mi: marking rate of the ith flow
    Ti: target rate of the ith flow
    Ri: reservation rate of the ith flow
    E: excess bandwidth
    ownd(N): newly measured OUT window
    ownd(N-1): OUT window measured at time N-1
    iwnd(N): newly measured IN window
    iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1, while the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else-if clause of program code 3 represents this instance: the edge's marking rate is equal to the marking probability of the sources (or marking information), and so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause of program code 3. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to reach its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor is

    β = 1 - Thr_i / Ri

Otherwise no re-marking operation is performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
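One step of the TMS_II re-marking decision (program code 3) can be sketched as follows; the string-valued `congestion` argument is our simplification of the ownd/iwnd halving test:

```python
def tms2_update(m_i, thr_i, t_i, r_i, congestion):
    """One TMS_II re-marking step per flow (sketch of program code 3).
    alpha = 1 - Thr_i/Ti is the fairness factor and beta = 1 - Thr_i/Ri
    the complement factor; `congestion` is 'none', 'soft' or 'acute'."""
    if congestion == "soft":
        return m_i                 # edge marking already matches the sources
    if congestion == "none":
        alpha = 1 - thr_i / t_i    # positive below the target, negative above
        return m_i + alpha * m_i   # one update covers both +alpha and -alpha
    # Acute congestion: fall back to the reservation rate Ri.
    beta = 1 - thr_i / r_i
    return m_i + beta * m_i if thr_i < r_i else m_i
```

With throughput below the target the marking rate grows by the fractional shortfall; above it, the same expression shrinks the rate, releasing excess bandwidth to other flows.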

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-born packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, the re-born OUT-marked packets (IN-OUT), which were IN at the sources, are dropped at the core during temporary congestion, the TCP sources, mistaking this for acute congestion, will halve the iwnd size; TMS_I then cannot alleviate TCP dynamics over the assured service well. On the other hand, if the re-born IN-marked packets (OUT-IN), sent as OUT by the sources, are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve ownd, which retards the removal of the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but as IN at the source, and an OUT-IN packet is the counterpart. We assume that the drop precedence the source decides must not change, and the strategy TMS_II elaborates has to respect this. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN, so as to claim the remaining resources. In this instance three drop precedences, IN, OUT-IN and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves the ownd size. Next, TMS_II turns packets IN-marked at the sources into OUT if the congestion window has not been halved and the flow has already accomplished its target rate; the drop precedences then consist of IN, IN-OUT and OUT. TMS_II lowers the drop precedence of OUT by one step, and so allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and a flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network; when an OUT-IN packet is dropped, TMS_I halves the ownd size. It thus tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT by one step.
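The three drop precedences and the dropping order of this section can be summarised in a small table (the dictionary form is ours; the color assignment follows the text):

```python
# Drop precedence keyed by (mark at source, mark at edge). The source's
# decision is preserved in the middle DP1 level when the edge re-marks.
DROP_PRECEDENCE = {
    ("IN", "IN"): "DP0",     # green: kept longest
    ("IN", "OUT"): "DP1",    # yellow: IN-OUT, demoted at the edge
    ("OUT", "IN"): "DP1",    # yellow: OUT-IN, promoted at the edge
    ("OUT", "OUT"): "DP2",   # red: discarded first
}
DROP_ORDER = ["DP2", "DP1", "DP0"]   # RIO drops OUT first, IN last
print(DROP_PRECEDENCE[("OUT", "IN")], DROP_ORDER[0])
```

Keeping re-marked packets at the middle precedence is what lets the source distinguish a loss of its own IN packets (acute congestion) from a loss of edge-re-marked ones.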

3.6 The issue of interaction between TCP and UDP flows

In this section we consider the interaction between TCP and UDP flows within the


same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class in an over-provisioned network, TCP flows will reduce their rates on packet drops while UDP flows will not change theirs and will take the entire excess bandwidth, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, making it impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile, and thus placed at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always drops UDP flows marked as OUT (red, DP2) preferentially, but lowers the drop precedence of TCP packets by the scaling factors relative to them. It thus tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows would otherwise dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous chapter. The simulation was done with the network simulator 2 (NS-2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with target rates of 3 Mbps, or is fixed below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT, excluding queuing delay, of each flow is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
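The parameters in Table 4 can be read as one standard RED drop curve per precedence. The sketch below shows how a RIO-style queue could map an average queue length to a drop probability for each precedence; a real RIO queue keeps a separate queue-length average per class, which is omitted here for brevity.

```python
# A sketch of a RIO-style per-precedence drop probability, using the RED
# parameters of Table 4. Simplified: a single shared average queue length
# is assumed instead of RIO's per-class averages.

RED_PARAMS = {                 # (min_th, max_th, Pmax) from Table 4
    "DP0": (100, 400, 0.02),   # IN (green)
    "DP1": (100, 400, 0.05),   # OUT-IN / IN-OUT (yellow)
    "DP2": (50, 400, 0.1),     # OUT (red)
}

def drop_probability(avg_qlen: float, dp: str) -> float:
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_qlen < min_th:
        return 0.0             # below the lower threshold: never drop
    if avg_qlen >= max_th:
        return 1.0             # above the upper threshold: always drop
    # linear ramp from 0 to Pmax between min_th and max_th
    return p_max * (avg_qlen - min_th) / (max_th - min_th)
```

At any average queue length the red (DP2) curve sits above the yellow (DP1) curve, which sits above the green (DP0) curve, giving the drop ordering OUT, then OUT-IN/IN-OUT, then IN described earlier.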

First of all, we investigate the simulation results with the Adequate-marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s), which terminate at t = 350 (s). First, the interval from t = 10 (s) through t = 110 (s) represents an over-provisioned network in which five flows compete for the excess bandwidth of 5 Mbps; marking is initially performed, but only temporarily, in order to share the excess bandwidth. Second, from t = 120 (s) to 210 (s), under soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps each; marking in this interval is carried out in earnest so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220 (s) to t = 350 (s): all the flows strive to achieve their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to congestion after flow8 through flow10 terminate, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which exhibits somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in the case of Figure 15 (a). The lower flows of graphs (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.

42

While all the flows in Figure 15 (b) mark each packet in accordance with the changing condition of the network, those in Figure 15 (a) do not.

Fig 15 Adequate-marking algorithm (a) Time Sliding Window Two Color Marker, (b) Two Markers System: each panel plots flow1 through flow10; the vertical axis represents the marking rates (Mbps) and the horizontal axis the simulation time (0-400 s). The condition of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, then serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic characteristics by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate and lags behind changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to an increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, more packets are marked; conversely, packet-marking is quickly turned off when TMS detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).
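A minimal sketch of the kind of scaling-factor update described above. The exact rule used by TMS_I is given in Program codes 1-2 of the thesis; the constants ALPHA and BETA below are placeholder values, not the thesis's parameters.

```python
# Illustrative marking-rate update using scale-up / scale-down factors in
# the spirit of the alpha and beta mentioned above. Constants are assumed.

ALPHA, BETA = 1.25, 0.8   # assumed scale-up / scale-down factors

def update_marking_rate(p_mark: float, achieved: float, target: float) -> float:
    """Raise the marking rate when the flow is below its target rate,
    lower it once the target is already satisfied."""
    if achieved < target:
        p_mark = min(1.0, p_mark * ALPHA)   # need more bandwidth: mark more
    else:
        p_mark = max(0.0, p_mark * BETA)    # target met: mark fewer packets
    return p_mark
```

This multiplicative form is one way to obtain the fast rise and fall of the marking curve seen for S1 in Figure 16 (b).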

Finally, we present the simulation results of the marking rate-based fairness algorithm described in subsection 3.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, namely an under-subscribed network. The experiment ran from t = 0 (s) to 500 (s). Figure 17 represents the extent to which the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. However, in Figure 17 (b) each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability whenever the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the marking probability of the sources, all the flows are able to adapt to network dynamics.

Fig 16 Marking packets through the scaling factor (a) Time Sliding Window Two Color Marker, (b) Two Markers System: each panel plots S1 (total throughput), S2 (total throughput), and S1 (marking rate); the vertical axis represents the throughput and marking rate of each source (Mbps, 0-8) and the horizontal axis the simulation time (0-400 s).


Fig 17 Throughput for the marking rate-based fairness strategy (a) Time Sliding Window Two Color Marker, (b) Two Markers System: each panel plots flow1 through flow10; the vertical axis represents the achieved bandwidth (Mbps, 0-7) and the horizontal axis the aggregate flow (1-6).

The Two Markers System showed improved performance for the several parameters we simulated. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of each aggregate flow as RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous chapter. We used a network with the configuration shown in Figure 18. In this experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for destinations 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined for destinations 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rates of the sources are fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows send traffic at the rate of 1 Mbps. We assume that the RTT, excluding queuing delay, of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we make use of the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy of increasing the number of TCP flows with target rates of 1 Mbps from sources 1 to 4, while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link turns from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σi Ti / C) × 100    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)
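As a numeric check of Eqs. (9) and (10), consider six flows with 1 Mbps target rates over the 8 Mbps and 3 Mbps bottlenecks used in this section:

```python
# Numeric check of Eqs. (9) and (10) for the Section 4.2 scenario:
# six 1 Mbps target rates over an 8 Mbps, then a 3 Mbps, bottleneck.

def link_utilisation(targets, capacity):
    return 100.0 * sum(targets) / capacity          # Eq. (9), in percent

def excess_bw_percent(throughput, target):
    return 100.0 * (throughput - target) / target   # Eq. (10), in percent

targets = [1.0] * 6                      # Mbps
print(link_utilisation(targets, 8.0))    # 75.0  -> over-provisioned
print(link_utilisation(targets, 3.0))    # 200.0 -> under-provisioned
print(excess_bw_percent(1.5, 1.0))       # 50.0  -> half a target rate above
```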

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.


Fig 19 Sharing excess bandwidth according to the link utilisation (a) Time Sliding Window Two Color Marker, (b) Two Markers System: each panel plots TCP (average) and UDP (average); the vertical axis represents the excess bandwidth as a percentage of the target rate (-100 % to 200 %) and the horizontal axis the link utilisation as a percentage (30-190 %).

However, since UDP sources send traffic into the network without any penalty, they achieve their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; therefore the flow is marked more and realises a higher rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is, when congestion occurs, to preferentially drop the UDP and TCP packets marked OUT, which are outside their contract, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
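The threshold test described above can be sketched as a comparison against the average marking rate E[mi]; the function and variable names below are illustrative, not taken from the thesis.

```python
# Sketch of the edge-embedded module's demand test described above: a flow
# whose marking rate exceeds the average E[m_i] is treated as demanding
# more bandwidth, so the edge marker lowers its drop precedence.
# Names are assumptions for exposition.

def wants_more_bandwidth(m_i, marking_rates):
    e_m = sum(marking_rates) / len(marking_rates)   # average rate E[m_i]
    return m_i > e_m

rates = [0.2, 0.3, 0.7]      # per-flow marking rates observed at the edge
print(wants_more_bandwidth(0.7, rates))   # True: above the 0.4 average
print(wants_more_bandwidth(0.2, rates))   # False
```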

Next, we present the simulation results of the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows the extent to which the average throughputs of all individual flows of the aggregated traffic realise their target rates in over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The RTT parameters follow the setting of subsection 4.1, and the average throughput of all the flows is again similar in both schemes. First we consider the results in the over-provisioned network. In Figure 20 (a), while the TCP flows merely try to accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3 Mbps, since the marking strategy treats them differently from UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probability in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that in the under-provisioned network the bandwidth is distributed fairly (about 0.5 Mbps per flow), unlike in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

Fig 20 Throughput according to network conditions (a) Time Sliding Window Two Color Marker, (b) Two Markers System: each panel plots TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases; the vertical axis represents the achieved bandwidth (Mbps, 0-2) against the target rate and the horizontal axis the aggregate flow (1-4).


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can apply within a variable SLA scope. Two important issues have arisen in this area, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize a user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows due to the overconsumption of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow for its respective target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the probability of marking according to the changes in the network, so it marked packets in proportion to the reservation rates to be guaranteed. It also notified TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and, when resources in the diffserv network are over-provisioned, allocated the bandwidth of the bottleneck link to flows in view of proportional fairness, the approach called the marking rate-based fairness strategy. Consequently it mitigated both the problem of unfairness in per-aggregation marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced the TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP had almost dominated and achieve its target rates. Next, in an under-provisioned network, although none of the flows could achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows according to the variable parameters of RIO.


REFERENCE

[1] W Feng, D Kandlur, D Saha and K Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc ICNP'98, Austin, TX, Oct (1998) 108-117

[2] W Feng, D Kandlur, S Saha and K Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc NOSSDAV'97, St Louis, MO, May (1997) 279-290

[3] S Floyd and V Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol 3, no 4, Aug (1995) 365-386

[4] J Heinanen, F Baker, W Weiss and J Wroclawski, Assured Forwarding PHB Group, RFC 2597, June (1999)

[5] K Nichols, S Blake, F Baker and D Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec (1998)

[6] S Blake, D Black, M Carlson, E Davies, Z Wang and W Weiss, An Architecture for Differentiated Services, RFC 2475, Dec (1998)

[7] V Jacobson and M Karels, Congestion Avoidance and Control, in Proc SIGCOMM'88, Stanford, CA, Aug (1988) 314-329

[8] I Yeom and A L N Reddy, Realizing Throughput Guarantees in a Differentiated Service Network, in Proc ICMCS'99, Florence, Italy, June (1999) 372-376

[9] N Seddigh, B Nandy, P Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc GLOBECOM'99, Rio De Janeiro, Dec (1999)

[10] C Gbaguidi, H J Einsiedler, P Hurley, W Almesberger and J Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech Report SSC1998020, http://sscwww.epfl.ch, May (1998)

[11] J Ibanez, K Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug (1998)

[12] S H Lee, S J Seok, S J Lee, S K Youm and C H Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc QofIS'01, LNCS 2156, Coimbra, Sept (2001) 207-221


[13] I B H de A Alves, J F de Rezende, L F M de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc GLOBECOM'00, vol 1, San Francisco, Nov (2000) 445-449

[14] N Seddigh, B Nandy, P Pieda, Study of TCP and UDP Interaction for AF PHB, IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug (1999)

[15] S H Lee, S J Seok, S J Lee and C H Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc MMNS'01, LNCS 2216, Chicago, Oct (2001) 198-203

[16] A Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc HPSR'01, Dallas, May (2001) 190-196

[17] Y Bernet, J Binder, S Blake, M Carlson, B E Carpenter, S Keshav, E Davies, B Ohlman, Z Wang, W Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb (1999)

[18] S Blake, D Black, M Carlson, E Davies, Z Wang and W Weiss, An Architecture for Differentiated Services, RFC 2475, Dec (1998)

[19] M Vanhala, Differentiated Services architecture, S-38.128 Teletekniikan erikoistyö (special assignment in telecommunications, in Finnish) (1999)

[20] V Jacobson, K Nichols and K Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov (1998)

[21] V Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990)

[22] V Paxson, End-to-End Internet Packet Dynamics, in Proc ACM SIGCOMM, Sept (1997)

[23] M Mathis, J Mahdavi, S Floyd and A Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct (1996)

[24] S Floyd and T Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb (1999)

[25] L S Brakmo, S W O'Malley and L L Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proceedings of ACM SIGCOMM, Oct (1994) 24-35

[26] S Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, Vol 24, No 5, Oct (1994) 10-23

[27] S Floyd and V Jacobson, Random Early Detection Gateways for Congestion Avoidance, IEEE/ACM Transactions on Networking, Vol 1, No 4, Aug (1993) 397-413

[28] A Feroz, A Rao and S Kalyanaraman, TCP-friendly Traffic Marker for IP Differentiated Services, in Proc IWQoS'2000, Pittsburgh, PA, June (2000) 35-48



TABLE OF CONTENTS

ABSTRACT i
DEDICATION iii
ACKNOWLEDGEMENT iv

LIST OF FIGURES viii
LIST OF TABLES ix

ABBREVIATIONS x

CHAPTER 1 INTRODUCTION 1

CHAPTER 2 BACKGROUND 5

2.1 TCP and Active Queue Management 5

2.2 Differentiated Services Architecture 10

2.2.1 Concepts in diffserv 12

2.2.2 Diffserv network 13

2.2.3 Service Realizations 14

2.3 Related Work 24

CHAPTER 3 TWO MARKERS SYSTEM 25

3.1 The framework of Two Markers System 25

3.2 The crux of fairness strategy 27

3.3 Adequate-Marking strategy: Two Markers System-I algorithm 30

3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm 34

3.5 The issue of re-marking 37

3.6 The issue of interaction between TCP and UDP flows 38

CHAPTER 4 SIMULATION AND ANALYSIS 40

4.1 TCP dynamics over Assured Services in a diffserv network 40

4.2 Interaction of TCP and UDP flows 46


CHAPTER 5 CONCLUSION 51

REFERENCE 53


LIST OF FIGURES

Fig 1 Throughput of Single TCP Transfer 6

Fig 2 TCP/IP Headers 8

Fig 3 The marking/dropping behavior of RED 10

Fig 4 Mapping of DSCP-marked streams to PHBs and aggregate routing 11

Fig 5 A non-DS-capable and two DS-capable domains connected to a backbone network 13

Fig 6 Type of Service field in IPv4 15

Fig 7 A traffic conditioner and its functional components 17

Fig 8 Usage of DS-field in Class Selector PHB 20

Fig 9 The "Two Markers System" framework 26

Fig 10 The design of the Two Markers System architecture 27

Fig 11 Program code 1: TMS_I algorithm (cwnd open) 31

Fig 12 Program code 2: TMS_I algorithm (cwnd close) 33

Fig 13 Program code 3: TMS_II algorithm 36

Fig 14 Simulation topology I using the Two Markers System 40

Fig 15 Adequate-marking algorithm 42

Fig 16 Marking Packet through scaling factor 44

Fig 17 Throughput for marking rate-based fairness strategy 45

Fig 18 Simulation topology II using the Two Markers System 47

Fig 19 Sharing excess bandwidth according to the link utilization 48

Fig 20 Throughput according to network conditions 50


LIST OF TABLES

Table 1 The four AF classes and the related three drop precedence values 21

Table 2 TCA specification for BBE service 23

Table 3 TCA specifications for Leased Line service 23

Table 4 RED parameter settings in the TMS 41

Table 5 The average throughput of aggregate flow as increment of RTT 46


ABBREVIATIONS

AF: Assured Forwarding, a PHB group defined in [4]
ATM: Asynchronous Transfer Mode
BA: Behaviour Aggregate, separate traffic flows receiving the same treatment in a router
DS: diffserv, Differentiated Services
DSCP: Differentiated Services Codepoint, the value of a field in IP packets in DS-capable domains, based on which the forwarding behaviour is applied
EF: Expedited Forwarding, a PHB defined in [20]
IETF: Internet Engineering Task Force
IP: Internet Protocol
IPv4: Internet Protocol version 4, the current widely used IP version
IPv6: Internet Protocol version 6, the developed future version of IP
ISP: Internet Service Provider
MF: Multi-Field
MPLS: Multiprotocol Label Switching
NE: Network Element, e.g. a router
PHB: Per-Hop Behaviour, the forwarding behaviour applied to each IP packet in routers in DS-capable domains
QoS: Quality of Service
RFC: Request For Comments
RSVP: Resource Reservation Protocol
SLA: Service Level Agreement, an agreement on a service between a provider and a customer
SNMP: Simple Network Management Protocol
TC: Traffic Conditioner
TCA: Traffic Conditioning Agreement, the part of an SLA that describes the agreed service in detail from the technical point of view
TCP: Transmission Control Protocol
TOS: Type of Service
UDP: User Datagram Protocol
VPN: Virtual Private Network


CHAPTER 1 INTRODUCTION

The current Internet is based on the best-effort service model. In this model the service provider allocates bandwidth among all of the instantaneous customers as best it can; that is, all user packets compete equally for network resources. The service provider also attempts to serve all of them without making any explicit commitment as to rate or any other service quality. The best-effort model has been successful until now because the majority of the traffic on the Internet is TCP-based: the TCP end-to-end congestion control mechanism forces traffic sources to back off whenever congestion is detected in the network [8]. However, such dependence on the end systems' interaction is increasingly becoming unrealistic. Given the current best-effort model with FIFO queuing inside the network, it is relatively easy for non-adaptive sources to gain greater shares of network bandwidth and thereby starve other, well-behaved, TCP sources. A greedy source, for example, may simply continue to send at the same rate when faced with congestion, while other TCP sources back off. The best-effort model is also inadequate for applications such as real-time audio and video that require explicit bandwidth and bounded delay guarantees. Moreover, the best-effort model treats all packets equally once they have been injected into the network. Thus it is difficult for Internet Service Providers (ISPs) to provide services that are commensurate with the expectations of consumers who are willing to pay more for a better class of service.

Over the last decade a number of efforts have been made to provide Quality of Service in the Internet. Much of the work has focused on the end-to-end per-session reservation approach, where an application makes an end-to-end reservation before starting a session. While it makes sense to set up a reservation for a long-lasting session, this approach is not suitable for highly transaction-oriented applications such as browsing the World Wide Web (WWW): the overheads and latency involved in setting up a reservation for each session become prohibitively high. There are also concerns regarding the scalability of providing fine-grained guarantees at the per-session level in the core of the Internet.

The movement to provide Quality-of-Service (QoS) assurance in the Internet has encouraged research in differentiated services (diffserv or DS) as the number of users keeps growing. Diffserv is a suitable solution for Quality of Service provisioning because it scales well with an increasing number of network users and does not alter the current Internet paradigm much [11]. In contrast to the per-session reservation approach, the differentiated services scheme allows service providers to offer different levels of service to a few classes of aggregated traffic flows. An ISP, for example, may offer two levels of service: a premium service for customers who are willing to pay more, and best-effort service at a lower price. The differentiated services approach attempts to allocate bandwidth for aggregated flows without per-session signaling or maintaining per-session state in the network core, and to offer better services to users willing to pay more under long-term service contracts.

(The journal model is Lecture Notes in Computer Science.)

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment that should be supplied to the packet inside the domain. However, because of the limited number of bits available for use in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks called per-hop behaviors (PHBs) that are used by routers to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) [20] and the Assured Forwarding (AF) [4] PHBs. The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. In order to ensure that every packet marked EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, consists of four independently forwarded classes that have two or three drop-preference levels. The idea behind AF is to preferentially drop best-effort packets and packets that are outside of their contract when congestion occurs.

In this thesis we consider a form of better-than-best-effort service called the "Assured Service". The Assured Service follows expected capacity profiles that are statistically provisioned; packets are treated preferentially according to the dropping probability applied to the best-effort queue. The assurance of service comes from the expectation that the traffic is unlikely to be dropped as long as it stays within the negotiated capacity profile. The building blocks of this service include a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in-profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise the packet is marked OUT (out-of-profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or token bucket control module. A differentiated dropping algorithm such as RIO (Random Early Detection with IN/OUT) is provided in the core routers of the network. In particular, the OUT packets are preferentially dropped upon evidence of congestion at the bottleneck, before the IN packets: only after dropping all incoming OUT packets are IN packets discarded. With this dropping policy the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.
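The edge behaviour described above can be sketched as follows. This is a simplified two-color marker built around the sliding-window rate estimate of Clark and Fang's TSW; the window length and the probabilistic demotion rule are our simplifications, not the exact TSW2CM used later in the thesis.

```python
# Simplified sketch of a Time Sliding Window (TSW) rate estimator with a
# two-color (IN/OUT) marking decision. Parameter choices are assumptions.

import random

class TSWMarker:
    def __init__(self, target_rate, win_length=1.0):
        self.target = target_rate    # contracted profile rate (bytes/s)
        self.win = win_length        # averaging window length (s)
        self.avg_rate = 0.0          # sliding-window rate estimate
        self.t_front = 0.0           # time of the previous arrival

    def mark(self, now, pkt_size):
        # update the time-sliding-window average of the sending rate
        self.avg_rate = (self.avg_rate * self.win + pkt_size) / (
            self.win + now - self.t_front)
        self.t_front = now
        if self.avg_rate <= self.target:
            return "IN"              # within the negotiated profile
        # above profile: demote with probability proportional to the excess
        p_out = (self.avg_rate - self.target) / self.avg_rate
        return "OUT" if random.random() < p_out else "IN"
```

A core RIO queue then needs only the one-bit IN/OUT marking to apply its two drop curves, which is what keeps the per-flow state out of the network core.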

In [11], the authors showed that the use of a simple token bucket marker for the above Assured Service is not sufficient for TCP to realise the minimum assured rate. They attributed the cause of such behaviour to TCP's complex response, primarily to packet losses. TCP reacts to congestion by halving the congestion window (cwnd), and increases the window additively when packets are delivered successfully. Multiplicative decrease (halving the congestion window) is required to avoid congestion collapse, and TCP treats a packet drop as an indication of congestion [7]. However, in the diffserv network this additive-increase/multiplicative-decrease behaviour makes it hard to protect the reservation rate: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may in the end never reach its reservation rate. The overall performance was therefore affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees. This is known as the problem of TCP dynamics over the Assured Service.

In this thesis we take into consideration the problem of TCP dynamics and the differentiated drop policies of the network in realizing the reserved throughputs, and propose a newly modified scheme, called the Two Markers System, for improving the realisation of target rates in a diffserv network. In addition, the issue of fairness among users is important in this diffserv context: since we expect that the differentiation in service will be based on some kind of pricing mechanism, it is necessary that the resource allocation be commensurate with how the service is priced. In this thesis we also discuss how fairly the proposed scheme allocates bandwidth to flows.

One of the crucial problems identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows [9, 11, 14]. The interaction between TCP and UDP in the AF PHB has become an important fairness issue in the diffserv context, because TCP flows are penalised more than UDP flows with respect to service degradation and fail to achieve their target rates. We therefore consider the issue of interaction between TCP and UDP flows by regulating their drop precedences.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP influences over the Assured Service, considers the issue of fairness between TCP and UDP on a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents results of the above algorithms in simulated environments in which these traffic types coexist, and analyses the simulation results. Chapter 5 concludes our work.


CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases, this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point in time, the rate at which TCP sends its packets eventually exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow-start and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants of TCP. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion-avoidance phase. In this phase, the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above network capacity, packets are eventually dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred, immediately reduces its sending rate in half by halving its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.

When congestion is severe enough that packet loss cannot be inferred in such a manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm. In particular, if the sender continually sends the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.

Fig. 1. Throughput of a single TCP transfer

Figure 1 shows a graphical picture of how TCP slow-start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion, the TCP sender enters the congestion-avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP thus oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.
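The sawtooth of Fig. 1 can be reproduced with a few lines of code: slow-start doubles cwnd every round-trip time up to ssthresh, congestion avoidance adds one segment per round-trip time, and a loss (modeled here simply as cwnd exceeding a fixed capacity) halves the window. The parameter values are arbitrary, chosen only to show the shape of the curve; this is an illustration, not a faithful TCP model.

```python
def tcp_cwnd_trace(rtts, ssthresh=8, capacity=16):
    """Return the congestion window (in segments) sampled once per RTT."""
    cwnd, trace = 1, []
    for _ in range(rtts):
        trace.append(cwnd)
        if cwnd > capacity:                 # loss detected (duplicate ACKs)
            ssthresh = max(cwnd // 2, 2)    # remember half the window
            cwnd = ssthresh                 # fast recovery: resume at w/2
        elif cwnd < ssthresh:
            cwnd *= 2                       # slow-start: exponential growth
        else:
            cwnd += 1                       # congestion avoidance: linear growth
    return trace
```

Running the sketch for a few dozen RTTs shows the initial doubling followed by the steady-state oscillation between w and w/2 described above.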

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One modification that has been proposed is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of segments it is missing. By specifying this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger TCP sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and the sender should thus be allowed to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The New Reno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern, where the congestion window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion-avoidance phase so that it only performs its linear increase when the network is not congested. In Vegas, if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential for improving loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections that are multiplexed across the link.

Fig. 2. TCP/IP headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem, since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-Capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the Congestion Experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver, in order to signal the receiver that it has in fact reduced its sending rate.
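The router-side half of this logic is small enough to sketch. Assuming the two-bit ECN field occupies the two low-order bits of the TOS/DS octet, with ECT and CE as separate bits as in the original ECN proposal [26], a congested router would behave roughly as follows (the helper name is ours):

```python
ECT = 0b10   # ECN-Capable Transport bit (assumed position in the octet)
CE  = 0b01   # Congestion Experienced bit

def congest(tos_byte):
    """What a congested router does to one packet: (new_tos, dropped)."""
    if tos_byte & ECT:                  # flow declared itself ECN-capable:
        return tos_byte | CE, False     # mark CE instead of dropping
    return tos_byte, True               # not ECN-capable: drop the packet
```

The receiver would then echo the CE mark back in the ECN-echo TCP flag, and the sender would react as if a loss had occurred, setting CWR on its next segment.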

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, in order to allow them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses them to determine whether or not to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and increases linearly to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.

Fig. 3. The marking/dropping behavior of RED
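The piecewise-linear profile of Fig. 3, together with the EWMA queue estimate it operates on, can be written down directly. Parameter names follow the text (minth, maxth, maxp); the default EWMA weight is an illustrative value, not prescribed by the text.

```python
def ewma(avg_q, q, weight=0.002):
    """Exponentially weighted moving average of the instantaneous queue length."""
    return (1.0 - weight) * avg_q + weight * q

def red_prob(avg_q, minth, maxth, maxp):
    """RED marking/dropping probability as a function of the average queue length."""
    if avg_q <= minth:
        return 0.0                      # below minth: never mark/drop
    if avg_q >= maxth:
        return 1.0                      # above maxth: mark/drop every packet
    # linear ramp from 0 at minth up to maxp at maxth
    return maxp * (avg_q - minth) / (maxth - minth)
```

For example, with minth = 10, maxth = 30 and maxp = 0.1, an average queue of 20 packets sits halfway up the ramp and yields a probability of 0.05.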

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour wished for. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs), and on a per-packet basis to aggregates of traffic streams in the nodes of DS-capable networks. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked, by a traffic conditioner, with a DSCP that specifies a service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied; a leased-line emulation, for example. Different types of service will not be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig. 4. Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig. 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The Service Level Agreements (SLAs) formed specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules which are applied to the traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. By traffic conditioner is meant an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs do not solely define services, but they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes; the former handle traffic entering the domain, the latter handle traffic leaving the domain. In practice the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig. 5 shows three domains which are interconnected with a backbone network. Two of the domains are DS-capable, i.e. the interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service provisioning approaches.

Fig. 5. A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig. 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early employment of diffserv. As some host, say H5 in DS domain 1 in Fig. 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the case where the network is non-DS-capable.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value from the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The presented backbone DS router always polices incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported in the downstream domain.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed between inter-domain boundaries, and on how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily presented and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels: on the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (a policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as the providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig. 6. The Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig. 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another; their usage and the queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling considering Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally supposed to be interpreted is called a legacy node. The value of the unused bits is ignored when employing diffserv.

Bits 0-5 of the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
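The overlay of the DS field on the old TOS octet can be illustrated with two one-line helpers. Note that bit 0 is the most significant bit in network notation, so the six DSCP bits are the top of the octet and the old precedence field is its top three bits; the helper names are our own.

```python
def dscp(tos_octet):
    """DS codepoint: bits 0-5 of the octet (the six most significant bits)."""
    return (tos_octet >> 2) & 0x3F

def precedence(tos_octet):
    """Old IPv4 precedence: bits 0-2 of the octet (the three most significant bits)."""
    return (tos_octet >> 5) & 0x07
```

For instance, the octet 0xB8 carries the DSCP 101110 (decimal 46, the recommended EF codepoint) and a legacy precedence of 5.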

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but this is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e. service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig. 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, the DS field and the protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
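The distinction between the two classifier types can be sketched in a few lines: the BA classifier keys on the DSCP alone, while the MF classifier matches on several header fields. Packets are modeled as plain dicts and the field names are illustrative choices of ours, not from any specification.

```python
def ba_classify(pkt, table):
    """Behaviour Aggregate classifier: select an aggregate by DSCP only."""
    return table.get(pkt["dscp"], "default")

def mf_classify(pkt, rules):
    """Multi-Field classifier: first rule whose every field matches wins."""
    for match, aggregate in rules:
        if all(pkt.get(field) == value for field, value in match.items()):
            return aggregate
    return "default"
```

A BA classifier is cheap enough for core routers, where only the codepoint matters; the MF classifier's per-field matching is the kind of work pushed to boundary nodes.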

Fig 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
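How a meter and a dropper compose can be sketched as a single policing function: a token-bucket meter produces an in/out-of-profile verdict per packet, and the dropper discards the out-of-profile ones. This is an illustrative composition under assumed names; a full conditioner would also include the marker and shaper stages of Fig. 7.

```python
def police(packets, rate_bytes_per_s, depth_bytes):
    """Meter + dropper: forward in-profile packets, drop the rest.

    packets is a list of (arrival_time_seconds, size_bytes) tuples in time order.
    """
    tokens, last, passed = depth_bytes, 0.0, []
    for t, size in packets:
        # meter: refill the token bucket for the elapsed interval
        tokens = min(depth_bytes, tokens + (t - last) * rate_bytes_per_s)
        last = t
        if tokens >= size:              # in profile
            tokens -= size
            passed.append((t, size))    # forward the packet unchanged
        # else: dropper discards the out-of-profile packet
    return passed
```

Replacing the silent discard with a re-mark would turn this dropper into a marker; replacing it with a queue that releases packets as tokens accumulate would turn it into a shaper.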

Traffic is conditioned on either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic to conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic can't be assumed to conform to the TCA; policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited number of conditioning rules (i.e. some, but not all, of the TC components in Fig. 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers have the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on their similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above; a single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are wished to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized ones, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may use a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig. 6) is limited, and room is left for future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impacts of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. The PHB specifications themselves give behavioural characteristics instead of implementation guidelines; this leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and traffic conditioners. Special protocols, a control entity and administrative actions may be needed for interaction with the traffic conditioners; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide a wider ground for services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented in the previous paragraphs. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when they are implemented, the specifications must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility; the Default PHB exists for that purpose. It provides the best-effort behaviour that current routers perform, and it is the PHB used for packets for which no other agreement exists. The Default PHB (i.e., best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that doesn't conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or locally used PHB, it should be mapped to the Default PHB.

The mere Default PHB is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS-field are widely used in existing networks' equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig. 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.
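The eight Class Selector codepoints can be enumerated directly from the xxx000 pattern; a small sketch (the helper name is our own):

```python
def class_selector_codepoints():
    """Return the eight Class Selector codepoints 'xxx000' as 6-bit strings.

    The three leading bits mirror the IPv4 precedence field; a greater
    value means a higher forwarding priority.
    """
    return [format(p << 3, "06b") for p in range(8)]

# The set spans 000000 (Default PHB, routine precedence) up to 111000.
print(class_selector_codepoints())
```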

Fig. 8. Usage of the DS-field in the Class Selector PHB

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies, in particular, use extensively. In a typical application a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values; the customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(i) How much forwarding resources have been allocated to the AF class that the packet belongs to. Packets in one class are forwarded independently from the others and within the service rate (bandwidth) that has been configured for the class.

(ii) What the current load of the AF class is.

(iii) In case of congestion within the class, what the drop precedence of the packet is. Packets with a higher drop precedence value are discarded preferentially.

Table 1. The four AF classes and the related three drop precedence values

                               Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)        001010    010010    011010    100010
Medium Drop Precedence (2)     001100    010100    011100    100100
High Drop Precedence (3)       001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (currently 1-4) and 'm' the drop precedence value (currently 1-3). Implementation of the AF PHB group sets certain requirements for packet queuing in a network's nodes.
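The AFnm codepoints in Table 1 follow a simple bit layout: the class number n occupies the three high-order bits and the drop precedence m the next two, with the last bit zero. A sketch (the function name is our own) that reproduces the table:

```python
def af_codepoint(n: int, m: int) -> str:
    """Return the 6-bit AF codepoint for class n (1-4), drop precedence m (1-3)."""
    if not (1 <= n <= 4 and 1 <= m <= 3):
        raise ValueError("AF classes are 1-4, drop precedences 1-3")
    # Class in bits 5-3, drop precedence in bits 2-1, bit 0 is zero.
    return format((n << 3) | (m << 1), "06b")

# Reproduces Table 1, e.g. AF11 -> 001010 and AF43 -> 100110.
print(af_codepoint(1, 1), af_codepoint(4, 3))
```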

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter on an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video, or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the task of the traffic conditioners at the network boundary to bound the rates of traffic aggregates such that at every transit node the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator must be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

Some things are common to both examples. First of all, policers need to be configured at traffic ingress points. Secondly, the used PHBs need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus sees reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for BBE service

Codepoint    Service rate    Applicable egress IF    Handling of excess traffic
AF13 mark    1 Mbps          Any                     Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritizing of AF13 over AF11 is done, is up to the provider.

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3. TCA specifications for Leased Line service

Codepoint    Service rate    Applicable egress IF    Handling of excess traffic
EF mark      500 Kbps        Egress point B          Discard
EF mark      250 Kbps        Egress point C          Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both of the egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.
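The 750 Kbps figure is simply the sum of the two EF rates that may share a router; a sketch of that provisioning arithmetic (all names are our own):

```python
# TCAs for ingress point A, from Table 3: (egress point, EF rate in Kbps).
tcas = [("B", 500), ("C", 250)]

def shared_link_provisioning(tcas, shared_egresses):
    """Worst-case EF capacity (Kbps) a router needs if it lies on the
    paths toward every egress point in `shared_egresses`."""
    return sum(rate for egress, rate in tcas if egress in shared_egresses)

# A router carrying both leased lines must be provisioned for 750 Kbps.
print(shared_link_provisioning(tcas, {"B", "C"}))
```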

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies focused on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], by means of simulation work, Ibanez and Nichols showed that TCP didn't realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation work, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets, in order to take TCP dynamics into account and adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth; in this case "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays different roles in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9. The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the summation of iwnd and ownd. Whenever congestion occurs in the network, TMS_I selects the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked, and so TMS_I decreases TCP impacts on the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) maintains the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of the network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of the network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of the network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore the key point of the fairness strategy is the marking rate (i.e., the information on a flow's demand for bandwidth) of traffic flows from users. TMS_II re-marks (or marks) aggregated flows toward the total target rate of all the flows, based on the probability of marking at the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig. 10. The design of the Two Markers System architecture

3.2 The crux of fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let R_i denote the reservation rate of a flow i, i = 1, 2, ..., n, and C_j represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is a reservation rate (R_i), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (T_i) at the edge of the network is the reservation rate (R_i) plus a variable value, namely the fairness component (X_mi) of the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = Σ_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at this bottleneck link is then given by

    E = C - R = Σ_{j=1}^{m} C_j - Σ_{i=1}^{n} R_i                        (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

    E = C - R = C_1 - Σ_{i=1}^{n} R_i                                    (2)

The last case of service scope, tied to the receiver, however requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. The reservation-rate information received at an ingress point thus becomes R = Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik, where k, k = 1, 2, ..., l, represents the k-th ingress point. The excess bandwidth is given by

    E = C - R = C_1 - Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik                       (3)


where the l-th ingress point has to be aware of the reservation rates of as many as l-1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System could therefore be implemented under various SLA scopes, but, for the reason above, we assume that the scope of a service in this paper covers the first three cases and excludes the last. A flow's target rate at the edge of the network is given by
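A numerical sketch of the excess-bandwidth computation in (1); the capacities and reservation rates below are illustrative values of our own, not from the text:

```python
# Excess bandwidth per equation (1): E = sum_j C_j - sum_i R_i.
C = [10, 5]          # capacities of the m = 2 bottleneck links (Mbps, assumed)
R = [2, 3, 4]        # reservation rates of the n = 3 flows (Mbps, assumed)

E = sum(C) - sum(R)  # bandwidth left over for the fairness component
print(E)
```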

    T_i = R_i + X_mi × E = R_i + X_mi (C - Σ_i R_i)                      (4)

where X_mi is proportional to the probability of marking at a source; that is, it is determined by the fraction between the marking rate of each flow and the average marking rate. The development of X_mi is as follows. First, the probability of marking for each flow i is the number of marked packets (N_mi) divided by the throughput (Thr_i), or the In-profile window divided by the congestion window:

    m_i = N_mi / Thr_i = iwnd / cwnd                                     (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] of all the flows at the edge of the network is

    E[m_i] = (1/n) Σ_{i=1}^{n} m_i = (1/n) Σ_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1}^{n} Mark_p,i    (6)

where Mark_p also represents the probability of marking in the TMS_I algorithm of Section 3.3. Finally, X_mi is developed by using the fraction between (5) and (6); that is, as given in (7), X_mi represents a relative value against the datum point E[m_i]. The fraction X_mi varies from 0 to 1 depending upon m_i and E[m_i].

Consequently, this value stands for the excess bandwidth that each flow requires in order to complete its target rate.
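Equations (4)-(7) can be exercised end to end; a sketch with assumed per-flow marking rates and reservation rates (all numbers are our own):

```python
# Fairness component and edge target rates per equations (4)-(7).
m = [0.8, 0.4, 0.3]    # assumed marking probabilities m_i = iwnd/cwnd per flow
R = [2.0, 1.0, 1.0]    # assumed reservation rates (Mbps)
C = 10.0               # assumed bottleneck capacity (Mbps)

n = len(m)
E_m = sum(m) / n                              # average marking rate, eq. (6)
X = [mi / (n * E_m) for mi in m]              # fairness component, eq. (7)
E = C - sum(R)                                # excess bandwidth, eq. (2) with m = 1
T = [Ri + Xi * E for Ri, Xi in zip(R, X)]     # edge target rates, eq. (4)

# Since the X_mi sum to 1, the target rates exactly consume the capacity C.
print(round(sum(X), 10), round(sum(T), 10))
```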

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) consists of two windows in this algorithm [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate at just the bandwidth the source wants guaranteed.

    X_mi = m_i / (n · E[m_i]) = m_i / Σ_{i=1}^{n} m_i                    (7)


Fig. 11. Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking, Mark_p(init)? It is generally determined by the fraction between the measured throughput (Thr) and the target throughput (T_i = R_i), as given in (8) below.

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {                     /* below the reservation rate */
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {                            /* reservation rate achieved */
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        } else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd
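A runnable sketch of the listing above (Python; the names follow the listing, and the step sizes are our reading of the original, not the author's implementation):

```python
def on_ack(markp, cwnd, thr, ri, issthresh, ossthresh):
    """One ACK-time window update transcribed from program code 1.

    Returns the new (cwnd, Markp)."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if thr < ri:
        # Below the reservation rate: grow the IN-window share.
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:
        # Reservation met: shrink iwnd, let ownd probe for excess bandwidth.
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd

# In the under-target branch the window grows by one segment per ACK while
# the marking rate Markp = iwnd/cwnd is preserved.
print(on_ack(0.5, 10, thr=1.0, ri=2.0, issthresh=20, ossthresh=20))
```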


    Mark_p(init) = 1 - Thr / R_i                                         (8)

where Mark_p(init) represents the initial rate the user wants guaranteed in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I marks packets proportionally, according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput doesn't reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate requires; otherwise it increases the ownd size, so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the single window of the traditional congestion algorithm: the summation of iwnd and ownd is cwnd, so an increment of iwnd leads to a decrement of ownd to the same degree. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of the network when the connection is congested. If the throughput measured at a source, for instance, exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate better than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source doesn't achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can regulate the marking rate properly according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.

Fig. 12. Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {       /* temporary congestion */
        ossthresh = ownd/2
        ownd = ossthresh
    } else {                            /* IN-marked packet loss: acute congestion */
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

    iwnd       In-profile window
    ownd       Out-of-profile window
    cwnd       congestion window
    Markp      the probability of marking
    Thr        throughput of a flow
    Ri         reservation rate of the i-th flow
    Ti         target rate of the i-th flow
    issthresh  threshold of iwnd
    ossthresh  threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion in the network, and then the congestion window is halved. The modified TCP congestion algorithm therefore compensates for the loss rates caused by the conventional congestion algorithm, which has no distinctive mechanism for reducing the congestion window.
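A runnable sketch of this two-case loss response (Python; a transcription of the listing with assumed starting values, not the author's implementation):

```python
def on_triple_dupack(markp, cwnd, in_packet_lost):
    """Loss response of program code 2: halve only ownd on an OUT loss
    (temporary congestion), halve the whole cwnd on an IN loss (acute).
    Returns the resulting total window size."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if not in_packet_lost:            # OUT-marked packet lost
        ossthresh = ownd / 2
        ownd = ossthresh
        return iwnd + ownd
    return cwnd / 2                   # IN-marked packet lost

# An OUT loss trims only the out-of-profile share; an IN loss halves everything.
print(on_triple_dupack(0.5, 20, False), on_triple_dupack(0.5, 20, True))
```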

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) that is needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of this datum point by the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that includes the user's information is able to alleviate the problem of unfairness within aggregated flows under per-aggregation marking.

3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources, in order to fairly share the bottleneck bandwidth. In this case it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which is used in the fairness component as stated in Section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represent the target rates each flow wants guaranteed from the sources.

We assume that an edge router receives information on the reservation rate of which each user wants to be assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall expression (4). As stated in Section 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking at the sources if the network is over-provisioned; otherwise, it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, namely the average marking rate E[m_i], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate; it is therefore marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregated flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to detect a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network, as follows: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N isn't equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After the decision on congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (T_i), viz. the crux of the fairness strategy, or not. If the throughput doesn't reach the target rate, the marker increases the marking rate by α, called the fairness factor, computed in this case as α = 1 - Thr_i / T_i. Otherwise it decreases the marking rate by α, to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm

    if (ownd(N) != ownd(N-1)/2) {           /* no congestion */
        for (i = 1 to n) {
            Ti = Ri + (Xmi * E)
            if (Thr_i < Ti)
                Mi += α * Mi
            else
                Mi -= α * Mi
        }
    } else if (iwnd(N) != iwnd(N-1)/2) {    /* congestion, but not acute */
        exit
    } else {                                /* iwnd(N) == iwnd(N-1)/2: serious congestion */
        for (i = 1 to n) {
            Ti = Ri
            if (Thr_i < Ti)
                Mi += β * Mi
            else
                exit
        }
    }

    Mi         marking rate of the i-th flow
    Ti         target rate of the i-th flow
    Ri         reservation rate of the i-th flow
    E          excess bandwidth
    ownd(N)    newly measured OUT window
    ownd(N-1)  OUT window measured at time N-1
    iwnd(N)    newly measured IN window
    iwnd(N-1)  IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1, and the iwnd size at time N isn't equal to half of the iwnd size at time N-1, congestion has happened in the network but is not serious. The else if clause of program code 3 represents this instance: the edge's marking rate is equal to the marking probability of the sources (the marking information), and so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause of program code 3. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to consummate its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput doesn't realise the reservation rate of the user. In this instance the complement factor is computed as β = 1 - Thr_i / R_i. Otherwise, the re-marking operation is not performed.

In short, the sources inform an edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore, we can say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
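The fairness idea can be illustrated with a small sketch: the edge divides the excess bandwidth among flows in proportion to the marking rates reported by the sources. The function name and the normalisation by the sum of marking rates are illustrative assumptions, not the thesis's code.

```python
def allocate_targets(reservations, marking_rates, capacity):
    """Return per-flow target rates: each flow's reservation plus a
    marking-rate-weighted share of the excess bandwidth."""
    excess = capacity - sum(reservations)   # excess bandwidth E
    total_m = sum(marking_rates)
    return [r + excess * m / total_m
            for r, m in zip(reservations, marking_rates)]
```

For example, two flows reserving 1 Mbps each on a 4 Mbps link share the 2 Mbps excess in the ratio of their marking rates.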

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-marked packets are then treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size. Therefore, TMS_I does not alleviate the TCP dynamics over assured service well. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve only ownd, which retards the removal of congestion.

In this case, TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). IN-OUT is a packet marked as OUT at the edge of the network but IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides must not change, and the strategy that TMS_II elaborates has to respect this. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves its ownd size. Next, TMS_II turns packets that were IN-marked at the sources into OUT if the congestion window has not been halved and the flow has already achieved its target rate, so the drop precedences consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by one step, allowing other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and the TCP source halves its congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, and OUT at the core of the network. When an OUT-IN packet is dropped, the TCP source halves its ownd size. TMS_II thus tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT by one step.
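The combination of a source mark and an edge mark into the three drop precedences above can be sketched as follows (an illustrative reading, not code from the thesis):

```python
# Dropped first -> dropped last, i.e. red before yellow before green.
DROP_ORDER = ('DP2', 'DP1', 'DP0')

def drop_precedence(source_mark, edge_mark):
    """Map (mark at source, mark at edge) to a drop precedence.

    IN at both -> DP0 (green); OUT at both -> DP2 (red); a packet
    re-marked either way (IN-OUT or OUT-IN) lands in the middle level
    DP1 (yellow), so the source's original decision is never fully
    overridden.
    """
    if source_mark == edge_mark:
        return 'DP0' if edge_mark == 'IN' else 'DP2'
    return 'DP1'   # IN-OUT or OUT-IN
```

Keeping re-marked packets at the middle precedence is what prevents the edge from silently promoting or demoting a packet across two levels at once.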

3.6 The issue of interaction between TCP and UDP flows

In this section we consider the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows won't change and will capture the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be achieved.

UDP flows in the TMS are treated as out-of-profile, and are thus put at a disadvantage, while TCP flows adapt to network dynamics as their marking rates are adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2) but lowers the drop precedence of TCP packets relative to them according to the scaling factors. Therefore, TMS tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows had almost dominated. Consequently, TMS will compensate for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms we have described in the previous section. The simulations were done with the network simulator 2 (ns-2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or kept below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity so that the bottleneck link experiences congestion. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

       min_th   max_th   Pmax
DP0    100      400      0.02
DP1    100      400      0.05
DP2    50       400      0.1
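The RIO behaviour implied by Table 4 can be sketched as a classic linear RED curve per drop precedence. This is a minimal sketch under the assumption of the standard RED drop function; the actual ns-2 RIO queue additionally maintains an exponentially weighted average queue (here taken as given).

```python
# Parameters (min_th, max_th, Pmax) per drop precedence, from Table 4.
RED_PARAMS = {
    'DP0': (100, 400, 0.02),
    'DP1': (100, 400, 0.05),
    'DP2': (50, 400, 0.1),
}

def drop_probability(dp, avg_queue):
    """Classic RED: no drops below min_th, probability rising linearly
    to Pmax at max_th, and certain drop at or above max_th."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return p_max * (avg_queue - min_th) / (max_th - min_th)
```

At any average queue length, DP2 (red) packets face at least as high a drop probability as DP1 (yellow), and DP1 at least as high as DP0 (green), which realises the dropping order of Section 3.5.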

First of all, we investigate the simulation results with the Adequate-marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, from S1-D11 to S10-D20, each with a target rate of 3Mbps; the capacity of the bottleneck link is set to 20Mbps. Five flows from S1 through S5 start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s), which sink at t = 350 (s). First, the interval from t = 10 (s) through t = 110 (s) represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5Mbps; marking is initially performed only temporarily, for sharing the excess bandwidth. Second, in the run time from t = 120 (s) to 210 (s), with soft congestion, seven flows try to achieve the bandwidth of 3Mbps; marking in this interval is practically carried out so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220 (s) to t = 350 (s); all the flows do their best to complete their target rates but fail, and marking in this period is more aggressive than in the others. Last, in the interval from t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to congestion after flow8 through flow10 sink, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs, the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15 — two panels: a) Time Sliding Window Two Color Marker; b) Two Markers System. Each plots the bandwidth (Mbps) of flow1 through flow10 against simulation time (s), 0-400 s.]

Fig. 15. Adequate-marking algorithm: the vertical axis represents the marking rates and the horizontal axis the simulation time. The network state changes dynamically as the number of flows increases, as follows: over-provisioned, temporary congestion, and serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic features, respectively, by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind the changes in network load: slowly rising in response to an increased traffic load and slowly falling in response to a decreased traffic load. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results with the marking rate-based fairness algorithm described in subsection 3.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment ran from t = 0 (s) to 500 (s). Figure 17 presents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load. That is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. However, in Figure 17 (b), each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the marking probability from the sources, all the flows are able to adapt to network dynamics.

[Figure 16 — two panels: a) Time Sliding Window Two Color Marker; b) Two Markers System. Each plots the total throughput of S1 and S2 and the marking rate of S1 (Mbps) against simulation time (s), 0-400 s.]

Fig. 16. Marking packets through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time.


[Figure 17 — two panels: a) Time Sliding Window Two Color Marker; b) Two Markers System. Each plots the bandwidth (Mbps) of flow1 through flow10 for aggregate flows 1-6.]

Fig. 17. Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput, and the horizontal axis the aggregate flow number.

The Two Markers System showed improved performance for the several parameters we simulated. However, implementing the TCP-source marking module at the source raises the issue of modifying TCP sources in a differentiated services network.

Table 5. The average throughput of aggregate flows as RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System we have described in the previous section. For the simulations we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to destinations 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to destinations 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we make use of the parameters of section 4.1.


Fig. 18. Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy by increasing the number of TCP flows with a target rate of 1Mbps from sources 1 to 4, while each UDP flow is constant at 1Mbps, and by changing the capacity of the bottleneck link from 8Mbps to 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σi Ti / C) × 100    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)

Therefore, an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
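Equations (9) and (10) can be computed directly; the function names below are illustrative:

```python
def link_utilisation(target_rates, capacity):
    """Equation (9): rho (%) = sum of target rates over the bottleneck
    capacity, as a percentage."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(throughput, target_rate):
    """Equation (10): the excess bandwidth a flow receives, expressed
    as a percentage of its target rate. 0 means the flow got exactly
    its target; negative means it fell short."""
    return 100.0 * (throughput - target_rate) / target_rate
```

For instance, the over-provisioned scenario of this section (six 1Mbps targets on an 8Mbps bottleneck) gives ρ = 75%, i.e. below the saturation point.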


[Figure 19 — two panels: a) Time Sliding Window Two Color Marker; b) Two Markers System. Each plots the average excess bandwidth (%) of TCP and UDP against link utilization (%), 30-190%.]

Fig. 19. Sharing excess bandwidth according to the link utilization: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis the link utilisation as a percentage.

However, since UDP sources inject traffic into the network without any penalties, they complete their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents the increase or decrease in demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; therefore, the flow is marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results considering the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps, respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. Again, the average throughput of all the flows is similar in both schemes. First, we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows barely accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, as TMS elaborates a different marking strategy against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network, because the UDP flows are transmitted constantly, regardless of congestion. That is, UDP gains unfairly at the expense of TCP flows. However, Figure 20 (b) shows that bandwidth in the under-provisioned network is distributed more fairly (about 0.5Mbps per flow) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the responsive TCP flows, as shown in Figure 20 (b).

[Figure 20 — two panels: a) Time Sliding Window Two Color Marker; b) Two Markers System. Each plots the bandwidth (Mbps) of TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases for aggregate flows 1-4.]

Fig. 20. Throughput according to network conditions: the vertical axis represents the achieved throughput, and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown what the Two Markers System can offer under a variable SLA scope. Two important issues have arisen in this area, and much research has been performed on them to date. One is the TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to the respective target rates. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the marking probability according to the changes in the network, so it marks packets in proportion to the reservation rates the users desire to guarantee. It also notifies TMS_II, the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy. Consequently, it mitigates the problems of unfairness in per-aggregation marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced the TCP impacts over assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, the TCP flows could fairly share the excess bandwidth that UDP had almost dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCE

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117

[2] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June (1999)

[5] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. (1998)

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. (1998)

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376

[9] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999)

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May (1998)

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998)

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221

[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449

[14] N. Seddigh, B. Nandy, and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999)

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, "Study of TCP and UDP Flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May (2001) 190-196

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999)

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. (1998)

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 special assignment (Teletekniikan erikoistyö), (1999)

[20] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998)

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30txt), April (1990)

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. (1997)

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. (1996)

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999)

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. of ACM SIGCOMM, Oct. (1994) 24-35

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. (1994) 10-23

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. (1993) 397-413

[28] A. Feroz, A. Rao, and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48



LIST OF FIGURES

Fig. 1. Throughput of Single TCP Transfer
Fig. 2. TCP/IP Headers
Fig. 3. The marking/dropping behavior of RED
Fig. 4. Mapping of DSCP-marked streams to PHBs and aggregate routing
Fig. 5. A non-DS-capable and two DS-capable domains connected to a backbone network
Fig. 6. Type of Service field in IPv4
Fig. 7. A traffic conditioner and its functional components
Fig. 8. Usage of DS-field in Class Selector PHB
Fig. 9. The "Two Markers System" framework
Fig. 10. The design of the Two Markers System architecture
Fig. 11. Program code 1: TMS_I algorithm (cwnd open)
Fig. 12. Program code 2: TMS_I algorithm (cwnd close)
Fig. 13. Program code 3: TMS_II algorithm
Fig. 14. Simulation topology I using the Two Markers System
Fig. 15. Adequate-marking algorithm
Fig. 16. Marking packets through the scaling factors
Fig. 17. Throughput for the marking rate-based fairness strategy
Fig. 18. Simulation topology II using the Two Markers System
Fig. 19. Sharing excess bandwidth according to the link utilization
Fig. 20. Throughput according to network conditions


LIST OF TABLES

Table 1. The four AF classes and the related three drop precedence values
Table 2. TCA specification for BBE service
Table 3. TCA specifications for Leased Line service
Table 4. RED parameter settings in the TMS
Table 5. The average throughput of aggregate flows as RTT increases


ABBREVIATIONS

AF      Assured Forwarding, a PHB group defined in [4]
ATM     Asynchronous Transfer Mode
BA      Behaviour Aggregate, separate traffic flows receiving the same treatment in a router
DS      diffserv, Differentiated Services
DSCP    Differentiated Services Codepoint, value of a field in IP packets in DS-capable domains based on which the forwarding behaviour is applied
EF      Expedited Forwarding, a PHB defined in [20]
IETF    Internet Engineering Task Force
IP      Internet Protocol
IPv4    Internet Protocol version 4, the current widely used IP version
IPv6    Internet Protocol version 6, the developed future version of IP
ISP     Internet Service Provider
MF      Multi-Field
MPLS    Multiprotocol Label Switching
NE      Network Element, e.g. a router
PHB     Per-Hop-Behaviour, forwarding behaviour applied to each IP packet in routers in DS-capable domains
QoS     Quality of Service
RFC     Request For Comments
RSVP    Resource Reservation Protocol
SLA     Service Level Agreement, an agreement of a service between a provider and a customer
SNMP    Simple Network Management Protocol
TC      Traffic Conditioner
TCA     Traffic Conditioning Agreement, a part of an SLA that describes the agreed service in detail from the technical point of view
TCP     Transmission Control Protocol
TOS     Type of Service
UDP     User Datagram Protocol
VPN     Virtual Private Network


CHAPTER 1 INTRODUCTION

The current Internet is based on the best-effort service model. In this model the service provider allocates bandwidth among all of the instantaneous customers as best it can; that is, all user packets compete equally for network resources. The service provider also attempts to serve all of them without making any explicit commitment as to rate or any other service quality. The best-effort model has been successful so far because the majority of the traffic on the Internet is TCP-based: the TCP end-to-end congestion control mechanism forces traffic sources to back off whenever congestion is detected in the network [8]. However, such dependence on the end systems' interaction is increasingly becoming unrealistic. Given the current best-effort model with FIFO queuing inside the network, it is relatively easy for non-adaptive sources to gain greater shares of network bandwidth and thereby starve other, well-behaved, TCP sources. A greedy source, for example, may simply continue to send at the same rate when faced with congestion while other TCP sources back off. The best-effort model is also ill-suited for applications such as real-time audio and video that require explicit bandwidth and bounded delay guarantees. Moreover, the best-effort model treats all packets equally once they have been injected into the network. Thus it is difficult for Internet Service Providers (ISPs) to provide services that are commensurate with the expectations of customers who are willing to pay more for a better class of service.

Over the last decade, a number of efforts have been made to provide Quality of Service in the Internet. Much of the work has focused on the end-to-end per-session reservation approach, where an application makes an end-to-end reservation before starting a session. While it makes sense to set up a reservation for a long-lasting session, this approach is not suitable for highly transaction-oriented applications such as browsing the World Wide Web (WWW): the overhead and latency involved in setting up a reservation for each session become prohibitively high. There are also concerns regarding the scalability of providing fine-grained guarantees at the per-session level in the core of the Internet.

The movement toward providing Quality-of-Service (QoS) assurance in the Internet has encouraged research in differentiated services (diffserv, or DS) while the number of users keeps growing. Diffserv is a suitable solution for Quality of Service provisioning because it scales well with an increasing number of network users and it does not alter the current Internet paradigm much [11]. In contrast to the per-session reservation approach, the differentiated services scheme allows service providers to offer different levels of service to a few classes of aggregated traffic flows. An ISP, for example, may offer two levels of service: a premium service for customers who are willing to pay more, and a best-effort service at a lower price. The differentiated services approach attempts to allocate bandwidth for aggregated flows without per-session signaling or per-session state in the network core, and to offer better services to users willing to pay more under long-term service contracts.

The journal model is Lecture Notes in Computer Science.

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment that should be supplied to the packet inside the domain. However, because of the limited number of bits available for use in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks called per-hop behaviors (PHBs) that are used by routers to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) [20] and the Assured Forwarding (AF) [4] PHBs. The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and very low queuing delay. In order to ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, consists of four independently forwarded classes, each with two or three drop-precedence levels. The idea behind AF is to preferentially drop best-effort packets, and packets which are outside of their contract, when congestion occurs.

In this thesis we consider a form of better-than-best-effort service called the "Assured Service". The Assured Service follows expected capacity profiles that are statistically provisioned. Packets are treated preferentially according to the dropping probability applied to the best-effort queue. The assurance of service comes from the expectation that the traffic is unlikely to be dropped as long as it stays within the negotiated capacity profile. The building blocks of this service include a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise, the packet is marked OUT (out of profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or token bucket control module. A differentiated dropping algorithm such as RIO (RED with IN/OUT) is provided in the core routers of the network. In particular, the OUT packets are preferentially dropped upon evidence of congestion at the bottleneck, before the IN packets; IN packets are discarded only after all incoming OUT packets have been dropped. With this dropping policy, the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.
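As a rough illustration of the edge-marking step, the sketch below implements a token bucket marker of the kind described above. This is not the thesis's actual marker; the class and parameter names are hypothetical, and the token bucket stands in for either measurement module (a TSW estimator would differ in detail).

```python
import time

class TokenBucketMarker:
    """Hypothetical edge marker sketch: packets covered by accumulated
    tokens (the contracted profile) are marked IN, the rest OUT."""

    def __init__(self, rate_bps, bucket_bytes):
        self.rate = rate_bps / 8.0       # contracted rate in bytes/second
        self.depth = bucket_bytes        # bucket depth bounds the burst size
        self.tokens = bucket_bytes       # start with a full bucket
        self.last = time.monotonic()

    def mark(self, pkt_size):
        now = time.monotonic()
        # replenish tokens at the contracted rate, capped at the depth
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_size:
            self.tokens -= pkt_size
            return "IN"                  # in profile: protected by RIO
        return "OUT"                     # out of profile: dropped first
```

A burst that exceeds the bucket depth is marked OUT even if the long-term rate is within contract, which is exactly the behaviour that interacts badly with TCP's window dynamics discussed below.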

In [11] the authors showed that with a simple token bucket marker the above Assured Service fails to reliably provide TCP with even the minimum assured rate. They attributed the cause of such behaviour primarily to TCP's complex response to packet losses. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. The multiplicative decrease (halving the congestion window) is required to avoid congestion collapse, since TCP treats a packet drop as an indication of congestion [7]. However, in the diffserv network this additive-increase/multiplicative-decrease behaviour makes it hard to protect the reservation rate: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it only additively, it may in the end never reach its reservation rate. The overall performance was therefore affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees. This phenomenon is called TCP dynamics over the Assured Service.

In this thesis we take into consideration the problem of TCP dynamics and the differentiated drop policies of the network in realizing the reserved throughputs, and propose a newly modified scheme, called the Two Markers System, for improving the realization of target rates in a diffserv network. In addition, the issue of fairness among users is important in this diffserv context: since we expect that the differentiation in service will be based on some kind of pricing mechanism, it is necessary that the resource allocation be commensurate with how the service is priced. In this thesis we also examine how fairly the proposed scheme allocates bandwidth to flows.

One of the key problems identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows [9, 11, 14]. The interaction between TCP and UDP under the AF PHB has become an important issue for fairness in the diffserv context, because TCP flows are penalised more than UDP flows with respect to service degradation and fail to accomplish their target rates. Therefore, we address the issue of interaction between TCP and UDP flows by regulating their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP's influence over Assured Services, considers the issue of fairness between TCP and UDP at a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents results for the above algorithms in simulated environments where these traffic types coexist and analyses the simulation results. Chapter 5 concludes our work.


CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases, this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point in time, the rate at which TCP sends its packets eventually exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow-start and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants of TCP. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion-avoidance phase, in which the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above network capacity, packets are eventually dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred, immediately reduces its sending rate in half by halving its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.
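The window dynamics described above can be sketched as a toy per-RTT trace. This is a simplified model for illustration only (real TCP works in bytes and handles recovery in more detail); the function name and the loss schedule are hypothetical.

```python
def tcp_cwnd_trace(ssthresh, loss_rtts, total_rtts):
    """Toy trace of TCP's congestion window (in segments), one value per
    RTT: slow start doubles cwnd up to ssthresh, congestion avoidance adds
    one segment per RTT, and a duplicate-ACK loss halves cwnd (fast recovery)."""
    cwnd, trace = 1.0, []
    for rtt in range(total_rtts):
        trace.append(cwnd)
        if rtt in loss_rtts:                 # loss detected via duplicate ACKs
            ssthresh = max(cwnd / 2, 2)      # multiplicative decrease
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2                        # slow start: exponential growth
        else:
            cwnd += 1                        # congestion avoidance: linear growth
    return trace
```

Running it with one loss event reproduces the sawtooth shape of Figure 1: exponential ramp-up, linear climb, then a halving at the loss.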

When congestion is severe enough that packet loss cannot be inferred in such a manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm: if the sender continually retransmits the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.
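The exponential back-off of the retransmission timer amounts to a simple doubling sequence; a minimal sketch, assuming the common 64-second cap used in many implementations (the cap value and function name are assumptions, not from the thesis):

```python
def backoff_rtos(initial_rto, retries, max_rto=64.0):
    """Sketch of TCP's exponential retransmission back-off: the timeout
    interval doubles after each unacknowledged retransmission, up to a cap."""
    rtos, rto = [], initial_rto
    for _ in range(retries):
        rtos.append(rto)
        rto = min(rto * 2, max_rto)      # double the interval, capped
    return rtos
```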

Fig 1 Throughput of Single TCP Transfer

Figure 1 shows graphically how TCP slow-start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion, the TCP sender enters the congestion-avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP thus oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One modification that has been proposed is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of the segments it is missing. By specifying this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and the TCP sender should thus be allowed to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The New Reno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern in which the window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion-avoidance phase so that it performs its linear increase only when the network is not congested. In these algorithms, if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential to improve loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections multiplexed across the link.

Fig 2 TCP/IP Headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem, since a considerable amount of time may pass between when a packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-Capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the Congestion Experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes its congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver, in order to signal that it has in fact reduced its sending rate.
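The router-side decision reduces to a check of two bits. The sketch below assumes the bit layout of the original ECN proposal (RFC 2481, contemporary with this text), where ECT and CE occupy the two least-significant bits of the old TOS/DS octet; the function name is hypothetical.

```python
# Bit layout per the original ECN proposal (RFC 2481): the two
# least-significant bits of the former TOS octet carry ECN.
ECT = 0b10   # ECN-Capable Transport bit
CE  = 0b01   # Congestion Experienced bit

def router_congestion_action(tos_octet):
    """What a congested router does with one packet: mark it if the flow
    is ECN-capable, otherwise fall back to dropping it."""
    if tos_octet & ECT:
        return tos_octet | CE, "mark"    # signal congestion without loss
    return None, "drop"                  # legacy flow: packet is dropped
```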

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, allowing them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses them to determine whether to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length: the probability is 0 when the average queue length is less than or equal to minth, and it increases linearly to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.
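The piecewise-linear probability curve of Figure 3 can be written down directly. A minimal sketch (the function name is an assumption; RED's "gentle" variant and the per-packet count correction are not modelled):

```python
def red_probability(avg_qlen, min_th, max_th, max_p):
    """RED's initial marking/dropping probability as a piecewise-linear
    function of the EWMA queue length."""
    if avg_qlen <= min_th:
        return 0.0                       # below min_th: never mark/drop
    if avg_qlen >= max_th:
        return 1.0                       # above max_th: mark/drop everything
    # linear ramp from 0 at min_th up to max_p at max_th
    return max_p * (avg_qlen - min_th) / (max_th - min_th)
```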

Fig 3 The marking/dropping behavior of RED

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour desired. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs), and on a per-packet basis to aggregates of traffic streams in DS-capable network nodes. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner according to the service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be offered, a leased-line emulation for example. The types of service themselves will not be standardized within diffserv, but rather the building blocks behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig 4 Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs are externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e., to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig. 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The Service Level Agreements (SLAs) that are formed specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules, any corresponding traffic profiles, and metering, marking, discarding, and/or shaping rules, which are applied to traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in Diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e., nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. A traffic conditioner is an entity that may contain meters, markers, droppers, and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e., services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header it is the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs do not define services by themselves; they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes: the former handle traffic entering the domain, the latter handle traffic leaving it. In practice the division is only logical; DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv Network

The following discusses how a network that supports diffserv functions.

Fig. 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e., their interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g., low delay) may become unattainable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service provisioning approaches.

Fig 5 A non-DS-capable and two DS-capable domains connected to backbone network

Nodes in a DS domain, e.g., domain 1 in Fig. 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig. 5, starts generating traffic, there are certain implications for the functioning of the network that differ from the case where the network is non-DS-capable.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream down to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement made with a boundary router in the backbone network. The backbone DS router shown always polices incoming traffic, as it cannot trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values; on the second, the PHBs specified by the DSCP values; and on the third level are the profiles. A customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (a policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies, and billing models, which enables them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain does not match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv does not cause serious impacts on existing applications.

Fig. 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner to the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another; their usage and queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values or PHBs for backward compatibility. The 'D', 'T', and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling with respect to Delay, Throughput, or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
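Since bits 0-5 are the most-significant bits of the octet, extracting the DSCP (and the legacy precedence field) is a matter of shifting and masking. A small sketch (function names are illustrative; the EF example value, DSCP 46, is the standardized codepoint):

```python
def dscp_of(tos_octet):
    """Extract the 6-bit DS codepoint: bits 0-5 of the former TOS octet,
    i.e. the six most-significant bits; the remaining two bits are ignored."""
    return (tos_octet >> 2) & 0x3F

def precedence_of(tos_octet):
    """The legacy IPv4 precedence field survives as the top three bits of
    the DSCP, which is how class-selector codepoints keep compatibility."""
    return (tos_octet >> 5) & 0x07
```

For example, a TOS octet of 0xB8 yields DSCP 46 (Expedited Forwarding) and precedence 5.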

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but this is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e., service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig. 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, the DS field, and the protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
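The BA/MF distinction can be sketched as follows. This is an illustrative toy, not a real router's classifier: packets are plain dictionaries, and the rule format is hypothetical.

```python
def classify(pkt, mf_rules):
    """Toy classifier sketch: try MF rules first (match several header
    fields), then fall back to a BA class keyed on the DSCP alone.
    mf_rules is a hypothetical list of (match-dict, class-name) pairs."""
    for match, klass in mf_rules:
        # an MF rule matches only if every specified field agrees
        if all(pkt.get(field) == value for field, value in match.items()):
            return klass
    return f"BA:{pkt['dscp']}"          # behaviour-aggregate fallback
```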

Fig 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
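To make the meter/marker/dropper roles concrete, here is a small token-bucket sketch in Python; the profile parameters and the two-codepoint interface are our own illustration, not a mechanism specified in the thesis:

```python
class TokenBucketMeter:
    """Meter a stream against a profile of `rate` bytes/s with burst `depth` bytes."""
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth
        self.tokens, self.last = depth, 0.0

    def in_profile(self, size, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

def condition(packets, meter, in_dscp, out_dscp=None):
    """Marker/dropper step: keep in-profile packets, re-mark excess with
    out_dscp, or drop the excess when no out-of-profile codepoint is given."""
    out = []
    for now, size in packets:                 # (arrival time in s, size in bytes)
        if meter.in_profile(size, now):
            out.append((size, in_dscp))
        elif out_dscp is not None:
            out.append((size, out_dscp))      # the marker re-marks
        # else: the dropper discards the packet
    return out

meter = TokenBucketMeter(rate=1000, depth=1000)
print(condition([(0.0, 600), (0.0, 600)], meter, "IN", "OUT"))
# [(600, 'IN'), (600, 'OUT')]
```

Passing `out_dscp=None` turns the conditioner from a marker into a dropper, mirroring the two policing options described above.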

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic to conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic cannot be assumed to conform to the TCA, and policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore, a limited number of conditioning rules (i.e., some, but not all, of the TC components in Fig. 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay, and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on their similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are meant to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized ones, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may serve a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig. 6) is limited, and room is left for future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. The PHB specifications themselves give behavioural characteristics instead of implementation guidelines; this leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity, and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented in this chapter. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv cannot be taken into use if it does not provide backward compatibility; the Default PHB exists for that. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e., best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that does not conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as receiving the following kind of service: the network will deliver as many of these packets as possible, as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which do not employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specification of the Default PHB. The codepoint chosen for the Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS field are widely used in existing networks' equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig. 8). Compatibility for the 'D', 'T', and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.
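The eight Class Selector codepoints can be generated directly from the three precedence bits; a short sketch (the helper name is ours, not from the thesis):

```python
def class_selector_dscp(precedence):
    """Map an IPv4 precedence value (0-7) to its Class Selector codepoint
    'xxx000': the precedence bits become the three high-order DSCP bits."""
    if not 0 <= precedence <= 7:
        raise ValueError("precedence is a 3-bit field")
    return precedence << 3

codepoints = [format(class_selector_dscp(p), "06b") for p in range(8)]
print(codepoints)
# ['000000', '001000', '010000', '011000', '100000', '101000', '110000', '111000']
```

Note that precedence 0 yields 000000, the Default PHB codepoint, consistent with the compatibility requirement stated above.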

Fig. 8. Usage of the DS field in the Class Selector PHB

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets in each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(i) how much forwarding resources have been allocated to the AF class that the packet belongs to (packets in one class are forwarded independently of the others, within the service rate (bandwidth) that has been configured for the class);
(ii) what the current load of the AF class is; and
(iii) in case of congestion within the class, what the drop precedence of the packet is (packets with a higher drop precedence value are discarded preferentially).

Table 1. The four AF classes and the related three drop precedence values

                                Class 1   Class 2   Class 3   Class 4
  Low Drop Precedence (1)       001010    010010    011010    100010
  Medium Drop Precedence (2)    001100    010100    011100    100100
  High Drop Precedence (3)      001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (currently 1-4) and 'm' the drop precedence value (currently 1-3). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
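The codepoints in Table 1 follow a regular bit layout: the class number n occupies the three high-order bits and the drop precedence m the next two, with a trailing zero. A sketch of this reading of the table (the helper name is ours):

```python
def af_dscp(n, m):
    """Codepoint for AFnm: class n in bits 5-3, drop precedence m in bits 2-1."""
    if not (1 <= n <= 4 and 1 <= m <= 3):
        raise ValueError("AF classes are 1-4, drop precedences 1-3")
    return (n << 3) | (m << 1)

# Rebuild Table 1 as binary strings.
table = {f"AF{n}{m}": format(af_dscp(n, m), "06b")
         for n in range(1, 5) for m in range(1, 4)}
print(table["AF11"], table["AF43"])   # 001010 100110
```

Every generated codepoint matches the corresponding entry of Table 1, which confirms the bit layout.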

One major group of services is the one requiring assured bandwidth with low loss, low latency, and low jitter in an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video, or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency, and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specification of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed a rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are all for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

Some things are common to both examples. First of all, policers need to be configured at the traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus provides reduced latency. Quantitative performance measures cannot be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for the BBE service

  Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
  AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritization of AF13 over AF11 is done, is up to the provider.
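The BBE conditioning of Table 2 can be sketched as a simple rate policer. The windowed-average meter below is our own simplification for illustration (a real implementation might use a token bucket or the TSW2CM marker mentioned later); the codepoints are taken from Table 1:

```python
AF13, AF11 = 0b001110, 0b001010      # codepoints from Table 1
RATE = 1_000_000                     # 1 Mbps aggregate rate (Table 2)

def bbe_mark(packets, window=1.0):
    """Mark traffic within 1 Mbps as AF13; re-mark the excess as AF11."""
    marked, bits_in_window, window_start = [], 0, 0.0
    for t, bits in packets:          # (arrival time in s, packet size in bits)
        if t - window_start >= window:
            window_start, bits_in_window = t, 0   # start a new measurement window
        dscp = AF13 if bits_in_window + bits <= RATE * window else AF11
        marked.append((t, bits, dscp))
        bits_in_window += bits
    return marked

res = bbe_mark([(0.0, 600_000), (0.1, 600_000)])
print([format(d, "06b") for _, _, d in res])   # ['001110', '001010']
```

The second packet pushes the windowed total over the 1 Mbps profile, so it leaves the conditioner with the AF11 mark, exactly the "handling of excess traffic" column of Table 2.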

This is a quantitative service which emulates a traditional leased line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3. TCA specifications for the Leased Line service

  Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
  EF mark     500 Kbps       Egress point B         Discard
  EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B, and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and, respectively, point A to point C. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on both the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.

2.3 Related Work

Recently, there have been a number of simulation studies on marking and dropping policies focused on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They presented results showing that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation work, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets, in order to take TCP dynamics into account and to provide fairness in sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standard used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on the aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to an inefficient utilization of the reserved bandwidth; in this case, 'idle' flows waste their shares while preventing 'active' flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9. The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the summation of iwnd and ownd. TMS_I determines the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked whenever congestion occurs in the network; in this way TMS_I decreases TCP impacts in the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) keeps up the marking rate more properly than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also adequately marks packets to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to properly mark packets. So to speak, TMS_I can be not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key point of the fairness strategy is the marking rate (i.e., the information on a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) aggregated flows toward the total target rate of all the flows, based on the probability of marking from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow in proportion to what it requires.

Fig. 10. The design of the Two Markers System architecture

3.2 The crux of fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let R_i denote the reservation rate of a flow i, i = 1, 2, …, n, and let C_j represent the capacity of a bottleneck link j, j = 1, 2, …, m, in the network. The target rate of each source is the reservation rate (R_i), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (T_i) at the edge of a network is the reservation rate (R_i) plus a variable value, namely the fairness component (X_mi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = Σ_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. Therefore, the excess bandwidth at the bottleneck links is given by

    E = C − R = Σ_{j=1}^{m} C_j − Σ_{i=1}^{n} R_i        (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

    E = C − R = C − Σ_{i=1}^{n} R_i        (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates in order to determine the excess bandwidth of the network. Thus, the reservation-rate information received at an ingress point becomes R = Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik, where k, k = 1, 2, …, l, represents the k-th ingress point. The excess bandwidth is then given by

    E = C − R = C − Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik        (3)


where the l-th ingress point has to be aware of the reservation rates of as many as l−1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore, although the Two Markers System could be implemented under various SLA scopes, we assume that the scope of a service in this paper covers the first three cases, excluding the last for the reason above. A flow's target rate at the edge of the network is given by

    T_i = R_i + X_mi × E = R_i + X_mi (C − Σ_i R_i)        (4)

where X_mi is proportional to the probability of marking from a source; that is, it is determined by the fraction between the marking rate of each flow and the average marking rate. The development of X_mi is as follows. First, the probability of marking for each flow i is the number of marked packets (N_mi) divided by the throughput (Thr_i), or, equivalently, the In-profile window divided by the congestion window:

    m_i = N_mi / Thr_i = iwnd / cwnd        (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] over all the flows at the edge of a network is

    E[m_i] = (1/n) Σ_{i=1}^{n} m_i = (1/n) Σ_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1}^{n} Mark_p(i)        (6)

where Mark_p also represents the probability of marking in the TMS_I algorithm of Section 3.3. Finally, X_mi is developed by using the fraction between (5) and (6); that is, X_mi


represents a relative value against the datum point E[m_i]. The fraction X_mi varies from 0 to 1, depending upon m_i and E[m_i]. Consequently, this value stands for the share of the excess bandwidth that each flow requires to achieve its target rate.
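Putting expressions (2), (4), and (7) together: the edge computes the excess bandwidth E, weights it by each flow's marking rate, and the resulting target rates consume the capacity exactly. A numeric sketch with made-up values (the function name and the numbers are ours, not from the thesis):

```python
def edge_target_rates(R, m, C):
    """T_i = R_i + X_mi * E, with E = C - sum(R)        (expression (2))
    and X_mi = m_i / sum(m) = m_i / (n * E[m_i])        (expression (7))."""
    E = C - sum(R)                  # excess bandwidth of the bottleneck link
    total_m = sum(m)
    return [R_i + (m_i / total_m) * E for R_i, m_i in zip(R, m)]

R = [1.0, 2.0]      # reservation rates R_i
m = [0.25, 0.75]    # marking probabilities m_i = iwnd/cwnd reported by TMS_I
C = 7.0             # bottleneck link capacity

T = edge_target_rates(R, m, C)
print(T, sum(T))    # [2.0, 5.0] 7.0 -- the target rates sum exactly to C
```

Because the X_mi sum to 1, the excess bandwidth E is fully and proportionally distributed: a flow that marks more aggressively (demands more) receives a larger share.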

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate as high as the bandwidth the source wants to have guaranteed.

The fairness component introduced above is thus

    X_mi = m_i / (n E[m_i]) = m_i / Σ_{i=1}^{n} m_i        (7)


Fig. 11. Program code 1: TMS_I algorithm (cwnd open)

    When new data is acknowledged:
        iwnd = Markp * cwnd
        ownd = cwnd - iwnd
        if (Thr < Ri)
            if (iwnd < issthresh)
                iwnd += Markp
            else
                iwnd += 1/(2*cwnd)
            if (ownd < ossthresh)
                ownd += (1 - Markp)
            else
                ownd += 1/(2*cwnd)
        else
            if (0 < iwnd)
                if (iwnd < issthresh)
                    iwnd -= (1 - Markp)
                else
                    iwnd -= Markp
            else
                ownd = cwnd
            if (ownd < ossthresh)
                ownd += 1
            else
                ownd += 1/cwnd
        cwnd = iwnd + ownd
        Markp = iwnd / cwnd

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking, Mark_p(init)? It is generally determined by the fraction between the measured throughput (Thr) and the target throughput (T_i = R_i), as follows:

    Mark_p(init) = 1 − R_i / Thr        (8)

where this represents the initial rate that the user wants to have guaranteed in order to realise the target rate.

The window size increases or decreases according to the measured throughput relative to the reservation rate. That is, TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed; otherwise, it increases the ownd size so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e., iwnd and ownd, they act like the single window of the traditional congestion algorithm: the summation of iwnd and ownd is cwnd. Therefore, an increment of iwnd leads to a corresponding decrement of ownd. When either window grows above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the summation of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause of program code 1, entered when the measured throughput reaches the reservation rate, describes the fairness strategy for sharing the excess bandwidth among the other sources' TCP traffic and best-effort traffic, and also considers a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when a connection of the network is congested. If the throughput measured at a source, for instance, exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) keeps up the marking rate more properly than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so a source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be maintained low, and each source in the end does not achieve its target rate. On the other hand, because TMS_I recognizes a packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.
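For concreteness, program code 1 can be transcribed into runnable Python as follows; this is our reading of the pseudocode (function name and argument list are ours), not code from the thesis:

```python
def tms1_on_ack(cwnd, markp, thr, r_i, issthresh, ossthresh):
    """One window-opening step of TMS_I when new data is acknowledged."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if thr < r_i:                       # below the reservation rate: grow iwnd
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                               # target met: hand bandwidth back
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd            # new cwnd and new Markp

print(tms1_on_ack(cwnd=4.0, markp=0.5, thr=1.0, r_i=2.0,
                  issthresh=10.0, ossthresh=10.0))   # (5.0, 0.5)
```

In the below-target slow-start case shown, iwnd and ownd each grow by their marking-rate share, so the whole window grows by one packet per ACK, exactly as in the listing above.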

Fig. 12. Program code 2: TMS_I algorithm (cwnd close)

    When the third duplicate ACK in a row is received:
        iwnd = Markp * cwnd
        ownd = cwnd - iwnd
        if (OUT-marked packet loss)
            ossthresh = ownd / 2
            ownd = ossthresh
        else
            cwnd = cwnd / 2
            issthresh = Markp * cwnd
            ossthresh = cwnd - issthresh

    iwnd       In-profile window
    ownd       Out-of-profile window
    cwnd       congestion window
    Markp      the probability of marking
    Thr        throughput of a flow
    Ri         reservation rate of the i-th flow
    Ti         target rate of the i-th flow
    issthresh  threshold of iwnd
    ossthresh  threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half of its size whenever an OUT packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of an IN-marked packet indicates acute congestion in the network, and then the congestion window is reduced to half of its size. Therefore, the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which lacks such a distinctive mechanism for reducing the congestion window.
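Program code 2's loss response can likewise be transcribed into Python (again our reading of the pseudocode; variable names follow the legend of Fig. 12):

```python
def tms1_on_triple_dupack(cwnd, markp, issthresh, ossthresh, out_loss):
    """Window closing on the third duplicate ACK: an OUT loss halves only
    ownd (temporary congestion); an IN loss halves the whole cwnd."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if out_loss:
        ossthresh = ownd / 2
        ownd = ossthresh
        cwnd = iwnd + ownd
    else:                                # IN-marked loss: acute congestion
        cwnd = cwnd / 2
        issthresh = markp * cwnd
        ossthresh = cwnd - issthresh
    return cwnd, issthresh, ossthresh

print(tms1_on_triple_dupack(8.0, 0.5, 8.0, 8.0, out_loss=True))   # (6.0, 8.0, 2.0)
print(tms1_on_triple_dupack(8.0, 0.5, 8.0, 8.0, out_loss=False))  # (4.0, 2.0, 2.0)
```

The two calls show the asymmetry the text describes: an OUT loss costs only a quarter of the window here (ownd 4.0 to 2.0), while an IN loss halves the entire window.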

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) that is needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall the numerical expression (7). We deal with the usage of the datum point in the edge-embedded marking module in subsection 3.4. Consequently, the strategy of providing a datum point that carries per-user information is able to alleviate the problem of unfairness within aggregated flows in per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources, in order to fairly share the bottleneck bandwidth. In this case it closely cooperates with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II should measure the probability of marking and compute the expected value of the marking rates, which are used in the fairness component as stated in subsection 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed, from the sources.

We assume that an edge router receives information on the reservation rate that each user wants to be assured of, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of a network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism, the edge router sets the target rate differently from the source; recall numerical expression (4). As stated in subsection 2.2, if the condition of the network is over-provisioned, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources; otherwise, it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents an increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, namely the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore, it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.
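The edge's target-rate computation described above can be sketched as follows, assuming expression (4) has the proportional form suggested by the text (function name and argument layout are illustrative):

```python
def target_rates(reservations, mark_prob, capacity):
    """Sketch of the edge's datum-point computation: in an
    over-provisioned network each flow's target rate is its reservation
    plus a share of the excess bandwidth proportional to its marking
    probability; otherwise the target rate is the reservation rate."""
    excess = capacity - sum(reservations)
    if excess <= 0:
        # under-provisioned network: no excess bandwidth to distribute
        return list(reservations)
    total_p = sum(mark_prob)
    return [r + excess * (p / total_p)   # proportional share of excess
            for r, p in zip(reservations, mark_prob)]
```

For example, two flows reserving 3Mbps each on an 8Mbps bottleneck split the 2Mbps of excess in the ratio of their marking probabilities.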

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other is to ensure the coherence of the Adequate-marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of Program code 3 describes this case and shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding on congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor. In this case, the fairness factor α is computed as follows:


α = 1 - Thr_i/T_i. Otherwise, it decreases the marking rate by α to provide other flows with the chance of sharing the excess bandwidth.

Fig 13 Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)
    for (i = 1; i <= n; i++)
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)
    exit
else    /* iwnd(N) == iwnd(N-1)/2 */
    for (i = 1; i <= n; i++)
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi: marking rate of the ith flow
Ti: target rate of the ith flow
Ri: reservation rate of the ith flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1
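Under the assumption that the scaling factors have the complement forms α = 1 - Thr_i/T_i and β = 1 - Thr_i/R_i, Program code 3 can be rendered as an executable sketch (flow representation and field names are illustrative):

```python
def tms2_update(flows, ownd_halved, iwnd_halved):
    """Sketch of the TMS_II algorithm of Figure 13. Each flow is a dict
    with keys 'M' (marking rate), 'R' (reservation rate), 'T' (target
    rate, with the excess-bandwidth share already folded in) and 'Thr'
    (measured throughput). ownd_halved/iwnd_halved say whether the
    OUT/IN windows were halved since the last measurement."""
    if not ownd_halved:
        # (i) no congestion: pursue the fairness strategy on excess bandwidth
        for f in flows:
            if f['Thr'] < f['T']:
                alpha = 1.0 - f['Thr'] / f['T']   # fairness factor
                f['M'] += alpha * f['M']          # mark more packets IN
            else:
                alpha = 1.0 - f['T'] / f['Thr']
                f['M'] -= alpha * f['M']          # yield excess to other flows
    elif not iwnd_halved:
        # (ii) congestion, but not acute: the edge rate already matches the
        # sources' marking probability, so no re-marking is required
        return
    else:
        # (iii) serious congestion: fall back to reservation rates
        for f in flows:
            f['T'] = f['R']
            if f['Thr'] < f['R']:
                beta = 1.0 - f['Thr'] / f['R']    # complement factor
                f['M'] += beta * f['M']
```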


Next, if the ownd size at time N is equal to half of the ownd size at time N-1, and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion happens in the network but is not serious. The else if clause in the eighth and ninth lines represents the instance in which the edge's marking rate is equal to the marking probability of the sources (or marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, this means that IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, in the tenth through sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to achieve its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore, TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, a so-called complement factor, if the throughput does not realise the reservation rate of the user. In this instance, the complement factor β is computed as β = 1 - Thr_i/R_i. Otherwise, the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then fairly allocates bandwidth to flows based on the marking rate of each source. Therefore, we can say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-marked packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size. Therefore, TMS_I cannot alleviate the TCP dynamics over assured service well. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve only the ownd. This retards the removal of congestion.

In this case, TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is marked as OUT at the edge of the network but as IN at the source, and an OUT-IN packet is its counterpart. We assume that the drop precedence the source decides must not change, and that the strategy TMS_II elaborates has to be deliberated. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN. In terms of colors, these correspond to red, yellow, and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window is not halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to claim the remaining resources. In this instance, three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is removed, the TCP source halves the ownd size. Next, TMS_II turns packets that were IN-marked at the sources into OUT if the congestion window is not halved and the flow has already accomplished its target rate; the drop precedences then consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by one step, and so allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and a flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, and OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. It tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT by one step.
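The mapping from the source's mark and the edge's re-marking decision to the three drop precedences can be sketched as follows (codepoint names are illustrative):

```python
def drop_precedence(source_mark, edge_mark):
    """Sketch of the three-level precedence described above: the source's
    original mark is preserved, so any packet re-marked at the edge
    lands in the intermediate level DP1. Marks are 'IN' or 'OUT'."""
    if source_mark == 'IN' and edge_mark == 'IN':
        return 'DP0'   # green: dropped last
    if source_mark == 'OUT' and edge_mark == 'OUT':
        return 'DP2'   # red: dropped first
    return 'DP1'       # yellow: IN-OUT or OUT-IN, re-marked at the edge

# RIO drops in the order DP2 (OUT), then DP1 (OUT-IN / IN-OUT), then DP0 (IN)
DROP_ORDER = ['DP2', 'DP1', 'DP0']
```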

3.6 The issue of interaction between TCP and UDP flows

In this section, we take into consideration the interaction between TCP and UDP flows in the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indications through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows will not change and will capture the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it is impossible for all target rates to be achieved.

UDP flows in the TMS are treated as out-of-profile, and are thus placed at a disadvantage, while TCP flows adapt to network dynamics by having their marking rates adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP packets marked as OUT (red, DP2) but lowers the drop precedence of TCP packets by the scaling factors. Therefore, it tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows had almost dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section, we present the simulation results for the two marking algorithms described in the previous section. The simulations were done with the network simulator-2 (ns-2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario, the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario, the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
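Each drop precedence applies the standard RED curve with its own thresholds. A sketch with the Table 4 parameters (the behaviour above max_th is assumed to be a forced drop):

```python
# Table 4 parameters: drop precedence -> (min_th, max_th, Pmax), in packets
RED_PARAMS = {
    'DP0': (100, 400, 0.02),
    'DP1': (100, 400, 0.05),
    'DP2': (50, 400, 0.1),
}

def drop_probability(avg_queue, dp):
    """Standard RED drop probability for one drop precedence:
    zero below min_th, a linear ramp up to Pmax between the thresholds,
    and (assumed here) a forced drop at or above max_th."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return p_max * (avg_queue - min_th) / (max_th - min_th)
```

Because DP2 has both a lower min_th and a higher Pmax, OUT-marked packets begin to be dropped earlier and more aggressively than DP1 and DP0 packets at the same average queue length.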

First of all, we investigate the simulation results of the Adequate-marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, from S1-D11 to S10-D20, each with a target rate of 3Mbps, and the capacity of the bottleneck link set to 20Mbps. Five flows, S1 through S5, start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s), which stop at t = 350 (s). First, the interval from t = 10 (s) to t = 110 (s) represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5Mbps; the marking operation is performed initially, but only as temporary marking for sharing the excess bandwidth. Second, at runtime from t = 120 (s) to 210 (s), with soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps; marking in this interval is practically carried out so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220 (s) to t = 350 (s); all the flows do their best to achieve their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs, the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet of a flow in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the marking rate (Mbps) of flow1 through flow10 against simulation time (s).]

Fig 15 Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind the changes in the network load: slowly rising in response to an increased traffic load and slowly falling in response to a decreased traffic load. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet-marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment was performed at runtime from t = 0 (s) to 500 (s). Figure 17 presents the results on how well the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without adequate marking rates for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. However, in Figure 17 (b), each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability if the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Figure 16: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting S1 (total throughput), S2 (total throughput), and S1 (marking rate) in Mbps against simulation time (s).]

Fig 16 Marking packets through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis signifies the simulation time.


[Figure 17: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the bandwidth (Mbps) achieved by flow1 through flow10 for each aggregate flow.]

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput, and the horizontal axis signifies the aggregate flow.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, the implementation of TCP-source marking modules at the source raises the issue of modifying TCP sources in a differentiated services network.

Table 5 The average throughput of the aggregate flows as RTT increases

Aggregate flow (No)       1       2       3       4       5       6
RTT (ms)                  80      100     130     170     220     280
E[Thr] of TSW2CM (Mbps)   4.8989  4.9703  4.9183  4.9391  4.9247  4.9098
E[Thr] of TMS (Mbps)      5.2908  5.2207  5.2373  5.1944  5.2763  5.2858

4.2 Interaction of TCP and UDP flows

In this section, we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulations we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rates of the sources are fixed. In the first scenario, the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario, the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at a rate of 1Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment, we elaborated the strategy of increasing the TCP flows, with a target rate of 1Mbps each, from source 1 to 4, while UDP is constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (ΣTi / C) × 100    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 (%) to 200 (%), as shown in Figure 19, represents the change of the capacity of the bottleneck link, viz. an under-provisioned state. In the figure, the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)

Therefore, an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.
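Expressions (9) and (10) can be written directly as follows (plain Python with illustrative names):

```python
def link_utilisation(target_rates, capacity):
    """rho (%) = sum(Ti) / C * 100 -- expression (9)."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(throughput, target_rate):
    """Excess BW (%) = ExBW / T * 100 -- expression (10), taking the
    excess bandwidth ExBW as throughput minus target rate, so that
    0 % means the flow exactly achieves its target rate."""
    return 100.0 * (throughput - target_rate) / target_rate
```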


[Figure 19: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the average excess bandwidth (%) of TCP and UDP flows against link utilisation (%).]

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis signifies the link utilisation as a percentage.

However, since UDP sources send traffic into the network without any penalties, they achieve their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only dealt with by OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; therefore, the flow is marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results considering the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the results on how well the average throughputs of all individual flows of the aggregated traffic realise their target rates in over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps, respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. Also, the average throughputs of all the flows in both schemes are similar to each other. First, we take into consideration the results of the experiment in an over-provisioned network. In Figure 20 (a), while TCP flows only just accomplish their target rates of 1Mbps, UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, since TMS elaborates a different marking strategy against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network, because UDP flows are transmitted constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that bandwidth in the under-provisioned network is distributed more fairly (about 0.5Mbps each) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the bandwidth (Mbps) of TCP1 through TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases for each aggregate flow.]

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved throughput, and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under variable SLAs. Two important issues have arisen in this scope, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that the two marking modules are able to mark packets properly and to fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the probability of marking according to the changes in the network, so it marked packets in accordance with the reservation rates the users desire to guarantee. It also notified TMS_II, called the edge-embedded marking module, of the marking rates to be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and allocated the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy, when resources in the diffserv network are over-provisioned. Consequently, it mitigated the problems of unfairness under per-aggregation marking and of inefficient utilisation under per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP had almost dominated and achieve its target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they fairly experienced degradation in proportion to their target bandwidths.

In the near future, we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCES

[1] W Feng, D Kandlur, D Saha and K Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc ICNP'98, Austin, TX, Oct (1998) 108-117

[2] W Feng, D Kandlur, S Saha and K Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc NOSSDAV'97, St Louis, MO, May (1997) 279-290

[3] S Floyd and V Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol 3, no 4, Aug (1995) 365-386

[4] J Heinanen, F Baker, W Weiss and J Wroclawski, Assured Forwarding PHB Group, RFC 2597, June (1999)

[5] K Nichols, S Blake, F Baker and D Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec (1998)

[6] S Blake, D Black, M Carlson, E Davies, Z Wang and W Weiss, An Architecture for Differentiated Services, RFC 2475, Dec (1998)

[7] V Jacobson and M Karels, Congestion Avoidance and Control, in Proc SIGCOMM'88, Stanford, CA, Aug (1988) 314-329

[8] I Yeom and A L N Reddy, Realizing Throughput Guarantees in a Differentiated Services Network, in Proc ICMCS'99, Florence, Italy, June (1999) 372-376

[9] N Seddigh, B Nandy and P Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc GLOBECOM'99, Rio De Janeiro, Dec (1999)

[10] C Gbaguidi, H J Einsiedler, P Hurley, W Almesberger and J Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech Report SSC/1998/020, May (1998)

[11] J Ibanez and K Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug (1998)

[12] S H Lee, S J Seok, S J Lee, S K Youm and C H Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc QofIS'01, LNCS 2156, Coimbra, Sept (2001) 207-221


[13] IBHdeA Alves JF de Rezende LFM de Moraes Evaluating Fairness in Aggregated Traffic Marking in Proc GLOBECOMrsquo00 vol1 San Francisco Nov (2000) 445-449

[14] N Seddigh B Nandy P Pieda Study of TCP and UDP Interaction for AF PHB IETF draft draft-nsbnpp-diffserv-udptcpaf-01pdf Aug (1999)

[15] S H Lee S J Seok S J Lee and C H Kang Study of TCP and UDP flows in a Differentiated Services Network In Proc MMNSrsquo01 LNCS 2216 Chicago Oct (2001) 198-203

[16] A Kolarov Study of the TCPUDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks in Proc HPSRrsquo01 Dallas May (2001) 190-196

[17] Y Bernet J Binder S Blake M Carlson B E Carpenter S Keshav E Davies B Ohlman Z Wang W Weiss A Framework for Differentiated Services IETF Internet Draft draft-ietf-diffserv-framework-02txt Feb (1999)

[18] S Blake D Black M Carlson E Davies Z Wang and W Weiss An Architecture for Aifferentiated Services RFC 2475 Aug (1998)

[19] M Vanhala Differentiated Services-architecture S-38128 Teletekniikan erikoistyo (1999)

[20] V Jacobson K Nichols and K Poduri An Expedited Forwarding PHB IETF Internet Draft draft-ietf-diffserv-phb-ef-01txt Nov (1998)

[21] V Jacobson Modified TCP Congestion Avoidance Algorithm end2end-interest mailing list (ftpftpeelblgovemailvanj90apr30txt) April (1990)

[22] V Paxson End-to-End Interent Packet Dynamics In Proc Of ACM SIGCOMM Sept (1997)

[23] M Mathis J Mahdavi S Flyod and A Romanow TCP Selective Acknowledgment Options RFC 2018 Oct (1996)

[24] S Flyod and T Henderson The NewReno Modification to TCPrsquos Fast Recovery Algorithm IETF Internet Draft draft-ietf-tcpimpl-newreno-02txt Feb (1999)

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. of ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, Vol. 24, No. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, Vol. 1, No. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.



LIST OF FIGURES

Fig. 1 Throughput of Single TCP Transfer
Fig. 2 TCP/IP Headers
Fig. 3 The marking/dropping behavior of RED
Fig. 4 Mapping of DSCP-marked streams to PHBs and aggregate routing
Fig. 5 A non-DS-capable and two DS-capable domains connected to backbone network
Fig. 6 Type of Service field in IPv4
Fig. 7 A traffic conditioner and its functional components
Fig. 8 Usage of DS-field in Class Selector PHB
Fig. 9 The "Two Markers System" framework
Fig. 10 The design of the Two Markers System architecture
Fig. 11 Program code 1: TMS_I algorithm (cwnd open)
Fig. 12 Program code 2: TMS_I algorithm (cwnd close)
Fig. 13 Program code 3: TMS_II algorithm
Fig. 14 Simulation topology I using the Two Markers System
Fig. 15 Adequate-marking algorithm
Fig. 16 Marking Packet through scaling factor
Fig. 17 Throughput for marking rate-based fairness strategy
Fig. 18 Simulation topology II using the Two Markers System
Fig. 19 Sharing excess bandwidth according to the link utilization
Fig. 20 Throughput according to network conditions


LIST OF TABLES

Table 1 The four AF classes and the related three drop precedence values
Table 2 TCA specification for BBE service
Table 3 TCA specifications for Leased Line service
Table 4 RED parameter settings in the TMS
Table 5 The average throughput of aggregate flow as increment of RTT


ABBREVIATIONS

AF: Assured Forwarding, a PHB group defined in [4]
ATM: Asynchronous Transfer Mode
BA: Behaviour Aggregate, separate traffic flows receiving the same treatment in a router
DS: diffserv, Differentiated Services
DSCP: Differentiated Services Codepoint, the value of a field in IP packets in DS-capable domains, based on which the forwarding behaviour is applied
EF: Expedited Forwarding, a PHB defined in [20]
IETF: Internet Engineering Task Force
IP: Internet Protocol
IPv4: Internet Protocol version 4, the currently widely used IP version
IPv6: Internet Protocol version 6, the future version of IP under development
ISP: Internet Service Provider
MF: Multi-Field
MPLS: Multiprotocol Label Switching
NE: Network Element, e.g. a router
PHB: Per-Hop Behaviour, the forwarding behaviour applied to each IP packet in routers in DS-capable domains
QoS: Quality of Service
RFC: Request For Comments
RSVP: Resource Reservation Protocol
SLA: Service Level Agreement, an agreement on a service between a provider and a customer
SNMP: Simple Network Management Protocol
TC: Traffic Conditioner
TCA: Traffic Conditioning Agreement, the part of an SLA that describes the agreed service in detail from the technical point of view
TCP: Transmission Control Protocol
TOS: Type of Service
UDP: User Datagram Protocol
VPN: Virtual Private Network


CHAPTER 1 INTRODUCTION

The current Internet is based on the best-effort service model. In this model the service provider allocates bandwidth among all of the instantaneous customers as best it can; that is, all user packets compete equally for network resources. The service provider also attempts to serve all of them without making any explicit commitment as to rate or any other service quality. The best-effort model has been successful so far because the majority of the traffic on the Internet is TCP-based: the TCP end-to-end congestion control mechanism forces traffic sources to back off whenever congestion is detected in the network [8]. However, such dependence on the end systems' cooperation is increasingly becoming unrealistic. Given the current best-effort model with FIFO queuing inside the network, it is relatively easy for non-adaptive sources to gain greater shares of network bandwidth and thereby starve other well-behaved TCP sources. A greedy source, for example, may simply continue to send at the same rate when faced with congestion while other TCP sources back off. The best-effort model is also ill-suited for applications such as real-time audio and video that require explicit bandwidth and bounded-delay guarantees. Moreover, the best-effort model treats all packets equally once they have been injected into the network. Thus it is difficult for Internet Service Providers (ISPs) to provide services that are commensurate with the expectations of consumers who are willing to pay more for a better class of service.

Over the last decade a number of efforts have been made to provide Quality of Service in the Internet. Much of the work has focused on the end-to-end per-session reservation approach, in which an application makes an end-to-end reservation before starting a session. While it makes sense to set up a reservation for a long-lasting session, this approach is not suitable for highly transaction-oriented applications such as browsing the World Wide Web (WWW): the overheads and latency involved in setting up a reservation for each session become prohibitively high. There are also concerns regarding the scalability of providing fine-grained guarantees at the per-session level in the core of the Internet.

The movement toward providing Quality-of-Service (QoS) assurance in the Internet has encouraged research in differentiated services (diffserv, or DS) while the number of users keeps growing. Diffserv is a suitable solution for Quality of Service provisioning because it scales well with an increasing number of network users and it does not alter the current Internet paradigm much [11]. In contrast to the per-session reservation approach, the differentiated services scheme allows service providers to offer different levels of service to a few classes of aggregated traffic flows. An ISP, for example, may offer two levels of service: a premium service for some customers who are willing to pay more, and best-effort service at a lower price. The differentiated services approach attempts to allocate bandwidth to aggregated flows without per-session signaling or maintaining per-session state in the network core, and to offer better services to users willing to pay more under long-term service contracts.

(The journal model of this thesis is Lecture Notes in Computer Science.)

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment that should be supplied to the packet inside the domain. However, because of the limited number of bits available for use in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks called per-hop behaviors (PHBs), which are used by routers to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) [20] and the Assured Forwarding (AF) PHBs [4]. The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. In order to ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, consists of four independently forwarded classes, each with two or three drop-precedence levels. The idea behind AF is to preferentially drop best-effort packets, and packets which are outside of their contract, when congestion occurs.
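As a quick illustration of how the AF classes and drop precedences map onto DS-field values, the recommended codepoints of RFC 2597 follow a simple pattern: the DSCP for AFxy is 8x + 2y. This sketch is for reference only and is not part of the thesis's mechanism.

```python
# Recommended AF codepoints from RFC 2597: DSCP(AFxy) = 8*x + 2*y,
# where x is the class (1..4) and y the drop precedence (1..3).
def af_dscp(af_class: int, drop_prec: int) -> int:
    if not (1 <= af_class <= 4 and 1 <= drop_prec <= 3):
        raise ValueError("AF defines classes 1-4 with drop precedences 1-3")
    return 8 * af_class + 2 * drop_prec

print(af_dscp(1, 1))  # AF11 -> 10 (binary 001010)
print(af_dscp(4, 3))  # AF43 -> 38 (binary 100110)
```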

In this thesis we consider a form of better-than-best-effort service called the "Assured Service". The Assured Service follows expected capacity profiles that are statistically provisioned. Packets are treated preferentially according to the dropping probability applied to the best-effort queue. The assurance of service comes from the expectation that the traffic is unlikely to be dropped as long as it stays within the negotiated capacity profile. The building blocks of this service include a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in-profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise the packet is marked OUT (out-of-profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or token bucket control module. A differentiated dropping algorithm such as RIO (Random Early Detection with IN/OUT) is provided in the core routers of the network. In particular, the OUT packets are preferentially dropped upon evidence of congestion at the bottleneck, before the IN packets; only after dropping all incoming OUT packets are IN packets discarded. With this dropping policy, the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.
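The token bucket control module mentioned above can be sketched as follows. This is an illustrative toy, not the thesis's marker: tokens accumulate at the contracted rate up to the bucket depth, a packet that finds enough tokens is marked IN, and the rest are marked OUT.

```python
# Illustrative sketch of a token-bucket profile meter for the Assured Service:
# packets within the contracted rate are marked IN, excess packets OUT.
class TokenBucketMarker:
    def __init__(self, rate_bps: float, bucket_bytes: int):
        self.rate = rate_bps / 8.0        # token fill rate in bytes/second
        self.depth = bucket_bytes         # bucket capacity in bytes
        self.tokens = float(bucket_bytes)
        self.last = 0.0                   # time of the last update (seconds)

    def mark(self, now: float, pkt_bytes: int) -> str:
        # Refill tokens for the elapsed interval, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return "IN"   # within the contracted profile
        return "OUT"      # out of profile: dropped first by RIO under congestion
```

For example, with an 8 kbit/s contract and a 1000-byte bucket, a second back-to-back 1000-byte packet at t = 0 is marked OUT, but after one second of refill the next packet is IN again.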

In [11] the authors showed that using a simple token bucket marker for the above Assured Service leaves TCP struggling to realise the minimum assured rate. They attributed the cause of this behaviour to TCP's complex response, primarily to packet losses. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. The exponential decrease (halving the congestion window) is required to avoid congestion collapse, and TCP treats a packet drop as an indication of congestion [7]. However, in the diffserv network this additive-increase and multiplicative-decrease behaviour makes it hard to protect the reservation rate: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may in the end never reach its reservation rate. The overall performance was therefore affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees. This is referred to as TCP dynamics over the Assured Service.

In this thesis we take into consideration the problem of TCP dynamics and the differentiated drop policies of the network in realizing the reserved throughputs, and propose a newly modified scheme, called the Two Markers System, for improving the realisation of target rates in a diffserv network. In addition, the issue of fairness among users is important in this diffserv context: since we expect that the differentiation in service will be based on some kind of pricing mechanism, it is necessary that the resource allocation be commensurate with how the service is priced. In this thesis we also examine how fairly the proposed scheme allocates bandwidth to flows.

One of the key problems identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows [9, 11, 14]. The interaction of TCP and UDP under the AF PHB has become an important issue for fairness in the diffserv context, because TCP flows are penalised more than UDP flows with respect to service degradation and fail to accomplish their target rates. Therefore we address the issue of interaction between TCP and UDP flows by regulating their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP influences over Assured Services, considers the issue of fairness between TCP and UDP at a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents results of the above algorithms in simulated environments in which these traffic types coexist and analyzes the simulation results. Chapter 5 concludes our work.


CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point in time, the rate at which TCP sends its packets eventually exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow-start, and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants of TCP. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion avoidance phase, in which the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above network capacity, packets are eventually dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred, immediately reduces its sending rate in half by halving its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.

When congestion is severe enough that packet loss cannot be inferred in such a manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm. In particular, if the sender continually sends the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.

Fig. 1 Throughput of Single TCP Transfer

Figure 1 shows graphically how TCP slow-start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion the TCP sender enters the congestion avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP thus oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.
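The sawtooth behaviour of Figure 1 can be sketched in a few lines. This is a toy model, not a protocol implementation: slow-start doubles cwnd each round-trip time until ssthresh, congestion avoidance adds one segment per RTT, and a loss (here, cwnd exceeding an assumed bottleneck capacity) halves the window.

```python
# Toy sketch of TCP's cwnd sawtooth: slow-start, congestion avoidance,
# and window halving on loss. "capacity" is an assumed bottleneck limit.
def tcp_cwnd_trace(rtts: int, ssthresh: int, capacity: int):
    cwnd, trace = 1, []
    for _ in range(rtts):
        trace.append(cwnd)
        if cwnd > capacity:                # bottleneck overwhelmed: loss detected
            ssthresh = max(cwnd // 2, 1)   # fast recovery: halve the window
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow-start: exponential growth
        else:
            cwnd += 1                      # congestion avoidance: linear growth
    return trace

print(tcp_cwnd_trace(12, ssthresh=8, capacity=10))
```

The trace rises exponentially to ssthresh, climbs linearly past the capacity, then drops to half and resumes its linear climb, reproducing the steady-state oscillation between w and w/2 described above.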

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One modification that has been proposed is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of which segments it is missing. With this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and should thus allow the TCP sender to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The NewReno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern in which the congestion window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion avoidance phase so that it only performs its linear increase when the network is not congested: if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential for improving loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections that are multiplexed across the link.

Fig. 2 TCP/IP Headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem, since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-Capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the Congestion Experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver in order to signal the receiver that it has in fact reduced its sending rate.
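The router-side decision just described can be summarized in a short sketch. The bit positions here are illustrative placeholders, not the actual codepoint layout standardized later (RFC 3168 defines ECT(0)/ECT(1) combinations); the point is only the control flow: mark ECN-capable packets, drop the rest.

```python
# Hedged sketch of the router-side ECN decision on incipient congestion.
# Bit positions are illustrative, not the standardized codepoints.
ECT = 0b10  # ECN-Capable Transport bit
CE = 0b01   # Congestion Experienced bit

def on_congestion(ecn_field: int):
    """Return the updated ECN field, or None if the packet is dropped."""
    if ecn_field & ECT:           # flow understands ECN: signal instead of drop
        return ecn_field | CE
    return None                   # not ECN-capable: fall back to dropping
```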

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, in order to allow them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses them to determine whether to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and increases linearly to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.
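The probability curve of Fig. 3 and the EWMA update can be written down directly. This is a sketch of the basic RED rules only (it omits details such as the count-based probability correction in the original paper); the weight value is an illustrative assumption.

```python
# Sketch of RED's marking/dropping probability curve from Fig. 3: zero below
# min_th, linear up to max_p at max_th, and 1.0 (drop/mark all) beyond max_th.
def red_prob(avg_qlen: float, min_th: float, max_th: float, max_p: float) -> float:
    if avg_qlen <= min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

def ewma(avg: float, qlen: float, weight: float = 0.002) -> float:
    # Exponentially weighted moving average of the instantaneous queue length;
    # the weight of 0.002 is an illustrative choice, not a mandated value.
    return (1.0 - weight) * avg + weight * qlen
```

Between minth and maxth the probability climbs from 0 toward maxp; a queue average halfway between the thresholds yields maxp/2.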

Fig. 3 The marking/dropping behavior of RED

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour wished for. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs) and on a per-packet basis, to aggregates of traffic streams in the nodes of DS-capable networks. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies a service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. Different types of service will not be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig. 4 Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig. 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to services for which he has made an agreement with a service provider. The Service Level Agreements (SLAs) formed specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules which are applied to traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with free resources in the target network, and scalability is obtained. A traffic conditioner is an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs don't solely define services, but they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain, the latter handle traffic leaving the domain. In practice the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig. 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e. the interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service provisioning approaches.

Fig. 5 A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig. 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig. 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the case where the network is non-DS-capable.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value from the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to a level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The presented backbone DS router always polices incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported in the downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed between inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide what services are available in their networks; this in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet and in IPv6 the Traffic Class octet acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, these widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling with regard to Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs as configured by a network administrator.
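As a small illustration of the bit layout just described (the helper names are ours, not from the thesis), the DSCP occupies the upper six bits of the former TOS octet:

```python
# Minimal sketch: the DSCP occupies bits 0-5 of the DS field, i.e. the
# upper six bits of the old IPv4 TOS octet. Helper names are illustrative.

def dscp_to_tos(dscp: int) -> int:
    """Place a 6-bit DSCP into the TOS/DS octet (2 low bits left unused)."""
    assert 0 <= dscp < 64
    return dscp << 2

def tos_to_dscp(tos: int) -> int:
    """Recover the 6-bit DSCP from a TOS/DS octet, ignoring the unused bits."""
    return (tos >> 2) & 0x3F

ef_dscp = 0b101110                 # the EF codepoint recommended in the text
tos = dscp_to_tos(ef_dscp)
assert tos == 0b10111000
assert tos_to_dscp(tos) == ef_dscp
```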

What traffic conditioning basically means is treating traffic differentially so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but it is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in such an order that best satisfies the forwarding (i.e., service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
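The difference between the two classifier types can be sketched as follows (an illustrative sketch only; the type and field names are ours, not from the thesis):

```python
# Illustrative sketch: a BA classifier keys on the DSCP alone, while an
# MF classifier keys on several header fields plus the DSCP.

from typing import NamedTuple

class PacketHeader(NamedTuple):
    src: str
    dst: str
    sport: int
    dport: int
    proto: int
    dscp: int

def ba_classify(pkt: PacketHeader) -> int:
    """Behaviour Aggregate: the DS codepoint is the whole key."""
    return pkt.dscp

def mf_classify(pkt: PacketHeader) -> tuple:
    """Multi-Field: addresses, ports, protocol ID and DSCP select the aggregate."""
    return (pkt.src, pkt.dst, pkt.sport, pkt.dport, pkt.proto, pkt.dscp)

pkt = PacketHeader("10.0.0.1", "10.0.1.2", 1234, 80, 6, 0b101110)
assert ba_classify(pkt) == 0b101110
assert mf_classify(pkt)[0] == "10.0.0.1"
```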

Fig 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
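A common way to realize such a meter is a token bucket; the sketch below (our own illustration with assumed parameters, since the thesis does not prescribe a metering algorithm) flags each packet as in or out of profile:

```python
# Token-bucket meter sketch: tokens accumulate at the profile rate up to a
# burst-sized bucket; packets that find enough tokens are in profile.

class TokenBucketMeter:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0        # token rate in bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes         # bucket starts full
        self.last = 0.0

    def check(self, size: int, now: float) -> bool:
        """Return True if a packet of `size` bytes is in profile at time `now`."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

meter = TokenBucketMeter(rate_bps=8000, burst_bytes=1500)  # 1 KB/s profile
assert meter.check(1000, now=0.0) is True    # fits in the initial burst
assert meter.check(1000, now=0.0) is False   # bucket exhausted: out of profile
assert meter.check(1000, now=1.0) is True    # one second refills 1000 bytes
```

Out-of-profile packets would then be handed to the marker (for re-marking), the shaper (for delaying) or the dropper, as described above.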

Traffic is conditioned on either end of a boundary link. Boundary nodes may refer to the negotiated SLA to determine whether it is their responsibility to condition the ingress/egress traffic to conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic can't be assumed to conform to the TCA; rather, policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited number of conditioning rules (i.e., some but not all of the TC components in Fig 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on their similar properties, such as the bandwidth requirements or behavioral characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to the standardized ones, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may serve a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig 6) is limited and room is left for future usage, the codepoint mappings can be made freely by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. PHB specifications themselves give behavioral characteristics instead of implementation guidelines. This leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented later in this section. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when implemented, the specifications must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility. The Default PHB exists for that purpose: it provides the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e., best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that doesn't conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as receiving the following kind of service: the network will deliver as many of these packets as possible and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or locally used PHB, it should be mapped to the Default PHB.

The mere Default PHB is not enough to provide sufficient backward compatibility. The precedence bits (see Fig 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.
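The eight Class Selector codepoints of the form xxx000 can be enumerated mechanically (a small illustration; the helper name is ours):

```python
# Sketch: a Class Selector codepoint is the 3-bit IPv4 precedence value
# followed by three zeros, giving the eight patterns xxx000.

def class_selector_codepoint(precedence: int) -> str:
    """Map an IPv4 precedence value (0-7) to its Class Selector DSCP."""
    assert 0 <= precedence <= 7
    return format(precedence, "03b") + "000"

codepoints = [class_selector_codepoint(p) for p in range(8)]
assert codepoints[0] == "000000"   # coincides with the Default PHB codepoint
assert codepoints[7] == "111000"
assert len(set(codepoints)) == 8   # eight distinct codepoints in all
```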

Fig 8 Usage of DS-field in Class Selector PHB

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(i) how much forwarding resources have been allocated to the AF class that the packet belongs to; packets in one class are forwarded independently from the others and within the service rate (bandwidth) that has been configured for the class;
(ii) the current load of the AF class; and
(iii) in case of congestion within the class, the drop precedence of the packet; packets with a higher drop precedence value are discarded preferentially.

Table 1 The four AF classes and the related three drop precedence values

                             Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)      001010    010010    011010    100010
Medium Drop Precedence (2)   001100    010100    011100    100100
High Drop Precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
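The codepoints in Table 1 follow a regular pattern: three class bits, two drop-precedence bits, and a trailing zero. A small sketch (the helper name is ours) that reproduces the table:

```python
# Sketch: an AFnm codepoint is the 3-bit class number n (1-4) followed by
# the 2-bit drop precedence m (1-3) and a trailing zero, as in Table 1.

def af_codepoint(n: int, m: int) -> str:
    assert 1 <= n <= 4 and 1 <= m <= 3
    return format(n, "03b") + format(m, "02b") + "0"

assert af_codepoint(1, 1) == "001010"   # AF11, low drop precedence
assert af_codepoint(2, 2) == "010100"   # AF22, medium drop precedence
assert af_codepoint(4, 3) == "100110"   # AF43, high drop precedence
```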

One major group of services is that requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data: speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that at every transit node the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed a rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate in diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that in general diffserv services are for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between the interface A and some egress interface B

There are some things common to both examples. First of all, policers need to be configured at traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus gets reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2 TCA specification for BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.
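The Table 2 conditioner can be sketched with a token bucket (an illustration under assumed burst parameters, not the thesis's implementation): in-profile traffic keeps the AF13 mark, excess traffic is re-marked to AF11.

```python
# Sketch of the BBE edge policer per Table 2: traffic within the profile
# keeps AF13, excess traffic is re-marked to AF11. Names are illustrative.

AF13, AF11 = "001110", "001010"

class BBEMarker:
    def __init__(self, rate_bps: float = 1_000_000, burst_bytes: float = 10_000):
        self.rate = rate_bps / 8.0          # token rate in bytes per second
        self.tokens = burst_bytes
        self.burst = burst_bytes
        self.last = 0.0

    def mark(self, size: int, now: float) -> str:
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return AF13                      # within the subscribed profile
        return AF11                          # excess traffic, re-marked

m = BBEMarker(rate_bps=8_000, burst_bytes=1_000)   # small numbers for the demo
assert m.mark(1_000, now=0.0) == AF13
assert m.mark(1_000, now=0.0) == AF11
```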

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3 TCA specifications for Leased Line service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
EF mark      500 Kbps       Egress point B         Discard
EF mark      250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. Customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C, respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both the egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.
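As a sanity check on the 750 Kbps figure (a toy computation; the router names and paths below are assumed purely for illustration), a shared router must be provisioned for the sum of the EF rates whose paths cross it:

```python
# Toy check: EF provisioning at a router is the sum of the EF leased-line
# rates whose paths cross it. The topology below is assumed, not from the text.

leased_lines = {("A", "B"): 500_000, ("A", "C"): 250_000}     # bps, per Table 3
paths = {("A", "B"): ["R1", "R2"], ("A", "C"): ["R1", "R3"]}  # R1 is shared

def provisioning(router: str) -> int:
    return sum(rate for line, rate in leased_lines.items()
               if router in paths[line])

assert provisioning("R1") == 750_000   # both leased lines cross R1
assert provisioning("R2") == 500_000   # only the A-B line crosses R2
```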

23 Related Work

Recently there has been a substantial amount of simulation work on marking and dropping policies focused on RED-based differentiated services approaches.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed scheme decreased TCP impacts over the Assured Service. They, however, didn't consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP didn't realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. The authors, via simulation work, presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on the aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth; in this case, "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

31 The framework of Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays different roles in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The "Two Markers System" framework

First, TMS_I carries out two main roles. One is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the summation of iwnd and ownd. TMS_I determines the window (iwnd or ownd) and reduces its size according to whether the dropped packets were IN-marked or OUT-marked whenever congestion occurs in the network, and so TMS_I decreases TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) keeps up the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to properly mark packets. So to speak, TMS_I can be not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed upon with the service provider. The meter of TMS_II measures the number of marked packets (i.e., the marking information) from sources, and then the marker re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. For this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key point of the fairness strategy is the marking rate (i.e., the information on a flow's demand for bandwidth) of traffic flows from users. TMS_II re-marks (or marks) aggregated flows toward the total target rate of all the flows, based on the probability of marking reported from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10 The design of the Two Markers System architecture

32 The crux of fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and Cj represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is a reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) of the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = \sum_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. Therefore the excess bandwidth at these bottleneck links is given by

E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i    (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C - R = C_1 - \sum_{i=1}^{n} R_i    (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates in order to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}, where k, k = 1, 2, ..., l, represents the k-th ingress point. The excess bandwidth is then given by

E = C - R = C - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}    (3)


Here the l-th ingress point has to be aware of the reservation rates of as many as l-1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System could therefore be implemented under various SLA scopes, but for the above reason we assume in this thesis that the scope of a service includes the first three cases and excludes the last. A flow's target rate at the edge of the network is given by Eq. (4).

Here Xmi is proportional to the probability of marking from a source; that is, it is determined by the fraction between the marking rate of each flow and the average marking rate. The development of Xmi is as follows. First, the probability of marking for each flow i is the number of marked packets (Nmi) divided by the throughput (Thri), or equivalently the In-profile window divided by the congestion window, as in Eq. (5), where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[mi] of all the flows at the edge of a network is given by Eq. (6), where Markp also represents the probability of marking in the TMS_I algorithm of Section 33. Finally, Xmi is developed using the fraction between (5) and (6).

T_i = R_i + X_{mi} \times E = R_i + X_{mi} ( C - \sum_{i=1}^{n} R_i )    (4)

m_i = N_{mi} / Thr_i = iwnd / cwnd    (5)

E[m_i] = (1/n) \sum_{i=1}^{n} m_i = (1/n) \sum_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) \sum_{i=1}^{n} Mark_{p,i}    (6)


Xmi, given by Eq. (7), represents a relative value against the datum point E[mi]; it varies from 0 to 1 depending upon mi and E[mi]. Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
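Putting Eqs. (4)-(7) together, a toy computation (all numbers are made up purely for illustration) shows how each flow's edge target rate becomes its reservation plus a marking-rate-proportional share of the excess bandwidth:

```python
# Toy example of the fairness strategy: T_i = R_i + X_mi * E, where
# X_mi = m_i / sum(m_i) (Eq. 7) and E = C - sum(R_i) (Eq. 1). Numbers made up.

R = [1.0, 2.0, 3.0]          # reservation rates R_i (Mbps)
m = [0.2, 0.3, 0.5]          # marking rates m_i = iwnd_i / cwnd_i per flow
C = 10.0                     # bottleneck capacity (Mbps)

E = C - sum(R)                               # excess bandwidth, Eq. (1)
X = [mi / sum(m) for mi in m]                # fairness components, Eq. (7)
T = [Ri + Xi * E for Ri, Xi in zip(R, X)]    # edge target rates, Eq. (4)

assert E == 4.0
assert all(abs(t - e) < 1e-9 for t, e in zip(T, [1.8, 3.2, 5.0]))
assert abs(sum(T) - C) < 1e-9   # the targets exactly consume the capacity
```

Note how the flow with the highest marking rate (the strongest demand signal) receives the largest share of the excess bandwidth, which is the intent of the strategy described above.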

33 Adequate-Marking strategy Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) consists of two windows in this algorithm [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where marking policies are different from the one studied here. We improve the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate up to the bandwidth the source wants to have guaranteed.

X_{mi} = m_i / (n E[m_i]) = m_i / \sum_{i=1}^{n} m_i    (7)


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking (Markp(init))? It is determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as given in Eq. (8).

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        } else {
            ownd = cwnd
        }
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


Where it represents the initial rate that user wants to guarantee in order to realise its target rate

The window size increases or decreases according to the measured throughput against the reservation rate That is a TMS_I proportionally marks packets according as whether the measured throughput (Thr) at the TCP source reaches its target rate or not If the measured throughput doesnrsquot reach sourcersquos target rate the TMS_I increases iwnd size as much as the marking rate needed Otherwise it increases ownd size so as to allow the other sourcersquos traffics or best-effort traffics to share the excess bandwidth Although the proposed congestion algorithm has two windows ie iwnd and ownd they act like the one window as traditional congestion algorithm the summation of iwnd and ownd is cwnd Therefore the increment of iwnd leads to the decrement of ownd to the degree When either window becomes above its threshold size issthresh or ossthresh it increases linearly in this case congestion avoidance takes over and slowly increases each window size to grope for more bandwidth becoming available on the path Note that the summation of increasing rate of each window size during digging out resources is 1cwnd

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic. It also addresses a second TCP dynamic (the so-called boomerang effect), caused by excessive dropping of IN-marked packets at the core of a network when the network is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I decreases the iwnd size according to its threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In effect, the TCP-source marking module (TMS_I) maintains a more appropriate marking rate than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While an established edge marker is fairly effective at keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are then dropped at the core of the network when congestion occurs, and the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size is kept low, and each source ultimately fails to achieve its target rate. On the other hand, because TMS_I recognises packet loss and determines whether the lost packet was sent marked or unmarked, it can regulate the marking rate according to the condition of the network. The probability that IN-marked packets are excessively dropped at the core is thus reduced, and the congestion window is not decreased as often as with the established edge marker.

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd/2
        ownd = ossthresh
    else
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

iwnd: IN-profile window
ownd: OUT-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the i-th flow
Ti: target rate of the i-th flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the out-of-profile window (ownd) is halved whenever an OUT packet loss occurs, since such a loss indicates temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion, and in that case the whole congestion window is halved. The modified TCP congestion algorithm thus compensates for the loss rates caused by the conventional congestion algorithm, which has no distinct mechanism for reducing the congestion window.
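Following the same reading, the reduction rule of Figure 12 might be sketched as below (`out_loss` flags whether the lost packet was OUT-marked; everything beyond the Figure 12 identifiers is an assumption):

```python
def on_triple_dupack(cwnd, markp, out_loss):
    """TMS_I reduction on the third duplicate ACK (program code 2 sketch)."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if out_loss:                 # OUT loss: temporary congestion,
        ownd = ownd / 2          # so only the OUT-of-profile share is halved
        cwnd = iwnd + ownd
        return cwnd, iwnd / cwnd
    cwnd = cwnd / 2              # IN loss: acute congestion, halve everything
    return cwnd, markp           # issthresh/ossthresh follow the new cwnd

print(on_triple_dupack(10.0, 0.5, out_loss=True))    # (7.5, 0.666...)
print(on_triple_dupack(10.0, 0.5, out_loss=False))   # (5.0, 0.5)
```

An OUT loss therefore raises the marking probability (the IN share survives intact inside a smaller window), while an IN loss leaves the probability unchanged and shrinks both shares.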

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Markp) needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point at the edge-embedded marking module in subsection 3.4. Consequently, the strategy of providing a datum point that carries per-user information is able to alleviate the unfairness introduced within aggregated flows by per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to share the bottleneck bandwidth fairly. To accomplish this fairness strategy, called the marking rate-based fairness strategy, it must cooperate closely with the TCP-source marking module. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which are used in the fairness component as stated in subsection 2.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants guaranteed from the sources.

We assume that an edge router receives information on the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall expression (4). As stated in subsection 2.2, if the network is over-provisioned, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents an increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the criterion of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets arriving from a source exceeds the threshold value, i.e. the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate; it is therefore marked more as IN and receives a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to detect congestion; the other is to ensure the coherence of the Adequate-Marking strategy. A congestion situation is in fact judged by comparing the iwnd size (or ownd size) at time N with that at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case, and shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding that there is no congestion, it measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor. In this case the fairness factor α is computed as follows:


α = 1 - Thr_i / Ti. Otherwise it decreases the marking rate by α, to provide other flows with the chance of sharing the excess bandwidth.

Fig 13 Program code 3 TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)
    for (i = 1 to n)
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)
    exit
else        /* iwnd(N) == iwnd(N-1)/2 */
    for (i = 1 to n)
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi: marking rate of the i-th flow
Ti: target rate of the i-th flow
Ri: reservation rate of the i-th flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half the ownd size at time N-1, while the iwnd size at time N is not equal to half the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (the marking information), and so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and the congestion is acute. The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput exceeds the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to reach its target rate, owing to the repeated reduction of sending rates by the congestion control mechanism. TMS_II therefore compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, called a complement factor, if the throughput does not realise the user's reservation rate. In this instance the complement factor is β = 1 - Thr_i / Ri. Otherwise no re-marking is performed.
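Putting the two factors together, one TMS_II re-marking step for a single flow could look as below (a sketch of program code 3; the congestion classification and the use of an absolute value for the decrease case are my assumptions):

```python
def tms2_update(m, thr, ri, x_m, excess, congestion):
    """Re-mark one flow at the edge (program code 3 sketch).

    congestion is 'none', 'soft' or 'acute', judged from whether
    ownd and iwnd were halved since the previous measurement."""
    if congestion == 'soft':
        return m                       # edge rate already matches the sources
    if congestion == 'none':
        ti = ri + x_m * excess         # target includes a share of the excess
        factor = abs(1 - thr / ti)     # fairness factor alpha
        return m + factor * m if thr < ti else m - factor * m
    # acute congestion: fall back to the reservation rate
    if thr < ri:
        beta = 1 - thr / ri            # complement factor beta
        return m + beta * m
    return m

# A flow below its target in an over-provisioned network is marked more:
m = tms2_update(m=0.4, thr=2.0, ri=3.0, x_m=0.5, excess=2.0, congestion='none')
print(m)   # alpha = 1 - 2/4 = 0.5, so the marking rate rises toward 0.6
```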

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. We can therefore say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets into OUT. These re-marked packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking this for acute congestion, will halve their iwnd size; TMS_I then fails to alleviate the TCP dynamics over the assured service. Conversely, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed during grave congestion, the TCP sources, mistaking this for temporary congestion, will halve only ownd, which delays the relief of congestion.

For this reason TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked OUT at the edge of the network but IN at the source; OUT-IN is its counterpart. We assume that the drop precedence the source decides must not change, and that the strategy TMS_II elaborates has to respect it. The order of dropping packets is: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colours these correspond to red, yellow, and green respectively. The occasions for re-marking divide into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT at the sources into IN, so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves its ownd size. Next, TMS_II turns packets that were IN at the sources into OUT if the congestion window has not been halved and the flow has already achieved its target rate, so the drop precedences are IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of such packets by one step, allowing other flows to share the excess bandwidth; RIO at the core drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, and OUT at the core; when an OUT-IN packet is dropped, TMS_I halves the ownd size. By lowering the drop precedence of OUT packets one step, it tries to compensate for the loss rate incurred during congestion.
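The mapping from a packet's source mark and edge mark to its drop precedence, as described above, can be written out explicitly; a small sketch (the tuple representation is my own):

```python
# Drop precedence from (mark at source, mark at edge); section 3.5 scheme.
# DP2 (red) is dropped first, DP0 (green) last; DP1 (yellow) covers the
# re-marked IN-OUT and OUT-IN packets.
PRECEDENCE = {
    ('IN',  'IN'):  'DP0',    # green:  in profile everywhere
    ('IN',  'OUT'): 'DP1',    # yellow: IN-OUT, re-marked down at the edge
    ('OUT', 'IN'):  'DP1',    # yellow: OUT-IN, re-marked up at the edge
    ('OUT', 'OUT'): 'DP2',    # red:    out of profile everywhere
}

def drop_order(packets):
    """Sort packets from first-dropped to last-dropped under RIO."""
    rank = {'DP2': 0, 'DP1': 1, 'DP0': 2}
    return sorted(packets, key=lambda p: rank[PRECEDENCE[p]])

order = drop_order([('IN', 'IN'), ('OUT', 'OUT'), ('IN', 'OUT')])
print(order)   # OUT-everywhere first, then the re-marked packet, then IN
```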

3.6 The issue of interaction between TCP and UDP flows

In this section we consider the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class in an over-provisioned network, the TCP flows will reduce their rates on packet drops while the UDP flows will not, and will seize the entire excess bandwidth, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it then becomes impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile, and are thus placed at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. At the core of the network, the RIO queue always drops UDP packets marked OUT (red, DP2) preferentially, but lowers the drop precedence of TCP packets by the scaling factors relative to them. It thereby tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows would otherwise dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present simulation results for the two marking algorithms described in the previous chapter. The simulations were done with the network simulator ns-2 (version 2.1b8a), using a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, kept below the bottleneck capacity so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate rises above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters of the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

        min_th    max_th    Pmax
DP0     100       400       0.02
DP1     100       400       0.05
DP2     50        400       0.1

First of all, we investigate the simulation results for the Adequate-Marking algorithm, i.e. the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps, and a bottleneck link capacity of 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s; the latter stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps; marking is performed initially, but only temporarily, for sharing the excess bandwidth. Second, at runtime from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps; marking in this interval is carried out chiefly to realise the reservation rate of each user. Next, grave congestion appears from t = 220 s to t = 350 s; all the flows do their best to meet their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to the congestion remaining after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the sources' Adequate-Marking algorithm, Figure 15 (b) shows that fewer marked packets are needed to accomplish the target rate than in the case of Figure 15 (a). In both graphs, the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure: per-flow marking-rate curves for flow1 through flow10 over the simulation time; a) Time Sliding Window Two Color Marker, b) Two Markers System.]

Fig 15 Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics, achieved by governing the marking rate. S1 is a TCP flow sending indefinitely with a target rate of 3 Mbps, and S2 is a TCP flow that turns on and off every 50 seconds, also with a 3 Mbps target bandwidth; in this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate: it lags behind changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of resources by the amount of the increase, and S1 does not fairly share the bottleneck link bandwidth with S2. The simulation was repeated to investigate the effect of TMS, shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when TMS detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bottleneck link bandwidth more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 3.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, giving an under-subscribed network. The experiment runs from t = 0 s to 500 s. Figure 17 presents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, shown in Table 5, are similar to one another. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without a marking rate adequate to the changed network load; that is, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability whenever the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Figure: total throughput of S1 and S2 and the marking rate of S1 over the simulation time; a) Time Sliding Window Two Color Marker, b) Two Markers System.]

Fig 16 Marking packets through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time.


[Figure: achieved bandwidth of flow1 through flow10 for each aggregate flow; a) Time Sliding Window Two Color Marker, b) Two Markers System.]

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flow.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing a TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5. The average throughput of each aggregate flow as RTT increases

Aggregate flow (No.)        1        2        3        4        5        6
RTT (ms)                    80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)     4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)        5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous chapter. For the simulations we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams, generated by infinite file transfers, originate at sources 1 through 4 and are destined for nodes 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined for nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows send traffic at a rate of 1 Mbps. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing the excess bandwidth according to the link utilisation. For this experiment we increased the number of TCP flows with a target rate of 1 Mbps from sources 1 to 4, while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take account of network dynamics. We use the link utilisation (ρ), as in [16]:

ρ (%) = (Σi Ti / C) x 100        (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link, respectively. The interval from ρ = 100 % to 200 % shown in Figure 19 represents the change of the bottleneck link capacity, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) x 100        (10)

Excess bandwidth of 0 % therefore denotes the target rate of each flow. First, as shown in Figure 19 (a), we examine TSW2CM under a changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared with the UDP flows, and fall below their target rates at the saturation point (80 %) of the network load, as shown in the figure.
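Expressions (9) and (10) reduce to two one-line percentages; a quick sketch with the numbers of this scenario (function names are mine):

```python
def link_utilisation(targets, capacity):
    """Expression (9): rho(%) = sum(T_i) / C * 100."""
    return 100.0 * sum(targets) / capacity

def excess_bw_pct(achieved, target):
    """Expression (10): Excess BW(%) = ExBW / T * 100, with ExBW = Thr - T."""
    return 100.0 * (achieved - target) / target

targets = [1.0] * 6                      # four TCP and two UDP flows at 1 Mbps
print(link_utilisation(targets, 8.0))    # 75.0  -- over-provisioned
print(link_utilisation(targets, 3.0))    # 200.0 -- under-provisioned
print(excess_bw_pct(1.3, 1.0))           # ~30: the flow sits 30 % above T
```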


[Figure: average excess bandwidth (%) of TCP and UDP flows versus link utilisation (%); a) Time Sliding Window Two Color Marker, b) Two Markers System.]

Fig 19 Sharing the excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis the link utilisation in percent.

However, since UDP sources push traffic into the network without any penalty, they meet their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation UDP flows experience is a diminished share of the excess bandwidth. In the Two Markers System we assume the following: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and receives a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, shown in Figure 19 (b), is fairer than that in Figure 19 (a). That is, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside their contracts, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows for link utilisation ρ = 60 % and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results concerning the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: TSW2CM and the marking strategy of TMS. Figure 20 shows how well the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The RTT parameters follow the settings of subsection 4.1, and the average throughputs of all the flows in the two schemes are similar to each other. First, consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows merely try to accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. In Figure 20 (b), on the other hand, the TCP flows fairly share the excess bandwidth of about 0.3 Mbps, thanks to the different marking strategy elaborated against UDP flows; all the flows satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that in the under-provisioned network the bandwidth is distributed more fairly (about 0.5 Mbps per flow) than in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure: per-flow bandwidth of TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases; a) Time Sliding Window Two Color Marker, b) Two Markers System.]

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under variable SLA scopes. Two important issues have arisen in this area, and much research has been carried out on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate, owing to weaknesses in the TCP congestion control algorithm; the latter gives rise to the problem of fairly allocating the excess bandwidth between the two kinds of flows, because UDP flows take more than their share.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and to allocate the bandwidth of a bottleneck link fairly among flows according to their target rates. First, TMS_I, using the Adequate-Marking algorithm, properly adjusts the probability of marking according to changes in the network, so that it marks packets in proportion to the reservation rates the users want guaranteed. It also notifies TMS_II, the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bottleneck link bandwidth to flows according to proportional fairness, the marking rate-based fairness strategy. Consequently, it mitigates the unfairness of per-aggregation marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput of TCP flows in a RIO-based diffserv network. As expected, the simulation results showed that the Two Markers System reduces TCP impacts over the assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also analysed the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP flows had almost dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they fairly experienced degradation in proportion to their target bandwidth.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, D. Saha and K. Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, Assured Forwarding PHB Group, RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker and D. Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, Congestion Avoidance and Control, in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, Realizing Throughput Guarantees in a Differentiated Services Network, in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, Study of TCP and UDP Interaction for AF PHB, IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Dec. 1998.

[19] M. Vanhala, Differentiated Services Architecture, S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols and K. Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, End-to-End Internet Packet Dynamics, in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proc. ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, Random Early Detection Gateways for Congestion Avoidance, IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, TCP-friendly Traffic Marker for IP Differentiated Services, in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.



LIST OF TABLES

Table 1 The four AF classes and the related three drop precedence values ..... 21

Table 2 TCA specification for BBE service ..... 23

Table 3 TCA specifications for Leased Line service ..... 23

Table 4 RED parameter settings in the TMS ..... 41

Table 5 The average throughput of aggregate flow as increment of RTT ..... 46


ABBREVIATIONS

AF    Assured Forwarding, a PHB group defined in [4]
ATM   Asynchronous Transfer Mode
BA    Behaviour Aggregate: separate traffic flows receiving the same treatment in a router
DS    diffserv, Differentiated Services
DSCP  Differentiated Services Codepoint: value of a field in IP packets in DS-capable domains, based on which the forwarding behaviour is applied
EF    Expedited Forwarding, a PHB defined in [20]
IETF  Internet Engineering Task Force
IP    Internet Protocol
IPv4  Internet Protocol version 4, the currently widely used IP version
IPv6  Internet Protocol version 6, the future version of IP
ISP   Internet Service Provider
MF    Multi-Field
MPLS  Multiprotocol Label Switching
NE    Network Element, e.g. a router
PHB   Per-Hop Behaviour: forwarding behaviour applied to each IP packet in routers in DS-capable domains
QoS   Quality of Service
RFC   Request For Comments
RSVP  Resource Reservation Protocol
SLA   Service Level Agreement: an agreement on a service between a provider and a customer
SNMP  Simple Network Management Protocol
TC    Traffic Conditioner
TCA   Traffic Conditioning Agreement: the part of an SLA that describes the agreed service in detail from the technical point of view
TCP   Transmission Control Protocol
TOS   Type of Service
UDP   User Datagram Protocol
VPN   Virtual Private Network


CHAPTER 1 INTRODUCTION

The current Internet is based on the best-effort service model. In this model the service provider allocates bandwidth among all of the instantaneous customers as best it can; that is, all user packets compete equally for network resources. The service provider also attempts to serve all of them without making any explicit commitment as to rate or any other service quality. The best-effort model has been successful so far because the majority of the traffic on the Internet is TCP-based, and the TCP end-to-end congestion control mechanism forces traffic sources to back off whenever congestion is detected in the network [8]. However, such dependence on the end systems' interaction is increasingly becoming unrealistic. Given the current best-effort model with FIFO queuing inside the network, it is relatively easy for non-adaptive sources to gain greater shares of network bandwidth and thereby starve other well-behaved TCP sources. A greedy source, for example, may simply continue to send at the same rate when faced with congestion while other TCP sources back off. The best-effort model is also ill-suited for applications such as real-time audio and video that require explicit bandwidth and bounded delay guarantees. Moreover, the best-effort model treats all packets equally once they have been injected into the network. Thus it is difficult for Internet Service Providers (ISPs) to provide services that are commensurate with the expectations of consumers who are willing to pay more for a better class of service.

Over the last decade, a number of efforts have been made to provide Quality of Service in the Internet. Much of the work has focused on the end-to-end per-session reservation approach, where an application makes an end-to-end reservation before starting a session. While it makes sense to set up a reservation for a long-lasting session, this approach is not suitable for highly transaction-oriented applications such as browsing the World Wide Web (WWW): the overhead and latency involved in setting up a reservation for each session become prohibitively high. There are also concerns regarding the scalability of providing fine-grained guarantees at the per-session level in the core of the Internet.

The movement toward providing Quality-of-Service (QoS) assurance in the Internet has encouraged research in differentiated services (diffserv or DS) while the number of users keeps growing. Diffserv is a suitable solution to Quality of Service provisioning because it scales well with an increasing number of network users and does not alter the current Internet paradigm much [11]. In contrast to the per-session reservation approach, the differentiated services scheme allows service providers to offer different levels of service to a few classes of aggregated traffic flows. An ISP, for example, may offer two levels of service: a premium service for some customers who are willing to pay more, and best-effort service at a lower price. The differentiated services approach attempts to allocate bandwidth for aggregated flows without per-session signaling or per-session state in the network core, and to offer better services for users willing to pay more under long-term service contracts.

The journal model is Lecture Notes in Computer Science.

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment the packet should receive inside the domain. However, because of the limited number of bits available in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks called per-hop behaviors (PHBs) that are used by routers to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) [20] and the Assured Forwarding (AF) [4] PHBs. The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and very low queuing delay. In order to ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, consists of four independently forwarded classes, each with two or three drop-precedence levels. The idea behind AF is to preferentially drop best-effort packets, and packets that are outside of their contract, when congestion occurs.
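The AF classes and drop precedence levels mentioned above map to standardized DSCP values (RFC 2597): the class occupies the top three bits of the codepoint and the drop precedence the next two. A small illustration (the helper name is ours, not from the thesis):

```python
# AF DSCPs per RFC 2597: "AFxy" with x = class (1-4) and
# y = drop precedence (1 = low, 3 = high). The 6-bit codepoint
# is laid out as xxx yy0.
def af_dscp(af_class, drop_prec):
    assert 1 <= af_class <= 4 and 1 <= drop_prec <= 3
    return (af_class << 3) | (drop_prec << 1)

assert af_dscp(1, 1) == 0b001010  # AF11 = 10
assert af_dscp(4, 3) == 0b100110  # AF43 = 38
```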

In this thesis we consider a form of better-than-best-effort service called the "Assured Service." The Assured Service follows expected capacity profiles that are statistically provisioned; packets are treated preferentially according to the dropping probability applied to the best-effort queue. The assurance of service comes from the expectation that the traffic is unlikely to be dropped as long as it stays within the negotiated capacity profile. The building blocks of this service include a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in-profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise the packet is marked OUT (out-of-profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or token bucket control module. A differentiated dropping algorithm such as RIO (Random Early Detection with IN/OUT) is provided in the core routers of the network. In particular, OUT packets are preferentially dropped upon evidence of congestion at the bottleneck, before the IN packets; only after dropping all incoming OUT packets are IN packets discarded. With this dropping policy the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.

In [11] the authors showed that the use of a simple token bucket marker for the above assured service does not enable TCP to reliably realise the minimum assured rate. They attributed the cause of such behaviour to TCP's complex response to packet losses. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. Exponential decrease (halving the congestion window) is required to avoid congestion collapse, since TCP treats a packet drop as an indication of congestion [7]. However, in the diffserv network this additive-increase, multiplicative-decrease behaviour makes it hard to protect the reservation rate. When TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may in the end never reach its reservation rate. The overall performance was therefore affected by the bursty losses experienced by the OUT packets. They concluded that it was unclear how the Assured Service could be characterized quantitatively for TCP applications and that an Assured Service could not provide consistent rate guarantees. This problem is called TCP dynamics over the Assured Service.

In this thesis we take into consideration the problem of TCP dynamics and the differentiated drop policies of the network in realizing the reserved throughputs, and we propose a newly modified scheme, called the Two Markers System, for improving the realisation of target rates in a diffserv network. In addition, the issue of fairness among users is important in this diffserv context: since we expect that the differentiation in service will be based on some kind of pricing mechanism, it is necessary that the resource allocation be commensurate with how the service is priced. In this thesis we also examine how fairly the proposed scheme allocates bandwidth to flows.

One of the crucial problems identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows [9, 11, 14]. TCP and UDP interaction in the AF PHB has become an important issue for fairness in the diffserv context, because TCP flows are penalised more than UDP flows with respect to service degradation and fail to accomplish their target rates. Therefore we address the issue of interaction between TCP and UDP flows by regulating their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP influences over Assured Services, considers the issue of fairness between TCP and UDP flows at a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents results of the above algorithms in simulated environments in which these kinds of traffic coexist and analyses the simulation results. Chapter 5 concludes our work.


CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point the rate at which TCP sends its packets exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow start and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants. In this algorithm TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion-avoidance phase, in which the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above network capacity, packets are eventually dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred and immediately cuts its sending rate in half by halving its congestion window and setting ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.

When congestion is severe enough that packet loss cannot be inferred in such a manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm: if the sender continually sends the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.
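The slow-start, congestion-avoidance, and fast-recovery behaviour described above can be condensed into a toy per-RTT window update (a simplification that ignores timeouts and ACK clocking; window sizes are in segments):

```python
def next_cwnd(cwnd, ssthresh, loss):
    """Toy per-RTT model of TCP window evolution: exponential growth
    in slow start, linear growth in congestion avoidance, and halving
    on loss (fast retransmit / fast recovery). Returns the new
    (cwnd, ssthresh) pair."""
    if loss:
        ssthresh = max(cwnd // 2, 2)      # set ssthresh to half the window
        return ssthresh, ssthresh         # fast recovery: cwnd = ssthresh
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh         # slow start: double per RTT
    return cwnd + 1, ssthresh             # congestion avoidance: +1 MSS/RTT
```

Iterating this update reproduces the sawtooth of Figure 1: exponential rise to ssthresh, a linear climb, then a halving when the bottleneck drops a packet.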

Fig 1 Throughput of Single TCP Transfer

Figure 1 shows graphically how TCP slow start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion the TCP sender enters the congestion-avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state TCP thus oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One such modification is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of which segments it is missing. With this information the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger TCP sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and the sender should thus be allowed to inject an additional packet without causing further congestion. This allows TCP to maintain its ACK clocking and prevents unnecessary retransmission timeouts. The NewReno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern in which the window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion-avoidance phase so that it performs its linear increase only when the network is not congested. If the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential to improve loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections multiplexed across a link.

Fig 2 TCP/IP Headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem, since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-Capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not, the packet is simply dropped. If the flow is ECN-capable, the Congestion Experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes its congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver in order to signal that it has in fact reduced its sending rate.

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, allowing them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses them to determine whether to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length: the probability is 0 when the average queue length is less than or equal to minth, and it increases linearly to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.
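The piecewise-linear curve of Figure 3 can be written directly as a function of the EWMA queue length (omitting, for brevity, the count-based probability refinement of the original RED paper):

```python
def red_probability(avg_q, min_th, max_th, max_p):
    """RED marking/dropping probability as a function of the averaged
    queue length: 0 below min_th, linear up to max_p between the
    thresholds, and 1 (drop/mark everything) above max_th."""
    if avg_q <= min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

RIO, used later in this thesis, applies exactly this curve twice, with a more aggressive (min_th, max_th, max_p) triple for OUT packets than for IN packets.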

Fig 3 The marking/dropping behavior of RED

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour desired. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs) and on a per-packet basis, to aggregates of traffic streams in the nodes of DS-capable networks. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies a service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. Different types of service will not be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig 4 Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs are externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. Formal Service Level Agreements (SLAs) specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how a customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules, which are applied to traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. A traffic conditioner is an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs do not solely define services; rather, they are the building blocks of services.
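Since the DSCP occupies the six most significant bits of the TOS/Traffic Class octet (the remaining two bits being used for ECN), extracting it is a simple shift. A hypothetical helper, for illustration only:

```python
def dscp_of(tos_byte):
    """Extract the 6-bit DSCP from the former IPv4 TOS octet
    (or IPv6 Traffic Class octet); the low two bits are the ECN field."""
    return (tos_byte >> 2) & 0x3F
```

For example, a TOS octet of 0xB8 yields DSCP 46, the codepoint assigned to the EF PHB.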

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes; the former handle traffic entering the domain, the latter traffic leaving the domain. In practice the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following describes how a network that supports diffserv functions.

Fig 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e. their interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node into a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services but rather the best-effort service, and possibly other service-provisioning approaches.

Fig. 5. A non-DS-capable domain and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g., domain 1 in Fig. 5, are supposed to employ uniform sets of PHBs on which the services can be built. It should be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig. 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the non-DS-capable case.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to that value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to a level justified for H5. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The backbone DS router presented here always polices incoming traffic, as it cannot trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported in the downstream domain.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be presented easily and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values; on the second, the PHBs specified by the DSCP values; and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (a policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services obtained through the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enables them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain does not match the service level in the originating host's domain. This will be a major issue in the future development of diffserv.

Fig. 6. The Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet and in IPv6 the Traffic Class octet acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that the deployment of diffserv does not cause serious impacts on existing applications.

Fig. 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and the queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, these widely used values (or PHBs) for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling with respect to Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
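With the bit numbering used here (bit 0 the most significant), the six-bit codepoint occupies the high-order bits of the octet, so extracting and re-inserting it is a simple shift. The helper names `dscp_of` and `tos_of` are invented for this sketch:

```python
def dscp_of(tos: int) -> int:
    """DSCP is bits 0-5 of the octet, i.e., the six most significant bits."""
    return (tos >> 2) & 0x3F

def tos_of(dscp: int) -> int:
    """Place a 6-bit codepoint back into the octet; the low two bits stay 0."""
    return (dscp & 0x3F) << 2
```

For example, a TOS octet of 0xB8 carries the codepoint 101110 (the EF codepoint mentioned later in this chapter).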

What traffic conditioning basically means is treating traffic differentially so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but this is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is the part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e., service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig. 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other so that the amount of resources allotted to an aggregate is not exceeded.
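The two classifier types can be sketched minimally as follows; the rule formats (a DSCP-to-PHB dict for BA, a list of field-match rules for MF) are invented for illustration:

```python
def ba_classify(pkt, table):
    """Behaviour Aggregate: select on the DS codepoint alone."""
    return table.get(pkt["dscp"], "default")

def mf_classify(pkt, rules):
    """Multi-Field: the first rule whose header fields all match wins."""
    for fields, phb in rules:
        if all(pkt.get(k) == v for k, v in fields.items()):
            return phb
    return "default"
```

A BA classifier is cheap (one table lookup), which is why interior nodes typically use it, while the richer MF classification is confined to boundary nodes.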

Fig. 7. A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
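A shaper of this kind can be sketched as a rate-limited queue with a finite buffer. The `Shaper` class and its parameters are hypothetical; dequeuing at the computed departure times is omitted:

```python
from collections import deque

class Shaper:
    """Delay packets to a fixed rate (bytes/s); drop when the buffer is full."""
    def __init__(self, rate, buf_packets):
        self.rate, self.cap = rate, buf_packets
        self.q = deque()        # packets waiting with their departure times
        self.next_tx = 0.0      # time the link next becomes free

    def enqueue(self, size, now):
        if len(self.q) >= self.cap:
            return None                     # dropper behaviour: no buffer space
        start = max(now, self.next_tx)      # wait until the link is free
        self.next_tx = start + size / self.rate
        self.q.append((size, self.next_tx))
        return self.next_tx                 # scheduled departure time
```

Returning `None` models the discard that occurs when delayed packets exhaust the buffer.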

Traffic is conditioned on either end of a boundary link. Boundary nodes may refer to the formed SLA as to whether it is their responsibility to condition the ingress/egress traffic into conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic cannot be assumed to conform to the TCA; rather, policing must be prepared to enforce it.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited set of conditioning rules (i.e., some but not all of the TC components in Fig. 7) may be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that should receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized ones, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may serve a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig. 6) is limited and room is left for future usage, the codepoint mappings can be made freely by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. The PHB specifications themselves give behavioural characteristics rather than implementation guidelines. This leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented in this chapter. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv cannot be taken into use if it does not provide backward compatibility; the Default PHB exists for this purpose. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e., best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that does not conform to its profile can easily be either re-marked to the Default PHB or discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which do not employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHBs, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector codepoints (see Fig. 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.
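The eight Class Selector codepoints follow directly from the xxx000 pattern: the three high-order (precedence) bits carry the legacy priority and the low three bits are zero. A one-line enumeration:

```python
# The eight Class Selector codepoints, one per legacy precedence value 0-7:
# the precedence occupies the three high-order bits, the low three bits are 0.
class_selectors = [p << 3 for p in range(8)]
```

Precedence 0 yields 000000, i.e., the Default PHB codepoint, which is why the two sets of requirements must be compatible.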

Fig. 8. Usage of the DS field in the Class Selector PHBs

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(i) how much forwarding resource has been allocated to the AF class that the packet belongs to (packets in one class are forwarded independently of the others and within the service rate, i.e., bandwidth, that has been configured for the class);
(ii) the current load of the AF class; and
(iii) in case of congestion within the class, the drop precedence of the packet (packets with a higher drop precedence value are preferentially discarded).

Table 1. The four AF classes and the related three drop precedence values

                             Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)      001010    010010    011010    100010
Medium Drop Precedence (2)   001100    010100    011100    100100
High Drop Precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (currently 1-4) and 'm' the drop precedence value (currently 1-3). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
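The codepoints in Table 1 follow a regular pattern: the class number occupies the three high-order bits, the drop precedence the next two, and the last bit is zero. A hypothetical helper (`af_codepoint` is an invented name) reproduces the table:

```python
def af_codepoint(n: int, m: int) -> str:
    """AFnm -> 6-bit DSCP string: class bits nnn, drop-precedence bits mm, then 0."""
    assert 1 <= n <= 4 and 1 <= m <= 3
    return format((n << 3) | (m << 1), "06b")
```

For example, AF11 gives 001010 and AF43 gives 100110, matching the corners of Table 1.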

One major group of services comprises those requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.
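The no-queue condition above amounts to a per-node rate check that the boundary policers must enforce; `ef_feasible` is an illustrative name for that check:

```python
def ef_feasible(arrival_rates, departure_rate):
    """True when the EF aggregate sees no queue at a node: the summed arrival
    rates of EF traffic crossing it stay below the node's configured minimum
    departure rate for the aggregate."""
    return sum(arrival_rates) < departure_rate
```

If the check fails at any transit node, queues (and therefore latency and jitter) can build up, and boundary conditioning must tighten the admitted rates.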

In the following, two example services are presented, one built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface;
b) from ingress interface A to a set of egress interfaces;
c) between interface A and some egress interface B.

Some things are common to both examples. First of all, policers need to be configured at the traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus experiences reduced latency. Quantitative performance measures cannot be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for the BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritization of AF13 over AF11 is done, is up to the provider.
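Following Table 2 as given (in-profile traffic keeps the AF13 mark, excess is re-marked AF11), the BBE policer can be sketched with a token bucket; the 100 ms burst allowance and the `bbe_police` function are assumptions made for this sketch:

```python
def bbe_police(packets, rate_bps=1_000_000):
    """Per Table 2: traffic within the 1 Mbps profile keeps the AF13 mark,
    excess traffic is re-marked AF11.  `packets` is a list of (bits, time)."""
    tokens, last, out = 0.0, 0.0, []
    burst = rate_bps * 0.1          # assumed 100 ms burst allowance
    for bits, now in packets:
        tokens = min(burst, tokens + (now - last) * rate_bps)
        last = now
        if tokens >= bits:
            tokens -= bits
            out.append("AF13")      # in-profile
        else:
            out.append("AF11")      # excess, re-marked
    return out
```

Since both marks belong to AF class 1, feeding them through one queue preserves packet order, as the paragraph above requires.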

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications for this service.

Table 3. TCA specifications for the Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C, respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of the MF type, as the classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.

2.3 Related Work

Recently there has been a considerable amount of simulation work on marking and dropping policies focused on RED-based differentiated services approaches.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They presented results showing that the proposed scheme decreased TCP impacts over the Assured Service. However, they did not consider a marking scheme for aggregated flows.

In [9], by means of simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. The authors, via simulation work, presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to mark the aggregated flows fairly. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on the aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have drawbacks, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth; in this case, 'idle' flows waste their shares while preventing 'active' flows from increasing theirs. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on the individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the 'Two Markers System' in a diffserv network.

Fig. 9. The 'Two Markers System' framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. TMS_I determines the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked whenever congestion occurs in the network, and so TMS_I decreases TCP impacts in the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) maintains the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I can be not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and then the marker re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key point of the fairness strategy is the marking rate (i.e., the information on a flow's demand for bandwidth) of traffic flows from users. TMS_II re-marks (or marks) aggregated flows toward the total target rate of all the flows, based on the probability of marking from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig. 10. The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, …, n, and Cj represent the capacity of a bottleneck link j, j = 1, 2, …, m, in the network. The target rate of each source is the reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of the network is the reservation rate (Ri) plus a variable value, the fairness component (Xmi), of the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point;
(ii) all traffic from ingress point S to a set of egress points;
(iii) all traffic between ingress point S and egress point D;
(iv) all traffic between any ingress point and egress point D.

The capacity of the network in the first three cases of service scope above is C = Σ_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. Therefore the excess bandwidth at this bottleneck link is given by

E = C − R = Σ_{j=1}^{m} C_j − Σ_{i=1}^{n} R_i        (1)

In the edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C − R = C − Σ_{i=1}^{n} R_i        (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates in order to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik, where k, k = 1, 2, …, l, denotes the k-th ingress point. The excess bandwidth is then given by

E = C − R = C − Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik        (3)


Where lth ingress point has to be aware of reservation rates as many as l-1 ingress points While such a service is theoretically possible it appears to be a mismatch with the more usual specification of Differentiated Services which governs the quality with which traffic is sent rather than received Therefore Two Markers System is able to be implement in variable SLA scope but we assume that the scope of a service in the paper includes the first three cases except the last with the reason above A flowrsquos target rate in the edge of the network is given by

Where Xmi is proportional to the probability of marking from a source That is it is determined depending upon fraction between the marking rate of each flow and the average marking rate (4) Development for Xmi is as follows First the probability of marking for each flow i is the number of the marked packets (Nmi) by throughput Thri or In-profile window by congestion window

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] over all the flows at the edge of the network is given by Eq. (6),

where Mark_p is the probability of marking in the TMS_I algorithm of Section 3.3. Finally, X_mi is obtained from the ratio between Eqs. (5) and (6); that is, X_mi

T_i = R_i + X_mi × E = R_i + X_mi (C - Σ_{i} R_i)    (4)

m_i = N_mi / Thr_i = iwnd / cwnd    (5)

E[m_i] = (1/n) Σ_{i=1}^{n} m_i = (1/n) Σ_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1}^{n} Mark_p,i    (6)


represents a relative value against the datum point E[m_i]. The fraction X_mi varies from 0 to 1 depending upon m_i and E[m_i]. Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
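As a concrete illustration of Eqs. (4) and (7), the sketch below computes the edge target rates for a set of flows; the function and variable names are ours, not taken from the thesis.

```python
def target_rates(reservations, marking_rates, capacity):
    """Edge target rates: T_i = R_i + X_mi * E, with E = C - sum(R_i)
    (Eq. 4) and X_mi = m_i / sum(m_j) (Eq. 7)."""
    excess = capacity - sum(reservations)      # E, the excess bandwidth
    total_marking = sum(marking_rates)         # n * E[m_i]
    targets = []
    for r_i, m_i in zip(reservations, marking_rates):
        x_mi = m_i / total_marking if total_marking else 0.0   # Eq. (7)
        targets.append(r_i + x_mi * excess)                    # Eq. (4)
    return targets
```

For example, with five flows reserving 3 Mbps each on a 20 Mbps bottleneck and equal marking rates, each flow's target becomes 4 Mbps: the 5 Mbps of excess is split evenly, and the targets always sum to the capacity.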

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: an In-profile window and an Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where marking policies differ from the one studied here. We improve the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of sources; that is to say, the TCP-source marking module keeps the marking rate at the level of the bandwidth the source wants guaranteed.

X_mi = m_i / (n E[m_i]) = m_i / Σ_{i=1}^{n} m_i    (7)


Fig. 11 Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking, Mark_p(init)? It is determined by the ratio between the measured throughput (Thr) and the target throughput (T_i = R_i), as given by Eq. (8).

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri)
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


Here Mark_p(init) represents the initial marking rate the user needs in order to realise its target rate.

The window size increases or decreases according to the measured throughput relative to the reservation rate. That is, TMS_I marks packets proportionally according to whether the throughput (Thr) measured at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by the marking rate needed; otherwise it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act as the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.
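The ACK-time update above can be rendered as a small executable sketch (our own Python reading of Program code 1; the state-dictionary layout is hypothetical):

```python
def tms1_on_ack(state, thr, r_i, issthresh, ossthresh):
    """One ACK-time update of the TMS_I window split (sketch of Program
    code 1). 'state' carries cwnd and the marking probability Markp."""
    cwnd, markp = state["cwnd"], state["markp"]
    iwnd = markp * cwnd                 # In-profile share
    ownd = cwnd - iwnd                  # Out-of-profile share
    if thr < r_i:                       # below the reservation rate
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                               # reservation met: yield IN share
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    state["cwnd"] = iwnd + ownd
    state["markp"] = iwnd / state["cwnd"]
    return state
```

Note that below the reservation rate both sub-windows grow (slow-start-like by Markp and 1-Markp, linearly past their thresholds), so Markp itself shifts toward the share that was grown faster.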

The else clause in the twelfth line of Program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic. It also addresses a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I decreases the iwnd size according to the threshold in order to allocate the excess bandwidth to other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, the TCP-source marking module (TMS_I) maintains a more appropriate marking rate than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion

Mark_p(init) = 1 - Thr / R_i    (8)


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet losses and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window is not decreased as often as with the established edge marker.

Fig. 12 Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd/2
        ownd = ossthresh
    else
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the ith flow
Ti: target rate of the ith flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion in the network, and then the congestion window is halved. The modified TCP congestion algorithm thereby compensates for the loss rates caused by the conventional congestion algorithm, which reduces the congestion window without any mechanism for distinguishing the cause of the loss.
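A matching executable sketch of Program code 2 (again with our own, hypothetical state names) makes the two reduction paths explicit:

```python
def tms1_on_triple_dup_ack(state, out_loss):
    """Window reduction on a third duplicate ACK (sketch of Program
    code 2): an OUT loss halves only ownd; an IN loss halves cwnd."""
    cwnd, markp = state["cwnd"], state["markp"]
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if out_loss:                    # temporary congestion
        state["ossthresh"] = ownd / 2
        ownd = state["ossthresh"]
        state["cwnd"] = iwnd + ownd
    else:                           # IN-marked loss: acute congestion
        state["cwnd"] = cwnd / 2
        state["issthresh"] = markp * state["cwnd"]
        state["ossthresh"] = state["cwnd"] - state["issthresh"]
    return state
```

With cwnd = 20 and Markp = 0.5, an OUT loss leaves cwnd = 15 (only the OUT half shrinks), while an IN loss halves cwnd to 10 and re-splits the thresholds by Markp.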

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) that is needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of this datum point in the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that includes per-user information is able to alleviate the unfairness introduced within aggregated flows by per-aggregation marking.

3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to share bottleneck bandwidth fairly. To accomplish this fairness strategy, called the marking rate-based fairness strategy, it must closely cooperate with the TCP-source marking module. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates that are used in the fairness component, as stated in Section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to guarantee from the sources.

We assume that an edge router receives information on the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth


broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall expression (4). As stated in Section 3.2, if the network is over-provisioned, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking at the sources; otherwise it is equal to the source's reservation rate. Since the magnitude of the marking rate at a source represents an increase or decrease in its demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, that is, the average marking rate E[m_i], this means the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to handle congestion situations, the other is to ensure coherence with the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of Program code 3 describes this case and shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After deciding that there is no congestion, it measures whether the throughput of a flow i (Thr_i) realises the target rate (T_i), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called the fairness factor. In this case the fairness factor α is computed as follows:


α = 1 - Thr_i / T_i

Otherwise it decreases the marking rate by α, to provide other flows with the chance of sharing the excess bandwidth.
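The fairness-factor adjustment can be sketched as follows; treating the decrease case with the magnitude |α| is our reading, and the names are illustrative:

```python
def fairness_factor(thr_i, t_i):
    """Fairness factor alpha = 1 - Thr_i / T_i: positive when a flow
    is below its target rate."""
    return 1 - thr_i / t_i

def adjust_marking(m_i, thr_i, t_i):
    """Scale a flow's marking rate M_i up by alpha*M_i when it is below
    its target, and down otherwise (sketch; |alpha| is an assumption)."""
    a = abs(fairness_factor(thr_i, t_i))
    return m_i + a * m_i if thr_i < t_i else m_i - a * m_i
```

For a flow achieving 2 Mbps against a 4 Mbps target, α = 0.5 and a marking rate of 0.4 is raised to 0.6.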

Fig. 13 Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)
    for i = 1 to n
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)
    exit
else   /* iwnd(N) == iwnd(N-1)/2 */
    for i = 1 to n
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi: marking rate of ith flow
Ti: target rate of ith flow
Ri: reservation rate of ith flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else-if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (the marking information), and so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to reach its target rate owing to repeated reductions of the sending rate by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput does not realise the user's reservation rate. In this instance the complement factor β is

β = 1 - Thr_i / R_i

Otherwise no re-marking operation is performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
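The three-way decision of Program code 3 can be condensed into an executable sketch; the flow-dictionary layout (keys thr, r, x, m) is our own illustration, not the thesis code:

```python
def tms2_remark(ownd_n, ownd_prev, iwnd_n, iwnd_prev, flows, excess):
    """Sketch of Program code 3: pick the re-marking action from the
    observed window halvings. Each flow dict carries thr (throughput),
    r (reservation R_i), x (datum point X_mi) and m (marking rate M_i)."""
    if ownd_n != ownd_prev / 2:                # no congestion observed
        for f in flows:
            t_i = f["r"] + f["x"] * excess     # T_i = R_i + X_mi * E
            a = abs(1 - f["thr"] / t_i)        # fairness factor alpha
            f["m"] += a * f["m"] if f["thr"] < t_i else -a * f["m"]
    elif iwnd_n != iwnd_prev / 2:              # congestion, but not acute
        pass                                   # edge rate matches the sources
    else:                                      # acute congestion: T_i = R_i
        for f in flows:
            if f["thr"] < f["r"]:
                b = 1 - f["thr"] / f["r"]      # complement factor beta
                f["m"] += b * f["m"]
    return flows
```

The first branch distributes excess bandwidth via α; the middle branch leaves marking untouched; the acute branch falls back to the reservation rate and compensates losses via β.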

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets turn into OUT. These re-born packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-born OUT-marked packets (IN-OUT), which were IN at the sources, are dropped at the core during temporary congestion, the TCP sources, mistaking this for acute congestion, will halve the


iwnd size. Therefore TMS_I would fail to alleviate the TCP dynamics over assured service. On the other hand, if re-born IN-marked packets (OUT-IN), sent as OUT by the sources, are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve only ownd, which retards the relief of the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but as IN at the source; an OUT-IN packet is the counterpart. We assume that the drop precedence the source decides must not change, and the strategy TMS_II elaborates has to respect this. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasion of re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves its ownd size. Next, TMS_II turns packets IN-marked at the sources into OUT if the congestion window has not been halved and the flow already accomplishes its target rate; the drop precedences then consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of such packets by one step from OUT, thereby allowing other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network; when an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS tries to compensate the loss rate occurring during congestion by lowering the drop precedence of OUT packets by one step.
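The three-level precedence and the resulting drop order can be captured in a few lines; the table and function names are ours, for illustration only:

```python
# Three-level drop precedence of TMS (Sec. 3.5): a packet whose edge
# marking disagrees with its source marking gets the middle level DP1,
# so RIO drops OUT (DP2) first, re-marked packets (DP1) next, IN (DP0) last.
PRECEDENCE = {"IN": 0, "IN-OUT": 1, "OUT-IN": 1, "OUT": 2}

def drop_order(colours):
    """Return packet colours in the order RIO would drop them
    (highest drop precedence first; sorted() keeps ties stable)."""
    return sorted(colours, key=lambda c: PRECEDENCE[c], reverse=True)
```

For a mix of IN, OUT, and OUT-IN packets this yields the order OUT, OUT-IN, IN described above.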

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the


same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not, capturing the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. The TCP flows will also be penalised more than the non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile and thus placed at a disadvantage, while TCP flows adapt to network dynamics through marking rates adjusted by the scaling factors α and β. From the view of the core of the network, the RIO queue always drops UDP flows marked as OUT (red, DP2) preferentially, but lowers the drop precedence of TCP packets by the scaling factors relative to them. Therefore TMS tries to guarantee the target rates of the TCP flows by allocating to them the excess bandwidth that the UDP flows would otherwise dominate. Consequently, TMS compensates the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous chapter. The simulation was done with the network simulator-2 (ns-2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps (or fixed below it), so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
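For reference, the per-precedence RED behaviour these parameters describe is a linear ramp of the drop probability between the two thresholds; a minimal sketch (our own helper, not ns-2 code):

```python
def red_drop_prob(avg_q, min_th, max_th, p_max):
    """Linear RED drop probability for one drop-precedence level,
    matching the min_th/max_th/Pmax columns of Table 4: no drops below
    min_th, forced drop at or above max_th, linear ramp in between."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return p_max * (avg_q - min_th) / (max_th - min_th)
```

With the DP0 row (100, 400, 0.02), an average queue of 250 packets is dropped with probability 0.01; the DP2 row starts dropping far earlier, which is what enforces the precedence order.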

First of all, we investigate the simulation results for the Adequate-Marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, from S1-D11 to S10-D20, each with a target rate of 3Mbps; the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s; the latter stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5Mbps; marking is performed from the start, but only temporary marking for sharing the excess bandwidth. Second, in the run time from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps each; marking in this interval is carried out chiefly to realise the users' reservation rates. Next, grave congestion appears from t = 220 s to t = 350 s; all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the other periods. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to the congestion remaining after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs, the flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure: marking rate (0 to 6 Mbps) of flow1 through flow10 versus simulation time (0 to 400 s)]

a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig. 15 Adequate-Marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The network condition changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic features, achieved through governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate behind changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by the amount of the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load, to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, more packets are marked; conversely, packet marking is quickly turned off when TMS detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in Section 3.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment ran from t = 0 s to 500 s. Figure 17 presents the results, showing that the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without a marking rate adequate to the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As


stated before, since TMS regulates drop precedence in proportion to the probability of marking at the sources, all the flows are able to adapt to network dynamics.

[Figure: total throughput of S1 and S2 and marking rate of S1 (0 to 8 Mbps) versus simulation time (0 to 400 s)]

a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig. 16 Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis signifies the simulation time.


[Figure: achieved bandwidth (0 to 7 Mbps) of flow1 through flow10 for aggregate-flow experiments 1 to 6]

a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig. 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis signifies the number of the simulation performed.

Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying TCP sources in a differentiated services network.

Table 5 The average throughput of aggregate flows as RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous chapter. For the simulations we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for nodes 7 through 10; constant-rate UDP packet streams originate at sources 5 and 6 and are destined for nodes 11 and 12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rates of the sources are fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of Section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increased the TCP flows from sources 1 to 4, each with a target rate of 1Mbps, while UDP is constant at 1Mbps each, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take account of network dynamics. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σ_i T_i / C) × 100    (9)

where T_i and C represent the target rate of a flow and the capacity of the bottleneck link of the network, respectively. The interval from ρ = 100 % to 200 %, as shown in Figure 19, reflects the change in the capacity of the bottleneck link, viz. an under-provisioned state; in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)

Therefore an excess bandwidth of 0 % denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows, and they fall below their target rates at the saturation point (80 %) of network load, as shown in the figure.
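Eqs. (9) and (10) amount to two one-line ratios; a small sketch (our own helper names) makes the scenario numbers concrete:

```python
def link_utilisation(targets, capacity):
    """Eq. (9): rho = sum(T_i) / C, expressed as a percentage."""
    return 100.0 * sum(targets) / capacity

def excess_bw_percent(achieved, target):
    """Eq. (10): a flow's excess bandwidth as a percentage of its
    target rate; 0 means the flow exactly meets its target."""
    return 100.0 * (achieved - target) / target
```

In the over-provisioned scenario above (six 1 Mbps targets on an 8 Mbps bottleneck), ρ = 75 %, and a flow achieving 1.5 Mbps against a 1 Mbps target has an excess bandwidth of 50 %.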


[Figure: average excess bandwidth of TCP and UDP flows (-100 % to 200 %) versus link utilisation (30 % to 190 %)]

a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig. 19 Sharing the excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis signifies the link utilisation as a percentage.

However, since the UDP sources send traffic into the network without any penalty, they complete their target rates under all conditions and consume all the excess bandwidth in the


under-provisioned state; the only degradation the UDP flows experience is a diminished share of the excess bandwidth. In the Two Markers System we assume the following: (i) UDP flows in TMS are given only OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[m_i], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; therefore the flow is marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisations of ρ = 60 % and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps, respectively. The setting of the RTT parameters follows the strategy of Section 4.1, and the average throughputs of all the flows in both schemes are similar to each other. First, we consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, owing to the marking strategy that differentiates them from UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network, because the UDP flows transmit constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP


flows. However, Figure 20 (b) shows that bandwidth in the under-provisioned network is distributed more fairly among flows (about 0.5 Mbps each) than in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20: two bar charts, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the bandwidth (Mbps) of flows TCP1-TCP4 and UDP1-UDP2 for aggregate flows 1-4 in the over-provisioned and under-provisioned networks.]

Fig. 20. Throughput according to network conditions: the vertical axis represents the target rate and the horizontal axis signifies aggregate flows


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under variable SLA scope. Two important issues have arisen in this area, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize the user's target rate, owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and to allocate the bandwidth of a bottleneck link fairly to each flow with respect to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to changes in the network, so that packets are marked in proportion to the reservation rates the flows desire to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy. Consequently, it mitigates the unfairness of per-aggregation marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduces TCP impacts over the assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly


share the excess bandwidth that UDP had almost dominated, and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they fairly experienced degradation in proportion to their target bandwidth.

In the near future, we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, D. Saha, and K. Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, Assured Forwarding PHB Group, RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker, and D. Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, Congestion Avoidance and Control, in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, Realizing Throughput Guarantees in a Differentiated Services Network, in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy, and P. Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy, and P. Pieda, Study of TCP and UDP Interaction for AF PHB, IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Aug. 1998.

[19] M. Vanhala, Differentiated Services architecture, S-38.128 Teletekniikan erikoistyo, 1999.

[20] V. Jacobson, K. Nichols, and K. Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, End-to-End Internet Packet Dynamics, in Proc. of ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proc. of ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, Random Early Detection Gateways for Congestion Avoidance, IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, TCP-friendly Traffic Marker for IP Differentiated Services, in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.



ABBREVIATIONS

AF    Assured Forwarding, a PHB group defined in [4]
ATM   Asynchronous Transfer Mode
BA    Behaviour Aggregate, separate traffic flows receiving the same treatment in a router
DS    (diffserv) Differentiated Services
DSCP  Differentiated Services Codepoint, value of a field in IP-packets in DS-capable domains, based on which the forwarding behaviour is applied
EF    Expedited Forwarding, a PHB defined in [20]
IETF  Internet Engineering Task Force
IP    Internet Protocol
IPv4  Internet Protocol version 4, the current widely used IP version
IPv6  Internet Protocol version 6, the developed future version of IP
ISP   Internet Service Provider
MF    Multi-Field
MPLS  Multiprotocol Label Switching
NE    Network Element, e.g. a router
PHB   Per-Hop-Behaviour, forwarding behaviour applied to each IP-packet in routers in DS-capable domains
QoS   Quality of Service
RFC   Request For Comments
RSVP  Resource Reservation Protocol
SLA   Service Level Agreement, an agreement of a service between a provider and a customer
SNMP  Simple Network Management Protocol
TC    Traffic Conditioner
TCA   Traffic Conditioning Agreement, a part of the SLA that describes the agreed service in detail from the technical point of view
TCP   Transmission Control Protocol
TOS   Type of Service
UDP   User Datagram Protocol
VPN   Virtual Private Network


CHAPTER 1 INTRODUCTION

The current Internet is based on the best-effort service model. In this model, the service provider allocates bandwidth among all of the instantaneous customers as best it can; that is, all user packets compete equally for network resources. The service provider also attempts to serve all of them without making any explicit commitment as to rate or any other service quality. The best-effort model has been successful until now because the majority of the traffic on the Internet is TCP-based: the TCP end-to-end congestion control mechanism forces traffic sources to back off whenever congestion is detected in the network [8]. However, such dependence on the interaction of end systems is increasingly becoming unrealistic. Given the current best-effort model with FIFO queuing inside the network, it is relatively easy for non-adaptive sources to gain greater shares of network bandwidth and thereby starve other, well-behaved, TCP sources. A greedy source, for example, may simply continue to send at the same rate when faced with congestion, while other TCP sources back off. The best-effort model is also ill-suited for applications, such as real-time audio and video, that require explicit bandwidth and bounded delay guarantees. Moreover, the best-effort model treats all packets equally once they have been injected into the network. Thus, it is difficult for Internet Service Providers (ISPs) to provide services that are commensurate with the expectations of consumers who are willing to pay more for a better class of service.

Over the last decade, a number of efforts have been made to provide Quality of Service in the Internet. Much of the work has focused on the end-to-end per-session reservation approach, where an application makes an end-to-end reservation before starting a session. While it makes sense to set up a reservation for a long-lasting session, this approach is not suitable for highly transaction-oriented applications, such as browsing the World Wide Web (WWW): the overheads and latency involved in setting up a reservation for each session become prohibitively high. There are also concerns regarding the scalability of providing fine-grained guarantees at the per-session level in the core of the Internet.

The movement toward providing Quality-of-Service (QoS) assurance in the Internet has encouraged research in differentiated services (diffserv or DS), while the number of

The journal model is Lecture Notes in Computer Science


users keeps growing. Diffserv is a suitable solution for Quality of Service provisioning because it scales well with an increasing number of network users and does not alter the current Internet paradigm much [11]. In contrast to the per-session reservation approach, the differentiated services scheme allows service providers to offer different levels of service to a few classes of aggregated traffic flows. An ISP, for example, may offer two levels of service: a premium service for some customers who are willing to pay more, and best-effort service at a lower price. The differentiated services approach attempts to allocate bandwidth to aggregated flows without per-session signaling or maintaining per-session state in the network core, and to offer better services to users willing to pay more under long-term service contracts.

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment that should be supplied to the packet inside the domain. However, because of the limited number of bits available for use in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks, called per-hop behaviors (PHBs), that are used by routers to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) [20] and the Assured Forwarding (AF) [4] PHBs. The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. In order to ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, consists of four independently forwarded classes that have two or three drop-precedence levels. The idea behind AF is to preferentially drop best-effort packets and packets which are outside of their contract when congestion occurs.

In this thesis we consider a form of better-than-best-effort service called the "Assured Service". The Assured Service follows expected capacity profiles that are statistically provisioned. Packets are treated preferentially according to the dropping probability applied to the best-effort queue. The assurance of service comes from the expectation that the traffic is unlikely to be dropped as long as it stays within the


negotiated capacity profile. The building blocks of this service include a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in-profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise, the packets are marked OUT (out-of-profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or token bucket control module. A differentiated dropping algorithm, such as RIO (Random Early Detection with IN/OUT), is provided in the core routers of the network. In particular, the OUT packets are preferentially dropped, upon evidence of congestion at the bottleneck, before the IN packets; IN packets are discarded only after all incoming OUT packets have been dropped. With this dropping policy, the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.
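As an illustration of the edge marker just described, the following is a minimal Time Sliding Window rate estimator; the class name and window parameter are our own assumptions, not the thesis implementation:

```python
# Minimal sketch of a Time Sliding Window (TSW) marker: estimate the
# temporal sending rate over a sliding window and mark a packet IN if
# the estimate is within the contract profile, OUT otherwise.

class TSWMarker:
    def __init__(self, contract_rate, win_len=1.0):
        self.contract = contract_rate   # profile rate, bytes per second
        self.win = win_len              # averaging window length, seconds
        self.avg_rate = 0.0             # current rate estimate
        self.t_front = 0.0              # arrival time of previous packet

    def mark(self, now, pkt_bytes):
        # Bytes notionally seen in the last window, plus this packet
        bytes_in_win = self.avg_rate * self.win + pkt_bytes
        self.avg_rate = bytes_in_win / (now - self.t_front + self.win)
        self.t_front = now
        return "IN" if self.avg_rate <= self.contract else "OUT"
```

A flow sending below its contract rate keeps getting IN marks, while a burst that drives the windowed estimate over the contract is marked OUT until the estimate decays again.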

In [11], the authors showed that the use of a simple token bucket marker for the above assured service results in TCP failing to realise the minimum assured rate. They attributed the cause of such behaviour to TCP's complex response, primarily to packet losses. TCP reacts to congestion by halving the congestion window (cwnd), and increases the window additively when packets are delivered successfully. Exponential decrease (halving the congestion window) is required to avoid congestion collapse, and TCP treats a packet drop as an indication of congestion [7]. However, in the diffserv network, this additive-increase and multiplicative-decrease behaviour makes it hard to protect the reservation rate: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may in the end never reach its reservation rate. Therefore, the overall performance was affected by the bursty losses being experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees. This is called TCP dynamics over the Assured Service.

In this thesis, we take into consideration the problem of TCP dynamics and the differentiated drop policies of the network in realizing the reserved throughputs, and propose a newly modified scheme, called the Two Markers System, for improving the realisation of target rates in a diffserv network. In addition, the issue of fairness among users is important in this diffserv context: since we expect that the differentiation in service will be based on some kind of pricing mechanism, it is necessary that the


resource allocation be commensurate with how the service is priced. In this thesis, we also discuss how fairly the proposed scheme allocates bandwidth to flows.

One of the cruxes identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows [9, 11, 14]. The interaction between TCP and UDP for the AF PHB has become an important issue for fairness in the diffserv context, because TCP flows are penalised more than UDP flows with respect to service degradation and fail to accomplish their target rates. Therefore, we consider the issue of the interaction between TCP and UDP flows by regulating their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP influences over Assured Services, considers the issue of fairness between TCP and UDP for a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents the results of the above algorithms in simulated environments where these traffic types coexist, and analyses the simulation results. Chapter 5 concludes our work.


CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases, this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point in time, the rate at which TCP sends its packets eventually exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion, and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow-start, and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants of TCP. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion-avoidance phase, in which the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above network capacity, packets are eventually


dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred, immediately cuts its sending rate in half by halving its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.

When congestion is severe enough that packet loss cannot be inferred in such a manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm. In particular, if the sender continually sends the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.
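The window dynamics described in the last few paragraphs can be condensed into a per-RTT toy model; this is a didactic sketch, not a complete Reno state machine (timeouts and duplicate-ACK counting are omitted):

```python
# Per-RTT model of Reno-style window evolution: exponential growth in
# slow-start, +1 segment per RTT in congestion avoidance, and halving
# of both cwnd and ssthresh when a loss is detected via fast retransmit.

def step(cwnd, ssthresh, loss):
    if loss:
        ssthresh = max(cwnd // 2, 1)   # remember half the current window
        cwnd = ssthresh                # fast recovery: resume from w/2
    elif cwnd < ssthresh:
        cwnd *= 2                      # slow-start: double every RTT
    else:
        cwnd += 1                      # congestion avoidance: linear growth
    return cwnd, ssthresh
```

Iterating step with an occasional loss reproduces the sawtooth of Figure 1, with the window oscillating between w and w/2 in steady state.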

Fig. 1. Throughput of a single TCP transfer

Figure 1 shows a graphical picture of how TCP slow-start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by


TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion, the TCP sender enters the congestion-avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP then oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One modification that has been proposed is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of the segments it is missing. With this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger TCP sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and the sender should thus be allowed to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The New Reno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern in which it is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion-avoidance phase so that it only


performs its linear increase when the network is not congested. In both algorithms, if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential to improve loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections that are multiplexed across the link.

Fig. 2. TCP/IP headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem, since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is


implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-capable Transport bit (ECT) to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the congestion experienced bit (CE) is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the returned acknowledgment, using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver, in order to signal that it has in fact reduced its sending rate.
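The router-side part of this mechanism can be sketched in a few lines; the bit positions used here (the two low-order bits of the TOS/DS octet carrying ECT and CE) follow the original ECN proposal and are an assumption of the sketch:

```python
# If a congested router finds the ECT bit set, it marks CE instead of
# dropping the packet; a non-ECN-capable packet is simply dropped.

ECT = 0x02  # ECN-Capable Transport bit (assumed position in TOS octet)
CE  = 0x01  # Congestion Experienced bit (assumed position in TOS octet)

def on_congestion(tos):
    """Return (new_tos_octet, dropped) for a packet at a congested router."""
    if tos & ECT:
        return tos | CE, False   # signal congestion without packet loss
    return tos, True             # fall back to dropping the packet
```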

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, in order to allow them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses them to determine whether to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and linearly


increases to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.

Fig. 3. The marking/dropping behavior of RED
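The EWMA and the linear ramp plotted in Fig. 3 transcribe directly into code; the weight w = 0.002 is a typical, assumed value rather than one taken from the thesis:

```python
# RED's two ingredients: an EWMA of the instantaneous queue length, and
# a marking/dropping probability that ramps linearly from 0 at min_th
# to max_p at max_th, jumping to 1 beyond max_th.

def ewma(avg, q, w=0.002):
    return (1 - w) * avg + w * q       # weighted average of queue length

def red_prob(avg, min_th, max_th, max_p):
    if avg <= min_th:
        return 0.0                     # below min_th: never mark or drop
    if avg >= max_th:
        return 1.0                     # above max_th: mark/drop everything
    return max_p * (avg - min_th) / (max_th - min_th)
```

RIO, used later in this thesis, applies two such curves: an aggressive one for OUT packets and a lenient one for IN packets.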

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP-datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or a traffic conditioner (usually in a boundary node), according to the behaviour wished for. The forwarding behaviours are applied hop-by-hop, based on DS codepoints


(DSCPs), and on a per-packet basis to aggregates of traffic streams in the nodes of DS-capable networks. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies a service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied: a leased-line emulation, for example. Different types of service will not be standardized within diffserv, but rather the mechanisms behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig. 4. Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop-Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB


definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig. 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The Service Level Agreements (SLAs) formed specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of the SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. The TCA contains traffic classifier rules and any corresponding traffic profiles, as well as metering, marking, discarding and/or shaping rules, which are applied to the traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network and scalability is obtained. By traffic conditioner is meant an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excess packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs don't solely define services, but they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes; the former handle traffic entering the domain, the latter traffic leaving it. In practice the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

In the following it is discussed how a network that supports diffserv functions.

Fig 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e. their interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services but rather the best-effort service and possibly other service provisioning approaches.

Fig 5 A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the case where the network is not DS-capable.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The presented backbone DS router always polices incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level the profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (a policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet and in IPv6 the Traffic Class octet acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and the queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling considering Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally supposed to be interpreted is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
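As a small sketch (field layout as described above; the helper names are illustrative), the 6-bit DSCP occupies the high-order bits of the TOS octet and can be read or written with simple bit operations:

```python
def dscp_from_tos(tos: int) -> int:
    """Extract the 6-bit DSCP (bits 0-5, i.e. the six high-order bits) from the TOS octet."""
    return (tos >> 2) & 0x3F

def tos_from_dscp(dscp: int) -> int:
    """Place a 6-bit DSCP into a TOS octet; the two low-order bits are left unused."""
    return (dscp & 0x3F) << 2

# EF's recommended codepoint is 101110 (0x2E); as a TOS octet it reads 0xB8.
assert dscp_from_tos(0xB8) == 0x2E
assert tos_from_dscp(0x2E) == 0xB8
```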

What traffic conditioning basically means is treating traffic differentially so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but this is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e. service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, the DS field and the protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
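The two classifier types can be sketched minimally as follows (the packet fields and rule format are illustrative, not from a particular implementation):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    sport: int
    dport: int
    proto: int
    dscp: int

def ba_classify(pkt, dscp_to_aggregate):
    """BA classifier: selects an aggregate from the DS codepoint alone."""
    return dscp_to_aggregate.get(pkt.dscp, "default")

def mf_classify(pkt, rules):
    """MF classifier: matches on several header fields; the first matching rule wins."""
    for match, aggregate in rules:
        if all(getattr(pkt, field) == value for field, value in match.items()):
            return aggregate
    return "default"

pkt = Packet("10.0.0.1", "10.0.1.2", 4000, 80, 6, 0x2E)
print(ba_classify(pkt, {0x2E: "EF"}))                                     # EF
print(mf_classify(pkt, [({"dst": "10.0.1.2", "dport": 80}, "web-agg")]))  # web-agg
```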

Fig 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
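A token bucket is a common way to implement the metering just described; out-of-profile packets can then be re-marked, delayed or dropped. A minimal sketch (the class and parameter names are illustrative):

```python
class TokenBucketMeter:
    """Token-bucket meter: a packet is in-profile if enough tokens have accrued.
    rate is in bytes per second, burst is the bucket depth in bytes."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = float(burst), 0.0

    def conforms(self, size, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True        # in profile
        return False           # out of profile: re-mark, delay, or drop

meter = TokenBucketMeter(rate=1000, burst=1500)
print(meter.conforms(1500, 0.0))   # True: the burst allows one full packet
print(meter.conforms(1500, 0.5))   # False: only 500 bytes of tokens accrued
```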

Traffic is conditioned on either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic into conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic can't be assumed to conform to the TCA; rather, policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited number of conditioning rules (i.e. some, but not all, of the TC components in Fig 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are wished to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized PHBs there may exist locally defined PHBs in a network. These PHBs may be e.g. experimental, or they may serve a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig 6) is limited and room is left for its future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impacts of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on e.g. prioritizing one over another. PHB specifications themselves give behavioural characteristics instead of implementation guidelines, which leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide a wider basis for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented later in this chapter. As explained above, none of the PHBs is mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility; the Default PHB exists for that. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore any traffic that doesn't conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as receiving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or locally used PHB, it should be mapped to the Default PHB.

The mere Default PHB is not enough to provide sufficient backward compatibility. The precedence bits (see Fig 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHBs, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector codepoints (see Fig 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig 8 Usage of DS-field in Class Selector PHB
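The eight Class Selector codepoints of the form xxx000 can be enumerated directly, which also shows that 000000 (the Default PHB codepoint) is among them (a small illustrative sketch):

```python
# Class Selector codepoints: the three precedence bits followed by three zero bits.
class_selectors = [precedence << 3 for precedence in range(8)]
print([format(cp, "06b") for cp in class_selectors])
# ['000000', '001000', '010000', '011000', '100000', '101000', '110000', '111000']
```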

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets in each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(1) how much forwarding resources have been allocated to the AF class that the packet belongs to; packets in one class are forwarded independently of the others and within the service rate (bandwidth) that has been configured for the class,

(2) the current load of the AF class, and

(3) in case of congestion within the class, the drop precedence of the packet; packets with a higher drop precedence value are discarded preferentially.

Table 1 The four AF classes and the related three drop precedence values

                             Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)      001010    010010    011010    100010
Medium Drop Precedence (2)   001100    010100    011100    100100
High Drop Precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in network nodes.
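The twelve codepoints in Table 1 follow a regular pattern: three class bits, two drop-precedence bits, and a trailing zero. AFnm can therefore be computed rather than tabulated (a sketch; the helper name is illustrative):

```python
def af_codepoint(n: int, m: int) -> int:
    """AFnm codepoint: class n (1-4) in the three high bits, drop precedence m (1-3)
    in the next two bits, with a trailing zero bit."""
    assert 1 <= n <= 4 and 1 <= m <= 3
    return (n << 3) | (m << 1)

# Reproduces Table 1: e.g. AF11 = 001010, AF23 = 010110, AF42 = 100100.
assert format(af_codepoint(1, 1), "06b") == "001010"
assert format(af_codepoint(2, 3), "06b") == "010110"
assert format(af_codepoint(4, 2), "06b") == "100100"
```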

One major group of services is that requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are e.g. a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that at every transit node the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed a rate that has been specified for the service the packets belong to. It is also required in the EF PHB specification that a network administrator must be able to configure this rate into diffserv nodes. The codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

There are some things common to both examples. First of all, policers need to be configured at traffic ingress points. Secondly, the used PHBs need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus provides reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2 TCA specification for BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.
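A sketch of such a BBE policer at the ingress (the 1 Mbps rate and the AF codepoints come from Table 2; the token-bucket mechanism and the burst parameter are assumptions for illustration):

```python
AF11, AF13 = 0b001010, 0b001110

class BBEMarker:
    """Marks traffic within the profile as AF13 and re-marks excess as AF11,
    per the Table 2 TCA. A token bucket is an assumed metering mechanism."""
    def __init__(self, rate_bps=1_000_000, burst_bytes=3000):
        self.rate = rate_bps / 8            # bytes per second
        self.burst = burst_bytes
        self.tokens, self.last = float(burst_bytes), 0.0

    def mark(self, size, now):
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return AF13                     # in profile
        return AF11                         # excess traffic, re-marked

m = BBEMarker()
print(format(m.mark(1500, 0.0), "06b"))     # 001110 (AF13, within profile)
```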

This is a quantitative service which emulates a traditional leased line. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3 TCA specifications for Leased Line service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
EF mark      500 Kbps       Egress point B         Discard
EF mark      250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both the egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies focused on a RED-based differentiated services approach.

In [1], [2] Feng et al proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP didn't realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees.

In [8] Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28] Azeem et al proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to adapt fairness in sharing the bandwidth of a network's bottleneck link, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt to the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth; in this case 'idle' flows waste their shares while preventing 'active' flows from increasing theirs. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates or different levels of congestion experienced by individual TCP flows within the network.
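The inefficiency of per-flow marking can be made concrete with a toy sketch (the equal-share split, names and numbers are illustrative, not from the proposed scheme): an idle flow's share of the aggregate target goes unused instead of being redistributed.

```python
def per_flow_shares(aggregate_target, demands):
    """Per-flow marking: each flow gets an equal integer share of the aggregate
    target; an idle flow's unused share is wasted rather than redistributed."""
    share = aggregate_target // len(demands)
    used = [min(demand, share) for demand in demands]
    return used, aggregate_target - sum(used)

used, wasted = per_flow_shares(900, [600, 300, 0])   # third flow is idle
print(used, wasted)   # [300, 300, 0] 300
```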

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? The system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the 'Two Markers System' in a diffserv network.

Fig 9 The 'Two Markers System' framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I determines which window (iwnd or ownd) to reduce, and reduces its size, according to whether the dropped packets are IN-marked or OUT-marked; in this way TMS_I decreases TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) kept up the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours its packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I can be not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of the network of the marking rate.
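The iwnd/ownd bookkeeping described above can be sketched roughly as follows; the halving-on-drop rule is an assumed TCP-like reaction used for illustration, not a literal transcription of the scheme in [2]:

```python
class TwoWindowState:
    """TMS_I bookkeeping sketch: cwnd is the sum of the In-profile and Out-of-profile
    windows; a drop shrinks only the window matching the dropped packet's mark."""
    def __init__(self, iwnd, ownd):
        self.iwnd, self.ownd = iwnd, ownd

    @property
    def cwnd(self):
        return self.iwnd + self.ownd

    def on_drop(self, was_in_profile):
        # Assumed multiplicative decrease applied to the affected window only.
        if was_in_profile:
            self.iwnd = max(1, self.iwnd // 2)
        else:
            self.ownd = max(0, self.ownd // 2)

s = TwoWindowState(iwnd=8, ownd=4)
s.on_drop(was_in_profile=False)      # an OUT-marked packet was dropped
print(s.iwnd, s.ownd, s.cwnd)        # 8 2 10
```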

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or the marking information) from the sources, and the marker then re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore the key point of the fairness strategy is the marking rate (i.e. the information on a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows towards the total target rate of all the flows, based on the marking probabilities from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10 The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, …, n, and let Cj represent the capacity of a bottleneck link j, j = 1, 2, …, m, in the network. The target rate of each source is its reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of the network is the reservation rate (Ri) plus a variable value, the fairness component (Xmi), drawn from the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability point of view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point; (ii) all traffic from ingress point S to a set of egress points; (iii) all traffic between ingress point S and egress point D; (iv) all traffic between any ingress point and egress point D.

The capacity of the network in the first three cases of the service scope above is C = \sum_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at this bottleneck link is then given by

E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i    (1)

In an edge-to-edge SLA, namely the third case of the service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C - R = C_1 - \sum_{i=1}^{n} R_i    (2)

The last case of the service scope, tied to the receiver, however, requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}, where k, k = 1, 2, ..., l, represents the k-th ingress point. The excess bandwidth is then given by

E = C - R = C - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}    (3)


where the l-th ingress point has to be aware of the reservation rates of the other l - 1 ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore the Two Markers System can be implemented under variable SLA scopes, but for the reason above we assume that the scope of a service in this paper includes the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

T_i = R_i + X_{mi} \times E = R_i + X_{mi} (C - \sum_{i=1}^{n} R_i)    (4)

where X_{mi} is proportional to the probability of marking from a source; that is, it is determined by the fraction between the marking rate of each flow and the average marking rate. The development of X_{mi} is as follows. First, the probability of marking for each flow i is the number of marked packets (N_{mi}) divided by the throughput (Thr_i), or the In-profile window divided by the congestion window:

m_i = N_{mi} / Thr_i = iwnd / cwnd    (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] of all the flows at the edge of the network is

E[m_i] = (1/n) \sum_{i=1}^{n} m_i = (1/n) \sum_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) \sum_{i=1}^{n} Mark_{p,i}    (6)

where Mark_p also represents the probability of marking in the TMS_I algorithm of section 3.3. Finally, X_{mi} is developed by using the fraction between (5) and (6); that is, X_{mi}


represents a relative value against the datum point E[m_i]. The fraction X_{mi} varies from 0 to 1 depending upon m_i and E[m_i].

Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
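As a concrete illustration, the computation of the edge target rates from equations (1), (4), (6) and (7) can be sketched in Python; the function name and the sample numbers below are hypothetical, not taken from the thesis.

```python
# Illustrative sketch (not from the thesis): computing the fairness
# component X_mi and the edge target rates T_i of equations (1), (4),
# (6) and (7). Function name and sample numbers are hypothetical.

def edge_target_rates(reservations, capacity, marking_probs):
    """reservations: R_i per flow; capacity: bottleneck capacity C;
    marking_probs: m_i = iwnd_i / cwnd_i reported by each TMS_I source."""
    excess = capacity - sum(reservations)              # E = C - sum(R_i)   (1)
    total_m = sum(marking_probs)                       # n * E[m_i]         (6)
    targets = []
    for r_i, m_i in zip(reservations, marking_probs):
        x_mi = m_i / total_m if total_m > 0 else 0.0   # X_mi               (7)
        targets.append(r_i + x_mi * excess)            # T_i = R_i + X_mi*E (4)
    return targets

# Three flows reserving 3 Mbps each on a 20 Mbps bottleneck (E = 11 Mbps):
# the flow marking most aggressively receives the largest share of E.
targets = edge_target_rates([3.0, 3.0, 3.0], 20.0, [0.5, 0.3, 0.2])
```

Note that the resulting targets sum to the capacity C, which is exactly the property the fairness strategy aims for in an over-provisioned network.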

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources. That is to say, the TCP-source marking module keeps the marking rate in line with the bandwidth the source wants guaranteed.

X_{mi} = m_i / (n E[m_i]) = m_i / \sum_{i=1}^{n} m_i    (7)


Fig. 11. Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking (Mark_p(init))? It is generally determined by the fraction between the measured throughput (Thr) and the target throughput (T_i = R_i), as follows.

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= 1 - Markp
            else
                iwnd -= Markp
        } else {
            ownd = cwnd
        }
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


Mark_p(init) = 1 - Thr / R_i    (8)

where R_i represents the initial rate that the user wants to have guaranteed in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I marks packets proportionally, according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed. Otherwise it increases the ownd size so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among the other sources' TCP traffic and best-effort traffic, and also considers the secondary TCP dynamics (the so-called boomerang effect), which are caused by excessive drops of IN-marked packets at the core of a network when the network connection is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd exists, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, the TCP-source marking module (TMS_I) maintains the marking rate more properly than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognises packet losses and determines whether a lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.
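The window-opening rule of program code 1 can be rendered as a small runnable sketch; this is an illustration in Python (assuming fractional window units and the variable names of the figure), not the actual implementation used in the simulations.

```python
# Hypothetical Python rendering of Program code 1 (TMS_I, cwnd opening);
# windows are kept in fractional packet units for simplicity.

def on_ack(cwnd, markp, thr, r_i, issthresh, ossthresh):
    iwnd = markp * cwnd                # In-profile window
    ownd = cwnd - iwnd                 # Out-of-profile window
    if thr < r_i:                      # throughput below the reservation rate
        if iwnd < issthresh:
            iwnd += markp              # slow-start-like growth of IN window
        else:
            iwnd += 1.0 / (2 * cwnd)   # congestion avoidance: half of 1/cwnd
        if ownd < ossthresh:
            ownd += 1.0 - markp
        else:
            ownd += 1.0 / (2 * cwnd)
    else:                              # reservation met: yield the IN share
        if iwnd > 0:
            if iwnd < issthresh:
                iwnd -= 1.0 - markp
            else:
                iwnd -= markp
        else:
            ownd = cwnd
        if ownd < ossthresh:
            ownd += 1.0
        else:
            ownd += 1.0 / cwnd
    cwnd = iwnd + ownd
    markp = iwnd / cwnd                # marking probability fed to TMS_II
    return cwnd, markp
```

Calling this per ACK grows the window while the reservation is unmet and shifts window share from iwnd to ownd once it is met, which is exactly the behaviour the paragraph above describes.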

Fig. 12. Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd / 2
        ownd = ossthresh
    } else {
        cwnd = cwnd / 2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the i-th flow
Ti: target rate of the i-th flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half of its size whenever an OUT packet loss occurs; this indicates temporary congestion in the network. The loss of IN-marked packets, on the other hand, indicates acute congestion in the network, and then the congestion window is reduced to half of its size. Therefore the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which lacks such a distinctive mechanism for reducing the congestion window.
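The closing rule can likewise be sketched as runnable Python (a hypothetical rendering of program code 2, returning the new congestion window):

```python
# Hypothetical Python rendering of Program code 2 (TMS_I, window closing
# on the third duplicate ACK); returns the new congestion window.

def on_triple_dupack(cwnd, markp, out_marked_loss):
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if out_marked_loss:             # OUT loss: temporary congestion,
        ossthresh = ownd / 2.0      # so halve only the Out-of-profile window
        ownd = ossthresh
    else:                           # IN loss: acute congestion,
        cwnd = cwnd / 2.0           # so halve the whole congestion window
        issthresh = markp * cwnd
        ossthresh = cwnd - issthresh
        iwnd, ownd = issthresh, ossthresh
    return iwnd + ownd
```

An OUT loss thus costs only half of the OUT share of the window, while an IN loss halves the whole window, matching the temporary-versus-acute distinction above.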

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) that is needed to determine a new datum point (X_{mi}) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of this datum point in the edge-embedded marking module in section 3.4. Consequently, the strategy of providing the datum point, which carries the user's information, is able to alleviate the problem of introducing unfairness within aggregated flows under per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to fairly share the bottleneck bandwidth. In this case it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II should measure the probability of marking and compute the expected value of the marking rates, which are used in the fairness component as stated in section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed from the sources.

We assume that an edge router receives information on the reservation rate of which each user wants to be assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall expression (4). As stated in section 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the condition of the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, i.e. the average marking rate E[m_i], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) of the N-th measurement with that of the (N-1)-th measurement. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size of the N-th measurement is not equal to half of the ownd size of the (N-1)-th measurement, this represents that congestion does not occur in the network. The if clause in the first through seventh lines of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After deciding about congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (T_i), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor. In this case the fairness factor α is computed as follows:


α = 1 - Thr_i / T_i. Otherwise it decreases the marking rate by α, to provide the other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2) {
    for (i = 1 to n) {
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
    }
} else if (iwnd(N) != iwnd(N-1)/2) {
    exit
} else {    /* iwnd(N) == iwnd(N-1)/2 */
    for (i = 1 to n) {
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit
    }
}

Mi: marking rate of the i-th flow
Ti: target rate of the i-th flow
Ri: reservation rate of the i-th flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size of the N-th measurement is equal to half of the ownd size of the (N-1)-th measurement, and the iwnd size of the N-th measurement is not equal to half of the iwnd size of the (N-1)-th measurement, congestion happens in the network but is not serious. The else if clause in the eighth and ninth lines represents the instance in which the edge's marking rate is equal to the marking probability of the sources (or marking information), and so the re-marking process is not required.

Finally, if the iwnd size of the N-th measurement is equal to half of the iwnd size of the (N-1)-th measurement, this means that IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to consummate its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, a so-called complement factor, if the throughput does not realise the reservation rate of the user. In such an instance the complement factor is β = 1 - Thr_i / R_i. Otherwise, the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and then it fairly allocates bandwidth to the flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
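The per-flow re-marking of program code 3 can be sketched as follows. This is a hypothetical Python rendering; in particular, the α used in the decrease branch is taken as the magnitude |1 - Thr_i/T_i|, an assumption, since the figure only names the factor.

```python
# Hypothetical sketch of Program code 3 (TMS_II re-marking). Each flow is
# a dict with keys 'M' (marking rate), 'R' (reservation rate),
# 'Xmi' (fairness component), 'Thr' (measured throughput).

def remark(flows, excess, ownd_halved, iwnd_halved):
    if not ownd_halved:                          # no congestion: share E
        for f in flows:
            t_i = f['R'] + f['Xmi'] * excess     # T_i = R_i + X_mi * E
            alpha = abs(1.0 - f['Thr'] / t_i)    # fairness factor
            if f['Thr'] < t_i:
                f['M'] += alpha * f['M']
            else:
                f['M'] -= alpha * f['M']
    elif not iwnd_halved:                        # mild congestion:
        return                                   # marks already consistent
    else:                                        # acute congestion:
        for f in flows:
            if f['Thr'] < f['R']:                # target falls back to R_i
                beta = 1.0 - f['Thr'] / f['R']   # complement factor
                f['M'] += beta * f['M']
```

The three branches correspond one-to-one to the three congestion situations distinguished by the ownd/iwnd halving tests above.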

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-born packets are treated by the RIO queue at the core of the network unlike what the sources anticipate. If, for example, the re-born OUT-marked packets (IN-OUT), which were IN at the sources, are dropped at the core in a time of temporary congestion, the TCP sources, mistaking this for acute congestion, will reduce the iwnd size by half. Therefore TMS_I does not alleviate the TCP dynamics over the assured service well. On the other hand, if the re-born IN-marked packets (OUT-IN), sent as OUT by the sources, are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve only ownd. This retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but as IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides must not change, and that the strategy TMS_II elaborates has to be deliberated. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colours these correspond to red, yellow, and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window is not reduced to half of its size and the throughput measured at the edge of the network does not complete the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN, so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN. When an OUT-IN packet is discarded, the TCP source halves the ownd size. Next, TMS_II turns packets that were IN-marked at the sources into OUT if the congestion window is not reduced to half of its size and a flow has already accomplished its target rate; the drop precedences then consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by one step, and so it allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window is equal to half of its previous size and a flow has not achieved its target rate yet, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. It tries to compensate for the loss rate occurring in the time of congestion by lowering the drop precedence of OUT by one step.
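The colour mapping described above can be summarised in a short sketch (a hypothetical helper; the thesis defines only the three precedences and their drop order):

```python
# Hypothetical helper: the source's and the edge's IN/OUT decisions
# together select one of the three drop precedences used by TMS.

def drop_precedence(source_mark, edge_mark):
    if source_mark == 'IN' and edge_mark == 'IN':
        return 'DP0'   # green: dropped last
    if source_mark == 'OUT' and edge_mark == 'OUT':
        return 'DP2'   # red: dropped first
    return 'DP1'       # yellow: IN-OUT or OUT-IN, dropped second
```

Keeping the source's decision visible in DP1 is what lets TMS_I tell, on a loss, whether the packet it sent was marked or unmarked.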

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not change and will get the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. The TCP flows will also be penalised more than the non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be completed.

UDP flows in the TMS are treated as out-of-profile, so as to be put at a disadvantage, while TCP flows adapt to network dynamics by having their marking rate adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP packets marked as OUT (red, DP2) but lowers the drop precedence of TCP packets by the scaling factor relative to them. Therefore it tries to guarantee the target rates of the TCP flows by allocating to them the excess bandwidth that the UDP flows had almost dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms we have described in the previous chapter. The simulation was done with the network simulator 2 (ns-2.1b8a). For the sake of simulation we used a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or fixed below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in,


and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2      50      400      0.10
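The RIO behaviour implied by Table 4 can be sketched as the standard linear RED drop curve per precedence. This is an illustrative Python sketch, not the ns-2 code; avg_queue stands for the averaged queue length in packets.

```python
# Illustrative sketch (not ns-2 code): linear RED drop probability per
# drop precedence, using the thresholds of Table 4 (in packets).

RED_PARAMS = {           # dp: (min_th, max_th, Pmax)
    'DP0': (100, 400, 0.02),
    'DP1': (100, 400, 0.05),
    'DP2': (50,  400, 0.10),
}

def drop_prob(dp, avg_queue):
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0       # never drop below min_th
    if avg_queue >= max_th:
        return 1.0       # force-drop above max_th
    return p_max * (avg_queue - min_th) / (max_th - min_th)
```

At any queue length, DP2 (OUT) packets see a drop probability at least as high as DP1, which in turn exceeds DP0 -- the ordering the re-marking strategy of section 3.5 relies on.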

First of all, we investigate the simulation results for the Adequate-Marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, from S1-D11 to S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s, which stop at t = 350 s. First, the interval from t = 10 s through t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps; marking is initially performed, but only as temporary marking for sharing the excess bandwidth. Second, during the run time from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps each; marking in this interval is practically carried out so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220 s to t = 350 s; all the flows do their best to complete their target rates but fail to do so, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it then again marks them properly. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a); it shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs, the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not do so.

Fig. 15. Adequate-marking algorithm: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic features through governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind the changes in the network load: slowly rising in response to an increased traffic load and slowly falling in response to a decreased traffic load. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 rapidly responds to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to an increased (or decreased) traffic load in order to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in section 3.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, namely an under-subscribed network. The experiment was performed over the run time from t = 0 s to 500 s. Figure 17 represents the results, namely that the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load. That is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to the probability of its marking if the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

Fig. 16. Marking packets through the scaling factor: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The curves show S1 (total throughput), S2 (total throughput), and S1 (marking rate); the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time.


Fig. 17. Throughput for the marking rate-based fairness strategy: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The curves show flow1 through flow10; the vertical axis represents the achieved throughput and the horizontal axis the number of the simulation performed.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, an implementation of TCP-source marking modules at the sources raises the issue of modifying the TCP source in a differentiated services network.

Table 5. The average throughput of the aggregate flows as the RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System we have described in the previous chapter. For the sake of simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined for nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rates of the sources are fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is also 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1 Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we make use of the parameters of section 4.1.


Fig. 18. Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing the excess bandwidth according to the link utilisation. For the experiment we elaborated the strategy of increasing the TCP flows with a target rate of 1 Mbps from sources 1 to 4, while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link turns from 8 Mbps into 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (\sum_i T_i / C) \times 100    (9)

where T_i and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 % through 200 %, as shown in Figure 19, represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) \times 100    (10)

Therefore an excess bandwidth of 0 % denotes the target rate of each flow. First, as shown in Figure 19 (a), we deliberate on TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of the network load, as shown in the figure.
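The two metrics can be sketched directly; here ExBW is taken to be the achieved throughput minus the target rate, an assumption consistent with how Figure 19 is read.

```python
# Sketch of the metrics (9) and (10); ExBW is assumed to mean the
# achieved throughput minus the target rate.

def link_utilisation(targets, capacity):
    return 100.0 * sum(targets) / capacity          # rho (%),      (9)

def excess_bw_percent(throughput, target):
    return 100.0 * (throughput - target) / target   # ExBW / T (%), (10)
```

With four 1 Mbps TCP targets and two 1 Mbps UDP targets on a 3 Mbps bottleneck, link_utilisation([1.0] * 6, 3.0) gives ρ = 200 %, i.e. the far end of the under-provisioned region of Figure 19.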


[Plot data omitted: excess bandwidth (%) versus link utilisation (%), TCP and UDP averages. Panels: a) Time Sliding Window Two Color Marker; b) Two Markers System.]

Fig 19 Sharing of excess bandwidth according to link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation as a percentage

However, since UDP sources inject traffic into the network without any penalty, they achieve their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state. The degradation the UDP flows experience is only a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only dealt with OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than others in order to achieve its reservation rate. Therefore the flow is marked more and has a higher target rate than others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network and both suffer the same level of service degradation in the under-provisioned network.
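The edge-embedded decision above, where flows whose reported marking rate exceeds the average E[m] are treated as demanding more bandwidth, can be sketched as follows. This is an illustrative reconstruction, not the thesis' algorithm: the function name and the proportional split of an IN-marking budget are our assumptions.

```python
def redistribute_marking(marking_rates, in_budget):
    """Hypothetical sketch of marking rate-based fairness: split an
    aggregate IN-marking budget in proportion to each source's
    reported marking rate; flag flows above the mean E[m] as
    demanding more bandwidth."""
    mean = sum(marking_rates) / len(marking_rates)
    total = sum(marking_rates)
    shares = [in_budget * m / total for m in marking_rates]
    wants_more = [m > mean for m in marking_rates]
    return shares, wants_more

# Three sources reporting marking rates 0.9, 0.6, 0.3 share a 6 Mbps
# budget in the ratio 3:2:1; only the first exceeds the mean of 0.6.
shares, wants_more = redistribute_marking([0.9, 0.6, 0.3], 6.0)
```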

Next we present simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the extent to which the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The setting of the RTT parameters follows the strategy of Subsection 4.1. The average throughput of all the flows in both schemes is similar. First we consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3 Mbps, owing to the different marking strategy applied to UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that bandwidth in the under-provisioned network is distributed fairly (about 0.5 Mbps per flow) compared with Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Plot data omitted: per-flow bandwidth (Mbps) for aggregate flows 1-4, covering TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases. Panels: a) Time Sliding Window Two Color Marker; b) Two Markers System.]

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved bandwidth (Mbps) and the horizontal axis the aggregate flows


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this context, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to changes in the network, so it marks packets in proportion to the reservation rates the sources wish to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to the flows in view of proportional fairness, the so-called marking rate-based fairness strategy. Consequently it mitigates the problem of unfairness of per-aggregate marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduces TCP impacts over the assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP had almost dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. 1998.

[19] M. Vanhala, "Differentiated Services architecture," S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proceedings of ACM SIGCOMM, Oct. 1994, 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, 35-48.



CHAPTER 1 INTRODUCTION

The current Internet is based on the best-effort service model. In this model the service provider allocates bandwidth among all of the instantaneous customers as best it can; that is, all user packets compete equally for network resources. The service provider also attempts to serve all of them without making any explicit commitment as to rate or any other service quality. The best-effort model has been successful until now because the majority of the traffic on the Internet is TCP-based: the TCP end-to-end congestion control mechanism forces traffic sources to back off whenever congestion is detected in the network [8]. However, such dependence on the end systems' interaction is becoming increasingly unrealistic. Given the current best-effort model with FIFO queuing inside the network, it is relatively easy for non-adaptive sources to gain greater shares of network bandwidth and thereby starve other well-behaved TCP sources. A greedy source, for example, may simply continue to send at the same rate when faced with congestion while other TCP sources back off. The best-effort model is also ill-suited for applications such as real-time audio and video that require explicit bandwidth and bounded delay guarantees. Moreover, the best-effort model treats all packets equally once they have been injected into the network. Thus it is difficult for Internet Service Providers (ISPs) to provide services that are commensurate with the expectations of customers who are willing to pay more for a better class of service.

Over the last decade, a number of efforts have been made to provide Quality of Service in the Internet. Much of the work has focused on the end-to-end per-session reservation approach, where an application makes an end-to-end reservation before starting a session. While it makes sense to set up a reservation for a long-lasting session, this approach is not suitable for highly transaction-oriented applications such as browsing the World Wide Web (WWW): the overheads and latency involved in setting up a reservation for each session become prohibitively high. There are also concerns regarding the scalability of providing fine-grained guarantees at the per-session level in the core of the Internet.

The movement toward providing Quality-of-Service (QoS) assurance in the Internet has encouraged research in differentiated services (diffserv, or DS) while the number of users keeps growing. Diffserv is a suitable solution for Quality of Service provisioning because it scales well with an increasing number of network users and it does not alter the current Internet paradigm much [11]. In contrast to the per-session reservation approach, the differentiated services scheme allows service providers to offer different levels of service to a few classes of aggregated traffic flows. An ISP, for example, may offer two levels of service: a premium service for some customers who are willing to pay more, and best-effort service at a lower price. The differentiated services approach attempts to allocate bandwidth to aggregated flows without per-session signaling or per-session state in the network core, and to offer better services for users willing to pay more under long-term service contracts.

The journal model is Lecture Notes in Computer Science.

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment that should be supplied to the packet inside the domain. However, because of the limited number of bits available for use in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks, called per-hop behaviors (PHBs), which are used by routers to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) [20] and the Assured Forwarding (AF) [4] PHBs. The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. In order to ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, consists of four independently forwarded classes, each with two or three drop-precedence levels. The idea behind AF is to preferentially drop best-effort packets and packets that are outside of their contract when congestion occurs.

In this thesis we consider a form of better-than-best-effort service called the "Assured Service". The Assured Service follows expected capacity profiles that are statistically provisioned. Packets are treated preferentially according to the dropping probability applied to the best-effort queue. The assurance of service comes from the expectation that the traffic is unlikely to be dropped as long as it stays within the negotiated capacity profile. The building blocks of this service include a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in-profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise the packet is marked OUT (out-of-profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or token bucket control module. A differentiated dropping algorithm such as RIO (Random Early Detection with IN/OUT) is provided in the core routers of the network. In particular, the OUT packets are preferentially dropped upon evidence of congestion at the bottleneck, before the IN packets; only after all incoming OUT packets have been dropped are IN packets discarded. With this dropping policy the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.
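The TSW-based IN/OUT marking described above can be sketched as follows. This is a simplified illustration of the sliding-window rate estimate (a decaying average of recent throughput) with threshold marking; the class and parameter names are ours, and a full TSW2CM marker has additional details.

```python
class TSWMarker:
    """Sketch of a Time Sliding Window marker: estimate the temporal
    sending rate with a decaying average over a window, then mark a
    packet IN if the estimate is within the contracted profile and
    OUT otherwise."""

    def __init__(self, target_rate, win_len):
        self.target = target_rate  # contracted profile (bytes/s)
        self.win = win_len         # averaging window length (s)
        self.avg_rate = 0.0        # current rate estimate
        self.t_front = 0.0         # arrival time of previous packet

    def mark(self, now, pkt_size):
        # Bytes notionally seen in the sliding window, including this packet.
        bytes_in_win = self.avg_rate * self.win + pkt_size
        self.avg_rate = bytes_in_win / (now - self.t_front + self.win)
        self.t_front = now
        return "IN" if self.avg_rate <= self.target else "OUT"

# A flow contracted at 1000 B/s but sending 2000 B/s (100 B every 50 ms):
# early packets are marked IN while the estimate ramps up, later ones OUT.
m = TSWMarker(1000.0, 1.0)
marks = [m.mark(0.05 * (i + 1), 100) for i in range(40)]
```

The decaying average means a brief burst is forgiven, while a sustained rate above the profile is eventually marked OUT, which is what RIO then drops first under congestion.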

In [11] the authors showed that the use of a simple token bucket marker for the above Assured Service results in TCP failing to realise the minimum assured rate. They attributed the cause of such behaviour to TCP's complex response, primarily to packet losses. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. The exponential decrease (halving of the congestion window) is required to avoid congestion collapse, and TCP treats a packet drop as an indication of congestion [7]. However, in the diffserv network this additive-increase, multiplicative-decrease behaviour makes it hard to protect the reservation rate: when TCP reacts to an OUT packet drop by halving its congestion window and then increasing it additively, it may in the end never reach its reservation rate. The overall performance was therefore affected by the bursty losses being experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications and that an Assured Service could not provide consistent rate guarantees. This is known as the problem of TCP dynamics over the Assured Service.

In this thesis we take into consideration the problem of TCP dynamics and the differentiated drop policies of the network in realizing the reserved throughputs, and we propose a newly modified scheme, called the Two Markers System, for improving the realisation of target rates in a diffserv network. In addition, the issue of fairness among users is important in the diffserv context: since we expect that the differentiation in service will be based on some kind of pricing mechanism, it is necessary that the resource allocation be commensurate with how the service is priced. In this thesis we also discuss how fairly the proposed scheme allocates bandwidth to flows.

One of the cruxes identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows [9, 11, 14]. TCP and UDP interaction over the AF PHB has become an important issue in the fairness of the diffserv context because TCP flows are penalised more than UDP flows with respect to service degradation and fail to accomplish their target rates. Therefore we consider the issue of interaction between TCP and UDP flows by regulating their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP influences over Assured Services, considers the issue of fairness between TCP and UDP flows over a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents results for the above algorithms in simulated environments where these traffic types coexist and analyses the simulation results. Chapter 5 concludes our work.


CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point the rate at which TCP sends its packets exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow start and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion avoidance algorithm [7], a modification to TCP first deployed in Reno variants of TCP. In this algorithm TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion avoidance phase, in which the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above the network capacity, packets are eventually dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred, immediately reduces its sending rate in half by halving its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.

When congestion is severe enough that packet loss cannot be inferred in such a manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm. In particular, if the sender continually sends the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.

Fig 1 Throughput of Single TCP Transfer

Figure 1 gives a graphical picture of how TCP slow start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion the TCP sender enters the congestion avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state TCP thus oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.
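The sawtooth of Figure 1 can be reproduced with an idealised per-RTT sketch. This is not a full TCP model (no timeouts, no ACK clocking, window in whole segments); the function name and parameters are illustrative.

```python
def tcp_cwnd_trace(ssthresh, capacity, rtts):
    """Idealised congestion-window trace, one value per RTT:
    exponential growth below ssthresh (slow start), +1 segment per
    RTT above it (congestion avoidance), and halving whenever cwnd
    exceeds the path capacity (loss detected via duplicate ACKs)."""
    cwnd, trace = 1, []
    for _ in range(rtts):
        trace.append(cwnd)
        if cwnd > capacity:          # overflow: fast retransmit/recovery
            ssthresh = cwnd // 2
            cwnd = ssthresh          # resume from the halved window
        elif cwnd < ssthresh:
            cwnd *= 2                # slow start: double each RTT
        else:
            cwnd += 1                # congestion avoidance: +1 each RTT
    return trace

# Starts 1, 2, 4, 8, then climbs linearly past a 20-segment pipe,
# halves, and repeats: the steady-state oscillation between w and w/2.
print(tcp_cwnd_trace(8, 20, 30))
```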

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One proposed modification is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of the segments it is missing. With this information the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger TCP sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and the TCP sender should thus be allowed to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The NewReno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern in which the congestion window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion avoidance phase so that it only performs its linear increase when the network is not congested. In these algorithms, if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential to improve loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections multiplexed across the link.

Fig 2 TCP/IP Headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-Capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the Congestion Experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes its congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver in order to signal the receiver that it has in fact reduced its sending rate.
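The router side of this mechanism can be sketched with bit masks. The bit values below follow the original ECN proposal's TOS-byte layout (ECT and CE as the two low-order bits, an assumption consistent with the era of [26]); the function name is ours.

```python
ECT = 0x02  # ECN-Capable Transport bit (TOS-byte layout of early ECN)
CE  = 0x01  # Congestion Experienced bit

def router_mark(tos, congested):
    """Router behaviour from the text: under congestion, non-ECN flows
    have their packet dropped, while ECN-capable flows get CE set
    instead. Returns (possibly modified TOS byte, dropped?)."""
    if not congested:
        return tos, False       # forward unchanged
    if tos & ECT:
        return tos | CE, False  # ECN-capable: mark instead of dropping
    return tos, True            # not ECN-capable: drop
```

The receiver would then echo CE back via the ECN-echo TCP flag, and the sender would answer with CWR, but those two ends are omitted here.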

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, allowing them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses them to determine whether to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length: the probability is 0 when the average queue length is less than or equal to minth, and it increases linearly to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.
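The piecewise-linear curve of Figure 3 translates directly into code. This sketch covers only the probability calculation; a real RED queue also maintains the EWMA of the queue length and counts packets since the last drop.

```python
def red_mark_probability(avg_q, min_th, max_th, max_p):
    """Marking/dropping probability of RED as a function of the average
    queue length: 0 at or below min_th, rising linearly to max_p at
    max_th, and 1 (drop/mark everything) above max_th."""
    if avg_q <= min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

# With min_th=20, max_th=80 and max_p=0.1, an average queue of 50
# (midway between the thresholds) gives probability max_p/2 = 0.05.
print(red_mark_probability(50, 20, 80, 0.1))
```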

Fig 3 The marking/dropping behavior of RED

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour desired. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs) and on a per-packet basis, to aggregates of traffic streams in the nodes of DS-capable networks. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies the service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. The different types of service will not themselves be standardized within diffserv, but rather the building blocks behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig. 4. Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig. 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The Service Level Agreements (SLAs) formed specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules, which are applied to traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. A traffic conditioner is an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs don't solely define services, but they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain; the latter handle traffic leaving the domain. In practice the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig. 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e. the interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node into a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service provisioning approaches.

Fig. 5. A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig. 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between the parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig. 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the case where the network is non-DS-capable.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream down to the level justified for H5. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The backbone DS router shown always polices incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig. 6. Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig. 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a manner similar to the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and the queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling with respect to Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
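As a small illustration of this layout (the helper names are ours), the DSCP and the legacy precedence field can be extracted from a TOS octet with simple bit operations, since bits 0-5 are the six high-order bits of the octet:

```python
def dscp_of(tos_octet):
    """DSCP is bits 0-5, i.e. the six high-order bits of the TOS octet."""
    return (tos_octet >> 2) & 0x3F

def precedence_of(tos_octet):
    """The legacy precedence field is bits 0-2, the three high-order bits."""
    return (tos_octet >> 5) & 0x07
```

For example, a TOS octet of 0xB8 carries the DSCP 101110 (the EF codepoint mentioned later) with precedence 101.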

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but this is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e. service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig. 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.

Fig. 7. A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
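One common way to realize such a meter is a token bucket. The following sketch (illustrative names and structure, not a reference implementation) shows a token-bucket meter feeding its conformance result to a marker, which assigns an in-profile or out-of-profile codepoint accordingly:

```python
class TokenBucketMeter:
    """Meters a packet stream against a traffic profile given by a
    rate (bytes/s) and a burst size (bytes). Illustrative sketch."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0   # bucket starts full

    def conforms(self, size, now):
        # replenish tokens for the elapsed interval, capped at the burst
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True        # in profile
        return False           # out of profile

def mark(packet, meter, now, in_dscp, out_dscp):
    """Marker: sets the DSCP according to the meter's state information."""
    packet["dscp"] = in_dscp if meter.conforms(packet["size"], now) else out_dscp
    return packet
```

A dropper would simply discard the packet instead of re-marking it when `conforms` returns False; a shaper would queue it until enough tokens accumulate.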

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic into conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic can't be assumed to conform to the TCA; policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore, a limited number of conditioning rules (i.e. some, but not all, of the TC components in Fig. 7) may be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized PHBs, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may serve a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig. 6) is limited and room is left for future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impacts of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. The PHB specifications themselves give behavioural characteristics instead of implementation guidelines. This leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of the negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented later in this chapter. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when implemented, the specifications must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility; the Default PHB exists for that. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one that is used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that doesn't conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as receiving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS field are widely used in existing networks' equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig. 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig. 8. Usage of the DS field in the Class Selector PHB
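The eight Class Selector codepoints can be enumerated directly from the 'xxx000' pattern described above (a small illustration; the names are ours):

```python
# The eight Class Selector codepoints: the legacy precedence value in
# bits 0-2 (the three high-order bits of the DSCP), zeros in bits 3-5.
class_selectors = [prec << 3 for prec in range(8)]

def is_class_selector(dscp):
    """A DSCP is a Class Selector codepoint iff its low three bits are zero."""
    return (dscp & 0b000111) == 0
```

Note that 000000 is among them, which is why the Class Selector requirements on that codepoint must stay compatible with the Default PHB.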

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets in each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

- How much forwarding resources have been allocated to the AF class that the packet belongs to. Packets in one class are forwarded independently from the others and within the service rate (bandwidth) that has been configured for the class.
- What the current load of the AF class is.
- In case of congestion within the class, what the drop precedence of the packet is. Packets with a higher drop precedence value are discarded preferentially.

Table 1. The four AF classes and the related three drop precedence values

                             Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)      001010    010010    011010    100010
Medium Drop Precedence (2)   001100    010100    011100    100100
High Drop Precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
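The codepoints in Table 1 follow a regular bit layout, which the following sketch reproduces (the helper name is ours): the class number occupies the three high-order bits and the drop precedence the next two, with a trailing zero.

```python
def af_dscp(n, m):
    """DSCP bit pattern for AF class n (1-4) with drop precedence m (1-3),
    reproducing Table 1: class in bits 0-2, drop precedence in bits 3-4,
    and a zero in bit 5."""
    assert 1 <= n <= 4 and 1 <= m <= 3
    return (n << 3) | (m << 1)
```

For instance, AF13 (class 1, high drop precedence) yields 001110, matching the table.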

One major group of services is that requiring assured bandwidth with low loss, low latency and low jitter on an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video, or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate. In other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the task of the network boundary traffic conditioners to bound the rates of traffic aggregates such that, at every transit node, an aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

There are some things common to both examples. First of all, policers need to be configured at the traffic ingress points. Secondly, the PHBs used need to be implemented at the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus provides reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for the BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritizing of AF13 over AF11 is done, is up to the provider.
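A policer realizing the TCA of Table 2 could look roughly as follows. This is a simplified sketch (names are ours): the measured aggregate rate is assumed to come from a meter, which is not shown here.

```python
# AF DSCP bit patterns taken from Table 1
AF13, AF11 = 0b001110, 0b001010

def bbe_police(packet, measured_rate_bps, profile_bps=1_000_000):
    """BBE policer per Table 2: traffic within the 1 Mbps aggregate
    profile is directed to AF13; excess traffic is re-marked to AF11."""
    packet["dscp"] = AF13 if measured_rate_bps <= profile_bps else AF11
    return packet
```

Because both codepoints belong to AF class 1, re-marking excess traffic keeps it in the same queue and so does not reorder packets within a microflow.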

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3. TCA specifications for the Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C, respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on both the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.
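The 750 Kbps figure is simply the worst-case sum over the lines that may share a router. A small worked illustration of this provisioning rule (names are ours):

```python
EF = 0b101110   # recommended EF codepoint

# Service rates negotiated at ingress point A (Table 3), in Kbps
leased_lines = {"A->B": 500, "A->C": 250}

def worst_case_load_kbps(lines):
    """A transit router that may be crossed by every leased line must be
    provisioned for at least the sum of their service rates, so that the
    EF aggregate's arrival rate never exceeds its departure rate."""
    return sum(lines.values())
```

This keeps the EF invariant of the previous section: at every transit node, the aggregate's maximum arrival rate stays below its configured minimum departure rate.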

2.3 Related Work

Recently, there has been a number of simulation studies on marking and dropping policies that focus on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They presented results showing that the proposed scheme decreased TCP impacts on the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. The authors, via simulation work, presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have drawbacks, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to an inefficient utilization of the reserved bandwidth; in this case, "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9. The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. TMS_I determines the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked whenever congestion occurs in the network, and so TMS_I decreases TCP impacts on the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) kept up the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also adequately marks packets to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to properly mark packets. So to speak, TMS_I can be not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key point of the fairness strategy is the marking rate (i.e. information about a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) aggregated flows toward the total target rate of all the flows, based on the probability of marking at the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow in proportion to what it requires.

Fig. 10 The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, …, n, and Cj represent the capacity of a bottleneck link j, j = 1, 2, …, m, in the network. The target rate of each source is a reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement Scope, SLA Scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of a network in the first three cases of service scope above is C = Σ_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at these bottleneck links is then given by

E = C − R = Σ_{j=1}^{m} C_j − Σ_{i=1}^{n} R_i    (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C − R = C − Σ_{i=1}^{n} R_i    (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = Σ_{k=1}^{l} Σ_{i=1}^{n} R_{ik}, where k, k = 1, 2, …, l, represents the k-th ingress point. The excess bandwidth is given by

E = C − R = C − Σ_{k=1}^{l} Σ_{i=1}^{n} R_{ik}    (3)


where the l-th ingress point has to be aware of the reservation rates of as many as l − 1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore, although the Two Markers System can be implemented for variable SLA scopes, we assume in this paper that the scope of a service includes the first three cases and excludes the last, for the reason above. A flow's target rate at the edge of the network is given by

T_i = R_i + X_mi × E = R_i + X_mi (C − Σ_{i=1}^{n} R_i)    (4)

where X_mi is proportional to the probability of marking at the source; that is, it is determined by the fraction between the marking rate of each flow and the average marking rate. The development of X_mi is as follows. First, the probability of marking for each flow i is the number of marked packets (N_mi) divided by the throughput (Thr_i), or the In-profile window divided by the congestion window:

m_i = N_mi / Thr_i = iwnd / cwnd    (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] of all the flows at the edge of a network is

E[m_i] = (1/n) Σ_{i=1}^{n} m_i = (1/n) Σ_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1}^{n} Mark_p(i)    (6)

where Mark_p also represents the probability of marking in the TMS_I algorithm of Section 3.3. Finally, X_mi is developed from the fraction between (5) and (6); that is, X_mi


represents a relative value against the datum point E[m_i]. The fraction X_mi varies from 0 to 1 depending upon m_i and E[m_i]. Consequently, this value stands for the share of excess bandwidth that each flow requires to achieve its target rate.
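The quantities in (1)–(7) can be illustrated with a small runnable sketch for a single bottleneck link; the flow counts, reservation rates, and marking probabilities below are hypothetical:

```python
# Excess bandwidth (eq. 2), fairness components (eq. 7) and edge target
# rates (eq. 4) for a single bottleneck link. All values are hypothetical.
def excess_bandwidth(capacity, reservations):
    """Eq. (2): E = C - sum_i R_i (single bottleneck, m = 1)."""
    return capacity - sum(reservations)

def fairness_components(marking_probs):
    """Eq. (7): X_mi = m_i / (n * E[m_i]) = m_i / sum_i m_i; the values sum to 1."""
    total = sum(marking_probs)          # n * E[m_i], cf. eq. (6)
    return [m / total for m in marking_probs]

def target_rates(reservations, marking_probs, capacity):
    """Eq. (4): T_i = R_i + X_mi * E."""
    e = excess_bandwidth(capacity, reservations)
    return [r + x * e
            for r, x in zip(reservations, fairness_components(marking_probs))]

# Three flows reserving 3 Mbps each on a 15 Mbps bottleneck; the marking
# probabilities m_i = iwnd/cwnd (eq. 5) are assumed already measured by TMS_I.
rates = target_rates([3.0, 3.0, 3.0], [0.2, 0.3, 0.5], 15.0)
print([round(t, 4) for t in rates])   # -> [4.2, 4.8, 6.0]
```

The 6 Mbps of excess bandwidth is split in the ratio 0.2 : 0.3 : 0.5 of the marking probabilities, on top of each flow's 3 Mbps reservation.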

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) consists of two windows in this algorithm [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources. That is to say, the TCP-source marking module keeps the marking rate as high as the bandwidth the source wants to have guaranteed.

X_mi = m_i / (n E[m_i]) = m_i / Σ_{i=1}^{n} m_i    (7)


Fig. 11 Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking (Mark_p(init))? It is determined by the fraction between the measured throughput (Thr) and the target throughput (T_i = R_i), as follows:

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri)
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        else
            ownd = cwnd
            if (ownd < ossthresh)
                ownd += 1
            else
                ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd
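A runnable Python transcription of program code 1 may help clarify the per-ACK update. This is a sketch: the nesting of the final ownd growth under the iwnd = 0 case is our reading of the pseudocode, not something the listing states explicitly.

```python
def tms1_on_ack(cwnd, markp, thr, ri, issthresh, ossthresh):
    """One TMS_I window update per newly acknowledged segment (a sketch)."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if thr < ri:
        # Below the reservation rate: grow the IN share by the marking rate
        # (slow-start-like), or by 1/(2*cwnd) in congestion avoidance.
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    elif iwnd > 0:
        # Above the reservation rate: release part of the IN share.
        iwnd -= (1 - markp) if iwnd < issthresh else markp
    else:
        # No IN share left: behave like a plain TCP window.
        ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd      # new cwnd and new Markp

cwnd, markp = tms1_on_ack(10.0, 0.5, thr=2.0, ri=3.0, issthresh=8, ossthresh=8)
print(cwnd, markp)    # 11.0 0.5: both windows grow while below the reservation
```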


Mark_p(init) = 1 − Thr / R_i    (8)

where Mark_p(init) represents the initial rate that the user wants to have guaranteed in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed. Otherwise, it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a decrement of ownd to the same degree. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers the second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the connection is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd exists, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate more properly than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because the TMS_I recognises packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.
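As a runnable check of (8), a minimal sketch (the clamp to [0, 1] is our addition for the case Thr ≥ R_i, which the expression itself does not handle):

```python
def initial_marking_prob(measured_thr, reservation):
    """Eq. (8): Mark_p(init) = 1 - Thr / R_i, clamped to [0, 1] (the clamp
    is an assumption for flows already at or above their reservation)."""
    return min(1.0, max(0.0, 1.0 - measured_thr / reservation))

# A flow measuring 1 Mbps against a 3 Mbps reservation starts marking heavily:
print(round(initial_marking_prob(1.0, 3.0), 3))   # -> 0.667
```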

Fig. 12 Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd/2
        ownd = ossthresh
    else
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the i-th flow
Ti: target rate of the i-th flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half its size whenever an OUT packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of an IN-marked packet indicates acute congestion in the network, and then the congestion window is reduced to half its size. Therefore, the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which has no such distinctive mechanism for reducing the congestion window.
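Program code 2 can be transcribed into a runnable sketch as follows; note that the original pseudocode does not update issthresh in the OUT-loss branch, so returning the current iwnd for it there is our placeholder assumption:

```python
def tms1_on_triple_dupack(cwnd, markp, in_packet_lost):
    """TMS_I window reduction on a third duplicate ACK (program code 2, a sketch).
    Returns (cwnd, issthresh, ossthresh)."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if not in_packet_lost:
        # OUT loss: temporary congestion, halve only the OUT window.
        ossthresh = ownd / 2
        ownd = ossthresh
        cwnd = iwnd + ownd
        issthresh = iwnd            # not set by the pseudocode; placeholder
    else:
        # IN loss: acute congestion, halve the whole congestion window.
        cwnd = cwnd / 2
        issthresh = markp * cwnd
        ossthresh = cwnd - issthresh
    return cwnd, issthresh, ossthresh

print(tms1_on_triple_dupack(10.0, 0.5, in_packet_lost=False))  # (7.5, 5.0, 2.5)
print(tms1_on_triple_dupack(10.0, 0.5, in_packet_lost=True))   # (5.0, 2.5, 2.5)
```

An OUT loss costs only a quarter of the window here (ownd 5.0 → 2.5), while an IN loss halves the whole window, matching the temporary-versus-acute distinction in the text.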

The goals of the TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform the TMS_II of the probability of marking (Mark_p) that is needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point in the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that includes the user's information is able to alleviate the problem of introducing unfairness within aggregated flows under per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources, in order to share the bottleneck bandwidth fairly. In this case it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II should measure the probability of marking and compute the expected value of the marking rates, which are used in the fairness component as stated in Section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets from the sources, which represents the target rate each flow wants to have guaranteed.

We assume that an edge router receives information about the reservation rate that each user wants to be assured of, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of a network. We also consider the following two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall expression (4). As stated in Section 3.2, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking at the sources if the network is over-provisioned; otherwise, it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents an increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, namely the average marking rate E[m_i], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, and the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N−1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N−1, this represents that congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After the congestion decision, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called the fairness factor. In this case the fairness factor α is computed as follows:


α = 1 − Thr_i / T_i. Otherwise, it decreases the marking rate by α, to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13 Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)          // no congestion
    for (i = 1 to n)
        Ti = Ri + Xmi * E
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)     // congestion, but not acute
    exit
else                                 // iwnd(N) == iwnd(N-1)/2: serious congestion
    for (i = 1 to n)
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi: marking rate of the i-th flow
Ti: target rate of the i-th flow
Ri: reservation rate of the i-th flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N−1, while the iwnd size at time N is not equal to half of the iwnd size at time N−1, congestion has happened in the network but is not serious. The else-if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (or marking information), and so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N−1, this means that IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to reach its target rate owing to the repeated reduction of sending rates through the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor β is

computed in the same way, namely β = 1 − Thr_i / R_i. Otherwise, no re-marking operation is performed.
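The TMS_II decision above can be sketched for a single flow as follows. This is a sketch, not the thesis implementation: we take α as |1 − Thr_i/T_i| so the same expression serves both the increase and decrease branches, which is our assumption.

```python
def tms2_adjust(thr, reservation, x_m, excess, ownd_halved, iwnd_halved, m):
    """One TMS_II marking-rate decision for flow i (program code 3, a sketch)."""
    if not ownd_halved:
        # No congestion: pursue the edge target rate of eq. (4).
        target = reservation + x_m * excess
        alpha = abs(1.0 - thr / target)          # fairness factor (assumed |.|)
        return m + alpha * m if thr < target else m - alpha * m
    if not iwnd_halved:
        # Congestion, but not acute: leave the marking rate alone.
        return m
    # Acute congestion: fall back to the reservation rate.
    if thr < reservation:
        beta = 1.0 - thr / reservation           # complement factor
        return m + beta * m
    return m

# Flow at 2 Mbps against a 4 Mbps edge target (3 Mbps reserved + half of
# 2 Mbps excess), no congestion: the marking rate 0.5 is raised.
print(round(tms2_adjust(2.0, 3.0, 0.5, 2.0, False, False, 0.5), 3))  # -> 0.75
```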

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to the flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. These re-born packets are treated by the RIO queue at the core of the network unlike what the sources anticipate. If, for example, re-born OUT-marked packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking this for acute congestion, will halve the iwnd size. Therefore TMS_I cannot properly alleviate the TCP dynamics over the assured service. On the other hand, if re-born IN-marked packets (OUT-IN) sent as OUT by the sources are removed owing to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve ownd, which retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides must not change, and that the strategy TMS_II elaborates has to be deliberate. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been reduced to half its size and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is removed, the TCP source halves the ownd size. Next, TMS_II turns packets that were IN-marked at the sources into OUT if the congestion window has not been reduced to half its size and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by one step, and so allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window is equal to half of its former size and a flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, and OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT by one step.
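The precedence assignment described above amounts to a small lookup from the (source mark, edge mark) pair to a drop precedence, with DP2 dropped first under congestion; a minimal sketch:

```python
# Drop-precedence assignment for re-marked packets (section 3.5, a sketch).
# Key: (mark at source, mark at edge); DP0 is dropped last, DP2 first.
DROP_PRECEDENCE = {
    ("IN", "IN"): "DP0",       # untouched IN (green)
    ("IN", "OUT"): "DP1",      # IN-OUT: demoted at the edge (yellow)
    ("OUT", "IN"): "DP1",      # OUT-IN: promoted at the edge (yellow)
    ("OUT", "OUT"): "DP2",     # untouched OUT (red), dropped first
}

def drop_order(packets):
    """Sort packets so that the first ones are dropped first under congestion."""
    rank = {"DP2": 0, "DP1": 1, "DP0": 2}
    return sorted(packets, key=lambda p: rank[DROP_PRECEDENCE[p]])

print(drop_order([("IN", "IN"), ("OUT", "OUT"), ("IN", "OUT")]))
# OUT is dropped first, then the re-marked IN-OUT, then IN
```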

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not change and will seize the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be achieved.

UDP flows in the TMS are dealt with as out-of-profile, to be placed at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the view of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2), but lowers the drop precedence of TCP packets by the scaling factors relative to them. Therefore it tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that the UDP flows had almost dominated. Consequently, TMS will compensate for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms we have described in the previous chapter. The simulation was done with the network simulator 2 (ns-2.1b8a). For the simulation we used a network with the configuration shown in Figure 14.

Fig. 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th    max_th    Pmax
DP0     100       400       0.02
DP1     100       400       0.05
DP2     50        400       0.1
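The effect of the per-precedence parameters can be sketched with the plain RED drop curve; note that a real RIO queue maintains a separate weighted-average queue estimate per colour, which this sketch omits:

```python
# Per-precedence RED drop probability using the Table 4 settings (a sketch).
RED_PARAMS = {                 # DP: (min_th, max_th, Pmax), thresholds in packets
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50, 400, 0.10),
}

def drop_probability(dp, avg_queue):
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0                        # below min_th: no early drop
    if avg_queue >= max_th:
        return 1.0                        # above max_th: force drop
    # Linear ramp from 0 to Pmax between the two thresholds.
    return p_max * (avg_queue - min_th) / (max_th - min_th)

for dp in ("DP0", "DP1", "DP2"):
    print(dp, round(drop_probability(dp, 250), 4))
# DP2 (OUT) is dropped most aggressively at the same average queue length
```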

First of all, we investigate the simulation results for the Adequate-Marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 to S10-D20, each with a target rate of 3Mbps, where the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s), which stop at t = 350 (s). First, the interval from t = 10 (s) through t = 110 (s) represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5Mbps; marking is initially performed only temporarily, for sharing the excess bandwidth. Second, at run time from t = 120 (s) to 210 (s), with soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps each; marking in this interval is practically carried out so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220 (s) to t = 350 (s); all the flows do their best to reach their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs, the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15: two marking-rate plots over simulation time for flow1 through flow10: a) Time Sliding Window Two Color Marker, b) Two Markers System.]

Fig. 15 Adequate-Marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic features, through governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate behind the changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is turned off quickly when it is detected that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in Section 3.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment was performed at run time from t = 0 (s) to 500 (s). Figure 17 represents the results, showing how the average throughputs of all individual flows of aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. However, in Figure 17 (b) each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking, if the total throughput is less than the capacity of the network. As


stated before, since TMS regulates the drop precedence in proportion to the probability of marking at the sources, all the flows are able to adapt to network dynamics.

[Figure 16: total throughput of S1 and S2 and the marking rate of S1 over simulation time: a) Time Sliding Window Two Color Marker, b) Two Markers System.]

Fig. 16 Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time.


[Figure 17: achieved bandwidth of flow1 through flow10 for aggregate flows 1 through 6: a) Time Sliding Window Two Color Marker, b) Two Markers System.]

Fig. 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput, and the horizontal axis the number of the simulation performed.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, the implementation of TCP-source marking modules at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of each aggregate flow as the RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous sections. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams, generated by infinite file transfers, originate at sources 1 through 4 and are destined to nodes 7 through 10; constant-rate UDP packet streams originate at sources 5 and 6 and are destined to nodes 11 and 12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rates of the sources are fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we make use of the parameters of Section 4.1.


Fig. 18. Simulation topology II using the Two Markers System.

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to link utilisation. For this experiment we elaborated the strategy by increasing the TCP flows with a target rate of 1 Mbps each from sources 1 to 4, while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We define the link utilisation (ρ) as follows [16]:

ρ = (Σi Ti) / C    (9)

where Ti and C represent the target rate of flow i and the capacity of the bottleneck link in the network, respectively, and ρ is expressed as a percentage in the figures. The interval from ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW = (ExBW / T) × 100 (%)    (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider the behaviour of TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall below their target rates at the saturation point (80%) of network load, as shown in the figure.
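The two quantities above can be computed directly. The sketch below implements equations (9) and (10); the flow targets and link capacity are illustrative values, not the thesis's exact simulation parameters.

```python
# Sketch of equations (9) and (10): link utilisation and excess bandwidth.

def link_utilisation(target_rates, capacity):
    """Equation (9): rho = (sum of target rates T_i) / C, as a percentage."""
    return 100.0 * sum(target_rates) / capacity

def excess_bandwidth_pct(throughput, target_rate):
    """Equation (10): excess bandwidth ExBW as a percentage of target T."""
    return 100.0 * (throughput - target_rate) / target_rate

# Six 1 Mbps flows over an 8 Mbps bottleneck -> over-provisioned (rho = 75%).
print(link_utilisation([1.0] * 6, 8.0))     # 75.0

# A flow achieving 1.5 Mbps against a 1 Mbps target has +50% excess bandwidth.
print(excess_bandwidth_pct(1.5, 1.0))       # 50.0
```

With the 3 Mbps bottleneck of the second scenario the same six targets give ρ = 200 (%), the right-hand end of the under-provisioned interval in Figure 19.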

[Figure 19 (a), Time Sliding Window Two Color Marker: average excess bandwidth (%) of TCP and UDP flows versus link utilization (%)]

[Figure 19 (b), Two Markers System: average excess bandwidth (%) of TCP and UDP flows versus link utilization (%)]

Fig. 19. Sharing of excess bandwidth according to link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis signifies the link utilisation as a percentage.

However, since UDP sources inject traffic into the network without any penalty, they achieve their target rates under all conditions and consume all of the excess bandwidth in the under-provisioned state. The only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume the following: (i) UDP flows in TMS are only dealt with by OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore the flow is marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT. The excess bandwidth in Figure 19 (b) is percentage-wise almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
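As a rough illustration of the marking rule just described (a hypothetical sketch, not the thesis's TMS implementation; the flow structure and colour names are assumptions), a TCP flow whose marking rate exceeds the average E[mi] could be promoted to a lower drop precedence, while UDP always stays at the highest:

```python
# Illustrative edge-embedded marking rule: UDP is always out-of-profile
# (DP2, red); a TCP flow whose source marking rate m_i exceeds the average
# E[m_i] is assumed to demand more bandwidth and is marked more favourably.

def edge_mark(flows):
    """flows: list of (name, is_udp, marking_rate). Returns {name: precedence}."""
    tcp_rates = [m for _, udp, m in flows if not udp]
    avg = sum(tcp_rates) / len(tcp_rates)           # E[m_i] over TCP flows
    result = {}
    for name, udp, m in flows:
        if udp:
            result[name] = "red"                    # DP2: dropped first
        elif m > avg:
            result[name] = "green"                  # DP0: flow demands bandwidth
        else:
            result[name] = "yellow"                 # DP1
    return result

marks = edge_mark([("tcp1", False, 0.9), ("tcp2", False, 0.5), ("udp1", True, 1.0)])
print(marks)   # {'tcp1': 'green', 'tcp2': 'yellow', 'udp1': 'red'}
```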

Next we present the simulation results concerning the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is the TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the extent to which the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1, and the average throughput of all the flows in both schemes is similar. First we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all of the excess bandwidth, about 0.5 Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth, about 0.3 Mbps each, as the system applies a different marking strategy to UDP flows. All the flows thus satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because UDP flows are transmitted constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that bandwidth in the under-provisioned network is distributed more fairly (about 0.5 Mbps per flow) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20 (a), Time Sliding Window Two Color Marker: per-flow bandwidth (Mbps) of TCP1-TCP4 and UDP1-UDP2 under over-provisioned and under-provisioned conditions]

[Figure 20 (b), Two Markers System: per-flow bandwidth (Mbps) of TCP1-TCP4 and UDP1-UDP2 under over-provisioned and under-provisioned conditions]

Fig. 20. Throughput according to network conditions: the vertical axis represents the bandwidth (Mbps) and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under a variable SLA scope. In this context two important issues have arisen, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize a user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greedy bandwidth consumption of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the probability of marking according to the changes in the network, so that it marked packets in proportion to the reservation rates to be guaranteed. It also notified TMS_II, called the edge-embedded marking module, of the marking rates to be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and allocated the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy, when resources in the diffserv network are over-provisioned. Consequently, it mitigated the problems of unfairness in per-aggregate marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, the TCP flows could fairly share the excess bandwidth that UDP had almost dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they fairly experienced degradation in proportion to their target bandwidth.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows according to the variable parameters of RIO.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy, and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS 2000, Pittsburgh, PA, June 2000, pp. 35-48.



CHAPTER 1 INTRODUCTION

The current Internet is based on the best-effort service model. In this model the service provider allocates bandwidth among all of the instantaneous customers as best it can; that is, all user packets compete equally for network resources. The service provider also attempts to serve all of them without making any explicit commitment as to rate or any other service quality. The best-effort model has been successful until now because the majority of the traffic on the Internet is TCP-based: the TCP end-to-end congestion control mechanism forces traffic sources to back off whenever congestion is detected in the network [8]. However, such dependence on the end systems' interaction is increasingly becoming unrealistic. Given the current best-effort model with FIFO queuing inside the network, it is relatively easy for non-adaptive sources to gain greater shares of network bandwidth and thereby starve other, well-behaved TCP sources. A greedy source, for example, may simply continue to send at the same rate when faced with congestion while other TCP sources back off. The best-effort model is also unsuitable for applications such as real-time audio and video that require explicit bandwidth and bounded-delay guarantees. Moreover, the best-effort model treats all packets equally once they have been injected into the network. Thus it is difficult for Internet Service Providers (ISPs) to provide services that are commensurate with the expectations of consumers who are willing to pay more for a better class of service.

Over the last decade a number of efforts have been made to provide Quality of Service in the Internet. Much of the work has focused on the end-to-end per-session reservation approach, where an application makes an end-to-end reservation before starting a session. While it makes sense to set up a reservation for a long-lasting session, this approach is not suitable for highly transaction-oriented applications such as browsing the World Wide Web (WWW): the overheads and latency involved in setting up a reservation for each session become prohibitively high. There are also concerns regarding the scalability of providing fine-grained guarantees at the per-session level in the core of the Internet.

The movement toward providing Quality-of-Service (QoS) assurance in the Internet has encouraged research in differentiated services (diffserv or DS) while the number of users keeps growing. Diffserv is a suitable solution for Quality of Service provisioning because it scales well with an increasing number of network users and does not alter the current Internet paradigm much [11]. In contrast to the per-session reservation approach, the differentiated services scheme allows service providers to offer different levels of service to a few classes of aggregated traffic flows. An ISP, for example, may offer two levels of service: a premium service for some customers who are willing to pay more, and best-effort service at a lower price. The differentiated services approach attempts to allocate bandwidth for aggregated flows without per-session signaling or maintaining per-session state in the network core, and to offer better services, under long-term service contracts, to users willing to pay more.

The journal model is Lecture Notes in Computer Science.

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment that should be supplied to the packet inside the domain. However, because of the limited number of bits available for use in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks, called per-hop behaviors (PHBs), that are used by routers to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) [20] and the Assured Forwarding (AF) [4] PHBs. The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. In order to ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, specifies a forwarding behavior in which packets see a small amount of loss, and it consists of four independently forwarded classes that have two or three drop-precedence levels. The idea behind AF is to preferentially drop best-effort packets and packets that are outside of their contract when congestion occurs.

In this thesis we consider a form of better-than-best-effort service called the "Assured Service". The Assured Service follows expected capacity profiles that are statistically provisioned. Packets are treated preferentially according to the dropping probability applied to the best-effort queue. The assurance of service comes from the expectation that the traffic is unlikely to be dropped as long as it stays within the negotiated capacity profile. The building blocks of this service include a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in-profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise the packets are marked OUT (out-of-profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or token bucket control module. A differentiated dropping algorithm such as RIO (Random Early Detection with IN/OUT) is provided in the core routers of the network. In particular, the OUT packets are preferentially dropped upon evidence of congestion at the bottleneck, before the IN packets; IN packets are discarded only after all incoming OUT packets have been dropped. With this dropping policy the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.
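A token bucket control module of the kind mentioned above can be sketched as follows. This is a minimal illustrative marker, not the exact module used in the thesis; the class name and rate parameters are assumptions.

```python
# Minimal token-bucket profile marker: packets that conform to the contracted
# rate are marked IN, the rest OUT. Rates are in bytes/second.

class TokenBucketMarker:
    def __init__(self, rate, depth):
        self.rate = rate          # contracted profile rate (bytes/s)
        self.depth = depth        # bucket depth (bytes) -> tolerated burst
        self.tokens = depth       # bucket starts full
        self.last = 0.0

    def mark(self, now, pkt_size):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_size <= self.tokens:
            self.tokens -= pkt_size
            return "IN"           # within the contracted profile
        return "OUT"              # out-of-profile: dropped first by RIO

m = TokenBucketMarker(rate=1000.0, depth=1500.0)
print(m.mark(0.0, 1000))   # IN  (bucket starts full)
print(m.mark(0.1, 1000))   # OUT (only 600 tokens after refill)
print(m.mark(1.5, 1000))   # IN  (bucket has refilled)
```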

In [11] the authors showed that the use of a simple token bucket marker for the above Assured Service does not always result in TCP realising the minimum assured rate. They attributed the cause of such behaviour to TCP's complex response to packet losses. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. Exponential decrease (halving the congestion window) is required to avoid congestion collapse, and TCP treats a packet drop as an indication of congestion [7]. However, in the diffserv network this additive-increase and multiplicative-decrease behaviour makes it hard to protect the reservation rate: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may in the end never reach its reservation rate. Therefore the overall performance was affected by the bursty losses being experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees. This is called TCP dynamics over the Assured Service.

In this thesis we take into consideration the problem of TCP dynamics and the differentiated drop policies of the network in realizing the reserved throughputs, and we propose a newly modified scheme, called the Two Markers System, for improving the realisation of target rates in a diffserv network. In addition, the issue of fairness among users is important in this diffserv context: since we expect that the differentiation in service will be based on some kind of pricing mechanism, it is necessary that the resource allocation be commensurate with how the service is priced. In this thesis we also discuss how fairly the proposed scheme allocates bandwidth to flows.

One of the cruxes identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows [9, 11, 14]. TCP and UDP interaction under the AF PHB has become an important issue in the fairness of the diffserv context, because TCP flows are penalised more than UDP flows with respect to service degradation and fail to accomplish their target rates. Therefore we deliberate on the issue of interaction between TCP and UDP flows by regulating their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP influences over Assured Services, considers the issue of fairness between TCP and UDP at a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents the results of the above algorithms in simulated environments where these traffic types coexist and analyses the simulation results. Chapter 5 concludes our work.


CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point the rate at which TCP sends its packets exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow-start and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in the Reno variants of TCP. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion-avoidance phase, in which the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above the network capacity, packets are eventually dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred, immediately cuts its sending rate in half by halving its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.

When congestion is severe enough that packet loss cannot be inferred in such a manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm. In particular, if the sender continually sends the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.

Fig. 1. Throughput of a single TCP transfer.

Figure 1 shows graphically how TCP slow-start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion the TCP sender enters the congestion-avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP then oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.
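The slow-start, congestion-avoidance, and window-halving behaviour described above can be caricatured in a few lines. This toy model (one loss check per RTT, illustrative ssthresh and capacity values) reproduces the sawtooth of Figure 1:

```python
# Toy cwnd evolution: exponential growth below ssthresh (slow start), linear
# growth above it (congestion avoidance), and halving when the window exceeds
# the path capacity (loss), giving a steady-state oscillation between w and w/2.

def cwnd_trace(rtts, ssthresh, capacity):
    cwnd, trace = 1, []
    for _ in range(rtts):
        trace.append(cwnd)
        if cwnd > capacity:            # queue overflow -> loss detected
            ssthresh = cwnd // 2
            cwnd = max(1, cwnd // 2)   # fast recovery: halve the window
        elif cwnd < ssthresh:
            cwnd *= 2                  # slow start: double every RTT
        else:
            cwnd += 1                  # congestion avoidance: +1 segment/RTT
    return trace

print(cwnd_trace(14, ssthresh=8, capacity=12))
# [1, 2, 4, 8, 9, 10, 11, 12, 13, 6, 7, 8, 9, 10]
```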

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One modification that has been proposed is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender which segments it is missing. By specifying this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger TCP sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and it should thus allow the TCP sender to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The New Reno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern in which the congestion window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion-avoidance phase so that it only performs its linear increase when the network is not congested. In both algorithms, if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential to improve loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections that are multiplexed across the link.

Fig. 2. TCP/IP headers.

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-capable Transport bit (ECT) to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the congestion experienced bit (CE) is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes its congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver, in order to signal the receiver that it has in fact reduced its sending rate.
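The router-side bit manipulation can be sketched as follows. The bit positions assume the two-bit ECN field of the original proposal; the function and constant names are illustrative, not from a real router implementation.

```python
# Sketch of ECN marking at a congested router: the low-order bits of the IP
# TOS/DS byte hold ECT (ECN-Capable Transport) and CE (Congestion Experienced).

ECT = 0x02   # set by the sender if the transport understands ECN
CE  = 0x01   # set by a congested router instead of dropping the packet

def router_forward(tos, congested):
    """Return (tos, dropped): mark ECN-capable packets, drop the rest."""
    if congested:
        if tos & ECT:
            return tos | CE, False    # signal congestion without loss
        return tos, True              # non-ECN flow: fall back to dropping
    return tos, False

tos, dropped = router_forward(ECT, congested=True)
print(tos & CE != 0, dropped)                  # True False : marked, not dropped
print(router_forward(0x00, congested=True))    # (0, True)  : dropped instead
```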

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, allowing them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion: RED detects increases in the average queue length and uses it to determine whether to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length: the probability is 0 when the average queue length is less than or equal to minth, and it increases linearly to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.
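The curve of Figure 3 can be written down directly; a minimal sketch follows (parameter names follow the text, while the EWMA weight `w` is an assumed illustrative value):

```python
def ewma(avg, qlen, w=0.002):
    """Update the exponentially weighted moving average of the queue length."""
    return (1 - w) * avg + w * qlen

def red_mark_probability(avg, min_th, max_th, max_p):
    """Piecewise-linear RED marking/dropping probability, as in Figure 3."""
    if avg <= min_th:
        return 0.0
    if avg >= max_th:
        return 1.0           # above max_th every packet is dropped or marked
    return max_p * (avg - min_th) / (max_th - min_th)
```

For example, with minth = 5, maxth = 15 and maxp = 0.1, an average queue length of 10 yields a marking probability of 0.05, halfway up the linear ramp.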

Fig. 3. The marking/dropping behavior of RED

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses both in TCP congestion control and in RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour wished for. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs), and on a per-packet basis, to aggregates of traffic streams in the nodes of DS-capable networks. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies the service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. Different types of service will not be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig. 4. Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig. 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The formed Service Level Agreements (SLAs) specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules, which are applied to the traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. A traffic conditioner is an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs don't solely define services, but they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain; the latter handle traffic leaving the domain. In practice the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig. 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e. the interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node into a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service provisioning approaches.

Fig. 5. A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig. 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig. 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the case in which the network is non-DS-capable.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream down to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The presented backbone DS router always polices incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2 and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (the policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide which services are available in their networks, which in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig. 6. Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig. 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a manner similar to the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and the queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling with respect to Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

Bits 0-5 of the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
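In terms of the octet layout of Fig. 6, with bit 0 the most significant, the DSCP and the legacy precedence field can be extracted as follows; this is an illustrative sketch, not code from the thesis:

```python
def dscp_of(tos_octet):
    """Bits 0-5 of the TOS octet, i.e. the six most significant bits."""
    return (tos_octet >> 2) & 0x3F

def precedence_of(tos_octet):
    """Bits 0-2, the legacy IPv4 precedence field."""
    return (tos_octet >> 5) & 0x07
```

For instance, a TOS octet of 0xB8 carries the DSCP 101110 (the EF codepoint introduced later) and the precedence value 101.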

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but this is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is the part of a network node that takes the node's ingress packets as its input and places the packets in its output in the order that best satisfies the forwarding (i.e. service) requirements set for the packets, while using the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig. 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.

Fig. 7. A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
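A common way to implement the meter described above is a token bucket; the sketch below is illustrative only (the rate and burst parameters are assumptions, not values from the thesis):

```python
class TokenBucketMeter:
    """Decides whether each arriving packet is in or out of profile."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # token fill rate in bytes per second
        self.depth = burst_bytes        # bucket depth (maximum burst)
        self.tokens = burst_bytes       # the bucket starts full
        self.last = 0.0                 # time of the previous update

    def in_profile(self, pkt_bytes, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes    # conforming: consume tokens
            return True
        return False                    # out of profile: mark down, shape or drop
```

A marker or dropper would then act on the verdict: re-mark out-of-profile packets with a lower-assurance codepoint, delay them, or discard them.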

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic to conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic can't be assumed to conform to the TCA; policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore, a limited number of conditioning rules (i.e. some, but not all, of the TC components in Fig. 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides which services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers receive the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioral characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set on packets that should receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to the standardized ones, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may implement a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig. 6) is limited, and room is left for future usage, the codepoint mappings can be made freely by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. The PHB specifications themselves give behavioral characteristics instead of implementation guidelines. This leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of the negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide a wider basis for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented later in this chapter. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when implemented, the specifications must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility. The Default PHB exists for that purpose: it provides the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that doesn't conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as receiving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or locally used PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHBs, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig. 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.
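The eight Class Selector codepoints follow directly from the xxx000 pattern; a small sketch (illustrative, not from the thesis):

```python
# The eight Class Selector codepoints: the legacy precedence value occupies the
# three high-order DSCP bits, and the three low-order bits are zero (xxx000).
CLASS_SELECTORS = [precedence << 3 for precedence in range(8)]
```

The first entry, 000000, coincides with the Default PHB codepoint, consistent with the compatibility requirement stated above.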

Fig. 8. Usage of the DS field in the Class Selector PHBs

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurance and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets in each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

- how much forwarding resource has been allocated to the AF class that the packet belongs to (packets in one class are forwarded independently of the others, and within the service rate (bandwidth) that has been configured for the class);
- the current load of the AF class; and
- in case of congestion within the class, the drop precedence of the packet: packets with a higher drop precedence value are discarded preferentially.

Table 1. The four AF classes and the related three drop precedence values

                            Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)     001010    010010    011010    100010
Medium Drop Precedence (2)  001100    010100    011100    100100
High Drop Precedence (3)    001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in network nodes.

One major group of services is that requiring assured bandwidth with low loss, low latency and low jitter on an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data: speech, video, or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

Some things are common to both examples. First of all, policers need to be configured at the traffic ingress points. Secondly, the used PHBs need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus receives reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for the BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3. TCA specifications for the Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C, respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on both the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.

2.3 Related Work

Recently, there have been a number of simulation studies on marking and dropping policies that focused on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes that focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. Their results showed that the proposed scheme decreased the impact of TCP dynamics on the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. The authors, via simulation work, presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to account for TCP dynamics and to adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt to the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the state used for the marking [13]. A marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have drawbacks. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth: "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located in the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I determines which window (iwnd or ownd) to reduce according to whether the dropped packets were IN-marked or OUT-marked, and thus decreases the impact of TCP dynamics on the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) maintains the marking rate better than a source-transparent PME, because the throughput can be measured against the reservation rate more exactly at the source than at the edge of the network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours its packets into the edge of the network. Information about the marked packets is used in TMS_II of the traffic conditioner to allocate the bandwidth (or excess bandwidth) to the aggregated flows, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to


classify the service in the core of the network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of the network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or the marking information) from the sources, and the marker then re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. For this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where an increase or decrease of the marking rate represents the level of demand for resources. Therefore the key point of the fairness strategy is the marking rate (i.e. the information about a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows towards the total target rate of all the flows, based on the marking probabilities of the sources. That is, TMS_II adjusts the probability of marking so as to allocate to each flow as much of the resources as it requires.

Fig 10 The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, …, n, and let Cj represent the capacity of a bottleneck link j, j = 1, 2, …, m, in the network. The target rate of each source is a


reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement Scope, SLA Scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point;
(ii) all traffic from an ingress point to a set of egress points;
(iii) all traffic between ingress S and egress point D;
(iv) all traffic between any ingress point and egress point D.

The capacity of the network in the first three cases of the service scope above is C = Σ_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at these bottleneck links is then given by

E = C − R = Σ_{j=1}^{m} C_j − Σ_{i=1}^{n} R_i        (1)

In an edge-to-edge SLA, namely the third case of the service scope, the capacity of the network corresponds to the case m = 1, and so the excess bandwidth is

E = C − R = C − Σ_{i=1}^{n} R_i        (2)

The last case of the service scope, tied to the receiver, however requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates in order to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik, where k, k = 1, 2, …, l, represents the k-th ingress point, and the excess bandwidth is given by

E = C − R = C − Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik        (3)


where the l-th ingress point has to be aware of the reservation rates of as many as l − 1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System could therefore be implemented under various SLA scopes, but for the reason above we assume in this paper that the scope of a service covers the first three cases and excludes the last. A flow's target rate at the edge of the network is given by
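As a numerical check on equations (1)-(3), the excess-bandwidth computation can be sketched as below. The function names and the example capacities are illustrative assumptions of ours, not values from the thesis experiments.

```python
def excess_bandwidth(link_capacities, reservations):
    """E = sum_j C_j - sum_i R_i, equation (1); equation (2) is the
    special case of a single bottleneck link, m = 1."""
    return sum(link_capacities) - sum(reservations)

def excess_bandwidth_receiver_scope(link_capacities, reservations_per_ingress):
    """Equation (3): under a receiver-tied scope each ingress must learn
    the reservations R_ik of all l ingress points before computing E."""
    total_r = sum(sum(rs) for rs in reservations_per_ingress)
    return sum(link_capacities) - total_r

# One 20 Mbps bottleneck, five flows reserving 3 Mbps each -> 5 Mbps spare.
print(excess_bandwidth([20.0], [3.0] * 5))                       # 5.0
# Receiver scope: two ingress points holding reservations [3, 3] and [3].
print(excess_bandwidth_receiver_scope([20.0], [[3.0, 3.0], [3.0]]))  # 11.0
```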

where Xmi is proportional to the probability of marking at a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate:

T_i = R_i + X_mi × E = R_i + X_mi (C − Σ_{i=1}^{n} R_i)        (4)

The development of Xmi is as follows. First, the probability of marking for each flow i is the number of marked packets (Nmi) divided by the throughput Thr_i, or equivalently the In-profile window divided by the congestion window:

m_i = N_mi / Thr_i = iwnd / cwnd        (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] of all the flows at the edge of the network is

E[m_i] = (1/n) Σ_{i=1}^{n} m_i = (1/n) Σ_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1}^{n} Mark_p,i        (6)

where Mark_p also represents the probability of marking in the TMS_I algorithm of section 3.3. Finally, Xmi is obtained from the ratio between (5) and (6); that is, Xmi


represents a relative value against the datum point E[m_i], as given by equation (7). The fraction X_mi varies from 0 to 1 depending upon m_i and E[m_i]. Consequently, this value stands for the share of the excess bandwidth that each flow requires to achieve its target rate.
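Equations (4), (6), and (7) can be combined into a short sketch of how the edge would compute per-flow target rates from the sources' marking probabilities. The function names and the numeric example are our own illustration, not figures from the simulations.

```python
def fairness_components(mark_probs):
    """X_mi = m_i / (n * E[m_i]) = m_i / sum_i m_i, equations (6)-(7);
    the components are relative to the average marking rate and sum to 1."""
    total = sum(mark_probs)
    return [m / total for m in mark_probs]

def edge_target_rates(reservations, mark_probs, capacity):
    """T_i = R_i + X_mi * E with E = C - sum_i R_i, equation (4)."""
    excess = capacity - sum(reservations)
    return [r + x * excess for r, x in
            zip(reservations, fairness_components(mark_probs))]

# Flow 1 marks twice as aggressively as flow 2, so it is handed two
# thirds of the 3 Mbps excess (all figures are illustrative).
print([round(t, 6) for t in
       edge_target_rates([5.0, 5.0], [0.8, 0.4], capacity=13.0)])
```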

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate as high as needed for the bandwidth the source wants to have guaranteed.

X_mi = m_i / (n E[m_i]) = m_i / Σ_{i=1}^{n} m_i        (7)


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking (Markp(init))? It is determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as follows:

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= 1 - Markp
            else
                iwnd -= Markp
        } else {
            ownd = cwnd
        }
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd


where Markp(init) represents the initial marking rate the user needs in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I marks packets proportionally, according to whether the throughput (Thr) measured at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed; otherwise it increases the ownd size so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases the window size to probe for more bandwidth becoming available on the path. Note that while probing for resources the total increase rate of the two window sizes is 1/cwnd.
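The window update described above (program code 1) can be sketched in Python as follows. The function signature and the numeric example are our assumptions, and the fractional window arithmetic is kept abstract rather than packet-accurate.

```python
def tms1_on_ack(iwnd, ownd, thr, r_i, issthresh, ossthresh):
    """One ACK step of the Adequate-Marking window update (program code 1).
    Below the target rate: grow iwnd by Markp (fast phase) or 1/(2*cwnd)
    (congestion avoidance). At or above the target: shrink iwnd and let
    ownd probe for excess bandwidth."""
    cwnd = iwnd + ownd
    markp = iwnd / cwnd
    if thr < r_i:                      # target rate not yet realised
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                              # target reached: release IN share
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return iwnd, ownd, iwnd / cwnd     # new windows and marking rate

iwnd, ownd, markp = tms1_on_ack(4.0, 4.0, thr=2.0, r_i=3.0,
                                issthresh=8.0, ossthresh=8.0)
print(iwnd, ownd, markp)   # 4.5 4.5 0.5
```

With the flow below its reservation rate, both windows grow and the marking rate Markp = iwnd/cwnd is held steady, matching the invariant at the end of the listing.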

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among the other sources' TCP traffic and best-effort traffic, and also addresses a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of the network when the network is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate better than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so a source, mistaking this for an acute congestion

Mark_p(init) = 1 − Thr / R_i        (8)


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd/2
        ownd = ossthresh
    } else {
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the i-th flow
Ti: target rate of the i-th flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever the loss was of an OUT-marked packet; this indicates temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion in the network, and then the congestion window is halved. The modified TCP congestion algorithm therefore compensates for the loss rates caused by the conventional congestion algorithm, which has no distinct mechanism for deciding how to reduce the congestion window.
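The reduction policy of program code 2 can be sketched as follows. Treating the windows as real numbers and returning the new total cwnd are simplifying assumptions of ours.

```python
def tms1_on_triple_dupack(cwnd, markp, in_marked_loss):
    """Window reduction of program code 2: an OUT-marked loss signals
    temporary congestion and halves only ownd; an IN-marked loss signals
    acute congestion and halves the whole congestion window."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if not in_marked_loss:             # temporary congestion
        ossthresh = ownd / 2
        ownd = ossthresh               # only the OUT share backs off
    else:                              # acute congestion
        cwnd = cwnd / 2
        issthresh = markp * cwnd
        ossthresh = cwnd - issthresh
        iwnd, ownd = issthresh, ossthresh
    return iwnd + ownd                 # resulting congestion window

print(tms1_on_triple_dupack(8.0, 0.5, in_marked_loss=False))  # 6.0
print(tms1_on_triple_dupack(8.0, 0.5, in_marked_loss=True))   # 4.0
```

An OUT loss costs the flow only a quarter of its window here, while an IN loss costs half, which is the asymmetry the paragraph above describes.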

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over the AF services, and to inform TMS_II of the probability of marking (MarkP) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall the numerical expression (7). We deal with the usage of this datum point in the edge-embedded marking module in subsection 3.4. Consequently, the strategy of providing a datum point that carries the user's information is able to alleviate the problem of unfairness within aggregated flows under per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to fairly share the bottleneck bandwidth. To accomplish this fairness strategy, called the marking rate-based fairness strategy, it must closely cooperate with the TCP-source marking module. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which is used in the fairness component as stated in subsection 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed, from the sources.

We assume that an edge router receives information about the reservation rate that each user wants to be assured of, and contracts with the service provider (or bandwidth


broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall the numerical expression (4). As stated in subsection 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking at the sources if the condition of the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, namely the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate; it is therefore marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size (or ownd size) at time N−1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N−1, congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After deciding that there is no congestion, it measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor, computed as follows:


α = 1 − Thr_i / T_i. Otherwise it decreases the marking rate by α, to provide the other flows with the chance of sharing the excess bandwidth.

Fig 13 Program code 3 TMS_II algorithm

if (ownd(N) != ownd(N-1)/2) {
    for (i = 1 to n) {
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
    }
} else if (iwnd(N) != iwnd(N-1)/2) {
    exit
} else {    /* iwnd(N) == iwnd(N-1)/2 */
    for (i = 1 to n) {
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit
    }
}

Mi: marking rate of the i-th flow
Ti: target rate of the i-th flow
Ri: reservation rate of the i-th flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N−1, and the iwnd size at time N is not equal to half of the iwnd size at time N−1, congestion has happened in the network but is not serious. The else if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (the marking information), and so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N−1, IN-marked packets are being dropped in the core of the network and the congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to consummate its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, a so-called complement factor, if the throughput does not realise the user's reservation rate. In this instance the complement factor β is

β = 1 − Thr_i / R_i. Otherwise the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to the flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
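Putting the fairness factor α, the complement factor β, and the three congestion cases of program code 3 together, one re-marking step per flow might look like the sketch below. The function interface and the use of abs() in the decrease branch are our assumptions.

```python
def tms2_remark(m, thr, r, x_m, excess, ownd_halved, iwnd_halved):
    """One re-marking decision of program code 3 for a single flow.
    No congestion: steer the marking rate towards T_i = R_i + X_mi * E
    using the fairness factor alpha. Soft congestion (only ownd halved):
    keep the rate. Acute congestion (iwnd halved): fall back to
    T_i = R_i and compensate with the complement factor beta."""
    if not ownd_halved:                        # no congestion
        target = r + x_m * excess
        alpha = abs(1 - thr / target)
        return m + alpha * m if thr < target else m - alpha * m
    if not iwnd_halved:                        # soft congestion
        return m                               # edge rate already matches
    if thr < r:                                # acute congestion
        beta = 1 - thr / r
        return m + beta * m
    return m

# 4 Mbps achieved against a 6 Mbps target: the rate is marked up.
print(tms2_remark(0.5, thr=4.0, r=5.0, x_m=0.5, excess=2.0,
                  ownd_halved=False, iwnd_halved=False))
```

With thr = 4 and target = 6, α = 1/3, so the marking rate grows from 0.5 to 2/3; under acute congestion the same flow would instead be compensated by β = 1 − 4/5.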

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-born packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-born OUT-marked packets (IN-OUT), which were IN at the sources, are dropped at the core during temporary congestion, the TCP sources, mistaking this for acute congestion, will reduce by half the


iwnd size. As a result, TMS_I does not properly alleviate the TCP dynamics over the Assured Service. On the other hand, if re-born IN-marked packets (OUT-IN), sent as OUT by the sources, are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve only ownd, which retards the removal of the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked OUT at the edge of the network but IN at the source, and an OUT-IN packet is its counterpart. We assume that the drop precedence the source decides must not change, and the strategy that TMS_II elaborates has to respect this. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to claim the remaining resources. In this instance the three drop precedences IN, OUT-IN, and OUT flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is removed, the TCP source halves its ownd size. Next, TMS_II turns packets IN-marked at the sources into OUT if the congestion window has not been halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT, and OUT; by assigning these re-marked packets DP1 rather than DP2, TMS_II lowers their drop precedence one step below ordinary OUT packets and so allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and a flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network; when an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT packets one step.
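The precedence rules above can be captured in a small table. The tuple encoding (source mark, edge mark) and the ranking function are our own illustration of the three-color scheme, not an interface from the thesis.

```python
# DP0 (green) protects packets marked IN by both source and edge; a
# disagreement lands in DP1 (yellow); OUT at both stays DP2 (red).
PRECEDENCE = {
    ("IN", "IN"):   ("DP0", "green"),
    ("IN", "OUT"):  ("DP1", "yellow"),   # IN-OUT re-born packet
    ("OUT", "IN"):  ("DP1", "yellow"),   # OUT-IN re-born packet
    ("OUT", "OUT"): ("DP2", "red"),
}

def drop_order(packets):
    """RIO-style preference: red dropped first, then yellow, green last."""
    rank = {"DP2": 0, "DP1": 1, "DP0": 2}
    return sorted(packets, key=lambda p: rank[PRECEDENCE[p][0]])

print(drop_order([("IN", "IN"), ("OUT", "OUT"), ("IN", "OUT")]))
# → [('OUT', 'OUT'), ('IN', 'OUT'), ('IN', 'IN')]
```

The key property is that a re-marked packet never jumps all the way between green and red: the source's decision is preserved one step, as the text requires.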

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the


same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not change theirs and will capture the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. The TCP flows will also be penalised more than the non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be achieved.

UDP flows in the TMS are dealt with as out-of-profile, putting them at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the view of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2), but lowers the drop precedence of TCP packets below them by as much as the scaling factors allow. It therefore tries to guarantee the target rates of the TCP flows by allocating to them the excess bandwidth that the UDP flows had almost dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms we have described in the previous chapter. The simulation was done with the network simulator 2 (NS-2.1b8a). For the sake of the simulation we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps (or fixed below it), so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT, without queuing delay, of each flow is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in,


and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1

First of all, we investigate the simulation results for the Adequate-Marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3Mbps, and with the capacity of the bottleneck link set to 20Mbps. Five flows, S1 through S5, start at time t = 10s, followed by two flows, flow6 and flow7, at t = 120s, and the last three flows, flow8 through flow10, at t = 220s, which stop at t = 350s. First, the interval from t = 10s to t = 110s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5Mbps; marking is performed from the start, but only temporarily, for sharing the excess bandwidth. Second, in the run time from t = 120s to 210s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps; marking in this interval is practically carried out so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220s to t = 350s; all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330s to 400s, TMS momentarily over-marks packets owing to the congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15 (b) mark each packet of a flow in compliance with the changing condition of the network, those in Figure 15 (a) do not.

Fig 15 Adequate-marking algorithm: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the marking rates (Mbps) of flows 1 through 10 and the horizontal axis the simulation time (s). The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, then serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic features, through governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate and lags behind the changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 3.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment was performed in the run time from t = 0s to 500s. Figure 17 represents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, their average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without a marking rate adequate to the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking when the total throughput is less than the capacity of the network. As


stated before, since TMS regulates the drop precedence in proportion to the probability of marking at the sources, all the flows are able to adapt to network dynamics.

Fig 16 Marking packets through the scaling factors: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the total throughput of S1 and S2 and the marking rate of S1 (Mbps), and the horizontal axis signifies the simulation time (s).


Fig 17 Throughput for the marking rate-based fairness strategy: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the achieved throughput (Mbps) of flows 1 through 10 and the horizontal axis the aggregate flow.

The Two Markers System showed improved performance for the several parameters we simulated. However, implementing the TCP source marking module at the source raises the issue of modifying TCP senders in a differentiated services network.

Table 5 The average throughput of aggregate flows as RTT increases

Aggregate flow (No.)        1       2       3       4       5       6
RTT (ms)                   80     100     130     170     220     280
E[Thr] of TSW2CM (Mbps)  4.8989  4.9703  4.9183  4.9391  4.9247  4.9098
E[Thr] of TMS (Mbps)     5.2908  5.2207  5.2373  5.1944  5.2763  5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that each communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios, an over-provisioned and an under-provisioned network; that is, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increased the number of TCP flows with a target rate of 1Mbps from source 1 to 4, while each UDP flow stayed constant at 1Mbps, and changed the capacity of the bottleneck link from 8Mbps to 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ = (Σi Ti / C) × 100 (%)    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval ρ = 100 (%) through 200 (%) shown in Figure 19 corresponds to the reduced capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW = (ExBW / T) × 100 (%)    (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.
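The two ratios of Eq. (9) and Eq. (10) can be computed directly. A minimal sketch, with illustrative flow counts and rates (not the thesis's exact experimental values):

```python
# Link utilization (Eq. 9) and excess bandwidth (Eq. 10), both in percent.
# The flow target rates and link capacity below are hypothetical examples.

def link_utilization(target_rates_mbps, capacity_mbps):
    """rho = (sum of target rates / bottleneck capacity) * 100 (%)"""
    return 100.0 * sum(target_rates_mbps) / capacity_mbps

def excess_bandwidth_pct(achieved_mbps, target_mbps):
    """Excess BW = (achieved - target) / target * 100 (%); 0 % means the
    flow got exactly its target rate, negative means it fell short."""
    return 100.0 * (achieved_mbps - target_mbps) / target_mbps

# Six flows with a 1 Mbps target each on an 8 Mbps bottleneck:
print(link_utilization([1.0] * 6, 8.0))   # 75.0 -> over-provisioned
print(excess_bandwidth_pct(1.5, 1.0))     # 50.0  (50 % above target)
print(excess_bandwidth_pct(0.5, 1.0))     # -50.0 (50 % below target)
```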


[Figure omitted: for each scheme, the plot shows the average excess bandwidth (%) of TCP and UDP flows as the link utilization grows from 30 % to 190 %. Panels: a) Time Sliding Window Two Color Marker; b) Two Markers System.]

Fig 19 Sharing excess bandwidth according to the link utilization: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis the link utilisation in percent.

However, since UDP sources inject traffic into the network without any penalty, they achieve their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation UDP flows experience is a diminished share of the excess bandwidth. In the Two Markers System we assume the following: (i) UDP flows in TMS are only dealt with by OUT marking (DP2-red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and achieves a higher rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contracts, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the average throughputs of all individual flows of the aggregated traffic relative to their target rates in the over-provisioned and under-provisioned networks, with bottleneck-link capacities of 8Mbps and 3Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. The average throughput of all the flows in both schemes is again similar. First we consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows merely accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, thanks to the different marking strategy applied to UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to the probability of their marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly, regardless of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that in the under-provisioned network the bandwidth is distributed fairly (about 0.5Mbps per flow), unlike in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure omitted: for each scheme, the plot shows the achieved bandwidth (Mbps) of TCP1-TCP4 and UDP1-UDP2 in both the over-provisioned and the under-provisioned case. Panels: a) Time Sliding Window Two Color Marker; b) Two Markers System.]

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved throughput against the target rate, and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this context, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the overeating of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow with respect to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to changes in the network, so it marks packets in proportion to the reservation rates the flows desire to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy, when resources in the diffserv network are over-provisioned. Consequently it mitigates both the unfairness of per-aggregation marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, the TCP flows could fairly share the excess bandwidth that UDP had almost dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidth.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.
[2] W. Feng, D. Kandlur, S. Saha, and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.
[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.
[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.
[5] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.
[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.
[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.
[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.
[9] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.
[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, May 1998.
[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.
[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.
[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.
[14] N. Seddigh, B. Nandy, and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.
[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.
[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.
[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.
[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.
[19] M. Vanhala, "Differentiated Services-architecture," S-38.128 Teletekniikan erikoistyö, 1999.
[20] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.
[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list, April 1990.
[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. 1997.
[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.
[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.
[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proceedings of ACM SIGCOMM, Oct. 1994, pp. 24-35.
[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.
[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," ACM/IEEE Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.
[28] A. Feroz, A. Rao, and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.



users keeps growing. Diffserv is a suitable solution for Quality of Service provisioning because it scales well with an increasing number of network users and does not alter the current Internet paradigm much [11]. In contrast to the per-session reservation approach, the differentiated services scheme allows service providers to offer different levels of service to a few classes of aggregated traffic flows. An ISP, for example, may offer two levels of service: a premium service for some customers who are willing to pay more, and a best-effort service at a lower price. The differentiated services approach attempts to allocate bandwidth to aggregated flows without per-session signaling or per-session state in the network core, and to offer better service to users willing to pay more under long-term service contracts.

The diffserv approach is based on a set of simple mechanisms that treat packets differently according to the marking of the DS field in the IP header. Before entering a DS domain, the field is marked with a certain value (or codepoint) that determines the treatment that should be applied to the packet inside the domain. However, because of the limited number of bits available for use in the DS field, the IETF's diffserv Working Group has defined a small set of building blocks, called per-hop behaviors (PHBs), that are used by routers to deliver a number of services. Among the initial PHBs being standardized are the Expedited Forwarding (EF) [20] and the Assured Forwarding (AF) PHBs [4]. The EF PHB specifies a forwarding behavior in which packets see a very small amount of loss and a very low queuing delay. In order to ensure that every packet marked with EF receives this service, EF requires every router to allocate enough forwarding resources so that the rate of incoming EF packets is always less than or equal to the rate at which the router can forward them. The AF PHB group, on the other hand, specifies a forwarding behavior in which packets see a small amount of loss, and consists of four independently forwarded classes that have two or three drop-preference levels. The idea behind AF is to preferentially drop best-effort packets and packets that are outside of their contracts when congestion occurs.

In this thesis we consider a form of better-than-best-effort service called the "Assured Service". The Assured Service follows expected capacity profiles that are statistically provisioned. Packets are treated preferentially according to the dropping probability applied to the best-effort queue. The assurance of service comes from the expectation that the traffic is unlikely to be dropped as long as it stays within the


negotiated capacity profile. The building blocks of this service include a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in-profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise the packet is marked OUT (out-of-profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or token bucket control module. A differentiated dropping algorithm, such as RIO (Random Early Detection with IN/OUT), is provided in the core routers of the network. In particular, the OUT packets are preferentially dropped, upon evidence of congestion at the bottleneck, before the IN packets; IN packets are discarded only after all incoming OUT packets have been dropped. With this dropping policy the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.
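As a rough sketch of the edge marker just described, the following implements a Time Sliding Window rate estimator with a deterministic IN/OUT decision. The real TSW2CM marker demotes packets probabilistically once the measured rate exceeds the contract, so this simplified version only illustrates the mechanism; the window length and rates are arbitrary example values:

```python
# Sketch of a Time Sliding Window (TSW) rate estimator with a simple
# deterministic IN/OUT marker. TSW2CM marks OUT probabilistically when
# the measured rate exceeds the contract; this simplified version marks
# deterministically to show the mechanism.

class TSWMarker:
    def __init__(self, contract_bps, win_len_s=1.0):
        self.contract = contract_bps   # contracted rate in bits/s
        self.win = win_len_s           # sliding-window length in seconds
        self.avg_rate = 0.0            # estimated rate, bytes/s
        self.t_front = 0.0             # arrival time of the last packet

    def mark(self, now_s, pkt_bytes):
        # Decay the window contents over the elapsed time, then fold in
        # the new packet (the classic TSW update rule).
        bytes_in_win = self.avg_rate * self.win + pkt_bytes
        self.avg_rate = bytes_in_win / (now_s - self.t_front + self.win)
        self.t_front = now_s
        return "IN" if self.avg_rate * 8 <= self.contract else "OUT"

m = TSWMarker(contract_bps=8000)        # 1000 bytes/s contract
print(m.mark(0.1, 100))                 # low measured rate, prints IN
for t in range(1, 50):                  # sustained 100 B/ms >> contract
    tag = m.mark(0.1 + t * 0.001, 100)
print(tag)                              # prints OUT
```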

In [11] the authors showed that the use of a simple token bucket marker for the above assured service results in TCP failing to realise the minimum assured rate. They attributed the cause of such behaviour to TCP's complex response, primarily to packet losses. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. The exponential decrease (halving the congestion window) is required to avoid congestion collapse, and TCP treats a packet drop as an indication of congestion [7]. However, in the diffserv network this additive-increase and multiplicative-decrease behaviour makes it hard to protect the reservation rate: when TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may in the end never reach its reservation rate. The overall performance was therefore affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications and that an Assured Service could not provide consistent rate guarantees. This is called TCP dynamics over the Assured Service.

In this thesis we take into consideration the problem of TCP dynamics and the differentiated drop policies of the network in realizing the reserved throughputs, and propose a newly modified scheme, called the Two Markers System, for improving the realisation of target rates in a diffserv network. In addition, the issue of fairness among users is important in the diffserv context: since we expect that the differentiation in service will be based on some kind of pricing mechanism, it is necessary that the resource allocation be commensurate with how the service is priced. In this thesis we therefore also examine how fairly the proposed scheme allocates bandwidth to flows.

One of the cruxes identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows [9, 11, 14]. The interaction of TCP and UDP under the AF PHB has become an important issue in the fairness of the diffserv context because TCP flows are penalised more than UDP flows with respect to service degradation and fail to accomplish their target rates. We therefore deliberate on the issue of the interaction between TCP and UDP flows by regulating their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP influences over Assured Services, considers the issue of fairness between TCP and UDP at the bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents the results of the above algorithms in simulated environments where these traffic types coexist and analyses the simulated results. Chapter 5 concludes our work.


CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point in time, the rate at which TCP sends its packets eventually exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow-start and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants of TCP. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion avoidance phase, in which the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above network capacity, packets are eventually


dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred, immediately cuts its sending rate in half by halving its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.

When congestion is severe enough that packet loss cannot be inferred in such a manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm: if the sender continually sends the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.
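The slow-start, congestion-avoidance, and fast-recovery behaviour described above can be summarised in a textbook-style sketch of the congestion window per RTT. This is a deliberate simplification in whole segments, not a faithful TCP implementation:

```python
# Sketch of TCP cwnd evolution (in segments): exponential slow start up
# to ssthresh, then linear congestion avoidance; on a triple-duplicate-
# ACK loss event, ssthresh and cwnd are halved (fast recovery).
# Units are whole segments per RTT, a common textbook simplification.

def tcp_cwnd_trace(rtts, ssthresh=16, loss_at=()):
    cwnd, trace = 1, []
    for rtt in range(rtts):
        trace.append(cwnd)
        if rtt in loss_at:                 # triple-dup-ACK loss event
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh                # fast recovery: halve window
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow start: double per RTT
        else:
            cwnd += 1                      # congestion avoidance: +1/RTT
    return trace

print(tcp_cwnd_trace(8, ssthresh=8, loss_at={5}))
# [1, 2, 4, 8, 9, 10, 5, 6] -- the sawtooth between w and w/2
```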

Fig 1 Throughput of Single TCP Transfer

Figure 1 gives a graphical picture of how TCP slow-start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by


TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion the TCP sender enters the congestion avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP then oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One modification that has been proposed is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of which segments it is missing. With this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger TCP sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and the sender should thus be allowed to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The New Reno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern, where the congestion window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion avoidance phase so that it only performs its linear increase when the network is not congested. In both algorithms, if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential to improve loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections that are multiplexed across the link.

Fig 2 TCP/IP Headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem, since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is


implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the congestion experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the returning acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes its congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit in its next packet to the receiver, in order to signal the receiver that it has in fact reduced its sending rate.

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, in order to allow them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses them to determine whether or not to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and it linearly

10

increases to maxp when the average queue length approaches a maximum threshold (maxth) When the average queue length exceeds maxth all packets are dropped or marked
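The piecewise-linear probability curve of Figure 3 and the EWMA update can be written out directly; this is a sketch of the standard RED calculation, with the weight value chosen only as an example.

```python
def red_probability(avg_q, min_th, max_th, max_p):
    """Marking/dropping probability as plotted in Figure 3:
    0 below min_th, rising linearly to max_p at max_th, 1 above max_th."""
    if avg_q <= min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def update_avg(avg_q, q, w=0.002):
    """EWMA of the instantaneous queue length q with weight w."""
    return (1.0 - w) * avg_q + w * q
```

For example, with minth = 10, maxth = 30 and maxp = 0.1, an average queue length of 20 (halfway between the thresholds) yields a probability of 0.05.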

Fig 3 The marking/dropping behavior of RED

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and in RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour wished for. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs), and on a per-packet basis to aggregates of traffic streams in a DS-capable network's nodes. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies the service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. The different types of service will not be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig 4 Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The formed Service Level Agreements (SLAs) specify the forwarding service a customer's traffic should receive. A customer in this context refers to e.g. an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules, which are applied to the traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. A traffic conditioner is an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs don't solely define services, but they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain, the latter handle traffic leaving the domain. In practice the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig 5 shows three domains which are interconnected with a backbone network. Two of the domains are DS-capable, i.e. the interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service provisioning approaches.

Fig 5 A non-DS-capable and two DS-capable domains connected to backbone network

Nodes in a DS domain, e.g. domain 1 in Fig 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the non-DS-capable case.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The presented backbone DS router always polices incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported in the downstream domain.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed between inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view; and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that the deployment of diffserv doesn't cause serious impacts on existing applications.

Fig 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling considering Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but it is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e. service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
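The difference between the two classifier types can be sketched as follows; the dictionary-based packet representation and PHB names are illustrative assumptions, not part of any standard API.

```python
def ba_classify(packet, dscp_to_phb):
    """Behaviour Aggregate classifier: looks only at the DS codepoint."""
    return dscp_to_phb.get(packet['dscp'], 'default')

def mf_classify(packet, rules):
    """Multi-Field classifier: the first rule whose fields all match wins.
    Each rule is a (field->value dict, PHB name) pair."""
    for fields, phb in rules:
        if all(packet.get(k) == v for k, v in fields.items()):
            return phb
    return 'default'
```

A BA classifier thus needs only one table lookup per packet, which is why interior nodes typically use it, while MF classification is confined to boundary nodes where per-customer rules live.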

Fig 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of a meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
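A common way to implement the meter/dropper pair is a token bucket; the following is a minimal sketch under that assumption (the class name and parameters are illustrative), where an out-of-profile result would trigger a drop, re-mark or shaping delay.

```python
class TokenBucketPolicer:
    """Token-bucket policer sketch: traffic within rate*time + burst is in
    profile; the remainder is policed (dropped, re-marked or delayed)."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = float(burst_bytes)  # bucket starts full
        self.last = 0.0

    def in_profile(self, size, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True   # conforms to the profile: forward unchanged
        return False      # out of profile: drop / re-mark / delay
```

For example, a 1000 B/s policer with a 500 B bucket admits a 400 B packet immediately, rejects a second one sent at the same instant, and admits it again once a second's worth of tokens has accumulated.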

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic to conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic can't be assumed to conform to the TCA; policing must be prepared to enforce the TCA.

In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore, a limited number of conditioning rules (i.e. some, but not all, of the TC components in Fig 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers receive the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioral characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized PHBs, there may exist locally defined PHBs in a network. These PHBs may be e.g. experimental, or they may serve a local purpose, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig 6) is limited and room is left for future usage, the codepoint mappings can be made freely by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on e.g. prioritizing one over another. PHB specifications themselves give behavioral characteristics instead of implementation guidelines. This leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X of a link to a behaviour aggregate over some reasonable time interval. Although realizable, such a PHB ought to provide wider ground for services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented later in this chapter. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility; the Default PHB exists for that purpose. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that doesn't conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or locally used PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig 8 Usage of DS-field in Class Selector PHB
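The eight Class Selector codepoints of Fig 8 can be enumerated directly from the xxx000 pattern: the three precedence bits followed by three zero bits.

```python
# The eight Class Selector codepoints (pattern xxx000): three precedence
# bits, then three zero bits, as shown in Fig 8.
class_selector_codepoints = [f"{prec:03b}000" for prec in range(8)]
```

This reproduces the set from 000000 (the Default PHB codepoint) up to 111000, the highest-priority class.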

The motivation behind the AF PHB is the need for fixed-bandwidth lines, which companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

a) how much forwarding resources have been allocated to the AF class that the packet belongs to (packets in one class are forwarded independently from the others and within the service rate, i.e. bandwidth, that has been configured for the class);

b) what the current load of the AF class is;

c) in case of congestion within the class, what the drop precedence of the packet is (packets with a higher drop precedence value are discarded preferentially).

Table 1 The four AF classes and the related three drop precedence values

Class 1 Class 2 Class 3 Class 4

Low Drop Precedence (1) 001010 010010 011010 100010

Medium Drop Precedence (2) 001100 010100 011100 100100

High Drop Precedence (3) 001110 010110 011110 100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
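The codepoints in Table 1 follow a regular pattern that can be generated from the AFnm name: three bits for the class, two for the drop precedence, and a trailing zero.

```python
def af_codepoint(af_class, drop_prec):
    """DSCP bit pattern for AFnm, reproducing Table 1: three class bits,
    two drop-precedence bits, and a trailing zero bit."""
    if not (1 <= af_class <= 4 and 1 <= drop_prec <= 3):
        raise ValueError("AF classes are 1-4, drop precedences 1-3")
    return f"{af_class:03b}{drop_prec:02b}0"
```

For instance, AF11 yields 001010 and AF43 yields 100110, matching the corner entries of the table.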

One major group of services is that requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are e.g. a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate. In other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed a rate that has been specified for the service the packets belong to. It is also required in the EF PHB specification that a network administrator must be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are all for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

There are some things common to both examples. First of all, policers need to be configured at the traffic ingress points. Secondly, the used PHBs need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than competing best-effort traffic and thus provides reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.

Table 2 TCA specification for BBE service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark   1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritizing of AF13 over AF11 is done, is up to the provider.
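The marking decision of Table 2 reduces to a simple rate comparison; the sketch below is an assumption-laden simplification (a real conditioner meters per packet, e.g. with a token bucket, rather than per measured rate).

```python
def bbe_mark(measured_rate_bps, committed_rate_bps=1_000_000):
    """Marking for the BBE service of Table 2: traffic within the 1 Mbps
    aggregate rate gets the AF13 mark; excess is re-marked to AF11."""
    return 'AF13' if measured_rate_bps <= committed_rate_bps else 'AF11'
```

Note that, as the text says, both marks must feed the same queue so that packet order within a microflow is preserved.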

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate. Above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3 TCA specifications for Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C, respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on both the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.

2.3 Related Work

Recently, there have been a number of simulation studies on marking and dropping policies focused on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes that focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed scheme decreased TCP impacts on the Assured Service. However, they didn't consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP didn't realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. The authors, via simulation work, presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.

CHAPTER 3 TWO MARKERS SYSTEM

In this chapter, we focus on several strategies used to mark packets in order to take TCP dynamics into account and to adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on the aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth. In this case, "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? The system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays different roles in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9 The "Two Markers System" framework

First, the TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, the TMS_I determines the window (iwnd or ownd) and reduces its size according to whether the dropped packets were IN-marked or OUT-marked, and so the TMS_I decreases TCP impacts on the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) maintains the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. The TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I can be not only a marker used to mark packets in order to


classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case, we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key point of the fairness strategy is the marking rate (i.e., the information about the demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows toward the total target rate of all the flows based on the marking probabilities from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig. 10 The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and Cj represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is a


reservation rate (Ri) of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point;
(ii) all traffic from ingress point S to a set of egress points;
(iii) all traffic between ingress point S and egress point D;
(iv) all traffic between any ingress point and egress point D.

The capacity of a network in the first three cases of the service scope above is C = Σ_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. Therefore, the excess bandwidth at this bottleneck link is given by

E = C - R = Σ_{j=1}^{m} C_j - Σ_{i=1}^{n} R_i        (1)

In the edge-to-edge SLA, namely the third case of the service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C - R = C - Σ_{i=1}^{n} R_i        (2)

The last case of the service scope, tied to the receiver, however, requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates in order to determine the excess bandwidth of the network. Thus, the reservation-rate information received at an ingress point becomes R = Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik, where k, k = 1, 2, ..., l, denotes the kth ingress point. The excess bandwidth is then given by

E = C - R = C - Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik        (3)
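As a quick sanity check, the excess-bandwidth expressions (1)-(3) can be evaluated directly; the sketch below is illustrative Python with names of our own choosing, not code from the thesis:

```python
def excess_bandwidth(capacities, reservations):
    # E = C - R, equation (1); with a single bottleneck (m = 1)
    # this reduces to equation (2).
    return sum(capacities) - sum(reservations)

def excess_bandwidth_multi_ingress(capacity, reservations_per_ingress):
    # Equation (3): the reservation term sums over all l ingress
    # points, each contributing its own flows' reservations.
    total_r = sum(sum(rs) for rs in reservations_per_ingress)
    return capacity - total_r

# Single 20 Mbps bottleneck, five flows reserving 3 Mbps each:
E = excess_bandwidth([20.0], [3.0] * 5)  # 5.0 Mbps of excess
```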


where the lth ingress point has to be aware of the reservation rates of as many as l - 1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore, the Two Markers System can be implemented under various SLA scopes, but for the reason above we assume that the scope of a service in this paper includes the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

where Xmi is proportional to the probability of marking at a source in equation (4); that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of Xmi is as follows. First, the probability of marking for each flow i is the number of marked packets (Nmi) divided by the throughput (Thri), or the In-profile window divided by the congestion window, as in equation (5), where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[mi] of all the flows at the edge of a network is given by equation (6), where Markp also represents the probability of marking in the TMS_I algorithm of Section 3.3. Finally, Xmi is developed using the ratio between (5) and (6); that is, Xmi

T_i = R_i + X_mi × E = R_i + X_mi (C - Σ_{i=1}^{n} R_i)        (4)

m_i = N_mi / Thr_i = iwnd / cwnd        (5)

E[m_i] = (1/n) Σ_{i=1}^{n} m_i = (1/n) Σ_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1}^{n} Markp_i        (6)


represents a relative value against the datum point E[mi]. The fraction Xmi varies from 0 to 1, depending upon mi and E[mi].

Consequently, this value stands for the share of the excess bandwidth that each flow requires to achieve its target rate.
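Expressions (4)-(7) combine into a short computation; the following is an illustrative Python sketch (helper names are ours, not from the thesis) of how the fairness component Xmi splits the excess bandwidth into edge target rates:

```python
def fairness_components(marking_probs):
    # Equation (7): X_mi = m_i / sum(m_i); the components sum to 1,
    # so they split the excess bandwidth E completely.
    total = sum(marking_probs)
    return [m / total for m in marking_probs]

def edge_target_rates(reservations, marking_probs, capacity):
    # Equation (4): T_i = R_i + X_mi * (C - sum(R_i)).
    excess = capacity - sum(reservations)
    xs = fairness_components(marking_probs)
    return [r + x * excess for r, x in zip(reservations, xs)]

# Two flows reserving 3 Mbps each on a 10 Mbps bottleneck; the first
# marks twice as aggressively (m_i = iwnd/cwnd), so it receives
# two thirds of the 4 Mbps excess:
ts = edge_target_rates([3.0, 3.0], [0.6, 0.3], 10.0)
```

Because the Xmi sum to 1, the edge target rates always sum to the capacity C, which is the sense in which the excess is shared completely.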

3.3 Adequate-Marking strategy: the Two Markers System-I algorithm

In this section, we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We improve the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate up to the bandwidth the source wants to have guaranteed.

X_mi = m_i / (n E[m_i]) = m_i / Σ_{i=1}^{n} m_i        (7)


Fig. 11 Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking (Markp(init))? It is generally determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as follows:

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= 1 - Markp
            else
                iwnd -= Markp
        } else {
            ownd = cwnd
        }
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd


where it represents the initial marking rate that the user wants in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I marks packets proportionally, according to whether the throughput (Thr) measured at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by the marking rate needed; otherwise, it increases the ownd size so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e., iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore, an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.
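The update rules of program code 1, together with the initial marking probability of equation (8), can be modelled as plain arithmetic on the two windows; the Python below is a simplified sketch of the pseudocode, not the actual implementation:

```python
def init_mark_p(thr, r_i):
    # Equation (8): Mark_p(init) = 1 - Thr/R_i, clamped to [0, 1].
    return max(0.0, min(1.0, 1.0 - thr / r_i))

def on_ack(cwnd, mark_p, thr, r_i, issthresh, ossthresh):
    # One ACK-time step of program code 1 (cwnd opening).
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if thr < r_i:                        # below reservation: grow iwnd
        iwnd += mark_p if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - mark_p) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                                # reservation met: release iwnd
        if iwnd > 0:
            iwnd -= (1 - mark_p) if iwnd < issthresh else mark_p
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd             # new cwnd and new Mark_p
```

Note how, below both thresholds, an ACK adds Markp to iwnd and (1 - Markp) to ownd, so the aggregate window grows by exactly one segment per ACK, as in conventional slow start.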

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers the second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network connection is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd exists, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate better than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective at maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion

Markp(init) = 1 - Thr/Ri        (8)


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because the TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus, the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.

Fig. 12 Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd/2
        ownd = ossthresh
    } else {
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

    iwnd: In-profile window
    ownd: Out-of-profile window
    cwnd: congestion window
    Markp: the probability of marking
    Thr: throughput of a flow
    Ri: reservation rate of the ith flow
    Ti: target rate of the ith flow
    issthresh: threshold of iwnd
    ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of IN-marked packets indicates acute congestion in the network, and then the whole congestion window is halved. Therefore, the modified TCP congestion algorithm compensates for the loss rates that are caused by the conventional congestion algorithm, which has no such distinct mechanism for reducing the congestion window.
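Program code 2 likewise reduces to a few lines; again an illustrative Python sketch of the pseudocode above (the out_loss flag abstracts the check on whether the lost packet was OUT-marked):

```python
def on_triple_dupack(cwnd, mark_p, out_loss):
    # Program code 2 (cwnd closing): an OUT loss halves only ownd
    # (temporary congestion); an IN loss halves the whole cwnd (acute).
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if out_loss:
        ossthresh = ownd / 2   # temporary congestion
        ownd = ossthresh
    else:
        cwnd = cwnd / 2        # acute congestion
        iwnd = mark_p * cwnd
        ownd = cwnd - iwnd
        issthresh = iwnd       # thresholds follow the halved window
        ossthresh = ownd
    return iwnd + ownd         # the new congestion window
```

The asymmetry is the point: only an IN-marked loss triggers the full multiplicative decrease of standard TCP, so in-profile traffic is shielded from transient congestion.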

The goals of the TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform the TMS_II of the probability of marking (Markp) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point in the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that includes the user's information is able to alleviate the problem of unfairness introduced within aggregated flows under per-aggregation marking.

3.4 Marking rate-based fairness strategy: the Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to fairly share the bottleneck bandwidth. In this case, it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II should measure the probability of marking and compute the expected value of the marking rates that are used in the fairness component, as stated in Section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed, from the sources.

We assume that an edge router receives information about the reservation rate of which each user wants to be assured, and contracts with the service provider (or bandwidth


broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism, the edge router sets a target rate different from the source's; recall expression (4). As stated in Section 3.2, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking from the sources if the condition of the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents the increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy that is elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, namely the average marking rate E[mi], this means that the flow wants more bandwidth than the others in order to achieve its reservation rate; therefore, it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, and the other is to ensure the coherence of the Adequate-marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, this represents that congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case, and it shows the mechanism by which the TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After the decision about congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called the fairness factor. In this case, the fairness factor α is computed as

α = 1 - Thr_i/T_i. Otherwise, it decreases the marking rate by α to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13 Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2) {
    for (i = 1 to n) {
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
    }
} else if (iwnd(N) != iwnd(N-1)/2) {
    exit
} else {    /* iwnd(N) == iwnd(N-1)/2 */
    for (i = 1 to n) {
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit
    }
}

Mi: marking rate of the ith flow
Ti: target rate of the ith flow
Ri: reservation rate of the ith flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion in the network happens but is not serious. The else if clause in the eighth and ninth lines represents the instance in which the edge's marking rate is equal to the marking probability of the sources (or marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, this means that IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to achieve its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore, TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput does not realise the user's reservation rate. In such an instance, the complement factor β is

given by β = 1 - Thr_i/R_i. Otherwise, the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to the flows based on the marking rate of each source. Therefore, we can say that the marking rate-based fairness strategy alleviates both the problem of inefficient utilization of the reserved bandwidth under per-flow marking and that of unfairness within aggregated flows under per-aggregation marking.
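Program code 3, with the fairness factor α = 1 - Thr_i/T_i and the complement factor β = 1 - Thr_i/R_i, can be sketched as follows; this is illustrative Python with the window comparison abstracted into a state flag, not code from the thesis:

```python
def tms2_remark(state, m, thr, r, x=0.0, excess=0.0):
    # One TMS_II decision for flow i. `state` stands in for the
    # window comparison: "no_congestion", "soft", or "acute".
    if state == "no_congestion":
        t = r + x * excess          # T_i = R_i + X_mi * E
        alpha = abs(1 - thr / t)    # fairness factor
        return m + alpha * m if thr < t else m - alpha * m
    if state == "soft":
        return m                    # marking already consistent; exit
    if thr < r:                     # acute congestion: T_i = R_i
        beta = 1 - thr / r          # complement factor
        return m + beta * m
    return m
```

Both factors are relative shortfalls, so a flow far below its target gets a proportionally larger boost in marking rate than one that has nearly reached it.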

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-marked packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, the re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core at a time of temporary congestion, the TCP sources, mistaking this for acute congestion, will halve the


iwnd size. Therefore, TMS_I does not alleviate the TCP dynamics over the assured service well. On the other hand, if the re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve the ownd; this retards the removal of the congestion.

In this case, TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked OUT at the edge of the network but IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides must not change, and that the strategy TMS_II elaborates has to be deliberate. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors, these correspond to red, yellow, and green, respectively. The occasion of re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources as IN, so as to secure the remaining resources. In this instance, the three drop precedences IN, OUT-IN, and OUT flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves the ownd size. Next, TMS_II turns packets IN-marked at the sources into OUT if the congestion window has not been halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by one step, and so allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window is equal to half of its previous size and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets as IN, and RIO deals with the three drop precedences IN, OUT-IN, and OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. This tries to compensate for the loss rate occurring at the time of congestion by lowering the drop precedence of OUT by one step.
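The resulting three-level drop order can be made explicit with a toy model; note that RIO's actual behaviour is probabilistic, so the strict sort below is only an approximation of the dropping order, not code from the thesis:

```python
# Drop precedence after re-marking: DP0 (IN, green) is safest,
# DP2 (OUT, red) is dropped first; re-marked packets sit at DP1.
DROP_PRECEDENCE = {"IN": 0, "IN-OUT": 1, "OUT-IN": 1, "OUT": 2}

def drop_order(packets):
    # Sort so the most droppable (highest precedence) packets come
    # first, approximating the order RIO discards them under pressure.
    return sorted(packets, key=lambda p: DROP_PRECEDENCE[p], reverse=True)

# A congested queue holding all three colours:
order = drop_order(["IN", "OUT", "OUT-IN", "IN"])
# -> ["OUT", "OUT-IN", "IN", "IN"]
```

Keeping the source-decided colour visible as DP1 is what lets TMS_I distinguish a lost re-marked packet from a lost pure IN or OUT packet and react with the correct window.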

3.6 The issue of interaction between TCP and UDP flows

In this section, we take into consideration the interaction between TCP and UDP flows over the


same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP flows are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not change theirs and will take the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. The TCP flows will also be penalised more than the non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be achieved.

UDP flows in the TMS are dealt with as out-of-profile, to be placed at a disadvantage, while TCP flows adapt to network dynamics through marking rates adjusted by the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always drops UDP flows marked as OUT (red, DP2) preferentially, but lowers the drop precedence of TCP packets by the scaling factor relative to them. Therefore, it tries to guarantee the target rates of the TCP flows by allocating to them the excess bandwidth that the UDP flows would otherwise almost entirely dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section, we present the simulation results for the two marking algorithms we have described in the previous chapter. The simulation was done with the network simulator 2 (ns-2, version 2.1b8a). For the sake of simulation, we used a network with the configuration shown in Figure 14.

Fig. 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario, the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or is fixed below the capacity, so that the bottleneck is not oversubscribed. In the second scenario, the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in,


and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
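For reference, the per-precedence drop probability implied by Table 4 follows the usual linear RED ramp between min_th and max_th; the sketch below is the standard RED formula (omitting the exponentially weighted queue averaging), not code from the thesis:

```python
RED_PARAMS = {  # (min_th, max_th, Pmax) per drop precedence, Table 4
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50, 400, 0.1),
}

def drop_prob(avg_q, dp):
    # Linear RED ramp: 0 below min_th, Pmax approached at max_th,
    # forced drop (1.0) above max_th.
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return p_max * (avg_q - min_th) / (max_th - min_th)

# At an average queue of 225 packets, OUT (DP2) packets face a much
# higher drop probability than IN (DP0) packets:
p_out = drop_prob(225, "DP2")   # 0.1 * 175/350 = 0.05
p_in = drop_prob(225, "DP0")    # 0.02 * 125/300, roughly 0.0083
```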

First of all, we investigate the simulation results for the Adequate-marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s, which stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, in which the five flows compete for the excess bandwidth of 5 Mbps; marking initially occurs, but only temporary marking for sharing the excess bandwidth. Second, during the run time from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps; marking in this interval is practically carried out so as to realise the reservation rates of the users. Next, grave congestion appears from t = 220 s to t = 350 s; all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but then marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which exhibits somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs, the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet of a flow in compliance with the changing condition of the network, those in Figure 15 (a) do not.

Fig. 15 Adequate-marking algorithm: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the marking rates and the horizontal axis the simulation time (s). The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic features through governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate behind the changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased traffic load. Therefore, whenever S2 turns on, S1 is deprived of its resources by the amount of the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, more packets are marked; however, packet marking is quickly turned off when TMS detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in Section 3.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, namely an under-subscribed network. The experiment was performed at run time from t = 0 s to 500 s. Figure 17 shows the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without adequate marking rates for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. However, in Figure 17 (b), each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking when the total throughput is less than the capacity of the network. As


stated before, since TMS regulates the drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

Fig. 16 Marking packets through the scaling factor: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time (s).


Fig. 17 Throughput for the marking rate-based fairness strategy: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the achieved throughput, and the horizontal axis the aggregate flow.

The Two Markers System showed improved performance over the several parameters we used in the simulations. However, an implementation of the TCP-source marking module at the source raises the issue of modifying TCP sources in a differentiated services network.

Table 5 The average throughput of aggregate flows as RTT increases

Aggregate flow (No)      |    1   |    2   |    3   |    4   |    5   |    6
RTT (ms)                 |   80   |  100   |  130   |  170   |  220   |  280
E[Thr] of TSW2CM (Mbps)  | 4.8989 | 4.9703 | 4.9183 | 4.9391 | 4.9247 | 4.9098
E[Thr] of TMS (Mbps)     | 5.2908 | 5.2207 | 5.2373 | 5.1944 | 5.2763 | 5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have six sources (1 through 6, counting downwards), each of which communicates with one of six different destinations. Long-lived packet streams, generated by infinite file transfers, originate at sources 1 through 4 and are destined for nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined for nodes 11-12. We carried out two scenarios, an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rates of the sources remain fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.
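The two scenarios reduce to a simple comparison of aggregate reservation against bottleneck capacity. The function and its labels below are illustrative names for this sketch, not part of the simulation scripts:

```python
def provisioning(aggregate_reservation_mbps, capacity_mbps):
    # the bottleneck is over-provisioned when the reserved rate fits
    # under the link capacity, under-provisioned otherwise
    if aggregate_reservation_mbps < capacity_mbps:
        return "over-provisioned"
    return "under-provisioned"

scenario1 = provisioning(6.0, 8.0)  # 6 Mbps reserved on an 8 Mbps link
scenario2 = provisioning(6.0, 3.0)  # 6 Mbps reserved on a 3 Mbps link
```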

47

Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increase the number of TCP flows with a target rate of 1Mbps from sources 1 to 4, while each UDP flow is constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take network dynamics into consideration. We define the link utilisation (ρ) as follows [16]:

ρ = (Σi Ti / C) × 100 (%)    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state; in the figure, the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall below their target rates at the saturation point (80 %) of network load, as shown in the figure.
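Equations (9) and (10) translate directly into code. A small sketch with illustrative function names (the numeric example assumes six 1Mbps target rates on an 8Mbps link):

```python
def link_utilisation(target_rates, capacity):
    # Eq. (9): rho (%) = (sum_i T_i / C) * 100
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(throughput, target):
    # Eq. (10): Excess BW (%) = (ExBW / T) * 100,
    # where ExBW = achieved throughput - target rate;
    # 0 % means the flow exactly meets its target rate
    return 100.0 * (throughput - target) / target

rho = link_utilisation([1.0] * 6, 8.0)  # six 1 Mbps targets, 8 Mbps link
```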


[Figure 19: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, plotting the average excess bandwidth (%) of TCP and UDP flows against link utilisation (%) from 30 to 190.]

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis signifies the link utilisation as a percentage

However, since UDP sources pour traffic into the network without any penalty, they achieve their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only dealt with OUT marking (DP2-red); (ii) the magnitude of the marking rate represents the increase or decrease in demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; therefore the flow is marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
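The marking rule assumed in (i) and (ii) can be sketched as follows. `edge_mark` and the sample marking rates are hypothetical stand-ins for the edge-embedded marking module, not the thesis's implementation:

```python
def edge_mark(is_udp, marking_rate, avg_marking_rate):
    if is_udp:
        return "OUT"  # assumption (i): UDP is always out-of-profile (DP2, red)
    # assumption (ii): a marking rate above the average E[m_i] signals that
    # the flow demands more bandwidth to reach its reservation rate
    return "IN" if marking_rate > avg_marking_rate else "OUT"

rates = [0.4, 0.2, 0.3, 0.1]        # illustrative per-flow marking rates
avg = sum(rates) / len(rates)       # E[m_i] = 0.25
marks = [edge_mark(False, m, avg) for m in rates]
```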

Next we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the results, in which the average throughputs of all individual flows of the aggregated traffic realise their target rates in over-provisioned and under-provisioned networks with bottleneck-link capacities of 8Mbps and 3Mbps, respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. Also, the average throughput of all the flows in both schemes is similar to each other. First we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, since TMS applies a different marking strategy to the UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to the probability of their marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that bandwidth in the under-provisioned network is distributed fairly (about 0.5Mbps per flow) compared with Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, plotting the bandwidth (Mbps) of TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases.]

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved throughput and the horizontal axis signifies the aggregate flows


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under variable SLA scopes. Two important issues have arisen in this area, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise a user's target rate, owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the overeating of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the probability of marking according to the changes of the network, so that it marked packets in proportion to the reservation rates the sources desire to guarantee. It also notified TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and allocated the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy, when resources in the diffserv network are over-provisioned. Consequently, it mitigated the problem of unfairness of per-aggregation marking and of inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and have simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP had almost dominated and achieve its target rates. Next, in an under-provisioned network, although all the flows couldn't achieve their target rates, they fairly experienced degradation in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W Feng, D Kandlur, D Saha and K Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc ICNP'98, Austin, TX, Oct (1998) 108-117

[2] W Feng, D Kandlur, S Saha and K Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc NOSSDAV'97, St Louis, MO, May (1997) 279-290

[3] S Floyd and V Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol 3, no 4, Aug (1995) 365-386

[4] J Heinanen, F Baker, W Weiss and J Wroclawski, Assured Forwarding PHB Group, RFC 2597, June (1999)

[5] K Nichols, S Blake, F Baker and D Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec (1998)

[6] S Blake, D Black, M Carlson, E Davies, Z Wang and W Weiss, An Architecture for Differentiated Services, RFC 2475, Dec (1998)

[7] V Jacobson and M Karels, Congestion Avoidance and Control, in Proc SIGCOMM'88, Stanford, CA, Aug (1988) 314-329

[8] I Yeom and A L N Reddy, Realizing Throughput Guarantees in a Differentiated Services Network, in Proc ICMCS'99, Florence, Italy, June (1999) 372-376

[9] N Seddigh, B Nandy and P Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc GLOBECOM'99, Rio De Janeiro, Dec (1999)

[10] C Gbaguidi, H J Einsiedler, P Hurley, W Almesberger and J Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech Report SSC/1998/020, http://sscwww.epfl.ch, May (1998)

[11] J Ibanez and K Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug (1998)

[12] S H Lee, S J Seok, S J Lee, S K Youm and C H Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc QofIS'01, LNCS 2156, Coimbra, Sept (2001) 207-221

[13] I B H de A Alves, J F de Rezende and L F M de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc GLOBECOM'00, vol 1, San Francisco, Nov (2000) 445-449

[14] N Seddigh, B Nandy and P Pieda, Study of TCP and UDP Interaction for AF PHB, IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug (1999)

[15] S H Lee, S J Seok, S J Lee and C H Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc MMNS'01, LNCS 2216, Chicago, Oct (2001) 198-203

[16] A Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc HPSR'01, Dallas, May (2001) 190-196

[17] Y Bernet, J Binder, S Blake, M Carlson, B E Carpenter, S Keshav, E Davies, B Ohlman, Z Wang and W Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb (1999)

[18] S Blake, D Black, M Carlson, E Davies, Z Wang and W Weiss, An Architecture for Differentiated Services, RFC 2475, Dec (1998)

[19] M Vanhala, Differentiated Services Architecture, S-38.128 special assignment (Teletekniikan erikoistyö), (1999)

[20] V Jacobson, K Nichols and K Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov (1998)

[21] V Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list, ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt, April (1990)

[22] V Paxson, End-to-End Internet Packet Dynamics, in Proc ACM SIGCOMM, Sept (1997)

[23] M Mathis, J Mahdavi, S Floyd and A Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct (1996)

[24] S Floyd and T Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb (1999)

[25] L S Brakmo, S W O'Malley and L L Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proc ACM SIGCOMM, Oct (1994) 24-35

[26] S Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, vol 24, no 5, Oct (1994) 10-23

[27] S Floyd and V Jacobson, Random Early Detection Gateways for Congestion Avoidance, IEEE/ACM Transactions on Networking, vol 1, no 4, Aug (1993) 397-413

[28] A Feroz, A Rao and S Kalyanaraman, TCP-friendly Traffic Marker for IP Differentiated Services, in Proc IWQoS'2000, Pittsburgh, PA, June (2000) 35-48


negotiated capacity profile. The building blocks of this service include a traffic marker at the edge of the domain and a differentiated dropping algorithm in the core of the network. A packet of a flow is marked IN (in-profile) if the temporal sending rate at the arrival time of the packet is within the contract profile of the flow; otherwise the packet is marked OUT (out-of-profile). The temporal sending rate of a flow is measured using a TSW (Time Sliding Window) or token bucket control module. A differentiated dropping algorithm such as RIO (Random Early Detection with IN/OUT) is provided in the core routers of the network. In particular, the OUT packets are preferentially dropped upon evidence of congestion at the bottleneck, before the IN packets; only after dropping all incoming OUT packets are IN packets discarded. With this dropping policy, the RIO network gives preference to IN packets and provides different levels of service to users based on their service contracts.
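The RIO preference described above can be sketched as twin RED curves, one gentle for IN packets and one aggressive for OUT packets. The threshold and probability values below are illustrative, not the thesis's RIO parameters:

```python
import random

def red_prob(avg_q, min_th, max_th, max_p):
    # linear RED drop-probability ramp between min_th and max_th
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def rio_drop(avg_in_queue, avg_total_queue, is_in):
    # IN packets see only the IN-packet queue average with gentle
    # parameters; OUT packets see the total queue average with
    # aggressive parameters, so OUT traffic is discarded first
    if is_in:
        p = red_prob(avg_in_queue, 40, 70, 0.02)
    else:
        p = red_prob(avg_total_queue, 10, 30, 0.5)
    return random.random() < p
```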

In [11], the authors showed that the use of a simple token bucket marker for the above assured service does not guarantee that TCP realises the minimum assured rate. They attributed the cause of such behaviour to TCP's complex response, primarily to packet losses. TCP reacts to congestion by halving the congestion window (cwnd) and increases the window additively when packets are delivered successfully. Exponential decrease (halving the congestion window) is required to avoid congestion collapse, and TCP treats a packet drop as an indication of congestion [7]. However, in the diffserv network, this additive-increase and multiplicative-decrease behaviour makes it hard to protect the reservation rate. When TCP reacts to an OUT packet drop by halving its congestion window and then increases it additively, it may in the end not reach its reservation rate. Therefore the overall performance is affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications and that an Assured Service couldn't provide consistent rate guarantees. This is called TCP dynamics over Assured Service.

In this thesis we take into consideration the problem of TCP dynamics and the differentiated drop policies of the network in realising the reserved throughputs, and propose a newly modified scheme, called the Two Markers System, for improving the realisation of target rates in a diffserv network. In addition, the issue of fairness among users is important in this diffserv context: since we expect that the differentiation in service will be based on some kind of pricing mechanism, it is necessary that the resource allocation be commensurate with how the service is priced. In this thesis we also argue how fairly the proposed scheme allocates bandwidth to flows.

One of the cruxes identified in the diffserv network is the effect of congestion-insensitive flows such as UDP when they share the same AF class with TCP flows [9, 11, 14]. TCP and UDP interaction over the AF PHB has become an important issue for fairness in the diffserv context, because TCP flows are penalised more than UDP flows with respect to service degradation and fail to accomplish their target rates. We therefore deliberate on the issue of interaction between TCP and UDP flows by regulating their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP influences over Assured Services, considers the issue of fairness between TCP and UDP flows for a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents the results of the above algorithms in simulated environments in which these kinds of traffic coexist and analyses the simulation results. Chapter 5 concludes our work.


CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases, this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point in time, the rate at which TCP sends its packets eventually exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow-start and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants of TCP. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion avoidance phase. In this phase, the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above network capacity, packets are eventually dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred, immediately reduces its sending rate in half by halving its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.

When congestion is severe enough that packet loss cannot be inferred in such a manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm. In particular, if the sender continually sends the same segment but receives no acknowledgments for it, TCP doubles its re-transmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.

Fig 1 Throughput of Single TCP Transfer

Figure 1 shows a graphical picture of how TCP slow-start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion, the TCP sender enters the congestion avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP then oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.
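The sawtooth of Figure 1 can be reproduced with a toy round-by-round model. This is a simplified sketch (ACK clocking, timeouts, and fast-retransmit details are omitted, and the parameter values are illustrative):

```python
def tcp_cwnd_trace(ssthresh, loss_window, rounds):
    """Model cwnd per RTT: double below ssthresh (slow start), add one
    segment per RTT above it (congestion avoidance), and halve the
    window when cwnd reaches the loss point (fast recovery)."""
    cwnd, trace = 1.0, []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd >= loss_window:      # transmission rate overwhelms the link
            ssthresh = cwnd / 2
            cwnd = ssthresh          # halve the window (fast recovery)
        elif cwnd < ssthresh:
            cwnd *= 2                # slow start: double every RTT
        else:
            cwnd += 1                # congestion avoidance: +1 segment/RTT
    return trace

trace = tcp_cwnd_trace(ssthresh=8, loss_window=16, rounds=12)
```

With these illustrative parameters the trace grows 1, 2, 4, 8 in slow start, then linearly to 16, after which the window is halved and the steady-state oscillation between w/2 and w begins.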

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One modification that has been proposed is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of which segments it is missing. By specifying this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger TCP sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network and should thus allow the TCP sender to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The New Reno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern in which the congestion window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion avoidance phase so that it only performs its linear increase when the network is not congested. In these algorithms, if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential for improving loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections that are multiplexed across the link.

Fig 2 TCP/IP Headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem, since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-capable Transport bit (ECT) to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the congestion experienced bit (CE) is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver in order to signal the receiver that it has in fact reduced its sending rate.
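The router-side decision described above can be sketched as follows. The two one-bit constants follow the ECT/CE layout described in the text, while the packet dictionary is an illustrative stand-in for a real IP header:

```python
ECT = 0b10  # ECN-Capable Transport bit
CE  = 0b01  # Congestion Experienced bit

def on_congestion(packet):
    """Router behaviour on congestion: drop packets of non-ECN-capable
    flows; otherwise set CE as the congestion signal and forward."""
    if not packet["ecn"] & ECT:
        return None          # flow not ECN-capable: fall back to dropping
    packet["ecn"] |= CE      # mark instead of dropping
    return packet

marked = on_congestion({"ecn": ECT})   # ECN-capable: CE gets set
dropped = on_congestion({"ecn": 0})    # not ECN-capable: dropped
```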

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, in order to allow them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses it to determine whether or not to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and it linearly increases to maxp when the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.
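RED's two ingredients, the EWMA estimator and the piecewise-linear probability of Figure 3, can be sketched directly. The weight and the threshold values used below are illustrative parameter choices:

```python
def ewma(avg, sample, weight=0.002):
    # avg <- (1 - w) * avg + w * sample
    return (1.0 - weight) * avg + weight * sample

def mark_prob(avg_q, min_th, max_th, max_p):
    if avg_q <= min_th:
        return 0.0               # below min_th: never mark or drop
    if avg_q >= max_th:
        return 1.0               # above max_th: mark or drop everything
    # linear ramp from 0 toward max_p between the two thresholds
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

Each packet arrival updates the average via `ewma` and is then marked or dropped with probability `mark_prob(avg, minth, maxth, maxp)`.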

Fig 3 The marking/dropping behavior of RED

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and in RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour wished for. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs) and on a per-packet basis, to aggregates of traffic streams in DS-capable network nodes. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies the service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied: a leased-line emulation, for example. Different types of service will not be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig 4 Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to services for which he has made an agreement with a service provider. The formed Service Level Agreements (SLAs) specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how a customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules, any corresponding traffic profiles, and metering, marking, discarding and/or shaping rules, which are applied to the traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. A traffic conditioner is an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field is the TOS octet in the IPv4 header and the Traffic Class octet in the IPv6 header. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs don't solely define services, but they are the building blocks.
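Concretely, a node's codepoint-to-PHB mapping behaves like a lookup table with a default fallback. The sketch below is a minimal illustration; the mapping shown is a hypothetical configuration, not that of any actual node:

```python
# Hypothetical DSCP -> PHB table for one node; several codepoints may map
# to the same PHB, and unmapped codepoints fall back to the default PHB.
DSCP_TO_PHB = {
    0b000000: "default",   # best-effort
    0b101110: "EF",        # recommended Expedited Forwarding codepoint
    0b001010: "AF11",      # Assured Forwarding class 1, low drop precedence
    0b001100: "AF12",
    0b001110: "AF13",
}

def phb_for(dscp: int) -> str:
    # Packets whose codepoint has no locally specified policy get best-effort.
    return DSCP_TO_PHB.get(dscp, "default")

assert phb_for(0b101110) == "EF"
assert phb_for(0b111111) == "default"   # unmapped codepoint
```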

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain; the latter handle traffic leaving the domain. In practice the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig 5 shows three domains which are interconnected via a backbone network. Two of the domains are DS-capable, i.e. their interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service provisioning approaches.

Fig 5 A non-DS-capable and two DS-capable domains connected to backbone network

Nodes in a DS domain, e.g. domain 1 in Fig 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between the parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the case where the network is non-DS-capable.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream down to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The backbone DS router presented here always polices incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (a policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enables them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner to the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, these widely used values, or PHBs, for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling with respect to Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
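In bit terms (using the octet numbering of Fig 6, where bit 0 is the most significant), extracting the DSCP and the legacy precedence field is a pair of shifts; a minimal sketch:

```python
def dscp_of(tos_octet: int) -> int:
    # Bits 0-5 (the six most significant bits of the octet) form the DSCP;
    # the two remaining bits are ignored by diffserv.
    return (tos_octet >> 2) & 0x3F

def precedence_of(tos_octet: int) -> int:
    # Bits 0-2 are the legacy IPv4 precedence field.
    return (tos_octet >> 5) & 0x07

tos = 0b10111000                  # EF codepoint 101110 plus two unused bits
assert dscp_of(tos) == 0b101110
assert precedence_of(tos) == 0b101
```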

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but this is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e. service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
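The two classifier types differ only in the key they build from the header; a minimal sketch (the packet representation and field names are our own):

```python
def ba_key(pkt: dict):
    # Behaviour Aggregate classifier: keys on the DS codepoint only.
    return pkt["dscp"]

def mf_key(pkt: dict):
    # Multi-Field classifier: keys on several header fields at once.
    return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"],
            pkt["dscp"], pkt["proto"])

pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234,
       "dport": 80, "dscp": 0b101110, "proto": 6}
assert ba_key(pkt) == 0b101110
assert len(mf_key(pkt)) == 6
```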

Fig 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
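Putting the components together: a token bucket is a common metering element (the text does not mandate one), and the marker/dropper acts on its in/out-of-profile verdict. A simplified sketch, with explicit timestamps for clarity:

```python
class TokenBucketMeter:
    """Token-bucket meter: classifies each packet as in or out of profile."""
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes          # start with a full bucket
        self.last = 0.0

    def in_profile(self, pkt_len: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_len:
            self.tokens -= pkt_len
            return True
        return False

def condition(pkt_len, dscp, meter, now, out_dscp=None):
    """Marker/dropper driven by the meter's state: out-of-profile packets
    are re-marked with out_dscp, or dropped (None) when none is given."""
    if meter.in_profile(pkt_len, now):
        return dscp
    return out_dscp

meter = TokenBucketMeter(rate_bytes_per_s=1000, burst_bytes=1500)
assert condition(1500, "IN", meter, now=0.0, out_dscp="OUT") == "IN"
assert condition(1500, "IN", meter, now=0.0, out_dscp="OUT") == "OUT"
assert condition(1500, "IN", meter, now=2.0, out_dscp="OUT") == "IN"
```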

Traffic is conditioned on either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic into conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic can't be assumed to conform to the TCA; rather, policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited number of conditioning rules (i.e. some, but not all, of the TC components in Fig 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are wished to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized ones, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may serve a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig 6) is limited, and room is left for its future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. PHB specifications themselves give behavioural characteristics instead of implementation guidelines. This leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented later in this section. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.
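The minimal-bandwidth PHB mentioned above is typically realized by a weighted scheduler; a toy computation of the share each aggregate is guaranteed (the weights are a hypothetical configuration, not part of any PHB specification):

```python
def guaranteed_rate(link_bps: float, weights: dict, phb: str) -> float:
    # Under a weighted scheduler, an aggregate's minimal rate is its
    # weight's fraction of the link capacity.
    return link_bps * weights[phb] / sum(weights.values())

weights = {"EF": 3, "AF1": 2, "default": 1}      # hypothetical configuration
assert guaranteed_rate(6_000_000, weights, "EF") == 3_000_000.0       # 50%
assert guaranteed_rate(6_000_000, weights, "default") == 1_000_000.0  # ~17%
```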

Diffserv can't be taken into use if it doesn't provide backward compatibility; the Default PHB exists for that. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one that is used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that doesn't conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The mere Default PHB is not enough to provide sufficient backward compatibility. The precedence bits (see Fig 6) of the IPv4 TOS field are widely used in existing networks' equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.
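Assuming the precedence value occupies the top three DSCP bits, the eight Class Selector Codepoints of the form xxx000 can be enumerated directly:

```python
# The eight Class Selector Codepoints: three precedence bits followed by 000.
class_selectors = [p << 3 for p in range(8)]

assert class_selectors == [0, 8, 16, 24, 32, 40, 48, 56]
assert class_selectors[0] == 0b000000   # compatible with the Default PHB
assert class_selectors[7] == 0b111000   # highest-priority class
```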

Fig 8 Usage of DS-field in Class Selector PHB

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurance and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets of each class are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(1) how much forwarding resources have been allocated to the AF class that the packet belongs to; packets in one class are forwarded independently of the others, within the service rate (bandwidth) that has been configured for the class,

(2) the current load of the AF class, and

(3) in case of congestion within the class, the drop precedence of the packet; packets with a higher drop precedence value are preferentially discarded.

Table 1 The four AF classes and the related three drop precedence values

                               Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)        001010    010010    011010    100010
Medium Drop Precedence (2)     001100    010100    011100    100100
High Drop Precedence (3)       001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in a network's nodes.
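The codepoints in Table 1 follow a regular pattern: the class number n in the top three bits, the drop precedence m in the next two, and a trailing zero. This makes the AFnm codepoint directly computable:

```python
def af_dscp(n: int, m: int) -> int:
    """DSCP for AF class n (1-4), drop precedence m (1-3): bits 'nnn mm 0'."""
    assert 1 <= n <= 4 and 1 <= m <= 3
    return (n << 3) | (m << 1)

assert af_dscp(1, 1) == 0b001010   # AF11
assert af_dscp(2, 3) == 0b010110   # AF23
assert af_dscp(4, 3) == 0b100110   # AF43, matching Table 1
```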

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data: speech, video, or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate. In other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed a rate that has been specified for the service the packets belong to. It is also required in the EF PHB specification that a network administrator must be able to configure this rate into diffserv nodes. The codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

Some things are common to both examples. First of all, policers need to be configured at traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus provides reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2 TCA specification for BBE service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark   1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritizing of AF13 over AF11 is done, is up to the provider.

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3 TCA specifications for Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.
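The 750 Kbps figure is simply the sum of the service rates of the TCAs whose paths may share a router; a toy check using the rates of Table 3:

```python
# TCAs at ingress A, as (codepoint, service_rate_kbps, egress) tuples.
tcas = [("EF", 500, "B"), ("EF", 250, "C")]

def provisioned_rate(tcas, egresses_behind_router):
    # A router lying on the paths to several egresses must carry their
    # combined leased-line load.
    return sum(rate for _, rate, egress in tcas
               if egress in egresses_behind_router)

assert provisioned_rate(tcas, {"B", "C"}) == 750   # worst case: both lines
assert provisioned_rate(tcas, {"C"}) == 250        # a router only on A-C
```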

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies that focus on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. Their results showed that the proposed scheme decreased TCP impacts on the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP didn't realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses being experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation work, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets, in order to take TCP dynamics into account and to provide fairness in sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the criteria used for marking [13]. A marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have drawbacks. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth; in this case "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I selects the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked, and so TMS_I decreases TCP impacts on the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) maintained the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also marks packets so as to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I can be not only a marker, used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.
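A rough sketch of the two-window idea (the update rules below are our simplification of the description above, not the thesis's exact algorithm): the congestion window is the sum of iwnd and ownd, and a loss shrinks only the window that matches the dropped packet's marking, so OUT losses do not penalize the assured share:

```python
class TwoWindowSource:
    """Simplified TMS_I source: cwnd = iwnd + ownd (in segments)."""
    def __init__(self, iwnd: float = 1.0, ownd: float = 1.0):
        self.iwnd, self.ownd = iwnd, ownd

    @property
    def cwnd(self) -> float:
        return self.iwnd + self.ownd

    def on_loss(self, dropped_was_in_marked: bool) -> None:
        # Halve only the window matching the dropped packet's marking.
        if dropped_was_in_marked:
            self.iwnd = max(1.0, self.iwnd / 2)
        else:
            self.ownd = max(0.0, self.ownd / 2)

    def marking_rate(self) -> float:
        # Fraction of the window carrying IN-marked packets; this is the
        # demand signal TMS_I reports to TMS_II.
        return self.iwnd / self.cwnd

src = TwoWindowSource(iwnd=8.0, ownd=4.0)
src.on_loss(dropped_was_in_marked=False)   # an OUT packet was dropped
assert src.cwnd == 10.0                    # only ownd halved: 8 + 2
assert src.marking_rate() == 0.8
```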

Second, TMS_II, the edge-embedded marking module of the traffic conditioner shown in Figure 10, at the edge of the network monitors traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (i.e. the marking information) from the sources, and the marker then re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore the key element of the fairness strategy is the marking rate (i.e. information about the demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows toward the total target rate of all the flows, based on the marking probabilities from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow in proportion to what it requires.
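TMS_II's fairness step can be sketched as scaling each flow's reported marking rate so that the aggregate IN-marked traffic fits the edge profile. The proportional rule below is our illustration of the idea, not the exact formula derived later in the thesis:

```python
def remark_probabilities(marking_rates, flow_rates, aggregate_target):
    """Per-flow IN-marking probability at the edge.

    marking_rates[i]: IN-marking rate reported by source i (demand signal)
    flow_rates[i]:    measured sending rate of flow i
    aggregate_target: total IN-marked rate allowed by the edge profile
    """
    demand = sum(m * r for m, r in zip(marking_rates, flow_rates))
    if demand <= aggregate_target:
        return [1.0] * len(flow_rates)       # everything fits: keep IN marks
    scale = aggregate_target / demand        # shrink proportionally to demand
    return [min(1.0, m * scale) for m in marking_rates]

# Two flows at 100 units each; the edge may mark 100 units IN in aggregate.
probs = remark_probabilities([0.5, 1.0], [100.0, 100.0], aggregate_target=100.0)
assert abs(probs[0] - 1/3) < 1e-9 and abs(probs[1] - 2/3) < 1e-9
```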

Fig. 10. The design of the Two Markers System architecture.

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let R_i denote the reservation rate of a flow i, i = 1, 2, …, n, and C_j represent the capacity of a bottleneck link j, j = 1, 2, …, m, in the network. The target rate of each source is a reservation rate (R_i) of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (T_i) at the edge of the network is the reservation rate (R_i) plus a variable value, namely the fairness component (X_mi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability viewpoint of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point;
(ii) all traffic from ingress point S to a set of egress points;
(iii) all traffic between ingress point S and egress point D;
(iv) all traffic between any ingress point and egress point D.

The capacity of the network in the first three cases of service scope above is C = Σ_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at this bottleneck link is then given by (1).

In an edge-to-edge SLA, namely the third case of the service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is given by (2).

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates in order to determine the excess bandwidth of the network. Thus, the information of the reservation rate received at an ingress point becomes R = Σ_{k=1}^{l} Σ_{i=1}^{n} R_{ik}, where k, k = 1, 2, …, l, represents the k-th ingress point. The excess bandwidth is then given by (3).

E = C - R = Σ_{j=1}^{m} C_j - Σ_{i=1}^{n} R_i    (1)

E = C - R = C - Σ_{i=1}^{n} R_i    (2)

E = C - R = C - Σ_{k=1}^{l} Σ_{i=1}^{n} R_{ik}    (3)
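As a concrete illustration, equations (1)–(3) reduce to a single subtraction once the relevant capacities and reservations have been summed. The sketch below uses a hypothetical helper name (not from the thesis) to compute E for any of the three scopes:

```python
def excess_bandwidth(link_capacities, reservation_rates):
    """Excess bandwidth E = C - R, as in equations (1)-(3).

    link_capacities: capacities C_j of the bottleneck links (Mbps);
    a single-element list corresponds to the edge-to-edge case (2).
    reservation_rates: reservation rates R_i of all flows (for the
    receiver-tied case (3), over all ingress points).
    """
    C = sum(link_capacities)
    R = sum(reservation_rates)
    return C - R

# Five flows reserving 3 Mbps each on a 20 Mbps bottleneck leave 5 Mbps spare,
# matching the first simulation scenario of Chapter 4.
print(excess_bandwidth([20.0], [3.0] * 5))
```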


Here the l-th ingress point has to be aware of the reservation rates of as many as l - 1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore, although the Two Markers System can be implemented under variable SLA scopes, we assume that the scope of a service in this paper covers the first three cases, excluding the last for the reason above. A flow's target rate at the edge of the network is given by (4).

Here X_mi is proportional to the probability of marking at a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of X_mi is as follows. First, the probability of marking for each flow i is the number of marked packets (N_mi) divided by the throughput (Thr_i), or equivalently the In-profile window divided by the congestion window, as in (5).

Here iwnd and cwnd represent the window of the IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] over all the flows at the edge of the network is given by (6).

Here Mark_p also represents the probability of marking in the TMS_I algorithm of Section 3.3. Finally, X_mi is obtained from the ratio between (5) and (6); that is, X_mi

T_i = R_i + X_mi × E = R_i + X_mi (C - Σ_i R_i)    (4)

m_i = N_mi / Thr_i = iwnd / cwnd    (5)

E[m_i] = (1/n) Σ_{i=1}^{n} m_i = (1/n) Σ_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1}^{n} Mark_{p,i}    (6)


represents a relative value against the datum point E[m_i]. The fraction X_mi varies from 0 to 1 depending upon m_i and E[m_i].

Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
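The chain (5)→(7), together with the resulting edge target rate (4), can be sketched as follows (a minimal Python illustration; the function names are ours, not from the thesis):

```python
def fairness_components(iwnds, cwnds):
    """Per-flow marking rates m_i = iwnd_i/cwnd_i (5) and fairness
    components X_mi = m_i / (n * E[m_i]) = m_i / sum(m_i), equation (7)."""
    m = [i / c for i, c in zip(iwnds, cwnds)]
    total = sum(m)  # n * E[m_i], by equation (6)
    return [mi / total for mi in m]

def target_rates(reservations, excess, x):
    """Edge target rates T_i = R_i + X_mi * E, equation (4)."""
    return [r + xi * excess for r, xi in zip(reservations, x)]

# A flow marking twice as heavily as its peers claims twice the excess.
x = fairness_components(iwnds=[8, 4, 4], cwnds=[16, 16, 16])
print(x)                                      # [0.5, 0.25, 0.25]
print(target_rates([3.0, 3.0, 3.0], 2.0, x))  # [4.0, 3.5, 3.5]
```

Note that the X_mi sum to 1, so the target rates always sum to C, as required by (4).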

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We improve the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources. That is to say, the TCP-source marking module keeps the marking rate in step with the bandwidth the source wants to have guaranteed.

X_mi = m_i / (n E[m_i]) = m_i / Σ_{i=1}^{n} m_i    (7)


Fig. 11. Program code 1: TMS_I algorithm (cwnd open).

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking (Mark_p(init))? It is determined by the ratio between the measured throughput (Thr) and the target throughput (T_i = R_i), as in (8).

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        } else {
            ownd = cwnd
        }
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


Here Mark_p(init) reflects the initial rate that the user wants to have guaranteed in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I marks packets proportionally, according to whether the throughput (Thr) measured at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed. Otherwise, it increases the ownd size so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e., iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore, an increment of iwnd leads to a corresponding decrement of ownd. When either window grows above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.
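The per-ACK update described above can be sketched in Python as follows. This is a simplified reading of program code 1 (in particular, placing the ownd growth after the iwnd handling in the else branch is our interpretation of the flattened listing), not the thesis implementation itself:

```python
def tms1_on_ack(iwnd, ownd, issthresh, ossthresh, mark_p, thr, r_i):
    """One ACK-driven update of the two-window TCP of TMS_I (sketch).

    thr is the measured throughput and r_i the reservation rate;
    window sizes are in segments.
    """
    cwnd = iwnd + ownd
    if thr < r_i:
        # Below the reservation rate: grow iwnd by the needed marking rate,
        # or probe slowly (1/(2*cwnd) per window) once past the threshold.
        iwnd += mark_p if iwnd < issthresh else 1.0 / (2 * cwnd)
        ownd += (1 - mark_p) if ownd < ossthresh else 1.0 / (2 * cwnd)
    else:
        if iwnd > 0:
            # Above the reservation rate: give the IN-marked share back
            # so other flows can use the excess bandwidth.
            iwnd -= (1 - mark_p) if iwnd < issthresh else mark_p
        else:
            ownd = cwnd
        ownd += 1.0 if ownd < ossthresh else 1.0 / cwnd
    cwnd = iwnd + ownd
    return iwnd, ownd, cwnd, iwnd / cwnd  # last value: new Markp
```

For example, a source below its reservation (thr < r_i) grows both windows so that the IN share of cwnd tracks Mark_p while the window opens.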

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among the other sources' TCP traffic and best-effort traffic, and also considers a secondary TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of the network when the network connection is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and keep IN-marked packets from being immoderately dropped at the core of the network. In fact, the TCP-source marking module (TMS_I) maintains the marking rate more properly than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion

Mark_p(init) = 1 - Thr / R_i    (8)


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognises packet losses and determines whether a lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus, the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window is not decreased as often as with the established edge marker.

Fig. 12. Program code 2: TMS_I algorithm (cwnd close).

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd/2
        ownd = ossthresh
    } else {
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the i-th flow
Ti: target rate of the i-th flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half its size whenever an OUT packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of IN-marked packets indicates acute congestion in the network, and then the congestion window is reduced to half its size. Therefore, the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which lacks such a distinctive mechanism for reducing the congestion window.
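This reduction rule can be sketched as below; it is our reading of program code 2 (in particular, re-splitting cwnd into iwnd and ownd after an IN loss is an assumption, and the thesis' "ewnd" is read as ownd):

```python
def tms1_on_triple_dup_ack(iwnd, ownd, mark_p, out_marked_loss):
    """Window reduction on the third duplicate ACK (sketch of code 2)."""
    cwnd = iwnd + ownd
    if out_marked_loss:
        # OUT loss = temporary congestion: halve only ownd.
        ossthresh = ownd / 2
        ownd = ossthresh
    else:
        # IN loss = acute congestion: halve the whole congestion window
        # and re-split it according to the current marking probability.
        cwnd = cwnd / 2
        issthresh = mark_p * cwnd
        ossthresh = cwnd - issthresh
        iwnd, ownd = issthresh, ossthresh
    return iwnd, ownd, iwnd + ownd

print(tms1_on_triple_dup_ack(4.0, 8.0, 0.5, True))   # (4.0, 4.0, 8.0)
print(tms1_on_triple_dup_ack(4.0, 8.0, 0.5, False))  # (3.0, 3.0, 6.0)
```

An OUT loss thus costs only a third of the window in this example, while an IN loss costs the usual TCP half.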

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) that is needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point in the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that includes the user's information is able to alleviate the problem of unfairness within aggregated flows under per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources, in order to fairly share the bottleneck bandwidth. For this it closely cooperates with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which are used in the fairness component as stated in Section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each source wants to have guaranteed.

We assume that an edge router receives the information about the reservation rate of which each user wants to be assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall expression (4). As stated in Section 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking at the sources if the condition of the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source, for example, exceeds the threshold value, namely the average marking rate E[m_i], this means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider congestion situations, the other is to ensure the coherence of the Adequate-Marking strategy. A congestion situation is in fact judged by comparing the iwnd size (or ownd size) at time N with that at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, this represents that congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After deciding about congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (T_i), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called the fairness factor. In this case the fairness factor α is computed as


α = 1 - Thr_i / T_i. Otherwise, it decreases the marking rate by as much as α to provide the other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm.

if (ownd(N) != ownd(N-1)/2) {
    for i = 1 to n {
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
    }
} else if (iwnd(N) != iwnd(N-1)/2) {
    exit
} else {    /* iwnd(N) == iwnd(N-1)/2 */
    for i = 1 to n {
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit
    }
}

Mi: marking rate of the i-th flow
Ti: target rate of the i-th flow
Ri: reservation rate of the i-th flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1, while the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion happens in the network but is not serious. The else-if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (i.e., the marking information), and so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, this means that IN-marked packets are being dropped in the core of the network and the congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to consummate its target rate owing to the repeated reduction of its sending rate by the congestion control mechanism. Therefore, TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor is β = 1 - Thr_i / R_i. Otherwise, no re-marking operation is performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to the flows based on the marking rate of each source. Therefore, we can say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
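The three branches of program code 3 can be sketched as one pass over the aggregate. The names are hypothetical; the multiplicative form below follows Mi += α·Mi with α = 1 - Thr_i/T_i, so a flow above its target gets a negative α and is marked down, matching the "decrease by α" branch:

```python
def tms2_adjust(marking, thr, targets, reservations,
                ownd_halved, iwnd_halved):
    """One adjustment pass of TMS_II (sketch of program code 3)."""
    if not ownd_halved:
        # No congestion: scale by the fairness factor a = 1 - Thr_i/T_i
        # (negative when the flow is above target, so Mi decreases).
        return [m * (1 + (1 - x / t)) for m, x, t in zip(marking, thr, targets)]
    if not iwnd_halved:
        # Mild congestion: marking already matches the sources; no re-marking.
        return list(marking)
    # Acute congestion: complement factor b = 1 - Thr_i/R_i, applied only
    # to flows still below their reservation rate.
    return [m * (1 + (1 - x / r)) if x < r else m
            for m, x, r in zip(marking, thr, reservations)]

# A flow at half its 4 Mbps target has its marking rate raised by 50%.
print(tms2_adjust([0.5], [2.0], [4.0], [3.0], False, False))  # [0.75]
```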

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-born packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-born OUT-marked packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking this for acute congestion, will halve their iwnd size; TMS_I then no longer alleviates the TCP dynamics over the assured service well. On the other hand, if re-born IN-marked packets (OUT-IN), sent as OUT by the sources, are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve ownd, which retards the removal of the congestion.

For this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is marked as OUT at the edge of the network but as IN at the source, and an OUT-IN packet is the counterpart. We assume that the drop precedence the source decides must not change, and that the strategy TMS_II elaborates has to respect it. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasions of re-marking can be divided into three situations. First, if the congestion window has not been reduced to half its size and the throughput measured at the edge of the network does not complete the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is removed, the TCP source halves its ownd size. Next, TMS_II turns packets IN-marked at the sources into OUT if the congestion window has not been reduced to half its size and the flow has already accomplished its target rate; the drop precedences then consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by a step, and so it allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been reduced to half its size and a flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT by a step.
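The six mark combinations collapse onto the three drop precedences as follows (an illustrative table; the tuple keys and constant name are ours, not from the thesis):

```python
# Hypothetical mapping of (mark at source, mark at edge) to the drop
# precedence and color used by the RIO queue in the TMS.
DROP_PRECEDENCE = {
    ("IN", "IN"):   ("DP0", "green"),   # kept IN: dropped last
    ("IN", "OUT"):  ("DP1", "yellow"),  # IN-OUT: re-marked down at the edge
    ("OUT", "IN"):  ("DP1", "yellow"),  # OUT-IN: re-marked up at the edge
    ("OUT", "OUT"): ("DP2", "red"),     # kept OUT: dropped first
}

print(DROP_PRECEDENCE[("IN", "OUT")])  # ('DP1', 'yellow')
```

The middle precedence is what lets the source still distinguish temporary from acute congestion after re-marking.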

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not change, and the UDP flows will take the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be completed.

UDP flows in the TMS are dealt with as out-of-profile, placing them at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP packets marked as OUT (red, DP2), but lowers the drop precedence of TCP packets relative to them by as much as the scaling factor. Therefore, the TMS tries to guarantee the target rates of the TCP flows by allocating to them the excess bandwidth that the UDP flows had almost dominated. Consequently, the TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms we have described in the previous chapter. The simulations were done with the network simulator ns-2 (version 2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System.

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario, the capacity of the bottleneck link is fixed and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or kept below the capacity, so that the bottleneck is not oversubscribed. In the second scenario, the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
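Each drop precedence in Table 4 parameterises the usual linear RED drop curve; the sketch below uses the standard RED formula (not thesis code) with the Table 4 thresholds:

```python
def red_drop_prob(avg_q, min_th, max_th, p_max):
    """Linear RED drop probability for one drop precedence.

    avg_q is the average queue length in packets; below min_th nothing
    is dropped, above max_th everything is, and in between the drop
    probability rises linearly toward p_max.
    """
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return p_max * (avg_q - min_th) / (max_th - min_th)

# At an average queue of 250 packets, DP2 (OUT) packets are dropped
# several times more aggressively than DP0 (IN) packets.
print(red_drop_prob(250, 50, 400, 0.1))    # DP2
print(red_drop_prob(250, 100, 400, 0.02))  # DP0
```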

First of all, we investigate the simulation results for the Adequate-Marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps, where the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s, which stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps; marking is performed from the beginning, but only temporary marking for sharing the excess bandwidth. Second, at run time from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps each; marking in this interval is practically carried out so as to realise the reservation rates of the users. Next, grave congestion appears from t = 220 s to t = 350 s; all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the other periods. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), displaying somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs, the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

Fig. 15. Adequate-Marking algorithm: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the marking rates (Mbps) of flow1 through flow10 and the horizontal axis the simulation time (s). The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.

Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic features by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind the changes in the network load: slowly rising in response to an increased traffic load and slowly falling in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load, to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when TMS detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during the periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in Section 3.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, i.e., an under-subscribed network. The experiment was performed at run time from t = 0 s to 500 s. Figure 17 shows the results, in which the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate, because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about the individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking when the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the probability of marking at the sources, all the flows are able to adapt to network dynamics.

Fig. 16. Packet marking through the scaling factors: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the throughput and marking rate of each source (S1 total throughput, S2 total throughput, S1 marking rate; Mbps) and the horizontal axis the simulation time (s).


Fig. 17. Throughput for the marking rate-based fairness strategy: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the achieved throughput (Mbps) of flow1 through flow10 and the horizontal axis the aggregate flow.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5. The average throughput of the aggregate flows as the RTT increases

Aggregate flow (No.)        1       2       3       4       5       6
RTT (ms)                    80      100     130     170     220     280
E[Thr] of TSW2CM (Mbps)     4.8989  4.9703  4.9183  4.9391  4.9247  4.9098
E[Thr] of TMS (Mbps)        5.2908  5.2207  5.2373  5.1944  5.2763  5.2858

4.2 Interaction of TCP and UDP flows

In this section we present simulation results for TCP and UDP traffic in the Two Markers System described in the previous sections. For the simulations we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams, generated by infinite file transfers, originate at sources 1 through 4 and are destined to destinations 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to destinations 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario, the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario, the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1 Mbps. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of Section 4.1.


Fig. 18. Simulation topology II using the Two Markers System.

First of all, we investigate the simulation results for the issue of sharing the excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy of increasing the TCP flows with a target rate of 1 Mbps from sources 1 to 4, while the UDP rate stays constant at 1 Mbps each, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σ_i T_i / C) × 100    (9)

Here T_i and C represent the target rate of a flow and the capacity of a bottleneck link in the network, respectively. The interval from ρ = 100 % to 200 %, as shown in Figure 19, represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100 (10)

Therefore an excess bandwidth of 0% denotes the target rate of each flow. First, as shown in Figure 19(a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
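The two metrics above are simple enough to express directly. The following sketch (not part of the thesis; function names are ours) computes the link utilisation of Eq. (9) and the excess-bandwidth percentage of Eq. (10):

```python
# Sketch of the metrics in Eqs. (9) and (10).

def link_utilisation(target_rates, capacity):
    """rho = sum(T_i) / C, expressed as a percentage of link capacity."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(throughput, target):
    """Excess bandwidth of a flow as a percentage of its target rate T."""
    return 100.0 * (throughput - target) / target

# Scenario 1 of the experiment: six 1 Mbps targets over an 8 Mbps bottleneck.
print(link_utilisation([1.0] * 6, 8.0))  # 75.0 -> over-provisioned
print(excess_bw_percent(1.5, 1.0))       # 50.0: flow is 0.5 Mbps above target
```

With the 3 Mbps bottleneck of scenario 2 the same call gives ρ = 200%, the right-hand end of the x-axis in Figure 19.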



a) Time Sliding Window Two Color Marker


b) Two Markers system

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation in percent

However, since UDP sources inject traffic into the network without any penalty, they achieve their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than others in order to achieve its reservation rate. The flow is therefore marked more and has a higher target rate than others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19(b), is fairer than in Figure 19(a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19(b) is, percentage-wise, almost equally divided between TCP and UDP flows for link utilisation ρ = 60% and over. Both traffic types achieve their target rates in the over-provisioned network and both suffer the same level of service degradation in the under-provisioned network.
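The edge-marking rule under assumptions (i) and (ii) can be sketched as follows. This is an illustrative reconstruction, not code from the thesis; the function names and the simple ">" threshold test are ours:

```python
# Hypothetical sketch of the edge-embedded marking decision described above.

def average_marking_rate(marking_rates):
    """E[m_i]: the average marking rate across the aggregate's flows."""
    return sum(marking_rates) / len(marking_rates)

def classify(flow_is_udp, m_i, marking_rates):
    """Assumption (i): UDP is always marked OUT (DP2, red).
    Assumption (ii): a TCP flow whose marking rate m_i exceeds the
    average E[m_i] is treated as demanding more bandwidth to reach its
    reservation rate, so it is marked IN."""
    if flow_is_udp:
        return "OUT"
    return "IN" if m_i > average_marking_rate(marking_rates) else "OUT"

print(classify(True, 0.9, [0.4, 0.6]))   # UDP -> always OUT
print(classify(False, 0.9, [0.4, 0.6]))  # above-average TCP flow -> IN
```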

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows whether the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. The average throughput of all the flows in both schemes is similar. First, we consider the results of the experiment in the over-provisioned network. In Figure 20(a), while the TCP flows only just accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20(b) fairly share the excess bandwidth of about 0.3Mbps, since the system applies a different marking strategy to UDP flows. Every flow also satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to the probability of its marking in the over-provisioned network. Next, each TCP flow in Figure 20(a) often fails to achieve its target rate in the under-provisioned network, because the UDP flows transmit constantly, irrespective of congestion. That is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20(b) shows that bandwidth in the under-provisioned network is distributed more fairly (about 0.5Mbps per flow) than in Figure 20(a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20(b).


a) Time Sliding Window Two Color Marker


b) Two Markers System

Fig 20 Throughput according to network conditions: the vertical axis represents the throughput in Mbps and the horizontal axis the aggregate flows


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this setting, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize a user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to changes in the network, so that it marks packets in proportion to the reservation rates the sources desire to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy. Consequently, it mitigates the unfairness of per-aggregation marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, the TCP flows could fairly share the excess bandwidth that UDP had previously dominated, and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they fairly experienced degradation in proportion to their target bandwidth.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W Feng D Kandlur D Saha and K Shin Adaptive Packet Marking for Providing Differentiated Services in the Internet in Proc ICNP'98 Austin TX Oct (1998) 108-117

[2] W Feng D Kandlur S Saha and K Shin Understanding TCP Dynamics in an Integrated Services Internet in Proc NOSSDAV'97 St Louis MO May (1997) 279-290

[3] S Floyd and V Jacobson Link-sharing and Resource Management Models for Packet Networks IEEE/ACM Transactions on Networking vol 3 no 4 Aug (1995) 365-386

[4] J Heinanen F Baker W Weiss and J Wroclawski Assured Forwarding PHB Group RFC 2597 June (1999)

[5] K Nichols S Blake F Baker and D Black Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers RFC 2474 Dec (1998)

[6] S Blake D Black M Carlson E Davies Z Wang and W Weiss An Architecture for Differentiated Services RFC 2475 Dec (1998)

[7] V Jacobson and M Karels Congestion Avoidance and Control in Proc SIGCOMM'88 Stanford CA Aug (1988) 314-329

[8] I Yeom and A L N Reddy Realizing Throughput Guarantees in a Differentiated Services Network in Proc ICMCS'99 Florence Italy June (1999) 372-376

[9] N Seddigh B Nandy P Pieda Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network in Proc GLOBECOM'99 Rio De Janeiro Dec (1999)

[10] C Gbaguidi H J Einsiedler P Hurley W Almesberger and J Hubaux A Survey of Differentiated Services Proposals for the Internet SSC Tech Report SSC/1998/020 http://sscwww.epfl.ch May (1998)

[11] J Ibanez K Nichols Preliminary Simulation Evaluation of an Assured Service IETF Internet Draft draft-ibanez-diffserv-assured-eval-00.txt Aug (1998)

[12] S H Lee S J Seok S J Lee S K Youm and C H Kang Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network in Proc QofIS'01 LNCS 2156 Coimbra Sept (2001) 207-221

[13] I B H de A Alves J F de Rezende L F M de Moraes Evaluating Fairness in Aggregated Traffic Marking in Proc GLOBECOM'00 vol 1 San Francisco Nov (2000) 445-449

[14] N Seddigh B Nandy P Pieda Study of TCP and UDP Interaction for AF PHB IETF draft draft-nsbnpp-diffserv-udptcpaf-01.pdf Aug (1999)

[15] S H Lee S J Seok S J Lee and C H Kang Study of TCP and UDP flows in a Differentiated Services Network in Proc MMNS'01 LNCS 2216 Chicago Oct (2001) 198-203

[16] A Kolarov Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks in Proc HPSR'01 Dallas May (2001) 190-196

[17] Y Bernet J Binder S Blake M Carlson B E Carpenter S Keshav E Davies B Ohlman Z Wang W Weiss A Framework for Differentiated Services IETF Internet Draft draft-ietf-diffserv-framework-02.txt Feb (1999)

[18] S Blake D Black M Carlson E Davies Z Wang and W Weiss An Architecture for Differentiated Services RFC 2475 Dec (1998)

[19] M Vanhala Differentiated Services Architecture S-38.128 Teletekniikan erikoistyö (1999)

[20] V Jacobson K Nichols and K Poduri An Expedited Forwarding PHB IETF Internet Draft draft-ietf-diffserv-phb-ef-01.txt Nov (1998)

[21] V Jacobson Modified TCP Congestion Avoidance Algorithm end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt) April (1990)

[22] V Paxson End-to-End Internet Packet Dynamics in Proc of ACM SIGCOMM Sept (1997)

[23] M Mathis J Mahdavi S Floyd and A Romanow TCP Selective Acknowledgment Options RFC 2018 Oct (1996)

[24] S Floyd and T Henderson The NewReno Modification to TCP's Fast Recovery Algorithm IETF Internet Draft draft-ietf-tcpimpl-newreno-02.txt Feb (1999)

[25] L S Brakmo S W O'Malley and L L Peterson TCP Vegas New Techniques for Congestion Detection and Avoidance in Proceedings of ACM SIGCOMM Oct (1994) 24-35

[26] S Floyd TCP and Explicit Congestion Notification Computer Communication Review Vol 24 No 5 Oct (1994) 10-23

[27] S Floyd and V Jacobson Random Early Detection Gateways for Congestion Avoidance IEEE/ACM Transactions on Networking Vol 1 No 4 Aug (1993) 397-413

[28] A Feroz A Rao and S Kalyanaraman TCP-friendly Traffic Marker for IP Differentiated Services in Proc IWQoS'2000 Pittsburgh PA June (2000) 35-48


resource allocation be commensurate with how the service is priced. In this thesis we also discuss how fairly the proposed scheme allocates bandwidth to flows.

One of the cruxes identified in the diffserv network is the effect of congestion-insensitive flows, such as UDP, when they share the same AF class with TCP flows [9 11 14]. TCP and UDP interaction over the AF PHB has become an important fairness issue in the diffserv context, because TCP flows are penalised more than UDP flows with respect to service degradation and fail to accomplish their target rates. Therefore we deliberate on the issue of interaction between TCP and UDP flows by regulating their drop precedence.

The rest of this thesis is organized as follows. Chapter 2 reviews the state of the art in this area. Chapter 3 proposes the Two Markers System to reduce TCP influences over Assured Services, considers the issue of fairness between TCP and UDP at a bottleneck link of a diffserv network, and then explores the proposed algorithms. Chapter 4 presents the results of the above algorithms in simulated environments where these kinds of traffic coexist and analyses the simulation results. Chapter 5 concludes our work.

CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point in time, the rate at which TCP sends its packets eventually exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow-start and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants of TCP. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion avoidance phase, in which the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above network capacity, packets are eventually dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred, immediately reduces its sending rate in half by halving its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.

When congestion is severe enough that packet loss cannot be inferred in such a manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm. In particular, if the sender continually sends the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.

Fig 1 Throughput of Single TCP Transfer

Figure 1 shows a graphical picture of how TCP slow-start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion the TCP sender enters the congestion avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP then oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.
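The window evolution just described can be sketched in a few lines. This is a toy model for illustration only (the thesis uses ns-style simulation, not this code); it ignores timeouts and ACK clocking, and the `capacity` parameter standing in for queue overflow is our assumption:

```python
# Toy sketch of slow-start, congestion avoidance and window halving.

def evolve_cwnd(rtts, ssthresh=16, capacity=24):
    """Return cwnd (in segments) sampled once per RTT."""
    cwnd, history = 1, []
    for _ in range(rtts):
        history.append(cwnd)
        if cwnd > capacity:                # queue overflow: loss detected
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh                # fast recovery: halve the window
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow start: exponential growth
        else:
            cwnd += 1                      # congestion avoidance: +1 per RTT
    return history

# Doubling up to ssthresh, linear growth, then the w -> w/2 halving:
print(evolve_cwnd(16))
# [1, 2, 4, 8, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 12, 13]
```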

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One modification that has been proposed is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of which segments it is missing. By specifying this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger TCP sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and the sender should thus be allowed to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The New Reno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern in which the congestion window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion avoidance phase so that it only performs its linear increase when the network is not congested. In these algorithms, if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential for improving loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections multiplexed across the link.

Fig 2 TCP/IP Headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem, since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-Capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the Congestion Experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver, in order to signal the receiver that it has in fact reduced its sending rate.
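The router side of this mechanism reduces to a small bit test. The sketch below is illustrative; the bit positions follow the later ECN specification (RFC 2481/3168, ECT = 0x02 and CE = 0x01 in the low bits of the TOS/DS octet) rather than anything defined in this thesis:

```python
# Illustrative sketch of the router's ECN decision described above.
ECT = 0x02   # ECN-Capable Transport bit in the TOS/DS octet
CE  = 0x01   # Congestion Experienced bit

def router_mark(tos, congested):
    """Return (new_tos, dropped): mark ECN-capable packets instead
    of dropping them when the router is congested."""
    if not congested:
        return tos, False            # forward unchanged
    if tos & ECT:
        return tos | CE, False       # ECN-capable: set CE, keep the packet
    return tos, True                 # not ECN-capable: drop

tos, dropped = router_mark(ECT, congested=True)
print(bool(tos & CE), dropped)       # CE set, packet not dropped
```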

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, in order to allow them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses them to determine whether to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and increases linearly to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.
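The piecewise-linear probability of Figure 3, together with the EWMA queue estimate, can be written directly (a minimal sketch of the basic RED calculation; the parameter values are illustrative, not ones prescribed here):

```python
# Sketch of RED's average-queue estimate and marking/dropping probability.

def ewma(avg, sample, w=0.002):
    """Exponentially weighted moving average of the queue length."""
    return (1 - w) * avg + w * sample

def red_probability(avg_q, min_th=5, max_th=15, max_p=0.1):
    """0 below min_th, linear up to max_p at max_th, 1 beyond max_th."""
    if avg_q <= min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

print(red_probability(5))    # 0.0  (at min_th)
print(red_probability(10))   # 0.05 (halfway between the thresholds)
print(red_probability(20))   # 1.0  (above max_th: drop/mark everything)
```

RIO, used in the simulations of Chapter 4, applies two such curves, one for IN- and one for OUT-marked packets.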

Fig 3 The marking/dropping behavior of RED

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour wished for. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs), and on a per-packet basis to aggregates of traffic streams in the nodes of a DS-capable network. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies a service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. Different types of service will not be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig 4 Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The formed Service Level Agreements (SLAs) specify the forwarding service a customer's traffic should receive. A customer in this context refers to e.g. an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. The TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules, which are applied to traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. By traffic conditioner is meant an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs do not solely define services; they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain; the latter handle traffic leaving the domain. In practice the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

In the following we discuss how a network that supports diffserv functions.

Fig 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e. the interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service provisioning approaches.

Fig 5 A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from those of a non-DS-capable network.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value from the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The presented backbone DS router always polices incoming traffic, as it cannot trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2 and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (the policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enables them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet and in IPv6 the Traffic Class octet acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values (or PHBs) for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling with respect to Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

Bits 0-5 of the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs as configured by a network administrator.
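As an illustrative sketch (not part of the thesis' simulation code), both readings of the octet, the legacy precedence/D-T-R interpretation and the diffserv DSCP, can be extracted with a few shifts. The function name is ours; the bit numbering follows Fig 6, where bit 0 is the most significant bit.

```python
def parse_tos(tos: int):
    """Interpret an IPv4 TOS octet both the legacy way and the diffserv way.

    Bits are numbered 0..7 from the most significant bit, as in Fig 6:
    bits 0-2 are the precedence, bits 3-5 are D/T/R, and bits 0-5
    together form the DS codepoint.
    """
    dscp = (tos >> 2) & 0x3F          # bits 0-5: the DS codepoint
    precedence = (tos >> 5) & 0x07    # bits 0-2: legacy IP precedence
    d_bit = (tos >> 4) & 1            # 'D' (low delay), ignored by diffserv
    t_bit = (tos >> 3) & 1            # 'T' (high throughput)
    r_bit = (tos >> 2) & 1            # 'R' (high reliability)
    return dscp, precedence, (d_bit, t_bit, r_bit)

# Example: the EF codepoint 101110 placed in the upper six bits of the octet
tos = 0b10111000
dscp, prec, dtr = parse_tos(tos)
print(bin(dscp), prec)   # 0b101110 5
```

A legacy node would act on `precedence` and the D/T/R bits, while a DS node acts on `dscp` alone.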

What traffic conditioning basically means is treating traffic differentially so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but it is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on DSCP values.

A traffic conditioner is the part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e., service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other so that the amount of resources allotted to an aggregate is not exceeded.
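To make the distinction concrete, here is a small sketch of the two classifier types. The packet fields and aggregate names are invented for the example and are not part of any specification.

```python
# A BA classifier keys on the DSCP alone; an MF classifier keys on several
# header fields. Packets here are plain dicts with illustrative field names.

def ba_classify(packet, dscp_to_aggregate):
    """Behaviour Aggregate classification: DSCP -> traffic aggregate."""
    return dscp_to_aggregate.get(packet["dscp"], "default")

def mf_classify(packet, rules):
    """Multi-Field classification: the first rule whose fields all match wins."""
    for fields, aggregate in rules:
        if all(packet.get(k) == v for k, v in fields.items()):
            return aggregate
    return "default"

pkt = {"src": "10.0.0.5", "dst": "10.0.1.2", "proto": 6,
       "sport": 4321, "dport": 80, "dscp": 0b001010}
print(ba_classify(pkt, {0b001010: "AF1"}))                                 # AF1
print(mf_classify(pkt, [({"dst": "10.0.1.2", "dport": 80}, "web-gold")]))  # web-gold
```

In both cases unmatched packets fall into a default aggregate, mirroring the Default PHB fallback described later.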

Fig 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
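The meter/marker pipeline just described can be sketched with a token bucket. This is a minimal illustration only: the class, parameter names and units are our assumptions, not taken from the thesis or any RFC.

```python
class TokenBucketMeter:
    """A simple token-bucket meter: tokens refill at `rate_bps`,
    up to a bucket depth of `burst_bytes`."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.depth = burst_bytes
        self.tokens = burst_bytes       # bucket starts full
        self.last = 0.0

    def conforms(self, now, pkt_bytes):
        # Refill tokens for the elapsed time, capped at the bucket depth
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True       # in profile
        return False          # out of profile

def condition(now, pkt_bytes, meter, in_dscp, out_dscp):
    """Marker driven by meter state: in-profile packets get in_dscp,
    excess packets are re-marked (a dropper would discard them instead)."""
    return in_dscp if meter.conforms(now, pkt_bytes) else out_dscp

m = TokenBucketMeter(rate_bps=8000, burst_bytes=1500)   # 1 KB/s, 1500 B burst
print(condition(0.0, 1000, m, "IN", "OUT"))   # IN  (burst covers it)
print(condition(0.0, 1000, m, "IN", "OUT"))   # OUT (bucket exhausted)
```

A shaper would instead queue the non-conforming packet until enough tokens accumulate, which is why shaping introduces delay while policing introduces loss or re-marking.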

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the established SLA to determine whether it is their responsibility to condition the ingress/egress traffic into conformance with the appropriate TCA, or whether that responsibility lies with the node at the other end of the boundary link. However, ingress traffic can't be assumed to conform to the TCA; policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore, a limited number of conditioning rules (i.e., some, but not all, of the TC components in Fig 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioral characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized ones, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may serve a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig 6) is limited and room is left for future usage, the codepoint mappings can be made freely by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. The PHB specifications themselves give behavioral characteristics instead of implementation guidelines. This leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented later in this section. As explained above, none of the PHBs is mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility; the Default PHB exists for that. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is used for packets for which no other agreement exists, and it gets the lowest priority compared to all other PHBs. Therefore, any traffic that doesn't conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or locally used PHB, it should be mapped to the Default PHB.
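The rule of the last sentence, that every codepoint resolves to some PHB with Default as the fallback, amounts to a table lookup with a default entry. A minimal sketch, with illustrative table contents:

```python
# Sketch of a node's codepoint-to-PHB table. Any codepoint not mapped to a
# standardized or locally defined PHB falls back to the Default (best-effort)
# PHB, as described above. The table entries are examples, not a full set.
DEFAULT_PHB = "default"
phb_table = {0b000000: DEFAULT_PHB, 0b101110: "EF", 0b001010: "AF11"}

def phb_for(dscp):
    return phb_table.get(dscp, DEFAULT_PHB)

print(phb_for(0b101110))  # EF
print(phb_for(0b111111))  # default (unmapped codepoint falls back)
```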

The mere Default PHB is not enough to provide sufficient backward compatibility. The precedence bits (see Fig 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig 8 Usage of DS-field in Class Selector PHB

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(i) how much forwarding resources have been allocated to the AF class that the packet belongs to (packets in one class are forwarded independently from the others and within the service rate, i.e. bandwidth, that has been configured for the class);
(ii) the current load of the AF class; and
(iii) in case of congestion within the class, the drop precedence of the packet (packets with a higher drop precedence value are preferentially discarded).

Table 1. The four AF classes and the related three drop precedence values

                              Class 1   Class 2   Class 3   Class 4
  Low Drop Precedence (1)     001010    010010    011010    100010
  Medium Drop Precedence (2)  001100    010100    011100    100100
  High Drop Precedence (3)    001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' denotes the number of the class (currently 1-4) and 'm' the drop precedence value (currently 1-3). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
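The codepoints in Table 1 follow a regular pattern: three class bits, two drop-precedence bits, and a trailing zero. A small sketch reconstructing them (the helper name is ours):

```python
def af_dscp(n: int, m: int) -> int:
    """DSCP for AF class n (1-4) with drop precedence m (1-3):
    three class bits, two drop-precedence bits, and a trailing zero,
    matching the bit patterns listed in Table 1."""
    assert 1 <= n <= 4 and 1 <= m <= 3
    return (n << 3) | (m << 1)

# Reproduces Table 1, e.g. AF11 = 001010 and AF43 = 100110
print(format(af_dscp(1, 1), "06b"))  # 001010
print(format(af_dscp(4, 3), "06b"))  # 100110
```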

One major group of services is that requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data: speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate. In other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed a rate that has been specified for the service the packets belong to. It is also required in the EF PHB specification that a network administrator must be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are all for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

Some things are common to both examples. First of all, policers need to be configured at traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than competing best-effort traffic and thus provides reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for BBE service

  Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
  AF13 mark   1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritization of AF13 over AF11 is done, is up to the provider.

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3. TCA specifications for Leased Line service

  Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
  EF mark     500 Kbps       Egress point B         Discard
  EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C, respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on both the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies focused on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed scheme decreased TCP impacts over the Assured Service. They, however, didn't consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP didn't realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation work, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to achieve fairness in sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth; in this case "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.
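A toy numerical sketch of the per-flow weakness (all numbers are invented for illustration): splitting the aggregate target rate equally among flows lets an idle flow strand its share, so part of the reserved bandwidth is never marked IN.

```python
# Per-flow marking gives each flow target/n of the aggregate target rate,
# regardless of whether the flow is actually active.
target = 1000.0                                  # aggregate target (kbps)
demands = {"f1": 600.0, "f2": 500.0, "f3": 0.0}  # f3 is idle

share = target / len(demands)                    # equal per-flow share
marked = {f: min(d, share) for f, d in demands.items()}
wasted = target - sum(marked.values())
print(marked)
print(f"unused reservation: {wasted:.1f} kbps")  # f3's share goes to waste
```

Per-aggregation marking avoids this waste by marking the aggregate as a whole, but the split among flows then depends on who sends fastest, which is exactly the unfairness described above.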

3.1 The Framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The ldquoTwo Markers Systemrdquo framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the summation of iwnd and ownd. Whenever congestion occurs in the network, TMS_I determines the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked, and so TMS_I decreases TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) maintains the marking rate more properly than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also adequately marks packets to realise the reservation rate of the source integrated with it, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to properly mark packets. So to speak, TMS_I can be not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and then the marker re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key element of the fairness strategy is the marking rate (i.e., the information on a flow's demand for bandwidth) of traffic flows from users. TMS_II re-marks (or marks) aggregated flows toward the total target rate of all the flows, based on the probability of marking from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10 The design of the Two Markers System architecture

3.2 The Crux of the Fairness Strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, …, n, and Cj represent the capacity of a bottleneck link j, j = 1, 2, …, m, in the network. The target rate of each source is a reservation rate (Ri) of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = Σ_{j=1}^{m} Cj, where j denotes a bottleneck link of the network. The excess bandwidth at this bottleneck link is then given by

    E = C - R = Σ_{j=1}^{m} Cj - Σ_{i=1}^{n} Ri        (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

    E = C - R = C1 - Σ_{i=1}^{n} Ri        (2)

The last case of service scope, tied to the receiver, however requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = Σ_{k=1}^{l} Σ_{i=1}^{n} Rik, where k, k = 1, 2, …, l, represents the kth ingress point. The excess bandwidth is then given by

    E = C - R = C - Σ_{k=1}^{l} Σ_{i=1}^{n} Rik        (3)
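Equations (1)-(3) reduce to simple sums over capacities and reservations; a sketch with invented numbers (function names are ours):

```python
def excess_bandwidth(link_capacities, reservations):
    """E = sum_j Cj - sum_i Ri, as in Eq. (1); for the edge-to-edge case
    of Eq. (2), pass a single bottleneck capacity."""
    return sum(link_capacities) - sum(reservations)

def excess_bandwidth_multi_ingress(capacity, reservations_per_ingress):
    """Eq. (3): receiver-tied scope, where reservation rates arrive
    grouped per ingress point k."""
    return capacity - sum(sum(rs) for rs in reservations_per_ingress)

print(excess_bandwidth([10.0], [2.0, 3.0, 1.0]))                  # 4.0
print(excess_bandwidth_multi_ingress(10.0, [[2.0], [3.0, 1.0]]))  # 4.0
```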


where the lth ingress point has to be aware of the reservation rates of as many as l-1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System can therefore be implemented under various SLA scopes, but in this thesis we assume that the scope of a service includes the first three cases and excludes the last, for the reason above. A flow's target rate at the edge of the network is given by

    Ti = Ri + Xmi × E = Ri + Xmi (C - Σ_{i=1}^{n} Ri)        (4)

where Xmi is proportional to the probability of marking at a source; that is, it is determined by the fraction between the marking rate of each flow and the average marking rate. The development of Xmi is as follows. First, the probability of marking for each flow i is the number of marked packets (Nmi) divided by the throughput (Thri), or equivalently the In-profile window divided by the congestion window:

    mi = Nmi / Thri = iwnd / cwnd        (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[mi] over all the flows at the edge of the network is

    E[mi] = (1/n) Σ_{i=1}^{n} mi = (1/n) Σ_{i=1}^{n} (iwndi / cwndi) = (1/n) Σ_{i=1}^{n} Markp,i        (6)

where Markp also represents the probability of marking in the TMS_I algorithm of Section 3.3. Finally, Xmi is developed using the fraction between (5) and (6); that is, Xmi
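Worked through with illustrative numbers (two flows, C = 10, all values invented), Eqs. (4)-(7) allot the excess bandwidth in proportion to each flow's marking rate:

```python
# Sketch of Eqs. (4)-(7): edge target rates from source marking rates.
# Flow state (iwnd, cwnd, R) and capacity C are illustrative numbers.
flows = {"f1": {"iwnd": 8, "cwnd": 10, "R": 2.0},
         "f2": {"iwnd": 2, "cwnd": 10, "R": 2.0}}
C = 10.0

m = {f: s["iwnd"] / s["cwnd"] for f, s in flows.items()}   # Eq. (5)
E_m = sum(m.values()) / len(m)                             # Eq. (6)
X = {f: mi / (len(m) * E_m) for f, mi in m.items()}        # Eq. (7)
excess = C - sum(s["R"] for s in flows.values())           # C - sum of Ri
T = {f: flows[f]["R"] + X[f] * excess for f in flows}      # Eq. (4)
print(X)   # the Xmi sum to 1, so the excess bandwidth is fully allotted
print(T)
```

Here f1, which marks 80% of its window IN, receives 80% of the 6-unit excess, so the target rates sum back to the capacity C.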


represents a relative value against the datum point E[mi]:

    Xmi = mi / (n E[mi]) = mi / Σ_{i=1}^{n} mi        (7)

The fraction Xmi varies from 0 to 1, depending upon mi and E[mi]. Consequently, this value stands for the excess bandwidth that each flow requires to complete its target rate.

3.3 Adequate-Marking Strategy: Two Markers System-I Algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, which is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources. That is to say, the TCP-source marking module keeps the marking rate up to match the bandwidth the source wants to have guaranteed.


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

In the starting point of the algorithm shown in Figure 11 how can we determine the initial probability of marking (Markp(init)) It is generally determined depending upon fraction between the measured (Thr) and target throughputs (Ti = Ri) as follows

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri)
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd
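Program code 1 can be transcribed into executable form as follows (a sketch: windows are modelled as floats, and the measured throughput `thr` and reservation rate `ri` are assumed to be supplied by the caller each time):

```python
class TMSIWindows:
    """In-profile / out-of-profile window bookkeeping of TMS_I."""

    def __init__(self, cwnd=1.0, markp=0.5, issthresh=16.0, ossthresh=16.0):
        self.cwnd, self.markp = cwnd, markp
        self.issthresh, self.ossthresh = issthresh, ossthresh

    def on_ack(self, thr, ri):
        """Adjust iwnd/ownd when new data is acknowledged (program code 1)."""
        iwnd = self.markp * self.cwnd
        ownd = self.cwnd - iwnd
        if thr < ri:  # below reservation rate: grow the IN-marked share
            iwnd += self.markp if iwnd < self.issthresh else 1 / (2 * self.cwnd)
            ownd += (1 - self.markp) if ownd < self.ossthresh else 1 / (2 * self.cwnd)
        else:         # target met: release the IN share to other flows
            if iwnd > 0:
                iwnd -= (1 - self.markp) if iwnd < self.issthresh else self.markp
            else:
                ownd = self.cwnd
            ownd += 1 if ownd < self.ossthresh else 1 / self.cwnd
        self.cwnd = iwnd + ownd
        self.markp = iwnd / self.cwnd
```

A flow below its reservation grows both windows; a flow above it shifts window share from IN to OUT, lowering its marking probability.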


Markp(init) = 1 − Thr/Ri        (8)

where Ri represents the initial rate that the user wants to have guaranteed in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, a TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput doesn't reach the source's target rate, the TMS_I increases the iwnd size by as much as the marking rate needed. Otherwise, it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the single window of the traditional congestion algorithm: the summation of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers the second TCP dynamic (the so-called boomerang effect), which is caused by excessively dropped IN-marked packets at the core of a network when the network connection is congested. If the throughput measured at a source, for instance, exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and keep IN-marked packets from being immoderately dropped at the core of the network. In fact, the TCP-source marking module (TMS_I) keeps the marking rate up more properly than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source doesn't achieve its target rate. On the other hand, because the TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.
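Expression (8) is then a one-line helper (clipping to [0, 1] is our added assumption, to keep the result a valid probability):

```python
def initial_marking_probability(thr, ri):
    # Mark_p(init) = 1 - Thr/R_i: the further the measured throughput falls
    # below the reservation rate, the more packets are initially marked IN.
    p = 1.0 - thr / ri
    return min(1.0, max(0.0, p))  # clip to [0, 1] (assumption, not in the thesis)
```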

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd/2
        ownd = ossthresh
    else
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the ith flow
Ti: target rate of the ith flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of an IN-marked packet indicates acute congestion in the network, and then the congestion window is halved. Therefore the modified TCP congestion algorithm compensates for the loss rates that are caused by the conventional congestion algorithm, which has no distinctive mechanism for reducing the congestion window.
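Program code 2 in the same executable sketch form (recomputing cwnd as iwnd + ownd after an OUT loss is our assumption; the pseudocode leaves it implicit):

```python
def on_triple_dup_ack(cwnd, markp, issthresh, ossthresh, out_loss):
    # Program code 2: an OUT loss halves only ownd (temporary congestion);
    # an IN loss halves the whole cwnd (acute congestion).
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if out_loss:
        ossthresh = ownd / 2
        ownd = ossthresh
        cwnd = iwnd + ownd  # assumption: cwnd tracks the reduced ownd
    else:  # IN-marked packet loss
        cwnd = cwnd / 2
        issthresh = markp * cwnd
        ossthresh = cwnd - issthresh
    return cwnd, issthresh, ossthresh
```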

The goals of the TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform the TMS_II of the probability of marking (Markp) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall numerical expression (7). We deal with the usage of the datum point in the edge-embedded marking module in section 3.4. Consequently, a strategy providing a datum point that includes user information is able to alleviate the problem of unfairness within aggregated flows under per-aggregation marking.

3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of sources in order to fairly share the bottleneck bandwidth. In this case it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II should measure the probability of marking and compute the expected value of the marking rates that are used in the fairness component, as stated in section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed from the sources.

We assume that an edge router receives information on the reservation rate that each user wants to be assured of, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of a network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall numerical expression (4). As stated in section 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the condition of the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy that is elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source, for example, exceeds the threshold value, that is, the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a situation of congestion, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a situation of congestion is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N−1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N−1, congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case, and it shows the mechanism by which the TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding on congestion, it measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput doesn't reach the target rate, the marker increases the marking rate by α, called a fairness factor. In this case the fairness factor α is computed as follows:


α = 1 − Thr_i/Ti. Otherwise, it decreases the marking rate by α to provide other flows with the chance of sharing the excess bandwidth.
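The α-based adjustment can be sketched as follows (taking the absolute value when the throughput exceeds the target is our assumption, so that the decrease step stays positive):

```python
def adjust_marking_rate(m_i, thr_i, t_i):
    # Fairness factor alpha = 1 - Thr_i/T_i; M_i += alpha*M_i below target,
    # M_i -= alpha*M_i above it.
    alpha = abs(1.0 - thr_i / t_i)
    return m_i * (1 + alpha) if thr_i < t_i else m_i * (1 - alpha)
```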

Fig 13 Program code 3 TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)
    for i = 1 to n
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)
    exit
else    /* iwnd(N) == iwnd(N-1)/2 */
    for i = 1 to n
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi: marking rate of the ith flow
Ti: target rate of the ith flow
Ri: reservation rate of the ith flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1
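Program code 3 as an executable sketch over a list of flow records (the dictionary layout is hypothetical; `ownd_halved` and `iwnd_halved` stand for the comparisons of the window sizes at times N and N−1):

```python
def tms2_update(flows, excess, ownd_halved, iwnd_halved):
    # flows: dicts with 'R' (reservation), 'Xm' (datum point),
    # 'Thr' (measured throughput), 'M' (marking rate).
    out = []
    if not ownd_halved:            # no congestion: fairness strategy
        for f in flows:
            t = f['R'] + f['Xm'] * excess
            a = abs(1.0 - f['Thr'] / t)          # fairness factor alpha
            out.append(f['M'] * (1 + a) if f['Thr'] < t else f['M'] * (1 - a))
    elif not iwnd_halved:          # congestion, but not acute: no re-marking
        out = [f['M'] for f in flows]
    else:                          # acute congestion: complement strategy
        for f in flows:
            b = 1.0 - f['Thr'] / f['R']          # complement factor beta
            out.append(f['M'] * (1 + b) if f['Thr'] < f['R'] else f['M'])
    return out
```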


Next, if the ownd size at time N is equal to half of the ownd size at time N−1 and the iwnd size at time N is not equal to half of the iwnd size at time N−1, congestion happens in the network but is not serious. The else-if clause in the eighth and ninth lines represents the instance in which the edge's marking rate is equal to the marking probability of the sources (the marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N−1, this means IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to consummate its target rate owing to the repeated reduction of sending rates through the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, a so-called complement factor, if the throughput doesn't realise the reservation rate of the user. In this instance the complement factor is β = 1 − Thr_i/Ri. Otherwise, the re-marking operation is not performed.

In short, sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates the problems of inefficient utilisation of the reserved bandwidth under per-flow marking and of unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-marked packets are treated by the RIO queue at the core of the network unlike what the sources anticipate. If the re-marked OUT packets (IN-OUT) that were IN at the sources, for example, are dropped at the core at a time of temporary congestion, the TCP sources, mistaking this for acute congestion, will halve the iwnd size. Therefore TMS_I does not alleviate the TCP dynamics over the assured service well. On the other hand, if the re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will only halve ownd. This retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is marked as OUT at the edge of the network but was IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides must not change, and the strategy that TMS_II elaborates has to be deliberated. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN. In terms of colors these correspond to red, yellow, and green respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window is not halved and the throughput measured at the edge of the network doesn't complete the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN. When an OUT-IN packet is dropped, the TCP source halves the ownd size. Next, it turns packets that were IN-marked at the sources into OUT if the congestion window is not halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by one step, and so it allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN; TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window is halved and the flow doesn't achieve its target rate yet, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, and OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. It tries to compensate for the loss rate occurring at the time of congestion by lowering the drop precedence of OUT by one step.
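The combination of the source's mark and the edge's re-marking decision can be summarised in a small helper (a sketch of the mapping just described):

```python
def drop_precedence(source_mark, edge_mark):
    # RIO drops DP2 first, then DP1, then DP0 (red, yellow, green).
    if source_mark == 'IN' and edge_mark == 'IN':
        return 'DP0'   # green: dropped last
    if source_mark != edge_mark:
        return 'DP1'   # yellow: IN-OUT or OUT-IN
    return 'DP2'       # red: dropped first
```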

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows won't change and will get the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be completed.

UDP flows in the TMS are dealt with as out-of-profile, to their disadvantage, while TCP flows adapt to network dynamics by having their marking rate adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP packets marked as OUT (red, DP2) but lowers the drop precedence of TCP packets by the scaling factors relative to them. Therefore it tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows had almost dominated. Consequently, TMS will compensate for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms we have described in the previous section. The simulation was done with the network simulator 2 (ns-2.1b8a). For the sake of simulation we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2      50      400      0.1
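With these parameters the standard RED linear drop ramp gives a per-codepoint drop probability (a sketch; queue averaging and ns-2's gentle-RED variant are not modelled):

```python
# RED parameters from Table 4: (min_th, max_th, Pmax) per drop precedence.
RED_PARAMS = {'DP0': (100, 400, 0.02),
              'DP1': (100, 400, 0.05),
              'DP2': (50, 400, 0.1)}

def drop_probability(dp, avg_queue):
    # Linear ramp from 0 at min_th to Pmax at max_th; drop all above max_th.
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return p_max * (avg_queue - min_th) / (max_th - min_th)
```

At any average queue length between the thresholds, DP2 packets see a strictly higher drop probability than DP1 and DP0, which realises the three-level precedence.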

First of all, we investigate the simulation results of the Adequate-Marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3Mbps; the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s); the last three stop at t = 350 (s). First, the interval from t = 10 (s) through t = 110 (s) represents an over-provisioned network, and five flows compete for the excess bandwidth of 5Mbps; marking is initially performed, but only temporarily, for sharing the excess bandwidth. Second, at runtime from t = 120 (s) to 210 (s), with soft congestion, seven flows try to achieve the bandwidth of 3Mbps ideally; marking in this interval is practically carried out so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220 (s) to t = 350 (s); all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the other periods. Last, in the interval from t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to the congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a); it shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is fewer than in the case of Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15 (b) mark each packet of a flow in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15: marking rate (Mbps) of flows 1-10 vs. simulation time (s); panel a) Time Sliding Window Two Color Marker, panel b) Two Markers System]

Fig 15 Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The network state changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics through governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM doesn't adequately regulate the marking rate, lagging behind the changes in the network load: rising slowly in response to an increased traffic load and falling slowly in response to a decreased traffic load. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as the increment of S2, and S1 doesn't fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in section 3.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment was performed at run time from t = 0 (s) to 500 (s). Figure 17 represents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without adequate marking rates for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. However, in Figure 17 (b), each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to the probability of its marking if the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Figure 16: total throughput and marking rate (Mbps) of S1 and S2 vs. simulation time (s); panel a) Time Sliding Window Two Color Marker, panel b) Two Markers System]

Fig 16 Marking packets through scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time.


[Figure 17: bandwidth (Mbps) of flows 1-10 for each aggregate flow 1-6; panel a) Time Sliding Window Two Color Marker, panel b) Two Markers System]

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis signifies the aggregate flows.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, the implementation of the TCP-source marking module at the source raises the issue of modifying TCP sources in a differentiated services network.

Table 5 The average throughput of aggregate flows as RTT increases

Aggregate flow (No.)         1        2        3        4        5        6
RTT (ms)                     80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)      4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)         5.2908   5.2207   5.2373   5.1944   5.2763   5.2858
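The numbers in Table 5 can be checked directly:

```python
# Average per-aggregate throughput from Table 5, against the 5 Mbps
# reservation rate of each aggregate flow.
tsw2cm = [4.8989, 4.9703, 4.9183, 4.9391, 4.9247, 4.9098]
tms = [5.2908, 5.2207, 5.2373, 5.1944, 5.2763, 5.2858]

def mean(xs):
    return sum(xs) / len(xs)

# Every TMS aggregate meets the 5 Mbps reservation; no TSW2CM aggregate does.
```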

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the sake of simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we make use of the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issues of sharing excess bandwidth according to the link utilisation. For the sake of the experiment we elaborated the strategy by increasing the TCP flows with a target rate of 1Mbps from sources 1 to 4 while each UDP flow is constant at 1Mbps, and the capacity of the bottleneck link turns from 8Mbps into 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σi Ti / C) × 100        (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval of ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100        (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
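Expressions (9) and (10) as helpers, checked against the section 4.2 scenarios (aggregate reservation of 6 Mbps over bottleneck capacities of 8 Mbps and 3 Mbps):

```python
def link_utilisation(target_rates, capacity):
    # rho(%) = sum(T_i) / C * 100  -- expression (9)
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(ex_bw, target):
    # Excess BW(%) = ExBW / T * 100 -- expression (10); 0% means the flow
    # achieves exactly its target rate.
    return 100.0 * ex_bw / target
```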


[Figure 19: excess bandwidth (%) of TCP (average) and UDP (average) vs. link utilization (%); panel a) Time Sliding Window Two Color Marker, panel b) Two Markers System]

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis signifies the link utilisation as a percentage.

However, since UDP sources flow traffic into the network without any penalties, they complete their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only dealt with OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore the flow is marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is percentage-wise almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results concerning the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is the TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps respectively. The setting of the RTT parameters follows the strategy of section 4.1, and the average throughputs of all the flows in both schemes are similar to each other. First, we take into consideration the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, thanks to the marking strategy that differs for UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to the probability of their marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, irrespective of congestion; that is, UDP unfairly gains at the expense of the TCP flows. However, Figure 20 (b) shows that bandwidth in the under-provisioned network is distributed more fairly (about 0.5Mbps each) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20: bandwidth (Mbps) of TCP1-4 and UDP1-2 in over-provisioned and under-provisioned networks; panel a) Time Sliding Window Two Color Marker, panel b) Two Markers System]

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved bandwidth and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied within a variable SLA scope. Two important issues have arisen in this setting, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the marking probability according to changes in the network, so that it marks packets in proportion to the reservation rates the flows wish to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy. Consequently, it mitigates the problems of unfairness in per-aggregate marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP had previously dominated and achieve its target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they fairly experienced degradation in proportion to their target bandwidth.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyo, 1999.

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), Apr. 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. of ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.



CHAPTER 2 BACKGROUND

2.1 TCP and Active Queue Management

It is important to avoid high packet loss rates in the Internet. When a packet is dropped before it reaches its destination, all of the resources it has consumed in transit are wasted. In extreme cases, this situation can lead to congestion collapse [7]. Loss rates are especially high during times of heavy congestion, when a large number of connections compete for scarce network bandwidth. With the explosion of the WWW, recent measurements have shown that the growing demand for network bandwidth has driven loss rates up across a number of congested links in the Internet [22].

Over the last decade, TCP congestion control has been used to effectively regulate the rates of individual connections sharing network links. TCP congestion control is window-based: the sender keeps a congestion window (cwnd) whose size limits the number of unacknowledged packets the sender can have outstanding in the network. Upon receiving acknowledgments for successfully transmitted data, the sender increases its transmission rate by incrementing the size of its congestion window. At some point, the rate at which TCP sends its packets exceeds the network's capacity to deliver them. When this happens, queues build up in the network routers and overflow, causing packets to be dropped. TCP assumes that all packet loss is due to congestion and reduces its congestion window upon detecting a loss. TCP's congestion control algorithm is fairly straightforward. When a connection starts up, it attempts to ramp up its sending rate quickly by exponentially increasing its congestion window until it reaches an implementation-specific value (ssthresh). This stage is called slow-start and allows the source to double its congestion window, and thus its sending rate, every round-trip time. In order to prevent excessive losses due to an exponentially increasing sending rate, TCP senders typically employ what is known as the congestion-avoidance algorithm [7], a modification to TCP first deployed in Reno variants of TCP. In this algorithm, TCP uses the ssthresh value to approximate the window size that the network can support. When the window size exceeds this threshold, TCP enters the congestion-avoidance phase, in which the window is increased at a much slower rate of one segment per round-trip time. When the offered load increases above network capacity, packets are eventually dropped. One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21]. Upon receiving a given number of duplicate acknowledgments, TCP infers that a packet loss has occurred, immediately reduces its sending rate in half by halving its congestion window, and sets ssthresh to the new value of the congestion window. These mechanisms are called fast retransmit and fast recovery.

When congestion is severe enough that packet loss cannot be inferred in such a manner, TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets. When a retransmission timeout occurs, TCP reduces its window size to one segment and retransmits the lost segment. To prevent continual retransmissions in times of severe congestion and network outages, TCP employs an exponential back-off algorithm. In particular, if the sender continually sends the same segment but receives no acknowledgments for it, TCP doubles its retransmission timeout interval. Upon receipt of an acknowledgment for a subsequent new segment, TCP resets the timeout interval and resumes its normal sending.
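The window rules described above (slow-start doubling, linear congestion avoidance, halving on duplicate ACKs, collapse to one segment on timeout) can be summarised in a toy model. This is only an illustrative sketch; the class and method names are ours, and real TCP implementations track bytes, timers and retransmission state that are omitted here.

```python
# Toy model of TCP congestion window evolution, in units of segments.
class TcpWindow:
    def __init__(self, ssthresh=16):
        self.cwnd = 1.0          # congestion window starts at one segment
        self.ssthresh = ssthresh # slow-start threshold

    def on_rtt_of_acks(self):
        """A full window of ACKs arrives (one round-trip time elapses)."""
        if self.cwnd < self.ssthresh:
            self.cwnd *= 2       # slow-start: double every RTT
        else:
            self.cwnd += 1       # congestion avoidance: +1 segment per RTT

    def on_dup_acks(self):
        """Fast retransmit/fast recovery after duplicate ACKs."""
        self.ssthresh = max(self.cwnd / 2, 2)
        self.cwnd = self.ssthresh  # halve the sending rate

    def on_timeout(self):
        """Retransmission timeout: window collapses to one segment."""
        self.ssthresh = max(self.cwnd / 2, 2)
        self.cwnd = 1.0

w = TcpWindow(ssthresh=16)
for _ in range(4):
    w.on_rtt_of_acks()       # slow-start: 1 -> 2 -> 4 -> 8 -> 16
print(w.cwnd)                # 16.0, ssthresh reached
w.on_rtt_of_acks()           # congestion avoidance: 16 -> 17
w.on_dup_acks()              # duplicate ACKs: window halves to 8.5
w.on_timeout()               # timeout: window back to 1.0
```

Running the model reproduces the sawtooth of Fig. 1: exponential growth up to ssthresh, linear growth afterwards, and a halving (or full collapse) on each loss event.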

Fig. 1 Throughput of a single TCP transfer

Figure 1 shows a graphical picture of how TCP slow-start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion the TCP sender enters the congestion-avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP then oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One such modification is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender which segments it is missing. With this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger TCP sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and the sender should thus be allowed to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The New Reno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern, in which the window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion-avoidance phase so that it performs its linear increase only when the network is not congested. In these algorithms, if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential for improving loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections that are multiplexed across the link.

Fig. 2 TCP/IP headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem, since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the congestion experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes its congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver in order to signal that it has in fact reduced its sending rate.
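The router-side decision described above can be written as a few bit operations on the TOS/DS octet. This is a minimal sketch; we assume the two low-order bits of the octet carry ECT and CE, as in the ECN proposal of the period, and the function name is ours.

```python
# Router-side ECN handling on the IPv4 TOS/DS octet (sketch).
# Assumed bit layout: ECT in one reserved bit, CE in the other.
ECT = 0b10   # ECN-Capable Transport bit
CE  = 0b01   # Congestion Experienced bit

def on_congestion(tos):
    """Return (new_tos, dropped) for a packet arriving during congestion."""
    if tos & ECT:                  # flow understands ECN:
        return tos | CE, False     #   set CE and forward the packet
    return tos, True               # legacy flow: the packet is dropped

marked, dropped = on_congestion(ECT)   # ECN-capable packet gets CE set
assert marked & CE and not dropped
_, dropped = on_congestion(0)          # non-ECN packet is simply dropped
assert dropped
```

The receiver would then reflect CE back in the ACK via the ECN-echo TCP flag, and the sender would answer with CWR after halving its window, as the text describes.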

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, allowing them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses them to determine whether to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and it increases linearly to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.
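The probability curve of Fig. 3, together with the EWMA update, takes only a few lines. This is a sketch of the basic profile only; real RED implementations also count packets since the last drop to spread marks out, which is omitted here.

```python
# RED marking/dropping probability as a function of the EWMA queue length:
# 0 below min_th, linear up to max_p as avg approaches max_th, 1 above max_th.
def red_probability(avg, min_th=5.0, max_th=15.0, max_p=0.1):
    if avg <= min_th:
        return 0.0
    if avg >= max_th:
        return 1.0                         # beyond max_th everything is marked
    return max_p * (avg - min_th) / (max_th - min_th)

# Standard EWMA update applied on each packet arrival: avg <- (1-w)*avg + w*q.
def ewma(avg, qlen, w=0.002):
    return (1.0 - w) * avg + w * qlen

assert red_probability(4.0) == 0.0                       # below min_th
assert abs(red_probability(10.0) - 0.05) < 1e-12         # halfway: max_p / 2
assert red_probability(20.0) == 1.0                      # above max_th
```

The small default weight w keeps the average insensitive to short bursts, so only sustained queue build-up pushes the marking probability up.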

Fig. 3 The marking/dropping behavior of RED

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour desired. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs), and on a per-packet basis to aggregates of traffic streams in the nodes of DS-capable networks. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies the service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. Different types of service will not be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig. 4 Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e., to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig. 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. Formed Service Level Agreements (SLAs) specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules, which are applied to traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e., nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. A traffic conditioner is an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e., services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs do not by themselves define services, but they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain; the latter handle traffic leaving the domain. In practice, the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig. 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e., their interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g., low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services but rather the best-effort service, and possibly other service provisioning approaches.

Fig. 5 A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g., domain 1 in Fig. 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig. 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from those in a non-DS-capable network.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream down to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The backbone DS router shown always performs policing on incoming traffic, as it cannot trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2 and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and on how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (a policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains the service traffic has to cross. However, difficulties easily arise, as providers can freely decide which services are available in their networks, which in turn results in situations where the possible service level in some domain does not match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig. 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv does not cause serious impacts on existing applications.

Fig. 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner to the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling with respect to Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
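Since bits 0-5 are the most significant six bits of the octet, extracting or rewriting the DSCP is a pair of shift-and-mask operations. A small sketch (the function names are ours; the EF codepoint value 46 used in the example is the standardized one):

```python
# The DSCP occupies the top six bits of the TOS/DS octet; the remaining
# two low-order bits are left untouched by a DSCP re-marker.
def get_dscp(tos):
    return (tos >> 2) & 0x3F

def set_dscp(tos, dscp):
    return (tos & 0x03) | ((dscp & 0x3F) << 2)

tos = 0b10111000                 # octet carrying DSCP 46 (Expedited Forwarding)
assert get_dscp(tos) == 46
assert set_dscp(0, 46) == 0b10111000   # re-marking a cleared octet
```

A boundary-node marker would apply set_dscp to every packet of an aggregate; an interior node only needs get_dscp to select the PHB.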

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but this is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is the part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e., service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig. 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.

Fig. 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
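A common way for a meter to decide in-profile versus out-of-profile is a token bucket: tokens accrue at the contracted rate up to a burst limit, and a packet conforms if enough tokens are available. This is a hypothetical sketch of that idea, not the thesis's meter; the class and parameter names are ours.

```python
# Minimal token-bucket meter: a packet is in-profile if the bucket holds
# enough tokens (bytes); tokens refill at `rate` bytes/s up to `depth`.
class TokenBucket:
    def __init__(self, rate, depth):
        self.rate = rate          # contracted rate, bytes per second
        self.depth = depth        # maximum burst size, bytes
        self.tokens = depth       # bucket starts full
        self.last = 0.0           # time of the previous packet

    def conforms(self, now, pkt_bytes):
        """True if a pkt_bytes-sized packet at time `now` is in-profile."""
        elapsed = now - self.last
        self.tokens = min(self.depth, self.tokens + elapsed * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True           # in profile: marker leaves the DSCP as-is
        return False              # out of profile: re-mark, delay or drop

tb = TokenBucket(rate=1000, depth=1500)   # 1 kB/s with a one-MTU burst
assert tb.conforms(0.0, 1500)             # initial burst is allowed
assert not tb.conforms(0.5, 1000)         # only 500 tokens have accrued
assert tb.conforms(1.5, 1000)             # 1500 tokens available by t = 1.5 s
```

The meter's boolean result is exactly the "state information" passed to the marker/shaper above: conforming packets keep their codepoint, non-conforming ones are re-marked, delayed by the shaper, or handed to a dropper.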

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic to conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic cannot be assumed to conform to the TCA; rather, policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore, a limited number of conditioning rules (i.e., some but not all of the TC components in Fig. 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay, and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides which services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours required for implementing the services, and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set on packets that are meant to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to the standardized ones, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may serve a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig. 6) is limited, and room is left for future usage, the codepoint mappings can be made freely by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. The PHB specifications themselves give behavioural characteristics instead of implementation guidelines; this leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity, and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one that guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented earlier. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv cannot be taken into use if it does not provide backward compatibility; the Default PHB exists for that purpose. Its purpose is to provide the best-effort behaviour that current routers already perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e., best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that does not conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB receives the following kind of service: the network delivers as many of these packets as possible, as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which do not employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specification of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or locally used PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector codepoints (see Fig. 8). Compatibility for the 'D', 'T', and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig. 8. Usage of the DS field in the Class Selector PHB.
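The Class Selector codepoints are simply the three precedence bits followed by three zero bits. A small illustrative sketch (not from the thesis; the function name is invented) derives all eight reserved codepoints:

```python
def class_selector_dscp(precedence):
    """Map a 3-bit IPv4 precedence value (0-7) to its Class Selector
    codepoint 'xxx000': the precedence bits followed by three zero bits."""
    if not 0 <= precedence <= 7:
        raise ValueError("precedence is a 3-bit field")
    return precedence << 3

# The eight reserved codepoints, as 6-bit strings:
codepoints = [format(class_selector_dscp(p), '06b') for p in range(8)]
print(codepoints)  # ['000000', '001000', ..., '111000']
```

Note that precedence 0 yields 000000, which is exactly the Default PHB codepoint, consistent with the compatibility requirement stated above.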

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies, especially, use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurance and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets of each class are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(i) how much forwarding resources have been allocated to the AF class that the packet belongs to (packets in one class are forwarded independently from the others, within the service rate, i.e. bandwidth, configured for the class);
(ii) the current load of the AF class; and
(iii) in case of congestion within the class, the drop precedence of the packet (packets with a higher drop precedence value are discarded preferentially).

Table 1. The four AF classes and the related three drop precedence values.

                             Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)      001010    010010    011010    100010
Medium Drop Precedence (2)   001100    010100    011100    100100
High Drop Precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
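The AFnm codepoints in Table 1 follow a fixed bit layout: three class bits, two drop-precedence bits, and a trailing zero. A hedged sketch (the function name is invented for illustration) that reproduces Table 1:

```python
def af_dscp(af_class, drop_prec):
    """AFnm codepoint: 3 class bits, 2 drop-precedence bits, trailing 0.
    E.g. AF23 = class 010, drop 11, zero -> 010110."""
    assert 1 <= af_class <= 4 and 1 <= drop_prec <= 3
    return (af_class << 3) | (drop_prec << 1)

# Reproduce Table 1, one row per drop precedence:
for m in (1, 2, 3):
    print([format(af_dscp(n, m), '06b') for n in (1, 2, 3, 4)])
# ['001010', '010010', '011010', '100010']
# ['001100', '010100', '011100', '100100']
# ['001110', '010110', '011110', '100110']
```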

One major group of services is the one requiring assured bandwidth with low loss, low latency, and low jitter on an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data: speech, video, or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency, and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that at every transit node the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.
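Meeting the EF departure-rate requirement is typically done with strict-priority scheduling, so that EF packets never wait behind other aggregates. A minimal illustrative sketch (the queue contents are invented):

```python
from collections import deque

ef_queue, be_queue = deque(), deque()

def dequeue():
    """Strict priority: EF packets always depart before best-effort ones,
    so the EF aggregate sees no queueing behind other traffic."""
    if ef_queue:
        return ef_queue.popleft()
    if be_queue:
        return be_queue.popleft()
    return None

be_queue.extend(['be1', 'be2'])   # best-effort packets already waiting
ef_queue.append('ef1')            # an EF packet arrives afterwards
order = [dequeue() for _ in range(3)]
print(order)  # ['ef1', 'be1', 'be2']
```

Strict priority alone would let EF starve other classes; this is why the boundary conditioners must bound the EF arrival rate, as the text explains.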

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface;
b) from ingress interface A to a set of egress interfaces;
c) between interface A and some egress interface B.

Some things are common to both examples. First of all, policers need to be configured at traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus provides reduced latency. Quantitative performance measures cannot be given for BBE, as it is clearly a qualitative service whose behaviour depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for the BBE service.

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritizing of AF13 over AF11 is done, is up to the provider.
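Combining the meter and marker ideas, the BBE conditioner of Table 2 can be sketched as follows (illustrative Python, not from the thesis; the bucket depth is an assumption). Traffic within the 1 Mbps profile keeps the AF13 mark, and excess traffic is re-marked AF11:

```python
AF13, AF11 = 0b001110, 0b001010  # codepoints from the AF table

def condition(packets, rate_Bps=125_000, bucket=10_000):
    """Mark packets AF13 while within the 1 Mbps (125,000 B/s) profile;
    re-mark excess traffic AF11, per the BBE TCA above."""
    tokens, last, out = bucket, 0.0, []
    for t, size in packets:                      # (arrival time, bytes)
        tokens = min(bucket, tokens + (t - last) * rate_Bps)
        last = t
        if size <= tokens:
            tokens -= size
            out.append((t, AF13))                # in profile
        else:
            out.append((t, AF11))                # excess: re-marked
    return out

marks = condition([(0.0, 8000), (0.0, 8000), (0.001, 8000)])
print([format(m, '06b') for _, m in marks])  # ['001110', '001010', '001010']
```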

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability, up to a negotiated rate. Above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications for this service.

Table 3. TCA specifications for the Leased Line service.

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
EF mark      500 Kbps       Egress point B         Discard
EF mark      250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B, and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C, respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on both the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.

2.3 Related Work

Recently, there has been a number of simulation studies on marking and dropping policies, focused on RED-based differentiated services approaches.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. Their results showed that the proposed scheme decreased TCP impacts on the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet employing the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets so as to take TCP dynamics into account and adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standard used for the marking [13]. A marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth; in this case, "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays different roles in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9. The "Two Markers System" framework.

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the component of the fairness strategy, viz. the marking rate, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the summation of iwnd and ownd. TMS_I selects the window (iwnd or ownd) and reduces its size according to whether the dropped packets were IN-marked or OUT-marked whenever congestion occurs in the network; TMS_I thus decreases TCP impacts on the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) keeps up the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I can be not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner (as shown in Figure 10), monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. For this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key element of the fairness strategy is the marking rate (i.e., the information on a flow's demand for bandwidth) of traffic flows from users. TMS_II re-marks (or marks) the aggregated flows against the total target rate of all the flows, based on the probability of marking at the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow according to how much it requires.

Fig. 10. The design of the Two Markers System architecture.

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let R_i denote the reservation rate of flow i, i = 1, 2, ..., n, and C_j the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is a reservation rate (R_i), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (T_i) at the edge of a network is the reservation rate (R_i) plus a variable value, the fairness component (X_mi), applied to the excess bandwidth (E). We should also consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability point of view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point;
(ii) all traffic from ingress point S to a set of egress points;
(iii) all traffic between ingress point S and egress point D;
(iv) all traffic between any ingress point and egress point D.

The capacity of the network in the first three cases of service scope above is C = \sum_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth over these bottleneck links is then given by

    E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i        (1)

In the edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

    E = C - R = C - \sum_{i=1}^{n} R_i        (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}, where k, k = 1, 2, ..., l, denotes the kth ingress point, and the excess bandwidth is given by

    E = C - R = C - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}        (3)


where the lth ingress point has to be aware of the reservation rates of as many as l - 1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore, although the Two Markers System can be implemented under various SLA scopes, for the reason above we assume that the scope of a service in this thesis covers the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

    T_i = R_i + X_{mi} \times E = R_i + X_{mi} (C - \sum_{i=1}^{n} R_i)        (4)

where X_{mi} is proportional to the probability of marking at a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of X_{mi} is as follows. First, the probability of marking for each flow i is the number of marked packets (N_{mi}) divided by the throughput (Thr_i), or, equivalently, the In-profile window divided by the congestion window:

    m_i = N_{mi} / Thr_i = iwnd / cwnd        (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] over all the flows at the edge of the network is

    E[m_i] = (1/n) \sum_{i=1}^{n} m_i = (1/n) \sum_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) \sum_{i=1}^{n} Mark_{p,i}        (6)

where Mark_p also denotes the probability of marking in the TMS_I algorithm of Section 3.3. Finally, X_{mi} is developed from the ratio between (5) and (6); that is, X_{mi}, given in (7),


represents a relative value against the datum point E[m_i]; its value varies from 0 to 1 depending upon m_i and E[m_i]. Consequently, this value stands for the excess bandwidth that each flow requires to achieve its target rate.

3.3 Adequate-Marking strategy: the Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate guaranteed by the contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some earlier work [2, 8] has focused on similar issues in networks where the marking policies differ from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate as high as the bandwidth the source wants guaranteed requires.

    X_{mi} = m_i / (n E[m_i]) = m_i / \sum_{i=1}^{n} m_i        (7)
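Equations (4)-(7) can be checked numerically. The sketch below (flow values invented for illustration) computes the fairness shares X_mi and the edge target rates T_i for the edge-to-edge case of Eq. (2):

```python
def fairness_components(marking_rates, reservations, capacity):
    """X_mi = m_i / sum(m_i), per Eq. (7), and T_i = R_i + X_mi * E,
    per Eq. (4), with excess bandwidth E = C - sum(R_i) as in Eq. (2)."""
    E = capacity - sum(reservations)
    total_m = sum(marking_rates)
    X = [m / total_m for m in marking_rates]
    T = [R + x * E for R, x in zip(reservations, X)]
    return X, T

# Three flows with marking rates m_i (= iwnd/cwnd), reservations R_i in Mbps,
# and a 10 Mbps bottleneck, so E = 10 - 6 = 4 Mbps of excess bandwidth.
X, T = fairness_components([0.5, 0.3, 0.2], [3.0, 2.0, 1.0], 10.0)
print([round(x, 3) for x in X])  # [0.5, 0.3, 0.2] -- shares sum to 1
print([round(t, 3) for t in T])  # [5.0, 3.2, 1.8]
```

Note that the X_mi sum to 1, so the target rates T_i together exhaust exactly the bottleneck capacity, which is the point of the fairness component.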


Fig. 11. Program code 1: TMS_I algorithm (cwnd open).

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking (Mark_p(init))? It is generally determined from the ratio between the measured throughput (Thr) and the target throughput (T_i = R_i), as follows:

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)  iwnd += Markp
        else                   iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)  ownd += (1 - Markp)
        else                   ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)  iwnd -= 1 - Markp
            else                   iwnd -= Markp
        } else {
            ownd = cwnd
        }
        if (ownd < ossthresh)  ownd += 1
        else                   ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd
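For experimentation, Program code 1 can be rendered as runnable Python. This is a sketch, not the thesis implementation: it follows the pseudocode above under the reading that the congestion-avoidance increments are 1/(2·cwnd) per window, and the variable names mirror the thesis:

```python
def tms1_on_ack(cwnd, mark_p, thr, R, issthresh, ossthresh):
    """One ACK-time update of the two windows, following Program code 1."""
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if thr < R:                               # below reservation: grow both shares
        iwnd += mark_p if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - mark_p) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                                     # at/above reservation: yield IN share
        if iwnd > 0:
            iwnd -= (1 - mark_p) if iwnd < issthresh else mark_p
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd                  # new cwnd and new marking probability

cwnd, mark_p = 10.0, 0.4
for _ in range(5):                            # throughput persistently below R
    cwnd, mark_p = tms1_on_ack(cwnd, mark_p, thr=0.8, R=1.0,
                               issthresh=20, ossthresh=20)
print(cwnd > 10.0)  # True: the window opens while the source is below its reservation
```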


where Mark_p(init) represents the initial marking rate that the user wants, in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate requires; otherwise, it increases the ownd size so as to allow the other sources' traffic, or best-effort traffic, to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e., iwnd and ownd, they act like the single window of the traditional congestion algorithm: the summation of iwnd and ownd is cwnd. Therefore, an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the summation of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic; it also addresses a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when a connection of the network is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) keeps up the marking rate better than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking this for an acute congestion

    Mark_p(init) = 1 - R_i / Thr        (8)


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.

Fig. 12. Program code 2: TMS_I algorithm (cwnd close).

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd/2
        ownd = ossthresh
    } else {
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

iwnd:      In-profile window
ownd:      Out-of-profile window
cwnd:      congestion window
Markp:     the probability of marking
Thr:       throughput of a flow
Ri:        reservation rate of the ith flow
Ti:        target rate of the ith flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half its size whenever an OUT packet loss occurs; this indicates temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion in the network, and then the congestion window is reduced to half its size. The modified TCP congestion algorithm therefore compensates for the loss rates caused by the conventional congestion algorithm, which lacks such a discriminating mechanism for reducing the congestion window.

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) that is needed to determine the new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of this datum point in the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that includes per-user information is able to alleviate the unfairness introduced within aggregated flows by per-aggregation marking.

3.4 Marking rate-based fairness strategy: the Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to fairly share the bottleneck bandwidth. For this it closely cooperates with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which are used in the fairness component as stated in Section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represent the target rates each flow wants to have guaranteed from the sources.

We assume that an edge router receives information on the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall expression (4). As stated in Section 3.2, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking at the sources if the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents the increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, i.e. the average marking rate E[m_i], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate; it is therefore marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to detect congestion, the other is to ensure coherence with the Adequate-marking strategy. A congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion that is not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of Program code 3 describes this case and shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding that there is no congestion, it measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy. If the throughput does not reach the target rate, the marker increases the marking rate by α, called the fairness factor; in this case the fairness factor is computed as α = 1 − Thr_i / Ti. Otherwise it decreases the marking rate by α, to give other flows the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: the TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)
    for (i = 1 to n)
        Ti = Ri + (X(mi) * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)
    exit
else    // iwnd(N) == iwnd(N-1)/2
    for (i = 1 to n)
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi: marking rate of the ith flow
Ti: target rate of the ith flow
Ri: reservation rate of the ith flow
X(mi): marking-probability share of the ith flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1
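Rendered as runnable Python, Program code 3 reads as below. This is an illustrative sketch: the window measurements, the throughput meter and the share values X are assumed to be supplied by the TMS_II meter, and all names are ours.

```python
def tms2_remark(ownd, ownd_prev, iwnd, iwnd_prev, thr, R, M, X, E):
    """One pass of the TMS_II re-marking algorithm (Program code 3).

    ownd/iwnd are the newly measured OUT/IN windows and *_prev the
    previous measurements; thr[i] is the measured throughput, R[i] the
    reservation rate, M[i] the marking rate and X[i] the marking-
    probability share of flow i; E is the excess bandwidth.
    """
    if ownd != ownd_prev / 2:                 # no congestion
        for i in range(len(M)):
            T = R[i] + X[i] * E               # target rate includes excess share
            alpha = abs(1 - thr[i] / T)       # fairness factor
            if thr[i] < T:
                M[i] += alpha * M[i]          # mark more to reach the target
            else:
                M[i] -= alpha * M[i]          # yield excess bandwidth to others
    elif iwnd != iwnd_prev / 2:               # congestion, but not acute:
        pass                                  # edge rate matches sources, no re-marking
    else:                                     # iwnd == iwnd_prev/2: acute congestion
        for i in range(len(M)):
            T = R[i]                          # target rate falls back to reservation
            if thr[i] < T:
                beta = 1 - thr[i] / R[i]      # complement factor
                M[i] += beta * M[i]
            else:
                break
    return M
```

For example, a flow reaching only 2 Mbps of a 5 Mbps target under no congestion has its marking rate raised by α = 0.6, while under acute congestion a flow below its reservation rate is boosted by β instead.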


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion occurs in the network but is not serious. The else-if clause in the eighth and ninth lines represents the case in which the edge's marking rate is equal to the marking probability of the sources (the marking information), so no re-marking is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, in the tenth through sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to achieve its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, called the complement factor, if the throughput does not realise the reservation rate of the user. In this case the complement factor is computed as β = 1 − Thr_i / Ri. Otherwise no re-marking operation is performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. We can therefore say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. These re-marked packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking this for acute congestion, will halve the iwnd size; the re-marking process alone therefore does not alleviate TCP dynamics over assured service well. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve only the ownd, which retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked OUT at the edge of the network but IN at the source, and an OUT-IN packet is the counterpart. We assume that the drop precedence the source decides must not change, so the strategy that TMS_II elaborates has to be deliberate. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow and green respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window is not reduced to half of its size and the throughput measured at the edge of the network does not meet the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to claim the remaining resources. In this instance the three drop precedences IN, OUT-IN and OUT flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is dropped, the TCP source halves its ownd size. Next, TMS_II turns packets IN-marked at the sources into OUT if the congestion window is not reduced to half of its size and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT and OUT. TMS_II lowers the drop precedence of such packets one step towards OUT, allowing other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window is equal to half of its size and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN and RIO deals with the three drop precedences IN, OUT-IN and OUT at the core of the network; when an OUT-IN packet is dropped, TMS_I halves the ownd size. TMS_II thus tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT packets one step.
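The three re-marking situations above can be sketched as a per-packet decision. This is an illustrative reading of the text: the function names and the exact trigger conditions are our assumptions.

```python
# Mapping of (source mark, edge mark) to the three drop precedences.
DROP_PRECEDENCE = {
    ("IN", "IN"):   "DP0",  # green: dropped last
    ("IN", "OUT"):  "DP1",  # yellow: IN-OUT (demoted at the edge)
    ("OUT", "IN"):  "DP1",  # yellow: OUT-IN (promoted at the edge)
    ("OUT", "OUT"): "DP2",  # red: dropped first
}

def remark(source_mark, thr, target, window_halved):
    """Return the drop precedence assigned to one packet of a flow.

    OUT packets are promoted to IN when the flow misses its target
    rate; IN packets are demoted to OUT when the target is already
    met and no congestion is observed; under congestion only the
    compensating OUT->IN promotion is performed.
    """
    if not window_halved:                # congestion window not halved
        edge = "IN" if thr < target else "OUT"
    else:                                # window halved: compensate losses only
        edge = "IN" if thr < target else source_mark
    return DROP_PRECEDENCE[(source_mark, edge)]
```

The key invariant is that a re-marked packet never jumps two precedence levels: it always lands in the yellow class DP1, so the source's original decision is still visible to the RIO queue.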

3.6 The issue of interaction between TCP and UDP flows

In this section we consider the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class in an over-provisioned network, TCP flows will reduce their rates on packet drops while UDP flows will not change and will capture the entire excess bandwidth, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than the non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile, to put them at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP packets, marked as OUT (red, DP2), while lowering the drop precedence of TCP packets by the scaling factors. It thereby tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows previously dominated. Consequently TMS compensates for the grave performance degradation that network dynamics bring about.
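The protocol-dependent treatment can be condensed into a single precedence rule; a minimal sketch, with names of our choosing:

```python
def edge_precedence(proto, source_mark, edge_mark):
    """Drop precedence assigned at the edge (illustrative sketch).

    UDP packets are always treated as out-of-profile (DP2, red),
    while TCP packets keep the three-level precedence of Section 3.5.
    """
    if proto == "UDP":
        return "DP2"                 # non-responsive traffic: dropped first
    if source_mark == edge_mark:     # mark unchanged by re-marking
        return "DP0" if source_mark == "IN" else "DP2"
    return "DP1"                     # IN-OUT or OUT-IN: yellow
```

Under this rule a congested RIO queue discards UDP traffic before any re-marked (DP1) or in-profile (DP0) TCP traffic, which is exactly the asymmetry the TMS relies on.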


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present simulation results for the two marking algorithms described in the previous chapter. The simulations were done with the network simulator ns-2 (version 2.1b8a), using a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: a non-oversubscribed and an oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or kept fixed below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets and 0.02 for min_in, max_in and Pmax_in, and 50 packets, 400 packets and 0.1 for min_out, max_out and Pmax_out. The RED parameters of the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
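A minimal sketch of how these settings shape the per-precedence drop curve (plain linear RED on the average queue length; the EWMA averaging with queue weight 0.002 is omitted for brevity):

```python
# Per drop precedence: (min_th, max_th, Pmax), as in Table 4.
RED_PARAMS = {
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50, 400, 0.1),
}

def drop_probability(avg_queue, dp):
    """Linear RED drop probability for the given drop precedence."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0                       # below min_th: never drop
    if avg_queue >= max_th:
        return 1.0                       # above max_th: always drop
    return p_max * (avg_queue - min_th) / (max_th - min_th)

# At the same average queue length, DP2 (red) packets are dropped
# far more aggressively than DP0 (green) packets:
print(drop_probability(250, "DP0"), drop_probability(250, "DP2"))
```

The lower min_th and higher Pmax of DP2 are what make OUT-marked (and therefore all UDP) traffic the first casualty of a growing queue.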

First of all we investigate the simulation results of the Adequate-marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps, and the capacity of the bottleneck link set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows (flow6 and flow7) at t = 120 s, and the last three flows (flow8 through flow10) at t = 220 s, which stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps; marking is initially performed only temporarily, for sharing the excess bandwidth. Second, in the interval from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps each; marking in this interval is practically carried out so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220 s to t = 350 s: all the flows do their best to meet their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in Figure 15 (a). The lower flows in graphs (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking: while all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15: two plots of marking rate (Mbps) versus simulation time (s) for flows 1-10: a) Time Sliding Window Two Color Marker; b) Two Markers System.]

Fig. 15. Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate and lags behind changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of resources by the amount of the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, i.e. an under-subscribed network. The experiment ran from t = 0 s to 500 s. Figure 17 presents the degree to which the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to the probability of its marking whenever the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Figure 16: two plots of bandwidth (Mbps) versus simulation time (s), showing S1 (total throughput), S2 (total throughput) and S1 (marking rate): a) Time Sliding Window Two Color Marker; b) Two Markers System.]

Fig. 16. Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.

[Figure 17: two bar charts of bandwidth (Mbps) for flows 1-10 across six aggregate-flow configurations: a) Time Sliding Window Two Color Marker; b) Two Markers System.]

Fig. 17. Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flows.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5. The average throughput of the aggregate flows as RTT increases

Aggregate flow (No.)        1        2        3        4        5        6
RTT (ms)                    80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)     4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)        5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present simulation results for TCP and UDP traffic in the Two Markers System described in the previous chapter. For the simulation we used a network with the configuration shown in Figure 18. In this experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for nodes 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined for nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1 Mbps. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of Section 4.1.


Fig. 18. Simulation topology II using the Two Markers System

First of all we investigate the simulation results for the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increase the number of TCP flows with a target rate of 1 Mbps (sources 1 to 4) while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σ Ti / C) × 100    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 % to 200 %, as shown in Figure 19, represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)

Therefore, an excess bandwidth of 0 % denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.
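Equations (9) and (10) translate directly into code; the flow counts and capacities below are taken from the Figure 18 setup, while the function names are ours:

```python
def link_utilisation(targets, capacity):
    """Link utilisation rho (%) as in equation (9)."""
    return 100.0 * sum(targets) / capacity

def excess_bw_percent(throughput, target):
    """Excess bandwidth (%) of a flow as in equation (10),
    taking ExBW = throughput - target, so 0 % means the flow
    sits exactly at its target rate."""
    return 100.0 * (throughput - target) / target

# Six 1 Mbps targets (4 TCP + 2 UDP) on the 8 Mbps and 3 Mbps links:
print(link_utilisation([1.0] * 6, 8.0))  # over-provisioned: 75.0 %
print(link_utilisation([1.0] * 6, 3.0))  # under-provisioned: 200.0 %
```

Shrinking the bottleneck from 8 Mbps to 3 Mbps is thus what sweeps ρ from 75 % to 200 % along the horizontal axis of Figure 19.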

[Figure 19: two plots of excess bandwidth (%) versus link utilization (%), showing TCP (average) and UDP (average): a) Time Sliding Window Two Color Marker; b) Two Markers System.]

Fig. 19. Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation as a percentage.

However, since the UDP sources inject traffic into the network without any penalty, they meet their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume the following: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP packets and the TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows for link utilisation ρ = 60 % and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results of the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the degree to which the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The setting of the RTT parameters follows subsection 4.1, and the average throughputs of all the flows in both schemes are again similar to each other. First we consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. The TCP flows in Figure 20 (b), on the other hand, fairly share the excess bandwidth of about 0.3 Mbps, owing to the different marking strategy applied against UDP flows; all the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to the probability of their marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that in the under-provisioned network the bandwidth is distributed more fairly (about 0.5 Mbps each) than in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20: two bar charts of bandwidth (Mbps) for TCP1-TCP4 and UDP1-UDP2 under over-provisioned and under-provisioned conditions: a) Time Sliding Window Two Color Marker; b) Two Markers System.]

Fig. 20. Throughput according to network conditions: the vertical axis represents the target rate and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen here, and much research has been performed on them to date: one is TCP dynamics over the AF PHB in a diffserv network, the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and to allocate the bandwidth of a bottleneck link fairly to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to changes in the network, so it marks packets in proportion to the reservation rates the users desire to guarantee. It also notifies TMS_II, the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, the marking rate-based fairness strategy. Consequently it mitigates the problems of unfairness under per-aggregation marking and of inefficient utilisation under per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduces TCP impacts over assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP flows had almost dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under variable RIO parameters.

53

REFERENCES

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.
[2] W. Feng, D. Kandlur, S. Saha, and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.
[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.
[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.
[5] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.
[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.
[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.
[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.
[9] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.
[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.
[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.
[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.
[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.
[14] N. Seddigh, B. Nandy, and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.
[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.
[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.
[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.
[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.
[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö, 1999.
[20] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.
[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.
[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.
[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.
[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.
[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proceedings of ACM SIGCOMM, Oct. 1994, pp. 24-35.
[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.
[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," ACM/IEEE Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.
[28] A. Feroz, A. Rao, and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.


dropped One way in which TCP detects a packet loss is through the receipt of a number of duplicate cumulative acknowledgments from the receiver [21] Upon receiving a given number of duplicate acknowledgments TCP infers that a packet loss has occurred and immediately reduces its sending rate in half by halving its congestion window and sets ssthresh to the new value of the congestion window These mechanisms are called fast retransmit and fast recovery

When congestion is severe enough such that packet loss cannot be inferred in such a manner TCP relies on a separate retransmission timeout mechanism to trigger subsequent retransmissions of lost packets When a retransmission timeout occurs TCP reduces its window size to one segment and retransmits the lost segment To prevent continual retransmissions in times of severe congestion and network outages TCP employs an exponential back-off algorithm In particular if the sender continually sends the same segment but receives no acknowledgments for it TCP doubles its re-transmission timeout interval Upon receipt of an acknowledgment for subsequent new segment TCP resets the timeout interval and resumes its normal sending

Fig. 1. Throughput of a Single TCP Transfer

Figure 1 shows a graphical picture of how TCP slow start and congestion avoidance work. As the figure shows, TCP initially starts with a congestion window of 1. The window is then doubled every round-trip time. When the congestion window reaches ssthresh, TCP slows its rate of increase. Eventually, when the transmission rate of the connection overwhelms the bottleneck link, packets are dropped. This loss is detected by TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion the TCP sender enters the congestion avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP then oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.
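The slow-start/congestion-avoidance cycle described above can be sketched in a few lines. This is a minimal illustrative model (function name and parameters are hypothetical, not taken from the thesis): the window doubles per RTT below ssthresh, grows by one segment per RTT above it, and is halved when it exceeds the bottleneck capacity.

```python
def tcp_window_trace(rounds, capacity, ssthresh):
    """Illustrative sketch of TCP slow start / congestion avoidance.

    Returns the congestion window (in segments) at the start of each
    round trip. `capacity` is the window size beyond which the
    bottleneck drops a packet.
    """
    cwnd = 1.0
    trace = []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2           # slow start: double every RTT
        else:
            cwnd += 1           # congestion avoidance: +1 segment per RTT
        if cwnd > capacity:     # loss detected (e.g. via duplicate ACKs)
            ssthresh = cwnd / 2
            cwnd = ssthresh     # fast recovery: halve the window
    return trace
```

Running this yields the sawtooth of Fig. 1: exponential growth to ssthresh, linear growth to the capacity, then oscillation between w and w/2.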

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, there have been a number of proposed modifications to its algorithms. One proposed modification is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of which segments it is missing. With this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders.

Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger new sends, a retransmission timeout must occur before the TCP source can resume sending. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and the sender should thus be allowed to inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The NewReno [24] modifications use this observation to improve TCP performance.

Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern in which it is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion avoidance phase so that it only performs its linear increase when the network is not congested. In these algorithms, if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential for improving loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections that are multiplexed across the link.

Fig. 2. TCP/IP Headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem, since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-Capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the Congestion Experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver, in order to signal the receiver that it has in fact reduced its sending rate.
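The router-side decision described above can be illustrated with a short sketch. The mask values assume the ECT/CE bit positions of RFC 2481 (the ECN proposal contemporary with this text), and `router_mark` is a hypothetical helper, not code from any router implementation:

```python
# ECN bits in the IPv4 TOS/DS octet (positions as in RFC 2481).
ECT = 0x02  # ECN-Capable Transport
CE  = 0x01  # Congestion Experienced

def router_mark(tos, congested):
    """Return (new_tos, dropped) for a packet arriving at a router."""
    if not congested:
        return tos, False
    if tos & ECT:               # ECN-capable flow: mark instead of dropping
        return tos | CE, False
    return tos, True            # non-ECN-capable flow: drop the packet
```

A congested router thus sets CE on ECT packets and drops the rest, which is exactly the decoupling of congestion notification from packet loss that ECN aims for.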

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, allowing them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses them to determine whether to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and it increases linearly to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.
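The linear ramp of Fig. 3 can be written directly as a small function. This sketches only RED's initial probability calculation; it deliberately omits the EWMA queue averaging and the count-based probability adjustment of the full algorithm:

```python
def red_mark_prob(avg_q, min_th, max_th, max_p):
    """RED initial marking/dropping probability as a function of the
    average queue length (the linear ramp of Fig. 3)."""
    if avg_q <= min_th:
        return 0.0                 # below min_th: never mark or drop
    if avg_q >= max_th:
        return 1.0                 # above max_th: mark or drop everything
    # linear increase from 0 at min_th up to max_p at max_th
    return max_p * (avg_q - min_th) / (max_th - min_th)
```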

Fig. 3. The marking/dropping behavior of RED

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss are proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour wished for. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs), and on a per-packet basis to aggregates of traffic streams in the nodes of a DS-capable network. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies a service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be offered, for example a leased-line emulation. Different types of service will not be standardized within diffserv, but rather the implementations behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig. 4. Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs represent externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig. 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The Service Level Agreements (SLAs) formed specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules, which are applied to the traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. A traffic conditioner is an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs don't solely define services, but they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain; the latter handle traffic leaving the domain. In practice, the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig. 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e. their interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node into a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service provisioning approaches.

Fig. 5. A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig. 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig. 5, starts generating traffic, there are certain implications for the functioning of the network that differ from the case in which the network is non-DS-capable.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The backbone DS router shown always polices incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported in the downstream domain.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2 and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they must be feasible from the business point of view, and they must be technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values; on the second, the PHBs specified by the DSCP values; and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (the policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enables them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide which services are available in their networks; this in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the future development of diffserv.

Fig. 6. The Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig. 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a manner similar to the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and the queuing requirements in IPv4 routers are specified in [5]. For backward compatibility, diffserv supports the widely used values, or PHBs, with limitations. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling with respect to Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
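The layout just described (DSCP in bits 0-5, legacy precedence in bits 0-2, i.e. the most significant bits of the octet) can be made concrete with two helper functions; the names are illustrative, not from any standard API:

```python
def dscp_of(tos):
    """Extract the 6-bit DSCP: bits 0-5 of the TOS/Traffic Class octet,
    i.e. the six most significant bits."""
    return tos >> 2

def precedence_of(tos):
    """Extract the legacy IPv4 precedence: the top three bits."""
    return tos >> 5
```

For example, a TOS octet of 0b10111000 carries the DSCP 101110 (the recommended EF codepoint, as noted later) and the legacy precedence 101.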

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but this is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is the part of a network node that takes the node's ingress packets as its input and places the packets in its output in the order that best satisfies the forwarding (i.e. service) requirements set for the packets, while using the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig. 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.

Fig. 7. A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
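The meter and marker components can be sketched together as a token bucket that keeps conforming packets at one codepoint and re-marks excess packets to another. This is a hedged, simplified sketch: the class and parameter names are illustrative, and real conditioners (e.g. the two- or three-color markers used in diffserv) are considerably more elaborate.

```python
class TokenBucketMarker:
    """Illustrative meter + marker: packets within the token-bucket
    profile keep `in_dscp`; excess packets are re-marked to `out_dscp`."""

    def __init__(self, rate, depth, in_dscp, out_dscp):
        self.rate, self.depth = rate, depth    # bytes/sec, bucket size (bytes)
        self.tokens, self.last = depth, 0.0    # start with a full bucket
        self.in_dscp, self.out_dscp = in_dscp, out_dscp

    def mark(self, size, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth,
                          self.tokens + self.rate * (now - self.last))
        self.last = now
        if self.tokens >= size:                # in profile: consume tokens
            self.tokens -= size
            return self.in_dscp
        return self.out_dscp                   # out of profile: re-mark
```

A dropper would differ only in discarding the out-of-profile packet instead of returning a second codepoint, and a shaper in delaying it until enough tokens accumulate.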

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic into conformance with the appropriate TCA, or whether this is the responsibility of the node at the other end of the boundary link. However, ingress traffic can't be assumed to conform to the TCA; policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore, a limited number of conditioning rules (i.e. some, but not all, of the TC components in Fig. 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides which services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioral characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between the PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to the standardized PHBs, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may implement a local service; traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig. 6) is limited, and room is left for its future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. The PHB specifications themselves give behavioral characteristics instead of implementation guidelines. This leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of the negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented below. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility; the Default PHB exists for that purpose. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that doesn't conform to its profile can easily be either re-marked to the Default PHB or discarded.

Traffic that is subjected to the Default PHB can be described as receiving the following kind of service: the network will deliver as many of these packets as possible, as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specification of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or locally used PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHBs, so that the bit patterns xxx000, eight in all, are reserved as the set of Class Selector Codepoints (see Fig. 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig. 8. Usage of the DS field in the Class Selector PHB

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets within each class are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

a) how much forwarding resource has been allocated to the AF class that the packet belongs to; packets in one class are forwarded independently of the other classes and within the service rate (bandwidth) that has been configured for the class

b) what the current load of the AF class is

c) in case of congestion within the class, what the drop precedence of the packet is; packets with a higher drop precedence value are discarded preferentially

Table 1. The four AF classes and the related three drop precedence values

                            Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)     001010    010010    011010    100010
Medium Drop Precedence (2)  001100    010100    011100    100100
High Drop Precedence (3)    001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
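The codepoints in Table 1 follow a simple bit pattern: the class number occupies the top three bits, the drop precedence the next two, and the last bit is zero. A hypothetical helper (the function name is illustrative) makes the pattern explicit:

```python
def af_codepoint(n, m):
    """DSCP for AF class n (1-4) with drop precedence m (1-3):
    class in the top three bits, drop precedence in the next two,
    and the final bit zero, as in Table 1."""
    assert 1 <= n <= 4 and 1 <= m <= 3
    return (n << 3) | (m << 1)
```

For instance, AF11 yields 001010 and AF43 yields 100110, matching the first and last entries of the table.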

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter on an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video, or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specification of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the task of the traffic conditioners at the network boundary to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate into diffserv nodes. The codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are all for unidirectional traffic only, and that they apply to traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between the interface A and some egress interface B

There are some things common to both examples. First of all, policers need to be configured at traffic ingress points. Secondly, the PHBs used need to be implemented in core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than competing best-effort traffic and thus sees reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.
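A minimal sketch of how such a policer might behave, assuming a simple token bucket (the thesis does not specify the policer internals; the bucket depth and packet sizes below are our own illustrative values):

```python
# Sketch of the BBE policer of Table 2: a token bucket at 1 Mbps marks
# conforming packets AF13 and re-marks excess traffic AF11. Bucket
# depth and packet sizes are illustrative assumptions, not from the
# thesis.

class BBEMarker:
    def __init__(self, rate_bps=1_000_000, depth_bits=100_000):
        self.rate = rate_bps          # committed rate (1 Mbps profile)
        self.depth = depth_bits       # bucket depth
        self.tokens = depth_bits      # start with a full bucket
        self.last = 0.0               # arrival time of last packet (s)

    def mark(self, now, pkt_bits):
        # refill tokens for the elapsed time, capped at the depth
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bits:   # within the 1 Mbps profile
            self.tokens -= pkt_bits
            return "AF13"
        return "AF11"                 # excess traffic is re-marked

m = BBEMarker()
# 1000-byte packets arriving every 4 ms, i.e. 2 Mbps, twice the profile:
marks = [m.mark(t * 0.004, 8000) for t in range(50)]
print(marks.count("AF13"), marks.count("AF11"))
```

Once the initial bucket credit is spent, roughly half the packets of this 2 Mbps stream end up re-marked AF11, as one would expect against a 1 Mbps profile.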

This is a quantitative service which emulates a traditional leased line. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3. TCA specifications for Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. Customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps; excess traffic is discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic in case both leased lines cross the same router.
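The TCAs above can be sketched as a pair of ingress policers plus a provisioning check; the rates follow Table 3, while the helper names are our own:

```python
# Sketch of the leased-line TCAs of Table 3: per-egress EF policers at
# ingress A discard excess traffic, and a shared core link must be
# provisioned for the sum of the EF rates that may cross it.
# Rates follow Table 3; the helper names are our own.

TCAS = {"B": 500_000, "C": 250_000}   # service rate in bps per egress

def police(egress, offered_bps):
    """Return (forwarded_bps, discarded_bps) for one leased line."""
    limit = TCAS[egress]
    return min(offered_bps, limit), max(0, offered_bps - limit)

def core_provisioning(egresses):
    """Capacity a core router needs if these leased lines share it."""
    return sum(TCAS[e] for e in egresses)

print(police("B", 600_000))           # -> (500000, 100000): excess discarded
print(core_provisioning(["B", "C"]))  # -> 750000, as in the text
```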

2.3 Related Work

Recently there has been a substantial amount of simulation work on marking and dropping policies focused on RED-based differentiated services approaches.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed scheme decreased TCP impacts on the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service; the overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation work, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregate, called per-flow marking, or (ii) based only on aggregate state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to an inefficient utilization of the reserved bandwidth; in this case "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.
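A toy calculation (numbers ours, not the thesis's) makes the contrast concrete: per-flow marking splits the aggregate target evenly, so an idle flow's share is wasted, while per-aggregation marking keeps the whole target usable but cannot tell the flows apart:

```python
# Toy contrast of the two marking strategies with illustrative numbers.
# per-flow: each flow is marked up to an equal slice of the aggregate
# target, so idle flows waste their slices. per-aggregation: marking
# only caps the aggregate, so nothing is wasted, but flows are
# indistinguishable to the marker.

TARGET = 4_000_000                        # aggregate target rate, bps
demands = [3_000_000, 1_500_000, 0, 0]    # two active flows, two idle

# per-flow marking: each of the four flows gets TARGET/4 at most
per_flow_used = sum(min(d, TARGET / len(demands)) for d in demands)

# per-aggregation marking: only the aggregate is capped
per_agg_used = min(sum(demands), TARGET)

print(per_flow_used, per_agg_used)   # -> 2000000.0 4000000
```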

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located in the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the summation of iwnd and ownd. Whenever congestion occurs in the network, TMS_I selects a window (iwnd or ownd) and reduces its size according to whether the dropped packets were IN-marked or OUT-marked, and so TMS_I decreases TCP impacts in the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) keeps up the marking rate more properly than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also adequately marks packets to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of the network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore the key to the fairness strategy is the marking rate (i.e., the information about a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) aggregated flows toward the total target rate of all the flows, based on the marking probabilities from the sources; that is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10 The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rates. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and Cj represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is its reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable amount, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability viewpoint of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = \sum_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at the bottleneck links is then given by

E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i    (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C - R = C - \sum_{i=1}^{n} R_i    (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}, where k, k = 1, 2, ..., l, denotes the kth ingress point. The excess bandwidth is given by

E = C - R = C - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}    (3)


where the lth ingress point has to be aware of the reservation rates of as many as l - 1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System can therefore be implemented for the various SLA scopes, but for the reason above we assume that the scope of a service in this thesis covers the first three cases, excluding the last. A flow's target rate at the edge of the network is given by

In (4), Xmi is proportional to the probability of marking at a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of Xmi is as follows. First, the probability of marking for each flow i, given in (5), is the number of marked packets (Nmi) divided by the throughput (Thri), or equivalently the In-profile window divided by the congestion window,

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[mi] over all the flows at the edge of a network is given by (6),

where Markp also represents the probability of marking in the TMS_I algorithm of section 3.3. Finally, Xmi is obtained from the ratio between (5) and (6), as given in (7).

T_i = R_i + X_{mi} \times E = R_i + X_{mi} \left( C - \sum_{i=1}^{n} R_i \right)    (4)

m_i = \frac{N_{mi}}{Thr_i} = \frac{iwnd}{cwnd}    (5)

E[m_i] = \frac{1}{n} \sum_{i=1}^{n} m_i = \frac{1}{n} \sum_{i=1}^{n} \frac{iwnd_i}{cwnd_i} = \frac{1}{n} \sum_{i=1}^{n} Mark_{p,i}    (6)

Xmi represents a relative value against the datum point E[mi]; the fraction Xmi varies from 0 to 1 depending upon mi and E[mi].

Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
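A numerical sketch of this fairness strategy, using rates similar to the simulation setup of Chapter 4 (20 Mbps bottleneck, 3 Mbps reservations); the marking rates mi are our own illustrative values:

```python
# Numerical sketch of the fairness component: Xmi is each flow's
# marking rate mi normalized by the sum of marking rates (eq. (7)),
# and the edge target rate becomes Ti = Ri + Xmi * E (eq. (4)) with
# E = C - sum(Ri) the excess bandwidth (eq. (1)). The mi values are
# illustrative assumptions.

C = 20_000_000                          # bottleneck capacity, bps
R = [3_000_000, 3_000_000, 3_000_000]   # reservation rates
m = [0.6, 0.3, 0.1]                     # marking rates mi = iwnd/cwnd

E = C - sum(R)                          # excess bandwidth, eq. (1)
n = len(m)
Em = sum(m) / n                         # average marking rate, eq. (6)
X = [mi / (n * Em) for mi in m]         # fairness component, eq. (7)
T = [Ri + Xi * E for Ri, Xi in zip(R, X)]   # edge target rates, eq. (4)

print(E)                           # -> 11000000
print([round(x, 2) for x in X])    # fractions summing to 1
print(T)                           # excess shared in proportion to mi
```

The flow that marks more aggressively (mi = 0.6) receives the largest slice of the 11 Mbps excess, which is exactly the proportional behaviour the text describes.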

X_{mi} = \frac{m_i}{n \, E[m_i]} = \frac{m_i}{\sum_{i=1}^{n} m_i}    (7)

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) consists of two windows in this algorithm [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker; the Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps up a marking rate corresponding to the bandwidth the source wants guaranteed.


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        } else {
            ownd = cwnd
            if (ownd < ossthresh)
                ownd += 1
            else
                ownd += 1/cwnd
        }
    }
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking, Markp(init)? It is generally determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as follows:


Equation (8) below gives Markp(init), which represents the initial marking rate that the user requires in order to realise its target rate.

The window size increases or decreases according to the measured throughput relative to the reservation rate; that is, TMS_I proportionally marks packets according to whether the throughput (Thr) measured at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed; otherwise it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, iwnd and ownd, they act like the single window of the traditional congestion algorithm: the summation of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the summed increase rate of the two window sizes while probing for resources is 1/cwnd.

Mark_p(init) = 1 - \frac{Thr}{R_i}    (8)

The else clause of program code 1 (taken when Thr >= Ri) describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers the second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network connection is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate more properly than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.
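The per-ACK update discussed above can be transcribed directly into runnable form (a sketch mirroring Program code 1; the numeric example values are our own):

```python
# Runnable transcription of Program code 1 (TMS_I update on each new
# ACK). Window sizes are in packets; Thr and Ri in bps. The example
# values at the bottom are illustrative assumptions.

def tms1_on_ack(cwnd, markp, thr, ri, issthresh, ossthresh):
    iwnd = markp * cwnd              # split cwnd into the two windows
    ownd = cwnd - iwnd
    if thr < ri:                     # below reservation: grow both windows
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    elif iwnd > 0:                   # at/above reservation: release iwnd
        iwnd -= (1 - markp) if iwnd < issthresh else markp
    else:                            # no IN-marked packets outstanding
        ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd               # the two windows still act as one
    markp = iwnd / cwnd              # updated probability of marking
    return cwnd, markp

# A source below its reservation (2 Mbps measured vs 3 Mbps reserved)
# grows both windows, here by half a packet each:
print(tms1_on_ack(10.0, 0.5, 2_000_000, 3_000_000, 8.0, 8.0))  # -> (11.0, 0.5)
```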

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd/2
        ownd = ossthresh
    } else {
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the ith flow
Ti: target rate of the ith flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half its size whenever an OUT packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of an IN-marked packet indicates acute congestion in the network, and then the congestion window is reduced to half its size. Therefore the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which has no such distinctive mechanism for reducing the congestion window.

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Markp) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point by the edge-embedded marking module in section 3.4. Consequently, this strategy of providing a datum point that carries the user's information is able to alleviate the problem of unfairness within aggregated flows introduced by per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources, in order to fairly share the bottleneck bandwidth. In this case it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II should measure the probability of marking and compute the expected value of the marking rates, which are used in the fairness component as stated in section 3.2. Notice that fairness is defined here as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti); that is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants guaranteed, from the sources.

We assume that an edge router receives information about the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall expression (4). As stated in section 3.2, if the condition of the network is over-provisioned, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents an increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source, for example, exceeds the threshold value, i.e., the average marking rate E[mi], this means that the flow wants more bandwidth than the others in order to achieve its reservation rate; it is therefore marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregated flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, this indicates that congestion has not occurred in the network. The initial if clause of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding about congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor. In this case the fairness factor α is computed as follows:

\alpha = 1 - \frac{Thr_i}{T_i}

Otherwise it decreases the marking rate by α, to provide other flows with the chance of sharing the excess bandwidth.

Fig 13 Program code 3 TMS_II algorithm

if (ownd(N) != ownd(N-1)/2) {
    for (i = 1 to n) {
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
    }
} else if (iwnd(N) != iwnd(N-1)/2) {
    exit
} else {    /* iwnd(N) == iwnd(N-1)/2 */
    for (i = 1 to n) {
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit
    }
}

Mi: marking rate of the ith flow
Ti: target rate of the ith flow
Ri: reservation rate of the ith flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): previously measured OUT window
iwnd(N): newly measured IN window
iwnd(N-1): previously measured IN window


Next, if the ownd size at time N is equal to half of the ownd size at time N-1, and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion happens in the network but is not serious. The else if clause of program code 3 represents this instance: the edge's marking rate is equal to the marking probability of the sources (or marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, this means that IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the final else clause. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to consummate its target rate, owing to the repeated reduction of sending rates through the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput does not realise the reservation rate of the user. In such an instance the complement factor is

\beta = 1 - \frac{Thr_i}{R_i}

Otherwise, the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
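The TMS_II decision logic above can be sketched as follows (a simplified, runnable rendering of Program code 3; the "reduced to half" congestion test and all numeric values are our illustrative assumptions):

```python
# Simplified, runnable rendering of Program code 3 (TMS_II). M is a
# flow's marking rate, X its fairness component Xmi, E the excess
# bandwidth. The halved() test and example numbers are illustrative
# assumptions, not from the thesis.

def halved(now, prev):
    """The thesis's congestion signal: the window shrank to half."""
    return abs(now - prev / 2) < 1e-9

def tms2(flows, E, ownd, ownd_prev, iwnd, iwnd_prev):
    if not halved(ownd, ownd_prev):          # (i) no congestion: share E
        for f in flows:
            f["T"] = f["R"] + f["X"] * E
            alpha = abs(1 - f["thr"] / f["T"])   # fairness factor
            if f["thr"] < f["T"]:
                f["M"] += alpha * f["M"]
            else:
                f["M"] -= alpha * f["M"]
    elif not halved(iwnd, iwnd_prev):        # (ii) congestion, not acute
        pass                                 # edge marking already matches
    else:                                    # (iii) acute congestion
        for f in flows:
            f["T"] = f["R"]
            if f["thr"] < f["T"]:
                beta = 1 - f["thr"] / f["R"] # complement factor
                f["M"] += beta * f["M"]
    return flows

# One flow below its edge target while the network is uncongested:
flows = [{"M": 0.5, "R": 3e6, "X": 0.5, "thr": 2e6}]
tms2(flows, 2e6, ownd=6, ownd_prev=8, iwnd=4, iwnd_prev=8)
print(flows[0]["M"], flows[0]["T"])   # -> 0.75 4000000.0
```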

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-born packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, the re-born OUT-marked packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking this for acute congestion, will reduce iwnd to half its size; TMS_I then cannot alleviate the TCP dynamics over the assured service well. On the other hand, if the re-born IN-marked packets (OUT-IN), sent as OUT by the sources, are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve ownd, which delays the relief of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but as IN at the source; an OUT-IN packet is the counterpart. We assume that the drop precedence the source decides must not change, and the strategy that TMS_II elaborates has to respect this. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been reduced to half its size and the throughput measured at the edge of the network does not complete the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN, so as to claim the remaining resources. In this instance three drop precedences, IN, OUT-IN and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves the ownd size. Next, TMS_II turns packets that were IN-marked at the sources into OUT if the congestion window has not been reduced to half its size and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT and OUT. TMS_II lowers the drop precedence of such traffic by one step, and so it allows other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been reduced to half its size and a flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN and OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. It thus tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT traffic one step.
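The precedence assignment can be sketched as a small lookup (our encoding of the source/edge mark combinations):

```python
# Sketch of the three-colour precedence scheme of section 3.5 (our
# encoding). A packet carries its source mark and a possibly re-marked
# edge mark; mixed combinations get the middle precedence DP1, and RIO
# drops DP2 first, then DP1, then DP0.

DP = {("IN", "IN"): 0,        # green: IN at both source and edge
      ("IN", "OUT"): 1,       # yellow: IN at source, re-marked OUT
      ("OUT", "IN"): 1,       # yellow: OUT at source, re-marked IN
      ("OUT", "OUT"): 2}      # red: OUT at both

def drop_order(packets):
    """packets: list of (source_mark, edge_mark); most droppable first."""
    return sorted(packets, key=lambda p: DP[p], reverse=True)

pkts = [("IN", "IN"), ("OUT", "IN"), ("OUT", "OUT")]
print(drop_order(pkts))   # OUT/OUT first, then OUT-IN, then IN/IN
```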

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows will not change, and will capture the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, making it impossible for all target rates to be completed.

UDP flows in the TMS are dealt with as out-of-profile, and thus placed at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always drops UDP flows marked as OUT (red, DP2) preferentially, while lowering the drop precedence of TCP packets by as much as the scaling factors allow. Therefore TMS tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows would otherwise dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present simulation results for the two marking algorithms described in the previous chapter. The simulation was done with the network simulator 2 (ns-2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10 counting downwards) that communicate with one of ten different destinations and conducted two scenarios oversubscribed and non-oversubscribed network In the first scenario the capacity of the bottleneck link is fixed the aggregate reservation rate is adjusted as changing the number of flows with the target rate of 3Mbps or fixed below it so that the bottleneck is not oversubscribed In the second scenario the aggregate reservation rate increases above the bottleneck capacity so that the bottleneck link experiences congestion We assume that the RTT without queuing delay of each flow is randomly pocked from 80 to 280ms The sources are all TCP-Reno sources (unless specified otherwise) We compare two marking scheme in all the experiment Time Sliding Window Two Color Marker (TSW2CM) and Two Markers System (TMS) For the RIO implementation of TSW2CM the routers use RED with the values of 100 packets 400packets and 002 for min_in max_in and Pmax_in


and 50 packets, 400 packets and 0.1 for min_out, max_out and Pmax_out. The RED parameters used in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.
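For concreteness, the TSW2CM marker used as the baseline above can be sketched as follows. The rate estimator follows the Time Sliding Window idea of Clark and Fang; the details are simplified relative to the actual ns-2 implementation, so treat this as a sketch rather than the simulator's code.

```python
import random

class TSW2CM:
    """Sketch of a Time Sliding Window Two Color Marker: a TSW rate
    estimator plus probabilistic OUT-marking of traffic above the target."""

    def __init__(self, target_bps, win_len=1.0):
        self.target = target_bps      # committed target rate (bits/s)
        self.win_len = win_len        # averaging window (s)
        self.avg_rate = target_bps    # initial rate estimate
        self.t_front = 0.0            # time of the last update

    def mark(self, now, pkt_bits):
        # TSW estimate: decay the bits seen in the window over elapsed time
        bits_in_win = self.avg_rate * self.win_len
        self.avg_rate = (bits_in_win + pkt_bits) / (now - self.t_front + self.win_len)
        self.t_front = now
        # mark OUT with a probability equal to the fraction above the target
        if self.avg_rate > self.target:
            p_out = (self.avg_rate - self.target) / self.avg_rate
            return "OUT" if random.random() < p_out else "IN"
        return "IN"
```

Because the estimator averages over a sliding window, the marking probability lags changes in the offered load, which is exactly the sluggishness the TMS experiments below expose.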

Table 4. RED parameter settings in the TMS

        min_th    max_th    Pmax
DP0     100       400       0.02
DP1     100       400       0.05
DP2     50        400       0.1
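The three drop-precedence curves defined by Table 4 can be sketched as below. This reproduces only the linear drop-probability segment of RED for each precedence level; the averaging and packet-counting machinery of the real queue is omitted, so it is a sketch of the configuration, not of the RIO implementation.

```python
# RIO-style per-precedence drop probability using the Table 4 parameters.

RED_PARAMS = {            # drop precedence -> (min_th, max_th, Pmax)
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50, 400, 0.1),
}

def drop_probability(dp, avg_queue_len):
    """Linear RED curve: 0 below min_th, Pmax at max_th, 1 beyond it."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue_len < min_th:
        return 0.0
    if avg_queue_len >= max_th:
        return 1.0
    return p_max * (avg_queue_len - min_th) / (max_th - min_th)
```

Note how DP2 starts dropping at an average queue of only 50 packets and climbs five times faster than DP0, which is what makes OUT-marked (red) packets the first casualties of congestion.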

First of all, we investigate the simulation results with the Adequate-marking algorithm playing the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows (flow6 and flow7) at t = 120 s, and the last three flows (flow8 through flow10) at t = 220 s, which stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps; marking initially occurs, but only temporarily, in order to share the excess bandwidth. Second, in the interval from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps each; marking in this interval is carried out chiefly to realise the reservation rate of each user. Next, grave congestion appears from t = 220 s to t = 350 s: all the flows strive to meet their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to the congestion preceding the departure of flow8 through flow10, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the conventional edge marker continuously over-marks its packets, as shown in Figure 15 (a), which exhibits somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm at the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in Figure 15 (a). The lower flows in both graphs demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Plot: bandwidth (Mbps) of flow1-flow10 vs. simulation time 0-400 s]

a) Time Sliding Window Two Color Marker

[Plot: bandwidth (Mbps) of flow1-flow10 vs. simulation time 0-400 s]

b) Two Markers System

Fig. 15. Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic characteristics by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate, lagging behind changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources in proportion to the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, more packets are marked; however, packet marking is quickly turned off when the system detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results with the marking rate-based fairness algorithm described in Subsection 2.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, i.e., an under-subscribed network. The experiment runs from t = 0 s to 500 s. Figure 17 presents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs under both schemes, shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate, because it receives bandwidth without a marking rate adequate to the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability whenever the total throughput is less than the capacity of the network. As


stated before, since TMS regulates drop precedence in proportion to the marking probability reported by the sources, all the flows are able to adapt to network dynamics.

[Plot: total throughput of S1 and S2 and marking rate of S1 (Mbps) vs. simulation time 0-400 s]

a) Time Sliding Window Two Color Marker

[Plot: total throughput of S1 and S2 and marking rate of S1 (Mbps) vs. simulation time 0-400 s]

b) Two Markers System

Fig. 16. Marking packets through the scaling factor: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time.


[Plot: bandwidth (Mbps) of flow1-flow10 for aggregate flows 1-6]

a) Time Sliding Window Two Color Marker

[Plot: bandwidth (Mbps) of flow1-flow10 for aggregate flows 1-6]

b) Two Markers System

Fig. 17. Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput, and the horizontal axis the index of the aggregate flow.

The Two Markers System showed improved performance across the several parameters we used in the simulations. However, an implementation of TCP-source marking


modules at the source raises the issue of having to modify the TCP source in a differentiated services network.

Table 5. The average throughput of each aggregate flow as RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulations we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that each communicate with one of six different destinations. Long-lived packet streams, generated by infinite file transfers, originate at sources 1 through 4 and are destined to nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5 and 6 and are destined to nodes 11 and 12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at a rate of 1 Mbps. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of Section 4.1.


Fig. 18. Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increase the number of TCP flows with a target rate of 1 Mbps each, from source 1 to 4, while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) defined as follows [16]:

ρ = (Σi Ti / C) × 100 (%)    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link, respectively. The interval from ρ = 100 % to 200 % shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)

Excess bandwidth of 0 % therefore denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.
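Equations (9) and (10) can be checked numerically for one of the simulated settings: the six 1 Mbps flows (four TCP, two UDP) over the 3 Mbps bottleneck give ρ = 200 %, the right-hand edge of Figure 19. The helper names below are ours, not the thesis'.

```python
# Numeric check of Eqs. (9) and (10).

def link_utilisation(target_rates_bps, capacity_bps):
    """Eq. (9): rho = (sum of target rates / capacity) * 100 %."""
    return 100.0 * sum(target_rates_bps) / capacity_bps

def excess_bandwidth_pct(throughput_bps, target_bps):
    """Eq. (10): ExBW expressed as a percentage of the target rate T,
    with ExBW = achieved throughput minus target."""
    return 100.0 * (throughput_bps - target_bps) / target_bps

rho = link_utilisation([1e6] * 6, 3e6)   # under-provisioned scenario
ex = excess_bandwidth_pct(0.5e6, 1e6)    # a flow at half its target
```

A negative excess bandwidth thus marks a flow that failed to reach its target, which is how Figure 19 encodes degradation.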


[Plot: average excess bandwidth (%) of TCP and UDP vs. link utilization 30-190 %]

a) Time Sliding Window Two Color Marker

[Plot: average excess bandwidth (%) of TCP and UDP vs. link utilization 30-190 %]

b) Two Markers System

Fig. 19. Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis the link utilisation as a percentage.

However, since UDP sources inject traffic into the network without any penalty, they achieve their target rates under all conditions and consume all the excess bandwidth in


the under-provisioned state; the only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume the following: (i) UDP flows in TMS are marked OUT only (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If the number of marked packets of a flow exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is, when congestion occurs, to preferentially drop UDP packets and those TCP packets marked OUT, which are outside their contract, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows for link utilisation ρ = 60 % and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
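The edge-embedded marking decision described above can be sketched as follows. The function and threshold handling are illustrative assumptions, not the thesis code: it only captures assumption (i), UDP always demoted to DP2, and the E[mi] comparison for TCP flows.

```python
# Hedged sketch of the edge-embedded marking module's decision rule.

def drop_precedence(flow_kind, marking_rate, mean_marking_rate):
    """Assign a drop precedence from the source-reported marking rate."""
    if flow_kind == "udp":
        return "DP2"                 # out-of-profile by assumption (i)
    if marking_rate > mean_marking_rate:
        return "DP0"                 # above E[m_i]: wants bandwidth, promote
    return "DP1"

rates = {"tcp1": 0.8, "tcp2": 0.4, "udp1": 0.6}
mean = sum(v for k, v in rates.items() if k.startswith("tcp")) / 2  # E[m_i]
labels = {k: drop_precedence(k[:3], v, mean) for k, v in rates.items()}
```

A TCP flow marking above the aggregate average is thus promoted toward the green codepoint, while UDP never escapes red, which yields the preferential dropping the RIO queue then enforces.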

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps, respectively. The RTT parameters follow the strategy of Subsection 4.1, and again the average throughputs of all the flows under both schemes are similar to each other. First, we consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows merely manage to accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. In Figure 20 (b), on the other hand, the TCP flows fairly share the excess bandwidth of about 0.3 Mbps, thanks to the different marking strategy applied against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network, because the UDP flows transmit constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP


flows. Figure 20 (b), however, shows that each flow in the under-provisioned network receives a fairer share (about 0.5 Mbps) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Plot: bandwidth (Mbps) of TCP1-TCP4 and UDP1-UDP2 in over- and under-provisioned networks, aggregate flows 1-4]

a) Time Sliding Window Two Color Marker

[Plot: bandwidth (Mbps) of TCP1-TCP4 and UDP1-UDP2 in over- and under-provisioned networks, aggregate flows 1-4]

b) Two Markers System

Fig. 20. Throughput according to network conditions: the vertical axis represents the achieved bandwidth and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this context, and much research has addressed them. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and to allocate the bandwidth of a bottleneck link fairly to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the marking probability according to changes in the network, so it marks packets in proportion to the reservation rates the users desire to guarantee. It also notifies TMS_II, the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy. Consequently, it mitigates the unfairness of per-aggregate marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduces TCP impacts over the assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also described and analysed the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly


share the excess bandwidth that UDP would otherwise dominate and achieve their target rates. Next, in an under-provisioned network, although none of the flows could achieve its target rate, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows for various RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy, and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.



TCP, which then reacts by halving the congestion window. As the figure shows, upon recovering from congestion the TCP sender enters the congestion-avoidance phase, in which the window is increased linearly at a rate of one segment per round-trip time. In steady state, TCP then oscillates between a window of w and w/2, where w depends on the capacity of the network and the number of connections currently active over the bottleneck link.
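This sawtooth can be traced with a toy model of Reno's steady state: linear increase of one segment per RTT, window halved on loss. The loss model (a loss whenever the window reaches a fixed path capacity) is a simplifying assumption for illustration.

```python
# Toy trace of the AIMD sawtooth: additive increase, multiplicative decrease.

def aimd_trace(capacity=32, start=16, rtts=50):
    """Return the congestion window (in segments) over successive RTTs,
    assuming a loss whenever the window reaches `capacity`."""
    w, trace = start, []
    for _ in range(rtts):
        trace.append(w)
        w = w // 2 if w >= capacity else w + 1  # halve on loss, else +1/RTT
    return trace

trace = aimd_trace()
```

The trace oscillates between capacity and half of it, matching the w-to-w/2 oscillation described above.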

Given the importance of TCP and its congestion control mechanisms to the health of the Internet, a number of modifications to its algorithms have been proposed. One proposed modification is selective acknowledgments (SACK) [23]. SACK augments TCP's cumulative acknowledgment mechanism with additional information that allows the receiver to inform the sender of which segments it is missing. With this information, the TCP sender can make more intelligent decisions in determining when packets have been lost and in identifying which segments should be retransmitted. This helps TCP detect congestive loss more quickly and eliminates unnecessary retransmissions by TCP senders. Another set of proposed TCP modifications focuses on congestion recovery. TCP is ACK-clocked, often sending only after it has received acknowledgments for previously transmitted packets. When there are insufficient packets or acknowledgments in flight to trigger sending, a retransmission timeout must occur before the TCP source can resume transmission. Because the Reno variant of TCP freezes its window while recovering from congestion, it often induces a subsequent retransmission timeout, since the source does not send packets upon receiving acknowledgments in the recovery phase. To address this problem, a simple observation is made: when a TCP sender receives any type of acknowledgment, it is a signal that a packet has left the network, and the TCP sender may thus inject an additional packet without causing further congestion. This modification allows TCP to maintain its ACK-clocking and prevents unnecessary retransmission timeouts. The NewReno [24] modifications use this observation to improve TCP performance. Finally, more radical changes to TCP's congestion control algorithms have been proposed. In current incarnations of TCP, the congestion window follows a sawtooth-like pattern, where the congestion window is continually increased until packet loss occurs. While this allows TCP to probe for additional bandwidth, such behavior eventually induces packet loss. The idea behind the Vegas [25] modifications is to change the congestion-avoidance phase so that it only


performs its linear increase when the network is not congested. In both algorithms, if the round-trip times indicate an increase in delay due to queues building up in the network, the TCP source either decreases or freezes the size of the congestion window rather than increasing it. While these mechanisms have the potential to improve loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections that are multiplexed across the link.

Fig. 2. TCP/IP headers

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem, since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is


implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-capable Transport (ECT) bit to see whether the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the congestion experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver in order to signal the receiver that it has in fact reduced its sending rate.
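The ECN handshake just described can be modelled end to end with the four flags as booleans. This mirrors the narrative only; the function names and dictionary representation are ours, not any particular stack's API.

```python
# Toy model of the ECN signalling path: router -> receiver -> sender.

def router(pkt, congested):
    """Mark CE on congestion if the flow is ECN-capable, else drop."""
    if congested:
        if not pkt["ECT"]:
            return None            # not ECN-capable: drop instead of mark
        pkt["CE"] = True           # mark Congestion Experienced
    return pkt

def receiver_ack(pkt):
    """Echo the congestion indication back in the acknowledgment."""
    return {"ECN-echo": pkt["CE"]}

def sender_on_ack(ack, cwnd):
    """React as to a loss: halve the window and set CWR on the next packet."""
    if ack["ECN-echo"]:
        return cwnd // 2, True
    return cwnd, False

pkt = router({"ECT": True, "CE": False}, congested=True)
cwnd, cwr = sender_on_ack(receiver_ack(pkt), cwnd=20)
```

The round trip of the signal, CE set at the router, ECN-echo in the ACK, CWR back to the receiver, is what decouples congestion notification from actual packet loss.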

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, in order to allow them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion: RED detects increases in the average queue length and uses them to determine whether or not to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and it linearly


increases to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.

Fig. 3. The marking/dropping behavior of RED
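The EWMA that feeds the Figure 3 curve can be sketched in a few lines. The queue weight w_q = 0.002 matches the value used in the simulations of Chapter 4; the rest of RED's machinery (idle-time correction, the count-based probability increase) is omitted, so this is a sketch of the estimator only.

```python
# One step of RED's exponentially weighted moving average of queue length.

def update_avg(avg, q, w_q=0.002):
    """avg <- (1 - w_q) * avg + w_q * q, per packet arrival."""
    return (1.0 - w_q) * avg + w_q * q

# With the instantaneous queue pinned at 200 packets, the average climbs
# only gradually: short bursts are absorbed, persistent congestion is not.
avg = 0.0
for _ in range(1000):
    avg = update_avg(avg, 200)
```

After 1000 arrivals the average has reached only about 86% of the instantaneous queue, which illustrates why RED reacts to sustained congestion rather than to transient bursts.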

With the deployment of ECN and RED, the IETF hopes that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviours to aggregates instead of microflows. The DS fields in the headers of the IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour desired. The forwarding behaviours are applied hop-by-hop, based on DS codepoints


(DSCPs), on a per-packet basis, to aggregates of traffic streams in the nodes of a DS-capable network. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or marked with a DSCP by a traffic conditioner according to the service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. It is not the different types of service that will be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig. 4. Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs are externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e., to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB


definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig. 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The Service Level Agreements (SLAs) thus formed specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how a customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules, any corresponding traffic profiles, and metering, marking, discarding and/or shaping rules, which are applied to the traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. By traffic conditioner is meant an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field is the TOS octet in the IPv4 header and the Traffic Class octet in the IPv6 header. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources among competing traffic streams at each node. PHBs do not define services by themselves; rather, they are the building blocks of services.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes: the former handle traffic entering the domain, the latter traffic leaving it. In practice the division is only logical; DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig 5 shows three domains which are interconnected with a backbone network. Two of the domains are DS-capable, i.e. the interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service-provisioning approaches.

Fig 5 A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between the parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS-domain 1 in Fig 5 above, starts generating traffic, the network behaves differently in certain ways than it would if it were not DS-capable.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS-domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The presented backbone DS router always polices incoming traffic, as it cannot trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported in the downstream domain.

The backbone router forwards the traffic to the boundary router of DS-domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they must be feasible from the business point of view; and they must be technically realizable. When speaking of the future of data communication, it is essential that new services can be introduced easily and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels: on the first level are the DSCP values; on the second, the PHBs specified by the DSCP values; and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (a policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide which services are available in their networks, which in turn results in situations where the possible service level in some domain does not match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv does not cause serious impacts on existing applications.

Fig 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used precedence values (as PHBs) for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling considering Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 of the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
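As a concrete illustration of this bit layout (a minimal sketch for intuition, not code from the thesis), the DSCP and the legacy precedence field can be pulled out of the octet with a couple of shifts; bit 0 is the most significant bit of the octet:

```python
def dscp_from_octet(octet: int) -> int:
    """Extract the 6-bit DS codepoint: bits 0-5, i.e. the six most
    significant bits of the TOS / Traffic Class octet."""
    return (octet >> 2) & 0x3F

def precedence_from_octet(octet: int) -> int:
    """Legacy IPv4 precedence: bits 0-2, the top three bits."""
    return (octet >> 5) & 0x07

# EF's recommended codepoint 101110 sits in the octet as 10111000 (0xB8)
assert dscp_from_octet(0xB8) == 0b101110
assert precedence_from_octet(0xB8) == 0b101
```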

What traffic conditioning basically means is treating traffic differentially so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but this is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as input and places the packets at its output in the order that best satisfies the forwarding (i.e. service) requirements set for the packets, while using the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
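The two classifier types can be sketched minimally as follows; the packets here are hypothetical dictionaries of header fields, not a real router API:

```python
def ba_classify(pkt, table):
    """Behaviour Aggregate classifier: the DSCP is the only key."""
    return table.get(pkt["dscp"], "default")

def mf_classify(pkt, rules):
    """Multi-Field classifier: the first rule whose fields all match wins."""
    for fields, aggregate in rules:
        if all(pkt.get(k) == v for k, v in fields.items()):
            return aggregate
    return "default"

pkt = {"dscp": 46, "src": "10.0.0.1", "dst": "10.0.0.2", "proto": 17, "dport": 5004}
assert ba_classify(pkt, {46: "EF"}) == "EF"
rules = [({"src": "10.0.0.1", "proto": 17}, "voice")]
assert mf_classify(pkt, rules) == "voice"
```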

Fig 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
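One common way to implement such a meter is a token bucket; the sketch below is an assumption about a typical realization, not the thesis' implementation, and simply reports whether each packet is in or out of profile so that a marker or dropper can act on the result:

```python
class TokenBucketMeter:
    """Single-rate token bucket: packets that fit in the bucket are in
    profile; the rest would be handed to a marker (re-mark) or dropper."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.burst = burst_bytes        # bucket depth
        self.tokens = burst_bytes       # start with a full bucket
        self.last = 0.0                 # arrival time of the previous packet

    def conforms(self, size_bytes: int, now: float) -> bool:
        # Refill tokens for the time elapsed, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True                  # in profile
        return False                     # out of profile

meter = TokenBucketMeter(rate_bps=8_000, burst_bytes=2_000)  # 1000 bytes/s
assert meter.conforms(1500, now=0.0)       # fits in the initial burst
assert not meter.conforms(1500, now=0.1)   # only ~100 bytes refilled
```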

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic to conformance with the appropriate TCA, or whether that is the responsibility of the node at the other end of the boundary link. However, ingress traffic cannot be assumed to conform to the TCA; rather, policing must be prepared to enforce the TCA.

In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited number of conditioning rules (i.e. some, but not all, of the TC components in Fig 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides which services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on their similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment of packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to the standardized ones, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may serve a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig 6) is limited and room is left for future usage, codepoint mappings can be made freely by network administrators within independent networks. However, this may require re-marking at network boundaries.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impacts of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. PHB specifications themselves give behavioural characteristics instead of implementation guidelines. This leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented in the previous paragraph. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when a PHB is implemented, its specification must be met.

Diffserv cannot be taken into use if it does not provide backward compatibility. The Default PHB exists for this purpose. Its role is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore any traffic that does not conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which do not employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or locally used PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig 6) of the IPv4 TOS field are widely used in existing networks' equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHBs, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.
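The xxx000 structure described above can be checked mechanically; a small sketch:

```python
def is_class_selector(dscp: int) -> bool:
    """Class Selector codepoints have the form xxx000: the three legacy
    precedence bits followed by three zero bits (eight codepoints in all)."""
    return (dscp & 0b000111) == 0

CLASS_SELECTORS = [p << 3 for p in range(8)]  # 000000, 001000, ..., 111000
assert all(is_class_selector(cp) for cp in CLASS_SELECTORS)
assert not is_class_selector(0b101110)  # EF's codepoint is not one of them
```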

Fig 8 Usage of DS-field in Class Selector PHB

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(1) how much forwarding resources have been allocated to the AF class that the packet belongs to (packets in one class are forwarded independently of the other classes, within the service rate (bandwidth) that has been configured for the class),

(2) the current load of the AF class, and

(3) in case of congestion within the class, the drop precedence of the packet (packets with a higher drop precedence value are discarded preferentially).

Table 1 The four AF classes and the related three drop precedence values

                             Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)      001010    010010    011010    100010
Medium Drop Precedence (2)   001100    010100    011100    100100
High Drop Precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
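The codepoints in Table 1 follow a regular bit pattern (class in the top three bits, drop precedence in the next two, last bit zero), so AFnm can be computed rather than looked up; a small sketch:

```python
def af_dscp(n: int, m: int) -> int:
    """DSCP for AFnm: class n in 1..4, drop precedence m in 1..3.
    The bit pattern is nnn mm 0 (cf. Table 1)."""
    assert 1 <= n <= 4 and 1 <= m <= 3
    return (n << 3) | (m << 1)

assert af_dscp(1, 1) == 0b001010   # AF11
assert af_dscp(1, 3) == 0b001110   # AF13
assert af_dscp(4, 3) == 0b100110   # AF43
```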

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data: speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate. In other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate in diffserv nodes. The codepoint 101110 has been recommended for the EF PHB.
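The stability condition described above (arrival rate bounded below the configured departure rate at every node) can be illustrated with a fluid approximation of the queue; this is a sketch for intuition, not part of the EF specification:

```python
def worst_case_backlog(arrival_bps: float, departure_bps: float,
                       interval_s: float, burst_bits: float = 0.0) -> float:
    """Fluid approximation of the EF queue at one node: if the bounded
    arrival rate stays below the configured departure rate, the backlog
    (and hence queueing delay, loss and jitter) cannot grow."""
    return max(0.0, burst_bits + (arrival_bps - departure_bps) * interval_s)

# Arrival kept below departure: no backlog builds up over any interval.
assert worst_case_backlog(900_000, 1_000_000, interval_s=10) == 0
# Arrival above departure: the queue grows linearly with time.
assert worst_case_backlog(1_200_000, 1_000_000, interval_s=1) == 200_000
```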

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

There are some things common to both examples. First of all, policers need to be configured at traffic ingress points. Secondly, the used PHBs need to be implemented in core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus provides reduced latency. Quantitative performance measures cannot be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.

Table 2 TCA specification for BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.
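Table 2's policer could be sketched as follows; the packet representation, burst size and token-bucket mechanism are illustrative assumptions, and note that this TCA re-marks excess traffic rather than dropping it:

```python
def police_bbe(packets, rate_bps=1_000_000, burst_bytes=1500,
               af13=0b001110, af11=0b001010):
    """Sketch of the Table 2 policer: packets within the 1 Mbps profile
    keep the AF13 codepoint; excess packets are re-marked to AF11.
    Packets are hypothetical dicts with 'size' (bytes) and 'time' (s)."""
    tokens, last = float(burst_bytes), 0.0
    rate = rate_bps / 8.0  # bytes per second
    for pkt in packets:
        tokens = min(burst_bytes, tokens + (pkt["time"] - last) * rate)
        last = pkt["time"]
        if tokens >= pkt["size"]:
            tokens -= pkt["size"]
            pkt["dscp"] = af13     # in profile
        else:
            pkt["dscp"] = af11     # excess: re-marked, not dropped

pkts = [{"size": 1500, "time": 0.0}, {"size": 1500, "time": 0.001}]
police_bbe(pkts)
assert pkts[0]["dscp"] == 0b001110 and pkts[1]["dscp"] == 0b001010
```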

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate. Above this rate traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3 TCA specifications for Leased Line service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
EF mark      500 Kbps       Egress point B         Discard
EF mark      250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. Customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both the egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies, focused on RED-based differentiated services approaches.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host; and source-transparent marking, in which the PME is potentially external to the user. Their results showed that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.

CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets, in order to take TCP dynamics into account and to adapt fairness for sharing the bandwidth of a bottleneck link of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt to the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to an inefficient utilization of the reserved bandwidth: 'idle' flows waste their shares while preventing 'active' flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information about individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the 'Two Markers System' in a diffserv network.

Fig 9 The ldquoTwo Markers Systemrdquo framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. TMS_I selects the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked whenever congestion occurs in the network, and so TMS_I decreases TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) keeps up the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also adequately marks packets to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner (as shown in Fig 10), monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed upon with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks aggregated flows for the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where an increase or decrease of the marking rate represents the level of demand for resources. Therefore the key point of the fairness strategy is the marking rate (i.e. the information on a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) aggregated flows for the total target rate of all the flows, based on the probability of marking from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.
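The idea that TMS_II weights each flow's share of resources by its marking rate can be sketched as follows; the exact weighting formula here is a hypothetical stand-in for the thesis' Xmi development in Section 3.2, shown only to make the mechanism concrete:

```python
def tms2_share_of_excess(marking_rates, excess):
    """Illustrative fairness weighting: each flow's share of the excess
    bandwidth E is scaled by its marking rate relative to the average
    marking rate across the aggregate (a flow marking more aggressively
    signals a higher demand for bandwidth)."""
    avg = sum(marking_rates) / len(marking_rates)
    return [excess * (r / avg) / len(marking_rates) for r in marking_rates]

# A flow marking at 3x the rate of another gets 3x the excess share,
# and the shares still sum to the whole excess bandwidth.
shares = tms2_share_of_excess([1.0, 3.0], excess=4.0)
assert sum(shares) == 4.0 and shares[1] == 3 * shares[0]
```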

Fig 10 The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of flow i, i = 1, 2, ..., n, and Cj represent the capacity of bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is its reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of the network is the reservation rate (Ri) plus a variable value, the fairness component (Xmi), drawn from the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of the service scope above is C = Σ_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at the bottleneck links is then given by

E = C − R = Σ_{j=1}^{m} C_j − Σ_{i=1}^{n} R_i        (1)

In the edge-to-edge SLA, namely the third case of the service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C − R = C − Σ_{i=1}^{n} R_i        (2)

The last case of the service scope, tied to the receiver, however requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = Σ_{k=1}^{l} Σ_{i=1}^{n} R_{ik}, where k, k = 1, 2, ..., l, represents the k-th ingress point. The excess bandwidth is then given by

E = C − R = C − Σ_{k=1}^{l} Σ_{i=1}^{n} R_{ik}        (3)


Here the l-th ingress point has to be aware of the reservation rates of as many as l−1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore, although the Two Markers System can be implemented under various SLA scopes, we assume for the reason above that the scope of a service in this paper includes the first three cases but not the last. A flow's target rate at the edge of the network is given by

T_i = R_i + X_mi × E = R_i + X_mi (C − Σ_{i} R_i)        (4)

where X_mi is proportional to the probability of marking from a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of X_mi is as follows. First, the probability of marking for each flow i is the number of marked packets (N_mi) divided by the throughput Thr_i, or equivalently the In-profile window divided by the congestion window:

m_i = N_mi / Thr_i = iwnd / cwnd        (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] of all the flows at the edge of the network is

E[m_i] = (1/n) Σ_{i=1}^{n} m_i = (1/n) Σ_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1}^{n} Mark_p,i        (6)

where Mark_p also represents the probability of marking in the TMS_I algorithm of Section 3.3. Finally, X_mi is developed by using the ratio between (5) and (6):

X_mi = m_i / (n E[m_i]) = m_i / Σ_{i=1}^{n} m_i        (7)

That is, X_mi represents a relative value against the datum point E[m_i]; the fraction X_mi varies from 0 to 1 depending upon m_i and E[m_i]. Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, called the Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) consists of two windows in this algorithm [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that is outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources. That is to say, the TCP-source marking module keeps the marking rate up to the bandwidth the source wants to have guaranteed.
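As a concrete illustration of equations (1) and (4)-(7), the following sketch computes the fairness components and edge target rates for a handful of flows; the reservation rates, link capacity, and per-flow marking rates are hypothetical values chosen only for the example.

```python
def target_rates(reservations, capacity, marking_rates):
    """Edge target rates T_i = R_i + X_mi * E (eqs. 1, 4, 7); a sketch."""
    excess = capacity - sum(reservations)          # E = C - sum(R_i)      (eq. 1)
    total_marking = sum(marking_rates)             # n * E[m_i]            (eq. 6)
    # X_mi = m_i / (n * E[m_i]) = m_i / sum(m_j)   (eq. 7), so sum(X_mi) = 1
    shares = [m / total_marking for m in marking_rates]
    return [r + x * excess for r, x in zip(reservations, shares)]

# Three flows reserving 3 Mbps each on a 12 Mbps bottleneck (E = 3 Mbps);
# flow 1 marks twice as aggressively as the others and so claims a larger share.
print(target_rates([3.0, 3.0, 3.0], 12.0, [0.5, 0.25, 0.25]))
# [4.5, 3.75, 3.75]  -- the target rates sum to the full capacity
```

Since the shares X_mi sum to one, the edge target rates always exhaust exactly the capacity C in the over-provisioned case.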


Fig. 11. Program code 1: TMS_I algorithm (cwnd open)

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri)
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += 1 - Markp
        else
            ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)
                iwnd -= 1 - Markp
            else
                iwnd -= Markp
        else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking, Mark_p(init)? It is determined by the ratio between the measured throughput (Thr) and the target throughput (T_i = R_i), as follows:

Mark_p(init) = 1 − Thr / R_i        (8)

where it represents the initial marking rate that the user wants in order to realise its target rate.

The window size increases or decreases according to the measured throughput relative to the reservation rate. That is, TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput doesn't reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed. Otherwise, it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e., iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of Program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers the second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I decreases the iwnd size according to the threshold in order to allocate the excess bandwidth to the other flows and keep IN-marked packets from being immoderately dropped at the core of the network. In fact, the TCP-source marking module (TMS_I) keeps the marking rate more appropriate than an existing edge marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking it for an acute congestion


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source doesn't achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.
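The per-ACK window-opening rule of Program code 1 can be sketched in a few lines; this is an illustrative re-expression, not the thesis's ns-2 implementation, and the parameter values in the example are invented.

```python
def open_windows(cwnd, markp, thr, r_i, issthresh, ossthresh):
    """One ACK-driven update of (iwnd, ownd) per Program code 1 (sketch)."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if thr < r_i:                      # below reservation: grow IN marking
        if iwnd < issthresh:
            iwnd += markp              # fast growth of the In-profile window
        else:
            iwnd += 1 / (2 * cwnd)     # congestion avoidance: half share each
        if ownd < ossthresh:
            ownd += 1 - markp
        else:
            ownd += 1 / (2 * cwnd)
    else:                              # reservation met: release the IN share
        if iwnd > 0:
            if iwnd < issthresh:
                iwnd -= 1 - markp
            else:
                iwnd -= markp
        else:
            ownd = cwnd
        if ownd < ossthresh:
            ownd += 1
        else:
            ownd += 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd           # new cwnd and new marking probability

# A flow below its reservation: both windows grow, Markp stays near its old value.
cwnd, markp = open_windows(cwnd=10.0, markp=0.6, thr=2.0, r_i=3.0,
                           issthresh=8.0, ossthresh=8.0)
print(round(cwnd, 2), round(markp, 2))  # 11.0 0.6
```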

Fig. 12. Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd / 2
        ownd = ossthresh
    else
        cwnd = cwnd / 2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the i-th flow
Ti: target rate of the i-th flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of IN-marked packets indicates acute congestion in the network, and then the congestion window is halved. Therefore the modified TCP congestion algorithm compensates for the loss rates that are caused by the conventional congestion algorithm, which has no such distinctive mechanism for reducing the congestion window.
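The halving rule of Program code 2 admits the same kind of sketch (again illustrative, with made-up numbers): only the Out-of-profile window is halved on an OUT-marked loss, whereas an IN-marked loss halves the whole congestion window.

```python
def close_windows(cwnd, markp, in_marked_loss):
    """Window reduction on the third duplicate ACK (Program code 2, sketch)."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if not in_marked_loss:             # OUT loss: temporary congestion
        ossthresh = ownd / 2
        ownd = ossthresh               # only the OUT window is halved
        cwnd = iwnd + ownd
    else:                              # IN loss: acute congestion
        cwnd = cwnd / 2
        issthresh = markp * cwnd
        ossthresh = cwnd - issthresh
    return cwnd

# OUT-marked loss halves only ownd: 10 -> 6 + 4/2 = 8; IN loss halves all of cwnd.
print(close_windows(10.0, 0.6, in_marked_loss=False))  # 8.0
print(close_windows(10.0, 0.6, in_marked_loss=True))   # 5.0
```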

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) that is needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall numerical expression (7). We deal with the usage of this datum point in the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that carries the user's information is able to alleviate the problem of unfairness within aggregated flows under per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to share the bottleneck bandwidth fairly. In this case it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II should measure the probability of marking and compute the expected value of the marking rates, which is used in the fairness component as stated in Section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed from the sources.

We assume that an edge router receives the information about the reservation rate that each user wants to be assured of, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of a network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall numerical expression (4). As stated in Section 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy that is elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source exceeds the threshold value, namely the average marking rate E[m_i], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N−1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N isn't equal to half of the ownd size at time N−1, congestion has not occurred in the network. The if clause in the first through seventh lines of Program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After deciding about congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (T_i), viz. the crux of the fairness strategy, or not. If the throughput doesn't reach the target rate, the marker increases the marking rate by as much as α, called the fairness factor. In this case the fairness factor is computed as α = 1 − Thr_i / T_i. Otherwise, it decreases the marking rate by as much as α to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)
    for i = 1 to n
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)
    exit
else    /* iwnd(N) == iwnd(N-1)/2 */
    for i = 1 to n
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi: marking rate of the i-th flow
Ti: target rate of the i-th flow
Ri: reservation rate of the i-th flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N−1 and the iwnd size at time N isn't equal to half of the iwnd size at time N−1, congestion has occurred in the network but is not serious. The else-if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (the marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N−1, IN-marked packets are being dropped in the core of the network and the congestion is acute. The marking mechanism for this congestion situation appears in the else clause from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to consummate its target rate owing to the repeated reduction of sending rates through the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by as much as β, called the complement factor, if the throughput doesn't realise the user's reservation rate. In this instance the complement factor is β = 1 − Thr_i / R_i. Otherwise, the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to the flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
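A single re-marking step of Program code 3 for one flow can be sketched as below; the function, its arguments, and the sample numbers are hypothetical, and α is taken as the magnitude of 1 − Thr_i/T_i so that the same factor scales the marking rate in both directions.

```python
def remark(marking, thr, t_i, r_i, ownd_halved, iwnd_halved):
    """One TMS_II re-marking step for a single flow (Program code 3, sketch)."""
    if not ownd_halved:                  # no congestion: pursue the edge target T_i
        alpha = abs(1.0 - thr / t_i)     # fairness factor
        return marking * (1 + alpha) if thr < t_i else marking * (1 - alpha)
    if not iwnd_halved:                  # soft congestion: keep the source marking
        return marking
    if thr < r_i:                        # acute congestion: fall back to R_i
        beta = 1.0 - thr / r_i           # complement factor
        return marking * (1 + beta)
    return marking

# A flow at 3 Mbps against a 4 Mbps edge target: alpha = 0.25, marking scaled up.
print(round(remark(0.4, thr=3.0, t_i=4.0, r_i=3.0,
                   ownd_halved=False, iwnd_halved=False), 3))  # 0.5
```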

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-born packets are treated by the RIO queue at the core of a network differently from what the sources anticipate. If the re-born OUT-marked packets (IN-OUT), which were IN at the sources, are dropped at the core during temporary congestion, for example, the TCP sources, mistaking it for acute congestion, will halve the iwnd size. Therefore TMS_I does not alleviate the TCP dynamics over the assured service well. On the other hand, if the re-born IN-marked packets (OUT-IN), sent as OUT by the sources, are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve ownd. This retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is marked as OUT at the edge of the network but as IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides must not change, and the strategy that TMS_II elaborates has to be deliberated accordingly. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasion of re-marking can be divided into three situations. First, if the congestion window is not halved and the throughput measured at the edge of the network doesn't complete the target rate, TMS_II re-marks packets OUT-marked at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is removed, the TCP source halves the ownd size. Next, TMS_II turns packets IN-marked at the sources into OUT if the congestion window is not halved and the flow has already accomplished its target rate, so that the drop precedences consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by one step, and so it allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN; TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window is equal to half of its previous size and the flow hasn't achieved its target rate yet, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT by one step.
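The three-level drop precedence above can be written as a small lookup on the pair (source mark, edge re-mark); the helper below is an illustrative sketch, not code from the thesis.

```python
def drop_precedence(source_mark, edge_mark):
    """Map a (source mark, edge re-mark) pair to a drop precedence (sketch)."""
    if source_mark == "IN" and edge_mark == "IN":
        return "DP0"    # green: dropped last at the RIO core
    if source_mark != edge_mark:
        return "DP1"    # yellow: IN-OUT or OUT-IN re-marked packets
    return "DP2"        # red: dropped first

print(drop_precedence("IN", "OUT"))   # DP1  (an IN-OUT packet)
print(drop_precedence("OUT", "OUT"))  # DP2
```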

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows over the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows won't change and will get the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be completed.

UDP flows in the TMS are dealt with as out-of-profile, to be put at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2), but lowers the drop precedence of TCP packets by as much as the scaling factor relative to them. Therefore TMS tries to guarantee the target rates of the TCP flows by allocating to them the excess bandwidth that UDP flows would otherwise dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present simulation results for the two marking algorithms described in the previous chapter. The simulation was done with the network simulator ns-2 (version 2.1b8a). For the sake of simulation we used a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed and the aggregate reservation rate is adjusted by changing the number of flows, with target rates of 3 Mbps or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters for the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

        min_th    max_th    Pmax
DP0     100       400       0.02
DP1     100       400       0.05
DP2     50        400       0.1

First of all, we investigate the simulation results of the Adequate-Marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 to S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s, which stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps: marking is initially performed, but only as temporary marking for sharing the excess bandwidth. Second, in the run time from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the bandwidth of 3 Mbps; marking in this interval is practically carried out so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220 s to t = 350 s: all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to the congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs the flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15 (b) mark the packets of each flow in compliance with the changing condition of the network, those in Figure 15 (a) do not.

Fig. 15. Adequate-Marking algorithm, for flow1 through flow10: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the marking rate (Mbps) and the horizontal axis the simulation time (s). The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of target rates and the sharing of resources between two flows with different traffic features, through governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM doesn't adequately regulate the marking rate and lags behind the changes in network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 doesn't fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to an increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when TMS detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in Section 3.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, i.e., an under-subscribed network. The experiment ran from t = 0 s to 500 s. Figure 17 presents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is, TSW2CM marks packets based only on the traffic profile, without any knowledge of the individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking when the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

Fig. 16. Packet marking through scaling factors, showing S1 (total throughput), S2 (total throughput), and S1 (marking rate): (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time (s).


Fig. 17. Throughput under the marking rate-based fairness strategy, for flow1 through flow10: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the achieved throughput (Mbps) and the horizontal axis the aggregate flow number.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying TCP sources in a differentiated services network.

Table 5. The average throughput of the aggregate flows as RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous chapter. For the sake of simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined for nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows send traffic at the rate of 1 Mbps. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of Section 4.1.


Fig. 18. Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on sharing the excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy by increasing the number of TCP flows with a target rate of 1 Mbps from sources 1 to 4, while UDP is constant at 1 Mbps per flow, and the capacity of the bottleneck link is changed from 8 Mbps to 3 Mbps in order to take account of network dynamics. We use the link utilisation (ρ) as follows [16]:

ρ (%) = Σ T_i / C        (9)

where T_i and C represent the target rate of a flow and the capacity of a bottleneck link in the network, respectively. The interval from ρ = 100 % to 200 %, as shown in Figure 19, represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = ExBW / T        (10)

Therefore an excess bandwidth of 0 % denotes the target rate of each flow. First, as shown in Figure 19 (a), we examine TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows, and fall under their target rates at the saturation point (80 %) of the network load, as shown in the figure.
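The two metrics of equations (9) and (10) can be computed directly; the helper names and sample figures below are illustrative, with (10) read as the achieved bandwidth above the target, expressed as a percentage of the target.

```python
def link_utilisation(target_rates, capacity):
    """rho (%) = 100 * sum(T_i) / C   (eq. 9)."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(achieved, target):
    """Excess bandwidth as a percentage of the target rate (eq. 10)."""
    return 100.0 * (achieved - target) / target

# Six 1 Mbps targets (4 TCP + 2 UDP) on an 8 Mbps bottleneck, as in topology II.
print(link_utilisation([1.0] * 6, 8.0))  # 75.0
print(excess_bw_percent(1.25, 1.0))      # 25.0
```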


-100

-75

-50

-25

0

25

50

75

100

125

150

175

200

30 50 70 90 110 130 150 170 190

Link utilization ()

Exc

ess b

andw

idth

(

) TCP (average)

UDP (average)

a) Time Sliding Window Two Color Marker

[Plot: excess bandwidth (%) versus link utilization (%), curves for TCP (average) and UDP (average)]

b) Two Markers system

Fig. 19. Sharing excess bandwidth according to the link utilization: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis signifies link utilization as a percentage.

However, since UDP sources inject traffic into the network without any penalty, they achieve their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume the following: (i) UDP flows in TMS are only dealt with OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore the flow is marked more and attains a higher rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than that in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is percentage-wise almost equally divided between TCP and UDP flows at link utilization ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
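The threshold rule just described can be sketched in a few lines. This is our own illustrative reading of the rule, with hypothetical names; the thesis does not give this code:

```python
def wants_more_bandwidth(marking_rate_i, marking_rates):
    """Edge-embedded marking rule sketched in the text: a flow whose marking
    rate m_i exceeds the average marking rate E[m_i] over all flows is taken
    to be demanding more bandwidth to reach its reservation rate, and is
    therefore marked more at the edge."""
    avg = sum(marking_rates) / len(marking_rates)   # threshold E[m_i]
    return marking_rate_i > avg

rates = [0.2, 0.4, 0.9, 0.5]              # per-flow marking rates (illustrative)
print(wants_more_bandwidth(0.9, rates))   # True: marked more at the edge
print(wants_more_bandwidth(0.2, rates))   # False
```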

Next we present the simulation results concerning the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows how far the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. The average throughput over all the flows is similar in both schemes. First we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3 Mbps, thanks to the different marking strategy applied against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probability in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because UDP flows are transmitted constantly, regardless of congestion. That is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that the bandwidth in the under-provisioned network is distributed fairly (about 0.5 Mbps per flow) compared with Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Plot: bandwidth (Mbps) versus aggregate flow (1-4), curves for TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases]

a) Time Sliding Window Two Color Marker

[Plot: bandwidth (Mbps) versus aggregate flow (1-4), curves for TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases]

b) Two Markers System

Fig. 20. Throughput according to network conditions: the vertical axis represents the achieved bandwidth (Mbps) and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under a variable SLA scope. In this context two important issues have arisen, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this paper is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the probability of marking according to the changes in the network, so it marked packets in proportion to the reservation rates the sources desire to guarantee. It also notified TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and allocated the bandwidth of the bottleneck link to flows in view of proportional fairness, called the Marking rate-based fairness strategy, when resources in the diffserv network are over-provisioned. Consequently, it mitigated the problem of unfairness of per-aggregation marking and the inefficient utilization of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP had previously almost dominated, and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they fairly experienced degradation in proportion to their target bandwidth.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows according to the variable parameters of RIO.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list, ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt, April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.


performs its linear increase when the network is not congested. In both algorithms, if the round-trip times indicate an increase in delay due to queues being built up in the network, the TCP source either decreases or fixes the size of the congestion window rather than increasing it. While these mechanisms have the potential for improving loss rates in the Internet, it is unclear how well each scheme performs when congestion is persistent. In addition, by modifying the linear-increase/multiplicative-decrease algorithm of TCP, these modifications cannot ensure that max-min fair sharing occurs between connections that are multiplexed across the link.

Fig. 2. TCP/IP Headers.

With the exception of Vegas, one of the problems with the TCP congestion control algorithm over current networks is that the sending sources reduce their transmission rates only after detecting packet loss due to queue overflow. This is a problem since a considerable amount of time may pass between when the packet is dropped at the router and when the source actually detects the loss. In the meantime, a large number of packets may be dropped as sources continue to transmit at a rate that the network cannot support. Because of this, the IETF is advocating the use of explicit congestion notification (ECN) [26] and active queue management as a means to prevent packet loss. The idea behind ECN is to decouple packet loss from congestion notification. In this proposal, ECN is implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-capable Transport bit (ECT) to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the congestion experienced bit (CE) is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the return acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver in order to signal the receiver that it has in fact reduced its sending rate.
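The router-side behaviour described above (drop if not ECN-capable, otherwise mark CE) can be sketched as follows. The bit positions are illustrative, not the exact layout of Figure 2; the helper name is ours:

```python
# ECN handling at a congested router, as described above: non-ECN-capable
# packets are dropped, ECN-capable packets get the CE bit set instead.
ECT = 0b10   # ECN-Capable Transport bit in the IP TOS/DS byte (position illustrative)
CE  = 0b01   # Congestion Experienced bit

def router_on_congestion(tos_byte):
    """Return the (possibly re-marked) TOS byte, or None if the packet is dropped."""
    if not (tos_byte & ECT):
        return None              # not ECN-capable: fall back to dropping
    return tos_byte | CE         # ECN-capable: mark instead of drop

print(router_on_congestion(0b10))  # 3 -> CE set, packet survives
print(router_on_congestion(0b00))  # None -> dropped
```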

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, in order to allow them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses it to determine whether or not to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and linearly increases to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.
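The EWMA and the piecewise-linear probability described above can be sketched in a few lines. This is a simplified sketch of the curve in Figure 3 (it omits RED's count-based probability adjustment), and the parameter values are illustrative, not recommended defaults:

```python
def red_probability(avg, min_th, max_th, max_p):
    """Marking/dropping probability of RED as a function of the average
    queue length, following the description of Figure 3 above."""
    if avg <= min_th:
        return 0.0                     # no congestion indicated
    if avg >= max_th:
        return 1.0                     # every packet dropped or marked
    # linear ramp from 0 at min_th up to max_p approaching max_th
    return max_p * (avg - min_th) / (max_th - min_th)

def update_avg(avg, queue_len, w=0.002):
    """Exponentially weighted moving average of the instantaneous queue length."""
    return (1.0 - w) * avg + w * queue_len

print(red_probability(5, 10, 30, 0.1))    # 0.0: below min_th
print(red_probability(20, 10, 30, 0.1))   # 0.05: halfway up the ramp
print(red_probability(35, 10, 30, 0.1))   # 1.0: above max_th
```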

Fig. 3. The marking/dropping behavior of RED.

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and in RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour wished for. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs), and on a per-packet basis to aggregates of traffic streams in the nodes of a DS-capable network. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies the service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. Different types of service will not be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig. 4. Mapping of DSCP-marked streams to PHBs and aggregate routing.

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig. 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The Service Level Agreements (SLAs) thus formed specify the forwarding service a customer's traffic should receive. A customer in this context refers to e.g. an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules, which are applied to the traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along the paths of IP datagrams.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. By traffic conditioner is meant an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs do not solely define services, but they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain; the latter handle traffic leaving the domain. In practice the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

In the following we discuss how a network that supports diffserv functions.

Fig. 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e. their interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service provisioning approaches.

Fig. 5. A non-DS-capable domain and two DS-capable domains connected to a backbone network.

Nodes in a DS domain, e.g. domain 1 in Fig. 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between the parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig. 5 above, starts generating traffic, there are certain implications for the functioning of the network that are different compared to a non-DS-capable network.

First, the near-by leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream down to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The presented backbone DS router always polices incoming traffic, as it cannot trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level the profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (a policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain does not match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig. 6. The Type of Service field in IPv4.

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet and in IPv6 the Traffic Class octet acts as the DS field. Even though the usage of the TOS and Traffic Class octets is different from the usage of the DS field, the differences are such that deployment of diffserv does not cause serious impacts on existing applications.

Fig. 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and the queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling considering Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
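Since bit 0 is the most significant bit of the octet, extracting the codepoint and the legacy precedence field is a matter of simple shifts; a quick sketch (function names are ours):

```python
def dscp(tos_octet):
    """Bits 0-5 of the TOS octet (the most significant six bits) form the DS codepoint."""
    return (tos_octet >> 2) & 0x3F

def precedence(tos_octet):
    """Bits 0-2: the legacy IPv4 precedence field described above."""
    return (tos_octet >> 5) & 0x07

tos = 0b10111000          # example octet carried by an IPv4 datagram
print(bin(dscp(tos)))     # 0b101110 (codepoint 46)
print(precedence(tos))    # 5
```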

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but it is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in such an order that best satisfies the forwarding (i.e. service) requirements set for the packets, while using the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig. 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, the DS field and the protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
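The two classifier types can be contrasted in a short sketch. The packet representation (a plain dict) is our own simplification; real classifiers parse raw headers:

```python
def ba_classify(packet, table):
    """Behaviour Aggregate classifier: keyed on the DS codepoint only."""
    return table.get(packet["dscp"], "best-effort")

def mf_classify(packet, rules):
    """Multi-Field classifier: the first rule whose header-field constraints
    all match wins; unmatched packets fall back to best-effort."""
    for match, aggregate in rules:
        if all(packet.get(k) == v for k, v in match.items()):
            return aggregate
    return "best-effort"

pkt = {"dscp": 46, "src": "10.0.0.5", "dst": "10.0.1.9", "dport": 80, "proto": 6}
print(ba_classify(pkt, {46: "EF-aggregate"}))                          # EF-aggregate
print(mf_classify(pkt, [({"proto": 6, "dport": 80}, "web-traffic")]))  # web-traffic
```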

Fig. 7. A traffic conditioner and its functional components.

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
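One common way (among several) to realise the meter element is a token bucket, which decides per packet whether the stream is in or out of profile; a marker or dropper then acts on that verdict. The sketch below is ours, with illustrative parameter values:

```python
class TokenBucketMeter:
    """Meter: a packet is in-profile while enough tokens (bytes) are available.
    rate is in bytes/second; burst bounds the bucket depth."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0   # start with a full bucket

    def conforms(self, size, now):
        # refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True          # in profile -> keep the original marking
        return False             # out of profile -> re-mark, delay, or drop

meter = TokenBucketMeter(rate=1000, burst=1500)
print(meter.conforms(1500, now=0.0))   # True: the burst allowance covers it
print(meter.conforms(1500, now=0.5))   # False: only ~500 bytes refilled since
```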

Traffic is conditioned on either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic into conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic cannot be assumed to conform to the TCA; rather, policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited number of conditioning rules (i.e. some, but not all, of the TC components in Fig. 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which a uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers receive the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on their similar properties, such as the bandwidth requirements or behavioral characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, since the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized PHBs, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may serve a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS-field (see Fig 6) is limited and room is left for future usage, the codepoint mappings can be made freely by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. PHB specifications themselves give behavioural characteristics instead of implementation guidelines; this leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented later in this chapter. As explained above, none of the PHBs is mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility; the Default PHB exists for that. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e., best-effort service) gets the lowest priority compared to all other PHBs; therefore any traffic that doesn't conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The mere Default PHB is not enough to provide sufficient backward compatibility. The precedence bits (see Fig 6) of the IPv4 TOS-field are widely used in existing networks' equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig 8 Usage of DS-field in Class Selector PHB
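The codepoint handling just described can be illustrated with a short sketch. This is our own illustration, not part of the thesis: it enumerates the eight Class Selector codepoints xxx000 and shows the fall-back of unmapped codepoints to the Default PHB; all helper names are ours.

```python
# Sketch (illustrative, not from the thesis): the eight Class Selector
# codepoints and codepoint-to-PHB resolution with a Default-PHB fallback.

def class_selector_codepoints():
    """Return the eight Class Selector codepoints as 6-bit strings."""
    # The three precedence bits take values 0..7; the low three bits are 000.
    return [format(p, "03b") + "000" for p in range(8)]

def resolve_phb(dscp, phb_table):
    """Map a codepoint to a PHB; unmapped codepoints fall back to Default."""
    return phb_table.get(dscp, "Default")

codepoints = class_selector_codepoints()
# A greater precedence value means a higher forwarding priority.
table = {cp: f"CS{int(cp[:3], 2)}" for cp in codepoints}

print(codepoints[0])                 # 000000, compatible with the Default PHB
print(resolve_phb("101000", table))  # CS5
print(resolve_phb("101110", table))  # not a class selector -> Default
```

Note that 000000 is itself one of the eight patterns, which is why the Class Selector requirements on it must stay compatible with the Default PHB.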

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies, especially, use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurance and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets of each class are marked with one of three drop precedence values; the customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on: (i) how much forwarding resource has been allocated to the AF class that the packet belongs to, since packets in one class are forwarded independently of the others and within the service rate (bandwidth) that has been configured for the class; and (ii) the current load of the AF class and, in case of congestion within the class, the drop precedence of the packet, since packets with a higher drop precedence value are discarded preferentially.

Table 1 The four AF classes and the related three drop precedence values

                            Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)     001010    010010    011010    100010
Medium Drop Precedence (2)  001100    010100    011100    100100
High Drop Precedence (3)    001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
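The AFnm naming maps directly to the bit patterns of Table 1: the three class bits are followed by the two precedence bits and a trailing zero. A small sketch (our own helper, not from the thesis) deriving the codepoints:

```python
# Sketch (illustrative): derive the AFnm codepoints of Table 1 from the
# class number n and the drop precedence m.

def af_codepoint(n, m):
    """Return the 6-bit codepoint for AF class n (1-4), drop precedence m (1-3)."""
    if not (1 <= n <= 4 and 1 <= m <= 3):
        raise ValueError("AF defines classes 1-4 and drop precedences 1-3")
    # 3-bit class field, 2-bit precedence field, final bit 0.
    return format(n, "03b") + format(m, "02b") + "0"

print(af_codepoint(1, 1))  # 001010 (AF11)
print(af_codepoint(4, 3))  # 100110 (AF43)
```

Every entry of Table 1 can be regenerated this way, which is a convenient check when configuring classifiers.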

One major group of services is that requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.
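The arrival-versus-departure condition can be illustrated with a toy discrete-time backlog model. This is our own illustration with made-up rates, not part of the EF specification:

```python
# Toy sketch: while the aggregate's arrival rate stays below the configured
# departure rate at a node, the queue drains and stays at zero; once the
# arrival rate exceeds it, a backlog builds up.

def max_backlog(arrivals, departure_rate):
    """Track the worst backlog (in packets) over discrete time slots."""
    backlog, worst = 0.0, 0.0
    for a in arrivals:
        backlog = max(0.0, backlog + a - departure_rate)
        worst = max(worst, backlog)
    return worst

conforming = [4] * 100            # 4 pkts/slot against a 5 pkts/slot service rate
bursty     = [8] * 50 + [4] * 50  # exceeds the service rate for 50 slots

print(max_backlog(conforming, 5))  # 0.0: no queue ever builds up
print(max_backlog(bursty, 5))      # 150.0: backlog accumulated during the burst
```

This is exactly why the boundary conditioners, rather than the interior nodes, are responsible for enforcing the rate bound.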

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

Some things are common to both examples. First of all, policers need to be configured at the traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus provides reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2 TCA specification for BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.
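The conditioning of Table 2 could be realised with a simple token-bucket marker. The sketch below is our illustration with hypothetical parameter values, not the thesis' implementation: packets within the 1 Mbps profile keep the AF13 mark, and excess packets are re-marked to AF11.

```python
# Minimal token-bucket marker sketch for the BBE TCA of Table 2
# (illustrative; bucket depth and timing are made-up values).

class BBEMarker:
    def __init__(self, rate_bps, bucket_bytes):
        self.rate = rate_bps / 8.0        # token fill rate in bytes/s
        self.depth = bucket_bytes         # bucket depth in bytes
        self.tokens = bucket_bytes        # start with a full bucket
        self.last = 0.0

    def mark(self, now, pkt_bytes):
        """Return the codepoint for a packet arriving at time `now` (seconds)."""
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:      # in profile: keep the AF13 mark
            self.tokens -= pkt_bytes
            return "AF13"
        return "AF11"                     # excess traffic: re-marked to AF11

marker = BBEMarker(rate_bps=1_000_000, bucket_bytes=1500)
print(marker.mark(0.0, 1500))    # AF13: the bucket starts full
print(marker.mark(0.001, 1500))  # AF11: only ~125 bytes refilled in 1 ms
```

Because both codepoints belong to AF class 1 and share one queue, the re-marking changes only the drop precedence, not the packet order.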

This is a quantitative service which emulates the traditional leased line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3 TCA specifications for Leased Line service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
EF mark      500 Kbps       Egress point B         Discard
EF mark      250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps; excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.
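The shared-link provisioning just described can be checked mechanically. The sketch below uses a hypothetical topology of our own making: every link is provisioned with the sum of the EF rates of the leased lines crossing it, giving 500 + 250 = 750 Kbps on the shared link.

```python
# Sketch (illustrative topology): per-link EF capacity needed when the
# leased-line paths may share links.

def required_ef_capacity(paths, rates_kbps):
    """Sum the EF rates of all paths crossing each link."""
    need = {}
    for name, links in paths.items():
        for link in links:
            need[link] = need.get(link, 0) + rates_kbps[name]
    return need

paths = {"A-B": ["A-R1", "R1-R2", "R2-B"],   # hypothetical router topology
         "A-C": ["A-R1", "R1-R3", "R3-C"]}
rates = {"A-B": 500, "A-C": 250}

need = required_ef_capacity(paths, rates)
print(need["A-R1"])  # 750: both leased lines cross this link
print(need["R2-B"])  # 500: only the A-to-B line crosses it
```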

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies focused on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed scheme decreased TCP impacts over the Assured Service. However, they didn't consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP didn't realize the minimum assured rate over the Assured Service; the overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications and that an Assured Service couldn't provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation work, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth; in this case "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System-I, TMS_I) and at the edge (Two Markers System-II, TMS_II) of a differentiated services network, and each marking module plays different roles in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. TMS_I determines the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked whenever congestion occurs in the network, and so TMS_I decreases TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) maintains the marking rate better than a source-transparent PME, because throughput against the reservation rate is measured more exactly at the source than at the edge of a network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (the marking information) from the sources, and the marker then re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore the key point of the fairness strategy is the marking rate (i.e., the information on the demand for bandwidth) of traffic flows from users. TMS_II re-marks (or marks) aggregated flows for the total target rate of all the flows, based on the probability of marking from the sources; that is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10 The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rates. Let R_i denote the reservation rate of a flow i, i = 1, 2, ..., n, and C_j represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is a reservation rate (R_i) of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (T_i) at the edge of a network is the reservation rate (R_i) plus a variable value, the fairness component (X_mi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability point of view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = Σ_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at these bottleneck links is then given by

    E = C - R = Σ_{j=1}^{m} C_j - Σ_{i=1}^{n} R_i        (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

    E = C - R = C_1 - Σ_{i=1}^{n} R_i        (2)

The last case of service scope, tied to the receiver, however requires a modification of the reservation rate term R: an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the information on reservation rates received at an ingress point becomes R = Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik, where k, k = 1, 2, ..., l, represents the kth ingress point. The excess bandwidth is given by

    E = C - R = C_1 - Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik        (3)
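A small numerical sketch of expressions (1)-(3), with made-up capacities and reservation rates of our own choosing:

```python
# Illustration of expressions (1)-(3): the excess bandwidth E is the
# bottleneck capacity minus the sum of the reservation rates.

def excess_bandwidth(capacities, reservations):
    """E = sum of bottleneck link capacities - sum of reservation rates."""
    return sum(capacities) - sum(reservations)

# Expression (1): m = 2 bottleneck links, n = 3 reservations (Mbps).
print(excess_bandwidth([10.0, 5.0], [2.0, 3.0, 4.0]))   # 6.0

# Expression (2): edge-to-edge SLA, m = 1.
print(excess_bandwidth([10.0], [2.0, 3.0, 4.0]))        # 1.0

# Expression (3): receiver-tied scope, reservations from l = 2 ingress points.
per_ingress = [[2.0, 3.0], [1.0, 0.5]]
print(excess_bandwidth([10.0], [r for rs in per_ingress for r in rs]))  # 3.5
```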


where the lth ingress point has to be aware of the reservation rates of as many as l - 1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore, although the Two Markers System can be implemented with variable SLA scopes, we assume in this paper that the scope of a service includes the first three cases but not the last, for the reason above. A flow's target rate at the edge of the network is given by

    T_i = R_i + X_mi × E = R_i + X_mi × (C - Σ_i R_i)        (4)

where X_mi is proportional to the probability of marking from a source; that is, it is determined by the fraction between the marking rate of each flow and the average marking rate. The development of X_mi is as follows. First, the probability of marking for each flow i is the number of marked packets (N_mi) divided by the throughput (Thr_i), or the In-profile window divided by the congestion window:

    m_i = N_mi / Thr_i = iwnd / cwnd        (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] of all the flows at the edge of the network is

    E[m_i] = (1/n) Σ_{i=1}^{n} m_i = (1/n) Σ_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1}^{n} Mark_p,i        (6)

where Mark_p also represents the probability of marking in the TMS_I algorithm of Section 3.3. Finally, X_mi is developed by using the fraction between (5) and (6); that is, X_mi represents a relative value against the datum point E[m_i], and varies from 0 to 1 depending upon m_i and E[m_i]:

    X_mi = m_i / (n E[m_i]) = m_i / Σ_{i=1}^{n} m_i        (7)

Consequently, this value stands for the share of the excess bandwidth that each flow requires to achieve its target rate.

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) consists of two windows in this algorithm [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker; the Out-of-profile window signifies the number of OUT-marked packets per RTT that is outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate up to the bandwidth the source wants guaranteed.
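Expressions (4)-(7) can be exercised numerically. The sketch below uses made-up window sizes and rates, and the helper names are ours, not the thesis': each flow's fairness component X_mi is its marking rate over the sum of the marking rates, and its edge target rate adds that share of the excess bandwidth E.

```python
# Illustration of expressions (4)-(7) with made-up values.

def fairness_components(iwnds, cwnds):
    """X_mi = m_i / sum(m_i), with m_i = iwnd_i / cwnd_i (expressions 5-7)."""
    m = [i / c for i, c in zip(iwnds, cwnds)]
    total = sum(m)
    return [mi / total for mi in m]

def edge_target_rates(reservations, capacity, x):
    """T_i = R_i + X_mi * (C - sum(R_i)) (expression 4)."""
    excess = capacity - sum(reservations)
    return [r + xi * excess for r, xi in zip(reservations, x)]

# Flow 1 marks 8 of 10 packets IN, flow 2 marks 2 of 10 (m = [0.8, 0.2]).
x = fairness_components(iwnds=[8, 2], cwnds=[10, 10])
t = edge_target_rates([3.0, 1.0], capacity=6.0, x=x)

print(x)  # [0.8, 0.2]
print(t)  # the edge target rates; they sum to the capacity C
```

Note that the X_mi sum to 1, so in an over-provisioned network the edge targets always exhaust exactly the excess bandwidth.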


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking, Mark_p(init)? It is generally determined by the fraction between the measured throughput (Thr) and the target throughput (T_i = R_i), as given in expression (8) below.

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri)
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        else
            ownd = cwnd
            if (ownd < ossthresh)
                ownd += 1
            else
                ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


    Mark_p(init) = 1 - R_i / Thr        (8)

where this represents the initial marking rate that the user wants in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput doesn't reach the source's target rate, TMS_I increases the iwnd size by the marking rate needed; otherwise it increases the ownd size, so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the one window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of Program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network is congested. If the throughput measured at a source, for instance, exceeds its reservation rate and iwnd exists, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate better than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking this for an acute congestion



situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source doesn't achieve its target rate. On the other hand, because TMS_I recognizes packet losses and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd / 2
        ownd = ossthresh
    else
        cwnd = cwnd / 2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

    iwnd       In-profile window
    ownd       Out-of-profile window
    cwnd       congestion window
    Markp      the probability of marking
    Thr        throughput of a flow
    Ri         reservation rate of the ith flow
    Ti         target rate of the ith flow
    issthresh  threshold of iwnd
    ossthresh  threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half its size whenever an OUT-packet loss occurs; this indicates temporary congestion in the network. The loss of IN-marked packets, on the other hand, indicates acute congestion in the network, and then the congestion window is reduced to half its size. Therefore the modified TCP congestion algorithm compensates for the loss rates that are caused by the conventional congestion algorithm, which lacks such a distinctive mechanism for reducing the congestion window.

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_P) that is needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point by the edge-embedded marking module in subsection 3.4. Consequently, the strategy of providing a datum point that includes the users' information is able to alleviate the unfairness introduced among aggregated flows by per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy, based on the marking rates of the sources, in order to share bottleneck bandwidth fairly. In this it closely cooperates with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II should measure the probability of marking and compute the expected value of the marking rates, which are used in the fairness component as stated in subsection 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed from the sources.

We assume that an edge router receives information about the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of a network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall expression (4). As stated in subsection 3.2, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking from the sources if the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents the increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, i.e. the average marking rate E[m_i], the flow wants more bandwidth than the others in order to achieve its reservation rate; it is therefore marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to detect a congestion situation; the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at measurement N with the iwnd size at measurement N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.
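The three-way test just described can be sketched as a small helper. This is our illustration, not the thesis' code: halving of ownd between measurements signals temporary congestion, and halving of iwnd signals acute congestion.

```python
# Sketch of the TMS_II congestion classification from successive window
# measurements (illustrative helper names).

def network_state(iwnd_now, iwnd_prev, ownd_now, ownd_prev):
    """Classify the network condition from two window measurements."""
    if ownd_now != ownd_prev / 2:
        return "no congestion"          # fairness re-marking may proceed
    if iwnd_now != iwnd_prev / 2:
        return "congestion, not acute"  # no re-marking required
    return "acute congestion"           # fall back to reservation rates

print(network_state(10, 10, 20, 20))  # no congestion
print(network_state(10, 10, 10, 20))  # congestion, not acute
print(network_state(5, 10, 10, 20))   # acute congestion
```

This mirrors the branch structure of Program code 3 below.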

First if ownd size in N times isnrsquot equal to the half of ownd size in N-1 times this

represents congestion in a network doesnrsquot occur The If clause in the first through seventh line of program code3 describes the case stated above and it shows the mechanism that the TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows After decision of congestion it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti) viz the crux of the fairness strategy or not If the throughput doesnrsquot reach the target rate the marker increases marking rate as much as α called a fairness factor In the case the fairness factor α is computed as follows

α = Ti / Thr_i − 1

Otherwise, it decreases the marking rate by α to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: the TMS_II algorithm

    if (ownd(N) != ownd(N-1)/2)              // (i) no congestion
        for i = 1 to n
            Ti = Ri + (Xmi * E)
            if (Thr_i < Ti)
                Mi += alpha * Mi
            else
                Mi -= alpha * Mi
    else if (iwnd(N) != iwnd(N-1)/2)         // (ii) congestion, but not acute
        exit
    else                                     // (iii) iwnd(N) == iwnd(N-1)/2: serious congestion
        for i = 1 to n
            Ti = Ri
            if (Thr_i < Ti)
                Mi += beta * Mi
            else
                exit

    Mi:        marking rate of the i-th flow
    Ti:        target rate of the i-th flow
    Ri:        reservation rate of the i-th flow
    Xmi:       marking probability of the i-th flow
    E:         excess bandwidth
    ownd(N):   newly measured OUT window
    ownd(N-1): OUT window measured at time N-1
    iwnd(N):   newly measured IN window
    iwnd(N-1): IN window measured at time N-1
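A runnable sketch of the re-marking loop of Program code 3 in Python, with the window comparisons abstracted into boolean flags and illustrative field names (this is our sketch, not the thesis implementation):

```python
def fairness_factor(target, thr):
    # alpha = T_i / Thr_i - 1: positive when the flow is under its target
    return target / thr - 1

def tms2_remark(flows, ownd_halved, iwnd_halved, excess_bw):
    """One re-marking pass. Each flow is a dict with keys R (reservation
    rate), X (marking probability), Thr (measured throughput), and
    M (marking rate); ownd_halved/iwnd_halved report the window tests."""
    if not ownd_halved:                        # (i) no congestion: share excess
        for f in flows:
            target = f["R"] + f["X"] * excess_bw
            a = fairness_factor(target, f["Thr"])
            if f["Thr"] < target:
                f["M"] += a * f["M"]           # under target: mark more
            else:
                f["M"] -= abs(a) * f["M"]      # over target: yield excess
    elif not iwnd_halved:                      # (ii) congestion, not acute: no re-marking
        pass
    else:                                      # (iii) serious congestion: fall back to R_i
        for f in flows:
            if f["Thr"] < f["R"]:
                b = f["R"] / f["Thr"] - 1      # complement factor beta
                f["M"] += b * f["M"]
    return flows
```

For instance, a flow with R = 3, X = 0.5, Thr = 2 and excess bandwidth 2 has target 4 under no congestion, so its marking rate doubles (alpha = 1).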


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion happens in the network but is not serious. The else if clause in the eighth and ninth lines represents the instance in which the edge's marking rate is equal to the marking probability of the sources (the marking information), so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute. The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to reach its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor is

β = Ri / Thr_i − 1

Otherwise, the re-marking operation is not performed.

In short, the sources inform an edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates the problems of inefficient utilization of the reserved bandwidth under per-flow marking and of unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-marked packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will reduce the iwnd size by half. Therefore TMS_I does not alleviate the TCP dynamics over assured service well. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve only the ownd, which retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides must not change, and the strategy that TMS_II elaborates has to respect this. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window does not reduce to half of its size and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN. When an OUT-IN packet is discarded, the TCP source halves the ownd size. Next, TMS_II turns packets that were IN at the sources into OUT if the congestion window does not reduce to half of its size and the flow has already accomplished its target rate; the drop precedences then consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by one step, allowing other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window is equal to half of its size and a flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, and OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. It thus tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT by one step.
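The mapping from the pair (source mark, edge mark) to RIO drop levels described above can be sketched as a small lookup (names are illustrative):

```python
def drop_precedence(source_mark, edge_mark):
    """Assign the RIO drop level from the source and edge markings.
    DP0 (green) is dropped last, DP2 (red) first (illustrative sketch)."""
    if source_mark == "IN" and edge_mark == "IN":
        return "DP0"          # green: in-profile at both source and edge
    if source_mark != edge_mark:
        return "DP1"          # yellow: re-marked IN-OUT or OUT-IN
    return "DP2"              # red: OUT at both source and edge
```

This preserves the source's decision in the sense that a re-marked packet never falls to the lowest or rises to the highest precedence.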

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows will not change and will capture the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be achieved.

UDP flows in the TMS are dealt with as out-of-profile, putting them at a disadvantage, while TCP flows adapt to network dynamics by having their marking rate adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP packets marked as OUT (red, DP2), but lowers the drop precedence of TCP packets by the scaling factor relative to them. Therefore it tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows had almost dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms we have described in the previous section. The simulation was done with the network simulator-2 (ns-2.1b8a). For the simulation we used a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: a non-oversubscribed and an oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed and the aggregate reservation rate is adjusted, by changing the number of flows with a target rate of 3 Mbps or keeping it fixed below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT, without queuing delay, of each flow is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

          min_th    max_th    Pmax
    DP0   100       400       0.02
    DP1   100       400       0.05
    DP2   50        400       0.1

First of all, we investigate the simulation results with the Adequate-marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps, and the capacity of the bottleneck link set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s), which stop at t = 350 (s). First, the interval from t = 10 (s) to t = 110 (s) represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps. Marking is performed from the start, but only temporary marking for sharing the excess bandwidth. Second, at runtime from t = 120 (s) to 210 (s), with soft congestion, seven flows try to achieve the bandwidth of 3 Mbps each; marking in this interval is practically carried out so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220 (s) to t = 350 (s); all the flows do their best to achieve their target rates but fail, and the marking rate in this period is more excessive than in the other periods. Last, in the interval from t = 350 (s) to 400 (s), TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs the flows shown in (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet of a flow in compliance with the changing condition of the network, those in Figure 15 (a) do not.

a) Time Sliding Window Two Color Marker (marking rate in Mbps of flow1 through flow10 versus simulation time in seconds)

b) Two Markers System (marking rate in Mbps of flow1 through flow10 versus simulation time in seconds)

Fig. 15. Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The network condition changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics through governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind the changes in the network load: slowly rising in response to an increased traffic load and slowly falling in response to a decreased traffic load. Therefore, whenever S2 turns on, S1 is deprived of its resources by the amount of the increment; S1 also does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet-marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results with the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, namely an under-subscribed network. The experiment was performed at runtime from t = 0 (s) to 500 (s). Figure 17 represents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without adequate marking rates for the changed network load. That is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. However, in Figure 17 (b) each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability if the total throughput is less than the capacity of the network. As


stated before, since TMS regulates the drop precedence in proportion to the marking probability of the sources, all the flows are able to adapt to network dynamics.

a) Time Sliding Window Two Color Marker (total throughput of S1 and S2 and marking rate of S1, in Mbps, versus simulation time in seconds)

b) Two Markers System (total throughput of S1 and S2 and marking rate of S1, in Mbps, versus simulation time in seconds)

Fig. 16. Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.


a) Time Sliding Window Two Color Marker (bandwidth in Mbps of flow1 through flow10, per aggregate flow)

b) Two Markers System (bandwidth in Mbps of flow1 through flow10, per aggregate flow)

Fig. 17. Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flows.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5. The average throughput of each aggregate flow as RTT increases

    Aggregate flow (No.)       1        2        3        4        5        6
    RTT (ms)                   80       100      130      170      220      280
    E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
    E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams, generated by infinite file transfers, originate at sources 1 through 4 and are destined to nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1 Mbps. We assume that the RTT, without queuing delay, of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig. 18. Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For the experiment we elaborated the strategy by increasing the number of TCP flows with a target rate of 1 Mbps from source 1 to 4, while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σi Ti / C) × 100    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 (%) to 200 (%), as shown in Figure 19, represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)
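In code, equations (9) and (10) are straightforward (a sketch with assumed names):

```python
def link_utilization(target_rates, capacity):
    """rho (%) = (sum of target rates / bottleneck capacity) * 100 -- eq. (9)."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(excess_bw, target_rate):
    """Excess BW (%) = (ExBW / T) * 100 -- eq. (10); 0 means exactly on target."""
    return 100.0 * excess_bw / target_rate
```

For example, the six 1 Mbps flows over the 8 Mbps bottleneck give ρ = 75 (%), while the 3 Mbps bottleneck gives ρ = 200 (%), the end of the under-provisioned interval.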

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.


a) Time Sliding Window Two Color Marker (excess bandwidth in % of TCP and UDP averages, versus link utilization in %)

b) Two Markers System (excess bandwidth in % of TCP and UDP averages, versus link utilization in %)

Fig. 19. Sharing of excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation as a percentage.

However, since UDP sources push traffic into the network without any penalty, they achieve their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume the following: (i) UDP flows in TMS are only dealt with by OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore the flow is marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results considering the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps, respectively. The setting of the RTT parameters follows the strategy of section 4.1, and the average throughputs of all the flows in both schemes are again similar to each other. First, we take into consideration the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3 Mbps, thanks to the different marking strategy applied against the UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probability in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, irrespective of congestion. That is, UDP gains unfairly at the expense of the TCP


flows. However, Figure 20 (b) shows that in the under-provisioned network the bandwidth is distributed fairly among the flows (about 0.5 Mbps each), unlike in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, the UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

a) Time Sliding Window Two Color Marker (bandwidth in Mbps of TCP1-TCP4 and UDP1-UDP2, in the over-provisioned and under-provisioned cases)

b) Two Markers System (bandwidth in Mbps of TCP1-TCP4 and UDP1-UDP2, in the over-provisioned and under-provisioned cases)

Fig. 20. Throughput according to network conditions: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this area, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and allocate the bandwidth of a bottleneck link fairly to each flow with respect to the target rates. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the marking probability according to the changes in the network, so it marked packets according to the reservation rates the users desire to guarantee. It also notified TMS_II, the edge-embedded marking module, of the marking rates to be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and allocated the bandwidth of the bottleneck link to flows in view of proportional fairness, the marking rate-based fairness strategy, when resources in the diffserv network are over-provisioned. Consequently, it mitigated the problems of unfairness under per-aggregation marking and of inefficient utilisation under per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP had almost dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows according to the variable parameters of RIO.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117.

[2] W. Feng, D. Kandlur, S. Saha, and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386.

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June (1999).

[5] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. (1998).

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. (1998).

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376.

[9] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999).

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May (1998).

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998).

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449.

[14] N. Seddigh, B. Nandy, and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999).

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May (2001) 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999).

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. (1998).

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö (1999).

[20] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998).

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990).

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. (1997).

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. (1996).

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999).

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proceedings of ACM SIGCOMM, Oct. (1994) 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. (1994) 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. (1993) 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48.



implemented using two bits of the type-of-service/DS field of the IP header and two bits of the currently reserved flags field of the TCP header, as shown in Figure 2. When a network router experiences congestion, it can explicitly signal the sources instead of dropping their packets. In order to do so, the router first examines the ECN-capable Transport (ECT) bit to see if the flow is ECN-capable. If it is not ECN-capable, the packet is simply dropped. If the flow is ECN-capable, the Congestion Experienced (CE) bit is set and used as a signal to the TCP receiver that congestion has occurred. The TCP receiver, upon receiving this signal, modifies the TCP header of the returned acknowledgment using a currently unused bit in the TCP flags field. As Figure 2 shows, a bit labeled "ECN-echo" is used by the TCP receiver to indicate the presence of congestion to the sender. Upon receipt of a TCP segment with the ECN-echo bit set, the TCP sender invokes congestion control mechanisms as if it had detected a packet loss. In addition, it sets the "Congestion Window Reduced" (CWR) bit of its next packet to the receiver in order to signal the receiver that it has in fact reduced its sending rate.
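The exchange just described can be sketched end to end (the dictionary keys below are illustrative stand-ins for the header bits, not a real packet format):

```python
def router_forward(pkt, congested):
    """A congested router marks CE on ECN-capable packets and drops others."""
    if congested:
        if not pkt["ECT"]:
            return None          # not ECN-capable: fall back to dropping
        pkt["CE"] = True         # signal congestion instead of dropping
    return pkt

def receiver_ack(pkt):
    """The receiver echoes CE back to the sender via ECN-echo in the ACK."""
    return {"ECN_echo": bool(pkt["CE"])}

def sender_react(ack, cwnd):
    """On ECN-echo the sender halves cwnd and sets CWR on its next packet."""
    if ack["ECN_echo"]:
        return max(cwnd // 2, 1), True
    return cwnd, False
```

The sender's reaction is thus identical to its reaction to a detected loss, but no packet is actually discarded.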

In conjunction with ECN, the IETF is also advocating the use of active queue management. The idea behind active queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, allowing them to reduce their transmission rates before queue overflow and packet loss occur. One form of active queue management being proposed by the IETF for deployment in the network is RED (Random Early Detection) [27]. RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. RED detects increases in the average queue length and uses them to determine whether to drop or ECN-mark a packet. More specifically, Figure 3 plots the marking/dropping probability of RED as a function of the average queue length. As the figure shows, when the average queue length exceeds a minimum threshold (minth), packets are randomly dropped or marked with a given probability. A connection receiving congestion notification in the form of an ECN mark cuts its congestion window in half, as it would if it had detected a packet loss. The probability that a packet arriving at the RED queue is either dropped or marked depends on, among other things, the average queue length and an initial probability parameter (maxp). As Figure 3 shows, the calculated marking/dropping probability is a linear function of the average queue length. The probability is 0 when the average queue length is less than or equal to minth, and it increases linearly to maxp as the average queue length approaches a maximum threshold (maxth). When the average queue length exceeds maxth, all packets are dropped or marked.
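
The EWMA and the piecewise-linear marking/dropping probability just described can be sketched directly. The parameter values below (weight, minth, maxth, maxp) are illustrative defaults, not values taken from the thesis.

```python
# A minimal sketch of RED's queue-length averaging and its marking/dropping
# probability as a function of the average queue length.

def red_update_avg(avg, qlen, weight=0.002):
    """Exponentially weighted moving average of the instantaneous queue length."""
    return (1 - weight) * avg + weight * qlen

def red_mark_prob(avg, minth=5, maxth=15, maxp=0.1):
    """0 below minth, linear up to maxp near maxth, 1 (drop/mark all) above maxth."""
    if avg <= minth:
        return 0.0
    if avg >= maxth:
        return 1.0
    return maxp * (avg - minth) / (maxth - minth)
```

For example, with the defaults above, an average queue length halfway between minth and maxth yields a probability of maxp/2.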

Fig 3 The marking/dropping behavior of RED

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss are proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour desired. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs) and on a per-packet basis, to aggregates of traffic streams in a DS-capable network's nodes. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or marked with a DSCP by a traffic conditioner according to the service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be offered, for example a leased-line emulation. Different types of service will not be standardized within diffserv, but rather the mechanisms behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig 4 Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The formed Service Level Agreements (SLAs) specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules, any corresponding traffic profiles, and the metering, marking, discarding and/or shaping rules which are applied to traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. A traffic conditioner is an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field is the TOS octet in the IPv4 header and the Traffic Class octet in the IPv6 header. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs do not solely define services; rather, they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain; the latter handle traffic leaving the domain. In practice the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e. the interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services but rather the best-effort service and possibly other service provisioning approaches.

Fig 5 A non-DS-capable domain and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the non-DS-capable case.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The backbone DS router shown always polices incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2 and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. Speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (a policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide which services are available in their networks; this in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv doesn't seriously impact existing applications.

Fig 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner to the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and the queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values (or PHBs) for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling with respect to Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
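
Since bits 0-5 are the most significant bits of the octet in the header's bit-numbering convention, extracting the DSCP amounts to shifting out the two low-order (currently unused) bits. A small sketch:

```python
# Bits 0-5 of the TOS/Traffic Class octet form the DSCP; the two low-order
# bits are currently unused by diffserv.

def dscp_from_tos(tos):
    """Extract the 6-bit DS codepoint from the full octet."""
    return tos >> 2

def tos_from_dscp(dscp):
    """Place a 6-bit DS codepoint into an otherwise-zero octet."""
    return dscp << 2

# Example: the recommended EF codepoint 101110 inside the octet 10111000
assert dscp_from_tos(0b10111000) == 0b101110
```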

What traffic conditioning basically means is treating traffic differentially so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but this is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e. service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other so that the amount of resources allotted to an aggregate is not exceeded.

Fig 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
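
A common way to implement the meter that feeds a marker or dropper is a token bucket. The sketch below is illustrative only; the class name, parameters and the in/out-of-profile labels are assumptions for the example, not a standard API.

```python
# Illustrative token-bucket meter: packets covered by accumulated tokens are
# in-profile; excess packets are reported out-of-profile so a downstream
# marker can re-mark them (or a dropper can discard them).

class TokenBucketMeter:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # profile rate in bytes per second
        self.burst = burst_bytes        # maximum token accumulation
        self.tokens = burst_bytes       # start with a full bucket
        self.last = 0.0                 # time of the previous measurement

    def measure(self, now, pkt_bytes):
        # refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return "in-profile"
        return "out-of-profile"
```

A marker using this meter would, for instance, leave in-profile packets with their negotiated codepoint and re-mark out-of-profile packets with a higher drop precedence.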

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic to conformance with the appropriate TCA, or whether that responsibility lies with the node at the other end of the boundary link. In any case, ingress traffic can't be assumed to conform to the TCA; policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited number of conditioning rules (i.e. some, but not all, of the TC components in Fig 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides which services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioral characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to the standardized ones, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may serve a local purpose, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig 6) is limited and room is left for future usage, codepoint mappings can be made freely by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. The PHB specifications themselves give behavioral characteristics instead of implementation guidelines, which leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide a wider basis for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented later in this chapter. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when implemented, the specifications must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility; the Default PHB exists for that purpose. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that doesn't conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHBs, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig 8 Usage of DS-field in Class Selector PHB
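
The eight Class Selector codepoints of the form xxx000 can be enumerated directly from the three precedence bits; a quick sketch:

```python
# The Class Selector codepoints place the old IPv4 precedence bits in the
# top three DSCP bits and zero the rest: 000000, 001000, ..., 111000.

class_selectors = [precedence << 3 for precedence in range(8)]

assert format(class_selectors[0], "06b") == "000000"  # Default PHB
assert format(class_selectors[7], "06b") == "111000"  # highest precedence
```

A larger codepoint value corresponds to a higher relative priority, matching the legacy precedence semantics.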

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on (1) how much forwarding resource has been allocated to the AF class that the packet belongs to, (2) the current load of the AF class, and (3) in case of congestion within the class, the drop precedence of the packet. Packets in one class are forwarded independently of the other classes, within the service rate (bandwidth) that has been configured for the class. Packets with a higher drop precedence value are discarded preferentially.

Table 1 The four AF classes and the related three drop precedence values

                              Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)       001010    010010    011010    100010
Medium Drop Precedence (2)    001100    010100    011100    100100
High Drop Precedence (3)      001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (currently 1-4) and 'm' the drop precedence value (currently 1-3). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
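
The codepoints in Table 1 follow a regular bit pattern: three class bits, two drop-precedence bits, and a trailing zero. A short sketch reproducing the table:

```python
# AF codepoints have the form cccdd0: ccc = class number (1-4),
# dd = drop precedence (1-3), low-order bit zero.

def af_codepoint(n, m):
    """DSCP for AF class n (1-4) with drop precedence m (1-3)."""
    return (n << 3) | (m << 1)

assert format(af_codepoint(1, 1), "06b") == "001010"  # AF11, low drop prec.
assert format(af_codepoint(4, 3), "06b") == "100110"  # AF43, high drop prec.
```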

One major group of services comprises those requiring assured bandwidth with low loss, low latency and low jitter on an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the task of the network boundary traffic conditioners to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate in diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

The two examples have some things in common. First of all, policers need to be configured at traffic ingress points. Secondly, the used PHBs need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus experiences reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.

Table 2 TCA specification for the BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritizing of AF13 over AF11 is done, is up to the provider.

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3 TCA specifications for the Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C, respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both the egress points B and C. The policers are of MF type, as classification of packets is based on both the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies, focused on RED-based differentiated services approaches.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. Their results showed that the proposed scheme decreased TCP impacts on the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and maintain optimal spacing between IN and OUT packets. Via simulation work, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to account for TCP dynamics and provide fairness in sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have drawbacks. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth; in this case, "idle" flows waste their shares while preventing "active" flows from increasing theirs. Second, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of Two Markers System

What is the Two Markers System (TMS)? The system has two marking modules, which are located in the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9. The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the component of the fairness strategy, viz. the marking rate, to TMS_II. TMS_I is integrated with a TCP source that has two windows, distinguished according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I selects the window (iwnd or ownd) and reduces its size according to whether the dropped packets were IN-marked or OUT-marked, and so TMS_I reduces the impact of TCP dynamics on the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) maintains the marking rate better than a source-transparent PME, because the throughput against the reservation rate can be measured more exactly at the source than at the edge of the network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours its packets into the edge of the network. The information carried by the marked packets is used to allocate the bandwidth (or excess bandwidth) to the aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of the network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of the network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (the marking information) from the sources, and the marker then re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where an increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key element of the fairness strategy is the marking rate (i.e., the information on a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows toward the total target rate of all the flows based on the marking probabilities from the sources. That is, TMS_II adjusts the probability of marking so as to allocate to each flow as many resources as it requires.

Fig. 10. The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let R_i denote the reservation rate of a flow i, i = 1, 2, ..., n, and let C_j represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is the reservation rate (R_i) of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (T_i) at the edge of the network is the reservation rate (R_i) plus a variable value, namely the fairness component (X_mi) of the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability point of view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point;
(ii) all traffic from ingress point S to a set of egress points;
(iii) all traffic between ingress point S and egress point D;
(iv) all traffic between any ingress point and egress point D.

The capacity of the network in the first three cases of service scope above is C = \sum_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at this bottleneck link is then given by

E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i    (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C - R = C - \sum_{i=1}^{n} R_i    (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates in order to determine the excess bandwidth of the network. Thus, the reservation-rate information received at an ingress point becomes R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}, where k, k = 1, 2, ..., l, denotes the k-th ingress point. The excess bandwidth is then given by

E = C - R = \sum_{j=1}^{m} C_j - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}    (3)
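For concreteness, the three forms of the excess bandwidth can be computed in a few lines. This is an illustrative sketch of ours, not part of the thesis; the function names are hypothetical.

```python
def excess_bandwidth_multilink(link_capacities, reservations):
    """Scopes (i)-(ii): E = sum_j C_j - sum_i R_i, as in Eq. (1)."""
    return sum(link_capacities) - sum(reservations)

def excess_bandwidth_edge_to_edge(capacity, reservations):
    """Scope (iii), m = 1: E = C - sum_i R_i, as in Eq. (2)."""
    return capacity - sum(reservations)

def excess_bandwidth_receiver_scope(link_capacities, reservations_per_ingress):
    """Scope (iv): E = sum_j C_j - sum_k sum_i R_ik, as in Eq. (3);
    reservations_per_ingress holds one list of rates per ingress point."""
    total_reserved = sum(sum(rates) for rates in reservations_per_ingress)
    return sum(link_capacities) - total_reserved

# Example mirroring the simulation of Section 4.1: a 20 Mbps bottleneck
# shared by five flows reserving 3 Mbps each leaves 5 Mbps of excess.
print(excess_bandwidth_edge_to_edge(20.0, [3.0] * 5))  # 5.0
```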


where the l-th ingress point has to be aware of the reservation rates of as many as l-1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System can therefore be implemented under various SLA scopes, but for the reason above we assume in this thesis that the scope of a service covers the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

T_i = R_i + X_{mi} \times E = R_i + X_{mi} (C - \sum_i R_i)    (4)

where X_{mi} is proportional to the probability of marking at the source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of X_{mi} is as follows. First, the marking probability of each flow i is the number of marked packets (N_{mi}) divided by the throughput (Thr_i), or equivalently the In-profile window divided by the congestion window:

m_i = N_{mi} / Thr_i = iwnd / cwnd    (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] over all flows at the edge of the network is

E[m_i] = (1/n) \sum_{i=1}^{n} m_i = (1/n) \sum_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) \sum_{i=1}^{n} Mark_{p,i}    (6)

where Mark_p also denotes the probability of marking in the TMS_I algorithm of Section 3.3. Finally, X_{mi} is obtained from the ratio between (5) and (6), as given in (7); that is, X_{mi} represents a relative value against the datum point E[m_i], and varies from 0 to 1 depending on m_i and E[m_i].

Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
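To make the derivation concrete, the following sketch (hypothetical code of ours, not from the thesis) computes the fairness components X_mi of Eq. (7) and the edge target rates T_i of Eq. (4) from the sources' marking rates:

```python
def fairness_components(marking_rates):
    """X_mi = m_i / (n * E[m_i]) = m_i / sum_i m_i, as in Eq. (7)."""
    total = sum(marking_rates)
    return [m / total for m in marking_rates]

def edge_target_rates(reservations, marking_rates, capacity):
    """T_i = R_i + X_mi * E with E = C - sum_i R_i, as in Eq. (4)."""
    excess = capacity - sum(reservations)
    return [r + x * excess
            for r, x in zip(reservations, fairness_components(marking_rates))]

# Two flows reserving 3 Mbps each on a 10 Mbps link: the flow marking
# twice as aggressively receives twice the share of the 4 Mbps excess.
targets = edge_target_rates([3.0, 3.0], [0.5, 0.25], 10.0)
```

Note that the targets always sum to the capacity C, since the X_mi sum to 1.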

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, which is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies differ from the one studied here. We improve the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate at the level of the bandwidth that the source wants to have guaranteed.

X_{mi} = m_i / (n E[m_i]) = m_i / \sum_{i=1}^{n} m_i    (7)


Fig. 11. Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking (Mark_p(init))? It is determined by the ratio between the measured throughput (Thr) and the target throughput (T_i = R_i), as follows:

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp;
        else
            iwnd += 1/(2*cwnd);
        if (ownd < ossthresh)
            ownd += (1 - Markp);
        else
            ownd += 1/(2*cwnd);
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= (1 - Markp);
            else
                iwnd -= Markp;
        } else {
            ownd = cwnd;
        }
        if (ownd < ossthresh)
            ownd += 1;
        else
            ownd += 1/cwnd;
    }
    cwnd = iwnd + ownd;
    Markp = iwnd / cwnd;


Mark_p(init) = 1 - Thr / R_i    (8)

where Mark_p(init) represents the initial marking rate that the user wants in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I marks packets proportionally, according to whether the throughput (Thr) measured at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by the marking rate needed; otherwise it increases the ownd size so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore, an increment of iwnd leads to a corresponding decrement of ownd. When either window grows above its threshold size, issthresh or ossthresh, it increases linearly: congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among the other sources' TCP traffic and best-effort traffic, and also addresses the second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of the network when the network is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I decreases the iwnd size appropriately, according to the threshold, in order to allocate the excess bandwidth to the other flows and keep IN-marked packets from being immoderately dropped at the core of the network. In fact, the TCP-source marking module (TMS_I) maintains the marking rate better than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently reduces its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size is kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet losses and determines whether the lost packet was sent as a marked or an unmarked packet, it can regulate the marking rate properly according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window is not decreased as often as with the established edge marker.
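The per-ACK update of Program code 1 can be restated as a runnable sketch. This is an illustrative Python transcription under our reading of the pseudocode ("1/2cwnd" taken as 1/(2*cwnd)); the state dictionary and its key names are our own, not the thesis's.

```python
def tms1_on_ack(s):
    """One ACK-time update of the two-window TMS_I marker (Program code 1).
    s holds cwnd, mark_p, thr (measured throughput), r_i (reservation),
    issthresh, and ossthresh."""
    iwnd = s["mark_p"] * s["cwnd"]
    ownd = s["cwnd"] - iwnd
    if s["thr"] < s["r_i"]:                 # below reservation: grow iwnd
        iwnd += s["mark_p"] if iwnd < s["issthresh"] else 1 / (2 * s["cwnd"])
        ownd += (1 - s["mark_p"]) if ownd < s["ossthresh"] else 1 / (2 * s["cwnd"])
    else:                                   # reservation met: release IN share
        if iwnd > 0:
            iwnd -= (1 - s["mark_p"]) if iwnd < s["issthresh"] else s["mark_p"]
        else:
            ownd = s["cwnd"]
        ownd += 1 if ownd < s["ossthresh"] else 1 / s["cwnd"]
    s["cwnd"] = iwnd + ownd
    s["mark_p"] = iwnd / s["cwnd"]          # marking probability reported to TMS_II
    return s

s = {"cwnd": 10.0, "mark_p": 0.5, "thr": 2.0, "r_i": 3.0,
     "issthresh": 8.0, "ossthresh": 8.0}
tms1_on_ack(s)   # below target: both windows grow, cwnd becomes 11.0
```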

Fig. 12. Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd/2;
        ownd = ossthresh;
    } else {
        cwnd = cwnd/2;
        issthresh = Markp * cwnd;
        ossthresh = cwnd - issthresh;
    }

    iwnd: In-profile window
    ownd: Out-of-profile window
    cwnd: congestion window
    Markp: the probability of marking
    Thr: throughput of a flow
    Ri: reservation rate of the i-th flow
    Ti: target rate of the i-th flow
    issthresh: threshold of iwnd
    ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion in the network, and then the congestion window is halved. Therefore, the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which has no distinct mechanism for reducing the congestion window according to the type of lost packet.
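The loss reaction of Program code 2 can likewise be sketched. This is a hypothetical Python transcription of ours; we read "ewnd2" as ownd/2 and make the implicit cwnd update in the OUT-loss branch explicit.

```python
def tms1_on_triple_dupack(s, in_marked_loss):
    """Loss reaction of TMS_I (Program code 2): an OUT-marked loss halves
    only ownd (temporary congestion); an IN-marked loss signals acute
    congestion and halves the whole congestion window."""
    iwnd = s["mark_p"] * s["cwnd"]
    ownd = s["cwnd"] - iwnd
    if not in_marked_loss:                  # temporary congestion
        s["ossthresh"] = ownd / 2
        ownd = s["ossthresh"]
        s["cwnd"] = iwnd + ownd
    else:                                   # acute congestion
        s["cwnd"] = s["cwnd"] / 2
        s["issthresh"] = s["mark_p"] * s["cwnd"]
        s["ossthresh"] = s["cwnd"] - s["issthresh"]
    return s

s = {"cwnd": 10.0, "mark_p": 0.6, "issthresh": 8.0, "ossthresh": 8.0}
tms1_on_triple_dupack(s, in_marked_loss=False)  # only ownd halves: cwnd = 8.0
```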

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) that is needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of this datum point in the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that carries per-user information is able to alleviate the unfairness introduced within aggregated flows by per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to share the bottleneck bandwidth fairly. To this end it closely cooperates with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which are used in the fairness component as stated in Section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed, from the sources.

We assume that an edge router receives the information on the reservation rate that each user wants to be assured of, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall expression (4). As stated in Section 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the condition of the network is over-provisioned; otherwise, it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, namely the average marking rate E[m_i], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. It is therefore marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and the congestion situation into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size (or ownd size) at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, this indicates that no congestion is occurring in the network. The if clause in the first through seventh lines of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After deciding on congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (T_i), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called the fairness factor. In this case the fairness factor α is computed as follows:


α = 1 - Thr_i / T_i. Otherwise, it decreases the marking rate by α to provide the other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2) {
    for (i = 1 to n) {
        Ti = Ri + (Xmi * E);
        if (Thr_i < Ti)
            Mi += α * Mi;
        else
            Mi -= α * Mi;
    }
} else if (iwnd(N) != iwnd(N-1)/2) {
    exit;
} else {    /* iwnd(N) == iwnd(N-1)/2 */
    for (i = 1 to n) {
        Ti = Ri;
        if (Thr_i < Ti)
            Mi += β * Mi;
        else
            exit;
    }
}

    Mi: marking rate of the i-th flow
    Ti: target rate of the i-th flow
    Ri: reservation rate of the i-th flow
    E: excess bandwidth
    ownd(N): newly measured OUT window
    ownd(N-1): OUT window measured at time N-1
    iwnd(N): newly measured IN window
    iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1, while the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion occurs in the network but is not serious. The else-if clause in the eighth and ninth lines represents the case in which the edge's marking rate is equal to the marking probability of the sources (the marking information), and so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, this means that IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, in the tenth through sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to achieve its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore, TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, called the complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor β is

of the same form as α: β = 1 - Thr_i / R_i. Otherwise, the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to the flows based on the marking rate of each source. Therefore, we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
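A compact sketch of one TMS_II pass follows. This is a hypothetical Python transcription of Program code 3; the fairness factor α and the complement factor β are computed per branch so that both are positive fractions, and all names are our own.

```python
def tms2_remark(flows, capacity, congestion):
    """One re-marking pass of TMS_II. flows: list of dicts with keys
    R (reservation), Thr (measured throughput), M (marking rate), and
    x (fairness component X_mi). congestion is 'none', 'soft', or 'acute',
    judged from whether ownd/iwnd were halved."""
    if congestion == "none":
        excess = capacity - sum(f["R"] for f in flows)
        for f in flows:
            target = f["R"] + f["x"] * excess        # Eq. (4)
            if f["Thr"] < target:
                f["M"] += (1 - f["Thr"] / target) * f["M"]   # fairness factor α
            else:
                f["M"] -= (1 - target / f["Thr"]) * f["M"]
    elif congestion == "soft":
        pass          # edge marking already matches the sources: no re-marking
    else:             # acute congestion: targets fall back to reservations
        for f in flows:
            if f["Thr"] < f["R"]:
                f["M"] += (1 - f["Thr"] / f["R"]) * f["M"]   # complement factor β
    return flows

flows = [{"R": 3.0, "Thr": 2.0, "M": 0.5, "x": 1.0}]
tms2_remark(flows, 5.0, "none")   # under target: marking rate raised toward 0.8
```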

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets turn into OUT. These re-born packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, the re-born OUT-marked packets (IN-OUT), which were IN at the sources, are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size; in that case TMS_I does not alleviate the TCP dynamics over the assured service well. On the other hand, if the re-born IN-marked packets (OUT-IN), sent as OUT by the sources, are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve only ownd, which retards the removal of the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but as IN at the source, and an OUT-IN packet is its counterpart. We assume that the drop precedence decided by the source must not change, and the strategy that TMS_II elaborates has to respect this. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors, these correspond to red, yellow, and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window is not halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves the ownd size. Second, TMS_II turns packets IN-marked at the sources into OUT if the congestion window is not halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by one step, and so it allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window is halved and a flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way it tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT by one step.
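The mapping from the source mark and the edge re-marking decision to the three drop precedences can be stated in a few lines (an illustrative sketch; the function name is ours):

```python
def drop_precedence(source_mark, edge_mark):
    """Map a (source mark, edge mark) pair, each 'IN' or 'OUT', to a drop
    precedence. Packets whose colour was changed by TMS_II (IN-OUT or
    OUT-IN) land in the middle precedence DP1."""
    if source_mark == "IN" and edge_mark == "IN":
        return "DP0"            # green: dropped last by RIO
    if source_mark != edge_mark:
        return "DP1"            # yellow: IN-OUT or OUT-IN
    return "DP2"                # red: dropped first by RIO

assert drop_precedence("IN", "OUT") == "DP1"   # IN-OUT keeps middle precedence
```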

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP flows are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not change theirs and will take the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile, and are thus placed at a disadvantage, while TCP flows adapt to network dynamics as their marking rates are adjusted through the scaling factors α and β. From the point of view of the core of the network, the RIO queue always preferentially drops UDP packets marked as OUT (red, DP2), but lowers the drop precedence of TCP packets, by as much as the scaling factors, relative to them. It thereby tries to guarantee the target rates of the TCP flows by allocating to them the excess bandwidth that the UDP flows would otherwise almost entirely dominate. Consequently, the TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous chapter. The simulations were done with the network simulator ns-2 (v2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or fixed below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters of the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

        min_th    max_th    Pmax
DP0     100       400       0.02
DP1     100       400       0.05
DP2     50        400       0.1

First of all, we investigate the simulation results of the Adequate-Marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s; the last three stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps; marking is performed from the start, but only as temporary marking for sharing the excess bandwidth. Second, during the run time from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps each; marking in this interval is carried out practically, so as to realise the reservation rates of the users. Next, grave congestion appears from t = 220 s to t = 350 s: all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the other periods. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but then marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), exhibiting somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is smaller than in the case of Figure 15 (a). The two graphs, (a) and (b), demonstrate a striking contrast between excess marking and Adequate-Marking: while all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Plot: marking rate (Mbps) versus simulation time (s) for flow1 through flow10]

a) Time Sliding Window Two Color Marker

[Plot: marking rate (Mbps) versus simulation time (s) for flow1 through flow10]

b) Two Markers System

Fig. 15. Adequate-Marking algorithm: the vertical axis represents the marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources, through the governing of the marking rate, between two flows with different traffic characteristics. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate and lags behind the changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased traffic load. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as the increment of S2, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to an increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet-marking is turned off quickly when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in Section 3.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, namely an under-subscribed network. The experiment was performed over the run time from t = 0 s to 500 s. Figure 17 presents the results, showing that the average throughputs of all the individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs of the two schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without a marking rate adequate to the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of the individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Plot: throughput and marking rate (Mbps) versus simulation time (s): S1 (total throughput), S2 (total throughput), S1 (marking rate)]

a) Time Sliding Window Two Color Marker

[Plot: throughput and marking rate (Mbps) versus simulation time (s): S1 (total throughput), S2 (total throughput), S1 (marking rate)]

b) Two Markers System

Fig. 16. Packet marking through the scaling factors: the vertical axis represents the throughput and the marking rate of each source, and the horizontal axis the simulation time.


[Plot: achieved bandwidth (Mbps) of flow1 through flow10 across aggregate flows 1 through 6]

a) Time Sliding Window Two Color Marker

[Plot: achieved bandwidth (Mbps) of flow1 through flow10 across aggregate flows 1 through 6]

b) Two Markers System

Fig 17 Throughput for marking rate-based fairness strategy the vertical axis represents the achieved throughput and the horizontal axis signifies the number of simulation performed

The Two Markers System showed improved performance for the several parameters we simulated. However, implementing the TCP-source marking module at the sources raises the issue of having to modify the TCP sources in a differentiated services network.

Table 5. The average throughput of the aggregate flows as RTT increases

Aggregate flow (No.)        1       2       3       4       5       6
RTT (ms)                   80     100     130     170     220     280
E[Thr] of TSW2CM (Mbps)  4.8989  4.9703  4.9183  4.9391  4.9247  4.9098
E[Thr] of TMS (Mbps)     5.2908  5.2207  5.2373  5.1944  5.2763  5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios, an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1 Mbps. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig. 18. Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy by increasing the number of TCP flows with a target rate of 1 Mbps each (sources 1 to 4) while each UDP flow is kept constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We define the link utilisation (ρ) as follows [16]:

ρ = (Σ_i T_i / C) × 100 (%)    (9)

where T_i and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW = (ExBW / T) × 100 (%)    (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.
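Equations (9) and (10) can be sketched as two small helper functions; the function names are hypothetical, and the example flow parameters below are taken from the scenarios in this section.

```python
def link_utilization(target_rates_bps, capacity_bps):
    # Eq. (9): rho = sum of target rates T_i over bottleneck capacity C,
    # expressed as a percentage
    return 100.0 * sum(target_rates_bps) / capacity_bps

def excess_bandwidth_pct(throughput_bps, target_bps):
    # Eq. (10): excess bandwidth ExBW as a percentage of the target rate T;
    # 0 % means the flow achieves exactly its target rate
    return 100.0 * (throughput_bps - target_bps) / target_bps

# Six 1 Mbps flows on an 8 Mbps bottleneck: over-provisioned (rho = 75 %)
print(link_utilization([1e6] * 6, 8e6))
# The same flows on a 3 Mbps bottleneck: under-provisioned (rho = 200 %)
print(link_utilization([1e6] * 6, 3e6))
```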


[Two panels omitted: a) Time Sliding Window Two Color Marker, b) Two Markers System; each plots the average excess bandwidth (%) of the TCP and UDP flows against link utilization (%).]

Fig. 19. Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis the link utilisation as a percentage.

However, since the UDP sources send traffic into the network without any penalties, they achieve their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only dealt with OUT marking (DP2-red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If the number of marked packets of a flow exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; therefore the flow is marked more and obtains a higher rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
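The two assumptions above can be illustrated with a small sketch (the function names and string labels are hypothetical, not pseudocode from the thesis): UDP packets always receive the lowest-priority OUT marking, while a TCP flow whose marking rate m_i exceeds the aggregate average E[m_i] is treated as demanding more bandwidth.

```python
def average_marking_rate(marking_rates):
    # Threshold E[m_i]: the mean marking rate over the aggregate's flows
    return sum(marking_rates) / len(marking_rates)

def wants_more_bandwidth(m_i, marking_rates):
    # Assumption (ii): a marking rate above E[m_i] signals increased demand,
    # so the edge-embedded module marks this flow's packets more often
    return m_i > average_marking_rate(marking_rates)

def drop_precedence(is_udp, in_profile):
    # Assumption (i): UDP packets only ever receive OUT marking (DP2-red);
    # TCP packets are IN (green) or OUT (red) depending on the profile
    if is_udp:
        return "DP2-red"
    return "DP0-green" if in_profile else "DP2-red"
```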

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the results, in which the average throughputs of all individual flows of the aggregated traffic realise their target rates in over-provisioned and under-provisioned networks with bottleneck-link capacities of 8 Mbps and 3 Mbps, respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. Also, the average throughput of all the flows in both schemes is similar. First we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3 Mbps, owing to the different marking strategy applied against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that in the under-provisioned network the bandwidth is distributed more fairly (about 0.5 Mbps per flow) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, the UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Two panels omitted: a) Time Sliding Window Two Color Marker, b) Two Markers System; each plots the bandwidth (Mbps) of flows TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases.]

Fig. 20. Throughput according to network conditions: the vertical axis represents the achieved throughput, and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this context, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows due to the over-consumption of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to the changes in the network, so it marks packets in proportion to the reservation rates the sources desire to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and allocates the bandwidth of the bottleneck link to the flows in view of proportional fairness, called the marking rate-based fairness strategy, if resources in the diffserv network are over-provisioned. Consequently, it mitigates the problem of the unfairness of per-aggregation marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP had almost dominated and achieve its target rates. Next, in an under-provisioned network, although all the flows couldn't achieve their target rates, they fairly experienced degradation in proportion to their target bandwidth.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows according to variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[19] M. Vanhala, "Differentiated Services architecture," S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, 35-48.


increases to max_p as the average queue length approaches a maximum threshold (max_th). When the average queue length exceeds max_th, all packets are dropped or marked.

Fig. 3. The marking/dropping behavior of RED
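The piecewise-linear marking/dropping profile described above can be sketched as follows (a minimal illustration of the RED curve, not an implementation of any particular router):

```python
def red_mark_probability(avg_qlen, min_th, max_th, max_p):
    """RED marking/dropping probability as a function of the average
    queue length: 0 below min_th, rising linearly to max_p at max_th,
    and 1 (drop or mark everything) beyond max_th."""
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)
```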

With the deployment of ECN and RED, it is the hope of the IETF that packet loss rates in the Internet can be controlled. Unfortunately, there are significant weaknesses in both TCP congestion control and RED queue management that prevent packet loss from being eliminated. Given these weaknesses, a number of congestion control and queue management algorithms that effectively prevent packet loss have been proposed and evaluated.

2.2 Differentiated Services Architecture

In Differentiated Services, classification of traffic is achieved by assigning forwarding behaviour to aggregates instead of microflows. The DS fields in the headers of the IP datagrams in a traffic flow are marked with a DS codepoint value, either by the sending host or by a traffic conditioner (usually in a boundary node), according to the behaviour wished for. The forwarding behaviours are applied hop-by-hop, based on DS codepoints (DSCPs), and on a per-packet basis to aggregates of traffic streams in the DS-capable network's nodes. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies a service level the host is justified for. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. Different types of service will not be standardized within diffserv, but rather the implementations behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig. 4. Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs present externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e., to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig. 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The formed Service Level Agreements (SLAs) specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how a customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules, any corresponding traffic profiles, and metering, marking, discarding and/or shaping rules which are applied to the traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e., nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. By traffic conditioner is meant an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets, or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e., services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs don't solely define services, but they are the building blocks.

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain; the latter handle traffic leaving the domain. In practice the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

In the following we discuss how a network that supports diffserv functions.

Fig. 5 shows three domains which are interconnected with a backbone network. Two of the domains are DS-capable, i.e., the interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node into a DS-capable domain may result in service degradation, and the required service characteristics (e.g., low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service provisioning approaches.

Fig. 5. A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g., domain 1 in Fig. 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between the parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early employment of diffserv. As some host, say H5 in DS domain 1 in Fig. 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the case where the network is non-DS-capable.

First, the near-by leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to a level justified for H5. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The presented backbone DS router always polices incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily presented and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels: on the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (a policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as the providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig. 6. Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet and in IPv6 the Traffic Class octet acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig. 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another; their usage and the queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling regarding Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
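In this most-significant-first bit numbering, bits 0-5 are the six high-order bits of the octet, so the DSCP and the legacy precedence field can be extracted with simple shifts (a small sketch, not from the thesis):

```python
def dscp(tos_octet):
    # Bits 0-5 (the six most significant bits) form the DS codepoint
    return (tos_octet >> 2) & 0x3F

def precedence(tos_octet):
    # Bits 0-2 are the legacy IPv4 precedence field
    return (tos_octet >> 5) & 0x07

# Example: TOS octet 0xB8 (1011 1000) carries DSCP 0x2E (101110),
# the codepoint recommended for Expedited Forwarding
print(hex(dscp(0xB8)), precedence(0xB8))
```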

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but it is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functioning of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in such an order that best satisfies the forwarding (i.e., service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig. 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
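The difference between the two classifier types can be sketched as follows (the rule format and names here are illustrative assumptions, not the thesis's design):

```python
def ba_classify(pkt, dscp_map, default="best-effort"):
    # BA classifier: look at the DS codepoint only
    return dscp_map.get(pkt["dscp"], default)

def mf_classify(pkt, rules, default="best-effort"):
    # MF classifier: the first rule whose fields (addresses, ports,
    # DS field, protocol ID, ...) all match the packet wins
    for match, aggregate in rules:
        if all(pkt.get(field) == value for field, value in match.items()):
            return aggregate
    return default
```

For example, a packet with DSCP 46 would map to the EF aggregate under a BA rule, while an MF rule could additionally select on its source address and destination port.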

Fig. 7. A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
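A meter is commonly realized as a token bucket; the sketch below (a hypothetical, simplified example, not the thesis's mechanism) decides whether each packet is in- or out-of-profile, which is the state a marker, shaper or dropper then acts on:

```python
class TokenBucketMeter:
    """Token-bucket meter: tokens accumulate at the profile rate up to
    a burst limit; a packet is in-profile if enough tokens remain."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0          # token fill rate in bytes/s
        self.burst = float(burst_bytes)     # bucket depth in bytes
        self.tokens = float(burst_bytes)    # start with a full bucket
        self.last = 0.0                     # arrival time of previous packet

    def in_profile(self, now, pkt_bytes):
        # Refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True    # conforming: keep or green-mark the packet
        return False       # out-of-profile: re-mark, delay or drop
```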

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic into conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic can't be assumed to conform to the TCA; rather, policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited number of conditioning rules (i.e., some, but not all, of the TC components in Fig. 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which a uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, as well as negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on their similar properties, such as the bandwidth requirements or behavioral characteristics mentioned above; a single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are wished to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment of packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to the standardized ones, there may exist only locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may use a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig. 6) is limited and room is left for future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. In the PHB specifications themselves, behavioral characteristics are given instead of implementation guidelines. This leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of the negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented later in this chapter. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv cannot be taken into use if it does not provide backward compatibility, and the Default PHB exists for that purpose. It provides the best-effort behaviour that current routers perform, and it is used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that does not conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which do not employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The mere Default PHB is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS field are widely used in existing networks' equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHBs, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig. 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.
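The xxx000 structure of the Class Selector codepoints (precedence in the top three bits, lower three bits zero) can be made concrete with a few lines of bit manipulation; a minimal sketch whose helper names are illustrative, not from the thesis.

```python
# Sketch: the eight Class Selector codepoints xxx000 and their relation to
# the legacy IPv4 precedence bits. Helper names are illustrative.

def class_selector_codepoint(precedence):
    """Map a 3-bit IPv4 precedence value (0-7) to its Class Selector DSCP."""
    assert 0 <= precedence <= 7
    return precedence << 3                 # xxx000: precedence in the top 3 bits

def precedence_of(dscp):
    """Recover the precedence class of a Class Selector codepoint."""
    return dscp >> 3

# The eight Class Selector codepoints; 000000 (the Default PHB) is among them,
# and a greater precedence value means a higher forwarding priority.
cs = [format(class_selector_codepoint(p), '06b') for p in range(8)]
```

Note that `000000` comes out as the first of the eight codepoints, consistent with the Default PHB requirements above.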

Fig. 8. Usage of the DS field in the Class Selector PHB

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(i) how much forwarding resources have been allocated to the AF class that the packet belongs to; packets in one class are forwarded independently from the others, within the service rate (bandwidth) that has been configured for the class;

(ii) what the current load of the AF class is; and

(iii) in case of congestion within the class, what the drop precedence of the packet is; packets with a higher drop precedence value are discarded preferentially.

Table 1. The four AF classes and the related three drop precedence values

                               Class 1   Class 2   Class 3   Class 4
  Low Drop Precedence (1)      001010    010010    011010    100010
  Medium Drop Precedence (2)   001100    010100    011100    100100
  High Drop Precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
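The regularity of Table 1, with the class number in the upper bits and the drop precedence in the middle bits of the codepoint, can be captured in a small lookup; this sketch is an illustration, not part of the thesis.

```python
# Sketch: the AFnm codepoints of Table 1, where n is the class (1-4) and
# m the drop precedence (1-3): class in bits 5-3, precedence in bits 2-1.

AF_CODEPOINT = {
    (n, m): format((n << 3) | (m << 1), '06b')
    for n in range(1, 5)        # four AF classes
    for m in range(1, 4)        # three drop precedence values
}

# AF11 (class 1, low drop precedence) and AF43 (class 4, high drop precedence)
assert AF_CODEPOINT[(1, 1)] == '001010'
assert AF_CODEPOINT[(4, 3)] == '100110'
```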

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter on an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video, or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates for traffic aggregates such that at every transit node the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. It is also required in the EF PHB specification that a network administrator must be able to configure this rate into diffserv nodes. The codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface;
b) from ingress interface A to a set of egress interfaces;
c) between interface A and some egress interface B.

There are some things common to both examples. First of all, policers need to be configured at the traffic ingress points. Secondly, the used PHBs need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus obtains reduced latency. Quantitative performance measures cannot be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for the BBE service

  Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
  AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritizing of AF13 over AF11 is done, is up to the provider.
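The conditioner of Table 2 can be sketched as a simple token-bucket policer that admits traffic up to the 1 Mbps profile as AF13 and re-marks the excess to AF11; the bucket depth and helper names are assumptions for illustration, not values from the thesis.

```python
# Sketch of the Table 2 policer: in-profile traffic -> AF13, excess -> AF11.
# Times are integer milliseconds to keep the token arithmetic exact.

class BBEPolicer:
    def __init__(self, rate_bps=1_000_000, burst_bits=100_000):
        self.rate = rate_bps          # 1 Mbps profile from Table 2
        self.burst = burst_bits       # assumed bucket depth
        self.tokens = burst_bits
        self.last_ms = 0

    def mark(self, now_ms, packet_bits):
        # replenish tokens at the service rate, capped at the bucket depth
        elapsed = now_ms - self.last_ms
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate // 1000)
        self.last_ms = now_ms
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return "AF13"             # within the 1 Mbps profile
        return "AF11"                 # excess traffic, re-marked

# 12 kbit packets arriving every millisecond (12 Mbps offered load):
p = BBEPolicer()
marks = [p.mark(t, 12_000) for t in range(20)]
# the initial burst is admitted as AF13; once the bucket drains, the
# load above 1 Mbps is re-marked AF11
```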

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3. TCA specifications for the Leased Line service

  Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
  EF mark     500 Kbps       Egress point B         Discard
  EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and, respectively, point A to point C. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the ways from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies that focus on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They presented results showing that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses being experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. The authors, via simulation work, presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standard used for the marking [13]. A marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on the aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to an inefficient utilization of the reserved bandwidth; in this case, "idle" flows waste their shares while preventing "active" flows from increasing theirs. Next, per-aggregation marking can introduce unfairness among the aggregated flows, because the marking module marks the aggregated flows without any information on the individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9. The "Two Markers System" framework

First, TMS_I carries out two main roles. One is to control network dynamics, and the other is to provide the component of the fairness strategy, viz. the marking rate, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the summation of iwnd and ownd. TMS_I determines the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked whenever congestion occurs in the network, and so TMS_I decreases the TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) keeps up the marking rate more properly than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also adequately marks packets to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to the aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to properly mark packets. So to speak, TMS_I is not only a marker, used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or the marking information) from the sources, and the marker then re-marks the aggregated flows for the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key point of the fairness strategy is the marking rate (i.e. the information on a flow's demand for bandwidth) of the traffic flows from the users. TMS_II re-marks (or marks) the aggregated flows for the total target rate of all the flows based on the probability of marking from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig. 10. The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rates. Let Ri denote the reservation rate of flow i, i = 1, 2, ..., n, and let Cj represent the capacity of bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is its reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, which is the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement Scope, SLA Scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point;
(ii) all traffic from ingress point S to a set of egress points;
(iii) all traffic between ingress point S and egress point D;
(iv) all traffic between any ingress point and egress point D.

The capacity of a network in the first three cases of service scope above is

    C = Σ(j=1..m) Cj

where j denotes a bottleneck link of the network. The excess bandwidth at these bottleneck links is then given by

    E = C − R = Σ(j=1..m) Cj − Σ(i=1..n) Ri        (1)

In the edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

    E = C − R = C1 − Σ(i=1..n) Ri        (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes

    R = Σ(k=1..l) Σ(i=1..n) Rik

where k, k = 1, 2, ..., l, represents the kth ingress point. The excess bandwidth is given by

    E = C − R = C − Σ(k=1..l) Σ(i=1..n) Rik        (3)


Here the lth ingress point has to be aware of the reservation rates of as many as l−1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore, although the Two Markers System can be implemented under the various SLA scopes, we assume that the scope of a service in this thesis includes the first three cases, excluding the last for the reason above. A flow's target rate at the edge of the network is given by

    Ti = Ri + Xmi × E = Ri + Xmi (C − Σi Ri)        (4)

where Xmi is proportional to the probability of marking from a source; that is, it is determined by the fraction between the marking rate of each flow and the average marking rate. The development of Xmi is as follows. First, the probability of marking for each flow i is the number of marked packets (Nmi) divided by the throughput (Thri), or the In-profile window divided by the congestion window:

    mi = Nmi / Thri = iwnd / cwnd        (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[mi] over all the flows at the edge of a network is

    E[mi] = (1/n) Σ(i=1..n) mi = (1/n) Σ(i=1..n) (iwnd/cwnd)i = (1/n) Σ(i=1..n) (Markp)i        (6)

where Markp also represents the probability of marking in the TMS_I algorithm of section 3.3. Finally, Xmi is developed by using the fraction between (5) and (6); that is,

    Xmi = mi / (n E[mi]) = mi / Σ(i=1..n) mi        (7)

Xmi represents a relative value against the datum point E[mi], and the fraction Xmi varies from 0 to 1 depending upon mi and E[mi]. Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) consists of two windows in this algorithm [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate as high as needed for the bandwidth the source wants to have guaranteed.
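As a concrete illustration of the fairness component and edge target rates of equations (4)-(7), both can be computed in a few lines; the flow values below are made up, and the function names are not from the thesis.

```python
# Sketch of equations (4)-(7): X_mi = m_i / sum(m_i) and T_i = R_i + X_mi * E.

def fairness_components(marking_rates):
    """Equation (7): X_mi = m_i / (n * E[m_i]) = m_i / sum(m_i)."""
    total = sum(marking_rates)
    return [m / total for m in marking_rates]

def edge_target_rates(R, C, marking_rates):
    """Equation (4): T_i = R_i + X_mi * (C - sum(R_i))."""
    E = C - sum(R)                        # excess bandwidth, equation (1)
    X = fairness_components(marking_rates)
    return [r + x * E for r, x in zip(R, X)]

R = [1.0, 2.0, 1.0]                       # reservation rates (Mbps)
C = 6.0                                   # bottleneck capacity, so E = 2 Mbps
m = [0.5, 0.25, 0.25]                     # marking probabilities iwnd/cwnd
T = edge_target_rates(R, C, m)            # flow 1 marks hardest, gets most of E
assert abs(sum(T) - C) < 1e-9             # the target rates exhaust the capacity
```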


Fig. 11. Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking (Markp(init))? It is generally determined by the fraction between the measured throughput (Thr) and the target throughput (Ti = Ri), as given in equation (8).

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        } else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


    Markp(init) = 1 − Thr / Ri        (8)

This represents the initial marking rate that the user wants in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed; otherwise, it increases the ownd size so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the single window of the traditional congestion algorithm: the summation of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a decrement of ownd to the same degree. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the summation of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among the other sources' TCP traffic and best-effort traffic, and also considers a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the connection is congested. If the throughput measured at a source, for instance, exceeds its reservation rate and iwnd exists, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) keeps up the marking rate more properly than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and each source in the end does not achieve its target rate. On the other hand, because TMS_I recognizes packet losses and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.
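Program code 1 can be transcribed directly into executable form. The sketch below reads the listing's "12cwnd" and "1cwnd" as 1/(2·cwnd) and 1/cwnd, an assumption supported by the 1/cwnd total increase noted above.

```python
# Executable transcription of program code 1 (TMS_I, opening the window),
# under the stated assumptions about the increment terms.

def on_ack(cwnd, mark_p, thr, R_i, issthresh, ossthresh):
    """One per-ACK update; returns the new (cwnd, mark_p)."""
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if thr < R_i:                    # below the reservation rate: grow iwnd
        iwnd += mark_p if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - mark_p) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                            # at or above it: yield iwnd to others
        if iwnd > 0:
            iwnd -= (1 - mark_p) if iwnd < issthresh else mark_p
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd         # Markp = iwnd / cwnd

# slow-start regime below the reservation: iwnd grows by Markp per ACK and
# ownd by (1 - Markp), so cwnd grows by one segment per ACK
cwnd, mark_p = on_ack(10.0, 0.5, thr=0.8, R_i=1.0, issthresh=8, ossthresh=8)
```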

Fig. 12. Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd / 2
        ownd = ossthresh
    } else {
        cwnd = cwnd / 2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the ith flow
Ti: target rate of the ith flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half of its size whenever an OUT packet loss occurs; this indicates temporary congestion in the network. The loss of IN-marked packets, on the other hand, indicates acute congestion in the network, and then the congestion window is reduced to half of its size. Therefore the modified TCP congestion algorithm compensates for the loss rates that are caused by the conventional congestion algorithm, which has no distinctive mechanism for reducing the congestion window.
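Program code 2 can likewise be transcribed into executable form; the sketch assumes "ewnd" in the listing is a typo for ownd.

```python
# Sketch of program code 2 (TMS_I, closing the window on a third duplicate ACK).

def on_triple_dup_ack(cwnd, mark_p, out_marked_loss):
    """Return the new (cwnd, mark_p) after the loss reaction."""
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if out_marked_loss:              # temporary congestion: halve only ownd
        ownd = ownd / 2              # ossthresh = ownd / 2 in the listing
        cwnd = iwnd + ownd
        return cwnd, iwnd / cwnd
    # an IN-marked loss signals acute congestion: halve the whole window,
    # with the new thresholds split by Markp as in the listing
    return cwnd / 2, mark_p

# an OUT loss shrinks only the out-of-profile share, raising Markp
cwnd, mark_p = on_triple_dup_ack(10.0, 0.5, out_marked_loss=True)
```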

The goals of TMS_I are to provide modified congestion control that alleviates the TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Markp) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall the numerical expression (7). We deal with the usage of the datum point in the edge-embedded marking module in section 3.4. Consequently, the strategy of providing this datum point, which includes the information from the users, is able to alleviate the problem of unfairness within aggregated flows introduced by per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to fairly share the bottleneck bandwidth. In this case it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II should measure the probability of marking and compute the expected value of the marking rates, which are used in the fairness component as stated in section 3.2. Notice that fairness is defined as how well the throughputs of all the individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed from the sources.

We assume that an edge router receives information about the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of a network. We also consider two network conditions, as follows: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall the numerical expression (4). As stated in section 3.2, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking from the sources if the condition of the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents the increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source, for example, exceeds the threshold value, that is, the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, and the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N−1. We consider three situations according to the condition of the network, as follows: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N−1, congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After the decision about congestion, it again measures whether the throughput of flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by as much as α, called the fairness factor. In this case the fairness factor α is computed as

    α = 1 − Thr_i / Ti

Otherwise, it decreases the marking rate by as much as α to provide the other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm

if(ownd(N) = ownd(N-1)2)

For i -gt i=1 to i =n

Ti = Ri+(Xmi E))

if(Thr_i lt Ti)

Mi += α Mi

else

Mi -= α Mi

else if(iwnd(N) = iwnd(N-1)2)

exit

else (iwnd(N) == iwnd(N-1)2)

For i -gt i = 1 to i = n

Ti = Ri

if(Thr_i lt Ti)

Mi += β Mi

else

exit

Mi marking rate of ith flow

Ti target rate of ith flow

Ri reservation rate of ith flow

E Excess bandwidth

ownd (N) newly measured OUT window

ownd (N-1) measured OUT window before N times

iwnd (N) newly measured IN window

iwnd (N-1) measured IN window before N times
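The branch structure of Program code 3 can be sketched as runnable Python. This is a minimal illustration, not the thesis implementation: the flow representation (`dict` with keys `M`, `R`, `Thr`, `x`) and the function name are assumptions.

```python
def tms2_update(flows, ownd, ownd_prev, iwnd, iwnd_prev, excess_bw):
    """Adjust per-flow marking rates Mi according to the congestion state.

    Each flow is a dict: 'M' marking rate, 'R' reservation rate,
    'Thr' measured throughput, 'x' share of the source marking rate.
    """
    if ownd != ownd_prev / 2:
        # No congestion: the target rate includes a share of the excess
        # bandwidth in proportion to the source's marking rate.
        for f in flows:
            T = f['R'] + f['x'] * excess_bw
            alpha = abs(1 - f['Thr'] / T)          # fairness factor
            if f['Thr'] < T:
                f['M'] += alpha * f['M']           # claim more bandwidth
            else:
                f['M'] -= alpha * f['M']           # yield to other flows
    elif iwnd != iwnd_prev / 2:
        # Congestion, but not acute: edge marking already matches the
        # sources' marking probability, so no re-marking is needed.
        pass
    else:
        # Serious congestion: fall back to the bare reservation rate.
        for f in flows:
            if f['Thr'] < f['R']:
                beta = 1 - f['Thr'] / f['R']       # complement factor
                f['M'] += beta * f['M']

# Two flows with 3 Mbps reservations sharing 2 Mbps of excess bandwidth:
flows = [{'M': 0.5, 'R': 3.0, 'Thr': 2.4, 'x': 0.5},
         {'M': 0.5, 'R': 3.0, 'Thr': 4.4, 'x': 0.5}]
tms2_update(flows, ownd=40, ownd_prev=40, iwnd=60, iwnd_prev=60, excess_bw=2.0)
print(flows[0]['M'], flows[1]['M'])
```

With these numbers the first flow misses its 4 Mbps target (α = 0.4, marking rate rises to 0.7) and the second exceeds it (α = 0.1, marking rate falls to 0.45).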


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N isn't equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else if branch of Program code 3 represents the case in which the edge's marking rate is equal to the marking probability of the sources (the marking information), so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the final else branch. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to attain its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. TMS_II therefore compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput doesn't reach the user's reservation rate. In that case the complement factor β is

given by β = 1 − Thr_i / R_i; otherwise the re-marking operation is not performed.

In short, the sources inform an edge-embedded marking module of an adequate marking rate, and the module then allocates bandwidth fairly to flows based on the marking rate of each source. We can therefore say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. These re-marked packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size; thus TMS_I alone does not alleviate the TCP dynamics over assured service well. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve only the ownd, which retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked OUT at the edge of the network but IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides must not change, and that the strategy TMS_II elaborates has to respect it. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colours these correspond to red, yellow and green respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window doesn't reduce to half of its size and the throughput measured at the edge of the network doesn't reach the target rate, TMS_II re-marks packets marked OUT at the sources into IN so as to claim the remaining resources. In this case three drop precedences (IN, OUT-IN and OUT) flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves its ownd size. Next, TMS_II turns packets marked IN at the sources into OUT if the congestion window doesn't reduce to half of its size and the flow has already accomplished its target rate, so the drop precedence set consists of IN, IN-OUT and OUT. TMS_II lowers the drop precedence of such packets by one step and thereby allows other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I (the TCP source) halves its congestion window when an IN-OUT packet is removed. Finally, if the congestion window is equal to half of its size and a flow hasn't achieved its target rate yet, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN and OUT at the core of the network; when an OUT-IN packet is dropped, the TCP source halves its ownd size. In this way TMS_II tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT packets by one step.
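The three-level precedence and its drop order can be captured in a few lines. A hedged sketch, not the thesis code: the (source mark, edge mark) pair representation and function name are assumptions.

```python
# Codepoint per (source_mark, edge_mark) pair: re-marked packets get the
# middle (yellow) precedence, so the source's decision is never fully
# overridden by the edge.
DP = {('IN',  'IN'):  0,   # IN:     green, dropped last
      ('IN',  'OUT'): 1,   # IN-OUT: yellow (demoted at the edge)
      ('OUT', 'IN'):  1,   # OUT-IN: yellow (promoted at the edge)
      ('OUT', 'OUT'): 2}   # OUT:    red, dropped first

def drop_order(packets):
    """Return packets in the order RIO would discard them under
    congestion: red (DP2) first, then yellow (DP1), then green (DP0)."""
    return sorted(packets, key=lambda p: DP[p], reverse=True)

queue = [('IN', 'IN'), ('OUT', 'OUT'), ('OUT', 'IN'), ('IN', 'OUT')]
order = drop_order(queue)
print(order)   # red first, the two re-marked (yellow) packets next, green last
```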

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows over


the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP and are non-responsive to congestion indicated through loss. In particular, if TCP and UDP flows are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows won't change and will capture the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, where it is impossible for all target rates to be met.

UDP flows in TMS are treated as out-of-profile and thus placed at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP packets marked OUT (red, DP2) but lowers the drop precedence of TCP packets by the scaling factors relative to them. It thereby tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows would otherwise have dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous sections. The simulations were done with the network simulator-2 (NS-2, v2.1b8a). We used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or kept below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM the routers use RED with the values of 100 packets, 400 packets and 0.02 for min_in, max_in and Pmax_in,


and 50 packets, 400 packets and 0.1 for min_out, max_out and Pmax_out. The RED parameters of the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

          min_th   max_th   Pmax
    DP0   100      400      0.02
    DP1   100      400      0.05
    DP2   50       400      0.1
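The effect of the Table 4 thresholds can be illustrated with the standard linear RED drop-probability curve each precedence level sees. A sketch under the assumption of plain (non-gentle) RED; the function and table names are not from the thesis.

```python
# (min_th, max_th, Pmax) per drop precedence, from Table 4.
RED = {'DP0': (100, 400, 0.02),
       'DP1': (100, 400, 0.05),
       'DP2': (50,  400, 0.10)}

def drop_prob(dp, avg_q):
    """Linear RED drop probability for an average queue length avg_q:
    zero below min_th, Pmax-scaled ramp up to max_th, 1 beyond it."""
    min_th, max_th, p_max = RED[dp]
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return p_max * (avg_q - min_th) / (max_th - min_th)

# At the same average queue length, OUT (DP2) packets are dropped far
# more aggressively than IN (DP0) packets:
for dp in ('DP0', 'DP1', 'DP2'):
    print(dp, round(drop_prob(dp, 250), 4))
```

The lower min_th and higher Pmax of DP2 are what make OUT packets the first casualties of congestion, which is the behaviour the RIO queue relies on.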

First of all, we investigate the simulation results of the Adequate-marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 to S10-D20, each with a target rate of 3Mbps; the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10 (s), followed by two flows (flow6 and flow7) at t = 120 (s) and the last three flows (flow8 through flow10) at t = 220 (s), which stop at t = 350 (s). First, the interval from t = 10 (s) to t = 110 (s) represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5Mbps; marking is performed initially, but only temporarily, for sharing the excess bandwidth. Second, during the interval from t = 120 (s) to 210 (s), with soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps; marking in this interval is practically carried out so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220 (s) to t = 350 (s); all the flows do their best to meet their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the conventional edge marker continuously over-marks its packets, as shown in Figure 15 (a), which exhibits somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

Fig 15 Adequate-marking algorithm: a) Time Sliding Window Two Color Marker; b) Two Markers System. The vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM doesn't adequately regulate the marking rate and lags behind changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of resources by the amount of S2's increment, and S1 does not share the bandwidth of the bottleneck link fairly with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when TMS detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, i.e. an under-subscribed network. The experiment ran from t = 0 (s) to 500 (s). Figure 17 presents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs of the two schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As


stated before, since TMS regulates the drop precedence in proportion to the marking probability of the sources, all the flows are able to adapt to network dynamics.

Fig 16 Marking packets through the scaling factors: a) Time Sliding Window Two Color Marker; b) Two Markers System. The vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.


Fig 17 Throughput for the marking rate-based fairness strategy: a) Time Sliding Window Two Color Marker; b) Two Markers System. The vertical axis represents the achieved throughput and the horizontal axis the aggregate flow.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, an implementation of TCP-source marking


modules at the source raises the issue of having to modify the TCP source in a differentiated services network.

Table 5 The average throughput of aggregate flows as RTT increases

    Aggregate flow (No.)       1        2        3        4        5        6
    RTT (ms)                   80       100      130      170      220      280
    E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
    E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. We used a network with the configuration shown in Figure 18. In this experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for nodes 7 through 10, and constant-rate UDP packet streams originate at sources 5-6 and are destined for nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increase the TCP flows with a target rate of 1Mbps each from source 1 to 4, while each UDP flow is constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σ Ti / C) × 100 (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 (%) to 200 (%) shown in Figure 19 reflects the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100 (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
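Equations (9) and (10) can be checked with a small worked example using the section's scenario (six 1 Mbps flows on an 8 Mbps bottleneck). The helper names are illustrative assumptions.

```python
def link_utilisation(targets, capacity):
    """Equation (9): rho (%) = (sum of Ti / C) * 100."""
    return sum(targets) / capacity * 100

def excess_bw_pct(throughput, target):
    """Equation (10): Excess BW (%) = (ExBW / T) * 100,
    where ExBW = throughput - target."""
    return (throughput - target) / target * 100

targets = [1.0] * 6                    # four TCP + two UDP flows, 1 Mbps each
rho = link_utilisation(targets, 8.0)   # 75%: the over-provisioned scenario
print(rho)
# A flow achieving 1.5 Mbps against a 1 Mbps target has +50% excess;
# 0% would mean it gets exactly its target rate.
print(excess_bw_pct(1.5, 1.0))
```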


Fig 19 Sharing excess bandwidth according to the link utilisation: a) Time Sliding Window Two Color Marker; b) Two Markers System. The vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation as a percentage.

However, since the UDP sources send traffic into the network without any penalty, they meet their target rates in all conditions and consume all the excess bandwidth in the


under-provisioned state; the only degradation the UDP flows experience is a diminished share of excess bandwidth. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisations of ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic relative to their target rates in over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps respectively. The setting of the RTT parameters follows subsection 4.1, and the average throughputs of all the flows in both schemes are again similar to each other. First we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. In Figure 20 (b), on the other hand, the TCP flows fairly share the excess bandwidth of about 0.3Mbps, thanks to the marking strategy that differs for UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probability in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP


flows. However, Figure 20 (b) shows that bandwidth in the under-provisioned network is distributed fairly (about 0.5Mbps per flow), unlike in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

Fig 20 Throughput according to network conditions: a) Time Sliding Window Two Color Marker; b) Two Markers System. The vertical axis represents the achieved throughput against the target rate and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and shown how the Two Markers System can be applied under variable SLA scopes. Two important issues arise here, and much research has been carried out on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows within the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and allocate the bandwidth of a bottleneck link fairly to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the marking probability according to the changes in the network, marking packets in proportion to the reservation rates the sources wish to guarantee. It also notifies TMS_II, the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy, when resources in the diffserv network are over-provisioned. Consequently it mitigates the problem of unfairness under per-aggregation marking and of inefficient utilisation under per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced the TCP impacts over assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described and analysed the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, the TCP flows could fairly


share the excess bandwidth that UDP almost dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows couldn't achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117
[2] W. Feng, D. Kandlur, S. Saha and K. Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290
[3] S. Floyd and V. Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386
[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, Assured Forwarding PHB Group, RFC 2597, June (1999)
[5] K. Nichols, S. Blake, F. Baker and D. Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec. (1998)
[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Dec. (1998)
[7] V. Jacobson and M. Karels, Congestion Avoidance and Control, in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329
[8] I. Yeom and A. L. N. Reddy, Realizing Throughput Guarantees in a Differentiated Services Network, in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376
[9] N. Seddigh, B. Nandy and P. Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999)
[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech. Report SSC1998020, http://sscwww.epfl.ch, May (1998)
[11] J. Ibanez and K. Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998)
[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221
[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449
[14] N. Seddigh, B. Nandy and P. Pieda, Study of TCP and UDP Interaction for AF PHB, IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999)
[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203
[16] A. Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc. HPSR'01, Dallas, May (2001) 190-196
[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999)
[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Aug. (1998)
[19] M. Vanhala, Differentiated Services Architecture, S-38.128 Teletekniikan erikoistyo (1999)
[20] V. Jacobson, K. Nichols and K. Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998)
[21] V. Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990)
[22] V. Paxson, End-to-End Internet Packet Dynamics, in Proc. of ACM SIGCOMM, Sept. (1997)
[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct. (1996)
[24] S. Floyd and T. Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999)
[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proceedings of ACM SIGCOMM, Oct. (1994) 24-35
[26] S. Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, vol. 24, no. 5, Oct. (1994) 10-23
[27] S. Floyd and V. Jacobson, Random Early Detection Gateways for Congestion Avoidance, ACM/IEEE Transactions on Networking, vol. 1, no. 4, Aug. (1993) 397-413
[28] A. Feroz, A. Rao and S. Kalyanaraman, TCP-friendly Traffic Marker for IP Differentiated Services, in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48


(DSCPs) and on a per-packet basis to aggregates of traffic streams in the nodes of a DS-capable network. Ingress traffic is classified and conditioned at boundary nodes in order to make it comply with the service agreement made between a customer and a service provider.

Taking diffserv into use does not necessarily require changes in applications. Separate traffic streams from a single host are either identified by a node in the network (how this is done is outside the scope of diffserv) or are marked with a DSCP by a traffic conditioner that specifies the service level the host is entitled to. In addition to the well-known best-effort (default) service, a variety of services can be applied, a leased-line emulation for example. It is not the different types of service that will be standardized within diffserv, but rather the implementation behind them. Applications or operating systems may also request a forwarding behaviour for packets by marking the DS field.

Fig 4 Mapping of DSCP-marked streams to PHBs and aggregate routing

The cornerstones of composing services in diffserv are the Per-Hop Behaviours (PHBs). PHBs are externally observable forwarding behaviours applied at DS-compliant nodes to DS behaviour aggregates, i.e. to packets with the same DSCP crossing a link in a particular direction. DS-capable domains operate with a common set of PHB


definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The Service Level Agreements (SLAs) thus formed specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of the SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. The TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules, which are applied to the traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network, and scalability is obtained. A traffic conditioner is an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet, and in the IPv6 header the Traffic Class octet. Each DSCP maps to at least one PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs don't solely define services; rather, they are the building blocks.
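Since the DS field reuses the IPv4 TOS octet (or the IPv6 Traffic Class octet) and the codepoint occupies its upper six bits, reading a DSCP can be sketched in a few lines. This is an illustrative sketch; the function name is ours, not from the thesis:

```python
def dscp_from_ds_octet(ds_octet: int) -> int:
    """Return the 6-bit DSCP carried in the upper six bits of the
    DS field (the former IPv4 TOS / IPv6 Traffic Class octet)."""
    return (ds_octet >> 2) & 0x3F

# Example: the EF codepoint 101110 carried in a DS octet of 0xB8.
assert dscp_from_ds_octet(0xB8) == 0b101110
```

A node would then look the returned codepoint up in its administratively configured DSCP-to-PHB mapping.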

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes. The former handle traffic entering the domain, the latter handle traffic leaving the domain. In practice the division is only logical: DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

The following discusses how a network that supports diffserv functions.

Fig. 5 shows three domains which are interconnected with a backbone network. Two of the domains are DS-capable, i.e. the interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node into a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unattainable. Non-DS-capable domains do not employ differentiated services, but rather the best-effort service and possibly other service provisioning approaches.

Fig. 5. A non-DS-capable domain and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig. 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig. 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the case where the network is non-DS-capable.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to the value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to the level H5 is entitled to. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The backbone DS router shown always polices incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (the policing profile). Packets submitted in excess of this profile may not be allotted the requested service level. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enables them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig. 6. Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig. 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and the queuing requirements in IPv4 routers are specified in [5]. For backward compatibility, diffserv supports these widely used values, or PHBs, with limitations. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling considering Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but it is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets at its output in the order that best satisfies the forwarding (i.e. service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig. 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
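The two classifier types differ only in how much of the header they inspect. A minimal sketch, with dictionaries standing in for parsed packet headers (field names and rule format are our illustrative assumptions):

```python
def ba_classify(packet: dict, dscp_to_aggregate: dict) -> str:
    """BA classifier: select the behaviour aggregate from the DSCP alone."""
    return dscp_to_aggregate.get(packet["dscp"], "default")

def mf_classify(packet: dict, rules: list) -> str:
    """MF classifier: match on several header fields; the first matching
    rule wins. `rules` is a list of (field->value dict, aggregate name)."""
    for match, aggregate in rules:
        if all(packet.get(field) == value for field, value in match.items()):
            return aggregate
    return "default"

pkt = {"src": "10.0.0.5", "dst": "10.0.1.7", "dport": 80, "dscp": 0b101110}
assert ba_classify(pkt, {0b101110: "EF"}) == "EF"
assert mf_classify(pkt, [({"dst": "10.0.1.7", "dport": 80}, "web-agg")]) == "web-agg"
```

Packets matching no rule fall through to the default (best-effort) aggregate, mirroring the default-PHB rule discussed later.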

Fig. 7. A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
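The meter-marker-dropper pipeline above can be sketched with a single-rate token bucket; the rate and burst parameters would come from the TCA. This is an illustrative sketch under our own naming, not the thesis's implementation:

```python
class TokenBucketMeter:
    """Single-rate token-bucket meter: a packet is in profile when
    enough tokens (bytes) have accumulated since the last arrival."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0        # token refill rate in bytes/second
        self.burst = burst_bytes          # bucket depth
        self.tokens = burst_bytes
        self.last = 0.0

    def in_profile(self, size_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return True
        return False

def police(meter, size_bytes, now, in_dscp, out_dscp):
    """Marker driven by meter state: keep the IN mark while in profile,
    otherwise re-mark (a dropper would discard instead)."""
    return in_dscp if meter.in_profile(size_bytes, now) else out_dscp
```

For example, an 8 kbit/s profile with a 1500-byte burst admits one full-size packet immediately, rejects a second packet in the same instant, and admits another 1000 bytes after one second of refill.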

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic to conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic can't be assumed to conform to the TCA; policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore, a limited number of conditioning rules (i.e. some but not all of the TC components in Fig. 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers have the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on their similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are intended to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to the standardized ones, there may exist locally defined PHBs in a network. These PHBs may be e.g. experimental, or they may serve a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig. 6) is limited and room is left for future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impacts of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on e.g. prioritizing one over another. The PHB specifications themselves give behavioural characteristics instead of implementation guidelines; this leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented in a previous paragraph. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when implemented, the specifications must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility; the Default PHB exists for that. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that doesn't conform to its profile can easily be either re-marked to the Default PHB or discarded.

Traffic that is subjected to the Default PHB can be described as receiving the following kind of service: the network will deliver as many of these packets as possible, as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS field are widely used in existing networks' equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHBs, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig. 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.
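The eight Class Selector codepoints of the form xxx000 can be enumerated directly from the three precedence bits. A small illustrative sketch:

```python
def class_selector_dscp(precedence: int) -> int:
    """Map an IPv4 precedence value (0-7) to its Class Selector
    codepoint xxx000: the three precedence bits followed by three zeros."""
    assert 0 <= precedence <= 7
    return precedence << 3

# Eight codepoints in all; precedence 0 yields 000000, the Default PHB.
codepoints = [class_selector_dscp(p) for p in range(8)]
```

Higher precedence maps to a numerically larger codepoint, matching the rule that greater precedence means higher priority.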

Fig. 8. Usage of the DS field in the Class Selector PHBs

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(1) how much forwarding resource has been allocated to the AF class that the packet belongs to; packets in one class are forwarded independently of the others and within the service rate (bandwidth) that has been configured for the class,

(2) what the current load of the AF class is, and

(3) in case of congestion within the class, what the drop precedence of the packet is; packets with a higher drop precedence value are preferentially discarded.

Table 1. The four AF classes and the related three drop precedence values

                              Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)       001010    010010    011010    100010
Medium Drop Precedence (2)    001100    010100    011100    100100
High Drop Precedence (3)      001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
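The AFnm codepoints in Table 1 follow a regular pattern: the class number occupies the top three bits and the drop precedence the next two, with a trailing zero. A sketch (the function name is ours):

```python
def af_dscp(n: int, m: int) -> int:
    """Codepoint of AFnm: class n in 1..4, drop precedence m in 1..3."""
    assert 1 <= n <= 4 and 1 <= m <= 3
    return (n << 3) | (m << 1)

# Reproduces Table 1, e.g. AF11 = 001010 and AF43 = 100110.
assert af_dscp(1, 1) == 0b001010
assert af_dscp(4, 3) == 0b100110
```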

One major group of services is that requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are e.g. a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed a rate that has been specified for the service the packets belong to. It is also required in the EF PHB specification that a network administrator must be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.
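The EF invariant described above, that at every transit node the aggregate's worst-case arrival rate must stay below its configured minimum departure rate, amounts to a one-line check (an illustrative sketch):

```python
def ef_queueless(arrival_rates_bps, min_departure_rate_bps) -> bool:
    """True when the EF aggregate cannot build a queue at this node:
    the summed worst-case arrival rate from all incoming links stays
    below the node's configured minimum departure rate."""
    return sum(arrival_rates_bps) < min_departure_rate_bps

# Two upstream links feeding 300 and 400 kbit/s into a 1 Mbit/s EF allocation.
assert ef_queueless([300_000, 400_000], 1_000_000)
```

Boundary traffic conditioners enforce the arrival side of this inequality; the administrator configures the departure side.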

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are all for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

Both examples have some things in common. First of all, policers need to be configured at the traffic ingress points. Secondly, the used PHBs need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus provides reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for the BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with the AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritizing of AF13 over AF11 is done, is up to the provider.
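The marking rule of Table 2 can be sketched as follows; the codepoints come from Table 1, while the simple per-second byte counter standing in for a real meter is our illustrative simplification:

```python
AF11, AF13 = 0b001010, 0b001110
LIMIT_BPS = 1_000_000  # the 1 Mbps aggregate rate from Table 2

def bbe_mark(bits_sent_this_second: int, packet_bits: int) -> int:
    """Re-mark per Table 2: within the 1 Mbps limit keep the AF13 mark,
    in excess of it re-mark to AF11."""
    if bits_sent_this_second + packet_bits <= LIMIT_BPS:
        return AF13
    return AF11
```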

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3. TCA specifications for the Leased Line service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
EF mark      500 Kbps       Egress point B         Discard
EF mark      250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C, respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both the egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies focused on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. Their results showed that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP didn't realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on the aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth; in this case, "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on the individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Fig. 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9. The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I determines the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked, and so TMS_I decreases TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) kept up the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also adequately marks packets to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in TMS_II of the traffic conditioner, which helps the edge-embedded marking module to properly mark packets. So to speak, TMS_I can be not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Fig. 10, monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key point of the fairness strategy is the marking rate (i.e. the information about demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) aggregated flows for the total target rate of all the flows, based on the probability of marking from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig. 10. The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and let Cj represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is the reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of the network is the reservation rate (Ri) plus a variable value, the fairness component (Xmi), drawn from the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement Scope, SLA Scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of the service scope above is C = \sum_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth over these bottleneck links is then given by

E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i    (1)

In an edge-to-edge SLA, namely the third case of the service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C - R = C - \sum_{i=1}^{n} R_i    (2)

The last case of the service scope, tied to the receiver, however, requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates in order to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}, where k, k = 1, 2, ..., l, denotes the k-th ingress point, and the excess bandwidth is given by

E = C - R = C - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}    (3)

where the l-th ingress point has to be aware of the reservation rates of as many as l - 1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore, although the Two Markers System can be implemented for various SLA scopes, we assume for the reason above that the scope of a service in this paper covers the first three cases and excludes the last. A flow's target rate at the edge of the network is given by (4).

T_i = R_i + X_{mi} \times E = R_i + X_{mi} (C - \sum_{i=1}^{n} R_i)    (4)

where Xmi is proportional to the probability of marking at a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of Xmi is as follows. First, the probability of marking for each flow i is the number of marked packets (Nmi) divided by the throughput Thr_i, or equivalently the In-profile window divided by the congestion window:

m_i = N_{mi} / Thr_i = iwnd / cwnd    (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] over all the flows at the edge of the network is

E[m_i] = (1/n) \sum_{i=1}^{n} m_i = (1/n) \sum_{i=1}^{n} (iwnd/cwnd)_i = (1/n) \sum_{i=1}^{n} (Mark_p)_i    (6)

where Mark_p also denotes the probability of marking in the TMS_I algorithm of Section 3.3. Finally, Xmi is obtained in (7) as the ratio between (5) and (6); that is, Xmi represents a relative value against the datum point E[m_i], and varies from 0 to 1 depending upon m_i and E[m_i].

Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
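As an illustration of expressions (4)-(7), the following Python sketch computes the fairness component and the per-flow edge target rates from reservation and marking rates; the numerical values are hypothetical, not taken from the thesis.

```python
# Sketch of the edge target-rate computation of (4)-(7).
# All rates are in Mbps; the values are hypothetical.

def target_rates(reservations, marking_rates, capacity):
    """Return T_i = R_i + X_mi * E for each flow."""
    E = capacity - sum(reservations)            # excess bandwidth, cf. (1)/(2)
    total_m = sum(marking_rates)                # n * E[m_i], cf. (6)
    X = [m / total_m for m in marking_rates]    # fairness component, eq. (7)
    return [R + x * E for R, x in zip(reservations, X)]

# Three flows reserving 3 Mbps each on a 20 Mbps bottleneck; flow 1 marks
# twice as many packets as the others, so it claims half of the 11 Mbps excess.
print(target_rates([3.0, 3.0, 3.0], [0.5, 0.25, 0.25], 20.0))
# -> [8.5, 5.75, 5.75]
```

Note that the target rates sum to the bottleneck capacity, which is exactly the intent of distributing E according to the marking shares.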

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where marking policies differ from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is, the TCP-source marking module keeps the marking rate at the level of the bandwidth the source wants guaranteed.

X_{mi} = m_i / (n E[m_i]) = m_i / \sum_{i=1}^{n} m_i    (7)


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking, Mark_p(init)? It is determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as given in (8).

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri)
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


Mark_p(init) = 1 - Thr / R_i    (8)

where Mark_p(init) represents the initial marking rate the user wants guaranteed in order to realise its target rate.

The window size increases or decreases according to the measured throughput relative to the reservation rate. That is, TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput doesn't reach the source's target rate, TMS_I increases the iwnd size by the marking rate needed; otherwise it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers the secondary TCP dynamics (the so-called boomerang effect) caused by the excessive dropping of IN-marked packets at the core of a network when the network is congested. If the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, for instance, TMS_I appropriately decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, the TCP-source marking module (TMS_I) maintains the marking rate more appropriately than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source fails to achieve its target rate. On the other hand, because TMS_I recognises packet losses and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window is not decreased as often as with the established edge marker.
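The congestion-avoidance behaviour described above (both windows above their thresholds while Thr < Ri, each opening by 1/(2·cwnd) per ACK so that the total growth is 1/cwnd) can be checked with a small sketch; the window values are hypothetical.

```python
# Sketch of the TMS_I window opening in congestion avoidance when Thr < Ri
# and both windows are above threshold (program code 1); values hypothetical.

def open_windows(iwnd, ownd):
    cwnd = iwnd + ownd
    iwnd += 1.0 / (2.0 * cwnd)   # In-profile share of the growth
    ownd += 1.0 / (2.0 * cwnd)   # Out-of-profile share of the growth
    return iwnd, ownd

iwnd, ownd = open_windows(6.0, 4.0)
print(round((iwnd + ownd) - 10.0, 6))   # total growth per ACK: 1/cwnd = 0.1
```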

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd/2
        ownd = ossthresh
    else
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

    iwnd: In-profile window
    ownd: Out-of-profile window
    cwnd: congestion window
    Markp: the probability of marking
    Thr: throughput of a flow
    Ri: reservation rate of the i-th flow
    Ti: target rate of the i-th flow
    issthresh: threshold of iwnd
    ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of an IN-marked packet indicates acute congestion in the network, and then the congestion window is halved. Therefore the modified TCP congestion algorithm compensates for the loss rates that are caused by the conventional congestion algorithm, which lacks a distinctive mechanism for reducing the congestion window.
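The reaction in program code 2 can be sketched as follows; the window values and the loss flag are hypothetical.

```python
# Sketch of TMS_I's reaction to a third duplicate ACK (program code 2):
# an OUT loss halves only ownd; an IN loss halves the whole cwnd.

def on_triple_dupack(cwnd, mark_p, out_loss):
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if out_loss:                      # temporary congestion
        ossthresh = ownd / 2.0
        ownd = ossthresh
    else:                             # acute congestion (IN-marked loss)
        cwnd = cwnd / 2.0
        iwnd = mark_p * cwnd          # issthresh = Markp * cwnd
        ownd = cwnd - iwnd            # ossthresh = cwnd - issthresh
    return iwnd + ownd                # new congestion window

print(on_triple_dupack(10.0, 0.6, True))    # 6.0 + 2.0 = 8.0: only ownd halved
print(on_triple_dupack(10.0, 0.6, False))   # 5.0: whole cwnd halved
```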

The goals of TMS_I are to provide a modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point in the edge-embedded marking module in subsection 3.4. Consequently, this strategy of providing a datum point that includes the user's information is able to alleviate the problem of unfairness introduced within aggregated flows by per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to fairly share the bottleneck bandwidth. To this end it closely cooperates with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which are used in the fairness component as stated in subsection 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets from the sources, which represents the target rate each flow wants guaranteed.

We assume that an edge router receives information on the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall expression (4). As stated in subsection 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the network is over-provisioned; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents an increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source exceeds the threshold value, that is, the average marking rate E[mi], it means that the flow wants more bandwidth than others in order to achieve its reservation rate; it is therefore marked more as IN and has a higher target rate than others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider congestion situations, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with that at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion but not acute, and (iii) serious congestion.
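The three situations can be sketched by checking whether the measured windows have been halved since the previous measurement; this is a minimal interpretation of the branch conditions, and the window histories below are hypothetical.

```python
# Sketch of TMS_II's congestion classification, mirroring the three branches
# of program code 3; window measurements are hypothetical.

def classify(iwnd_now, iwnd_prev, ownd_now, ownd_prev):
    if ownd_now != ownd_prev / 2:
        return "no congestion"            # branch (i): re-mark for fairness
    if iwnd_now != iwnd_prev / 2:
        return "congestion, not acute"    # branch (ii): no re-marking needed
    return "acute congestion"             # branch (iii): compensate with beta

print(classify(10, 10, 7, 8))   # ownd not halved -> "no congestion"
print(classify(10, 10, 4, 8))   # ownd halved, iwnd intact -> not acute
print(classify(5, 10, 4, 8))    # both halved -> "acute congestion"
```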

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding that there is no congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy. If the throughput doesn't reach the target rate, the marker increases the marking rate by α, called the fairness factor, computed as α = 1 - Thr_i / T_i. Otherwise it decreases the marking rate by α, to give other flows the chance of sharing the excess bandwidth.
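The fairness-factor update can be sketched as follows. This is a minimal interpretation: the thesis gives the formula for the increase case, so we use the magnitude of α in the decrease branch; the rate values are hypothetical.

```python
# Sketch of TMS_II's fairness-factor adjustment of a flow's marking rate M_i.
# thr_i and T_i are the measured throughput and edge target rate (hypothetical).

def adjust_marking_rate(M_i, thr_i, T_i):
    alpha = abs(1.0 - thr_i / T_i)       # fairness factor
    if thr_i < T_i:
        return M_i + alpha * M_i         # mark more to chase the target rate
    return M_i - alpha * M_i             # back off, freeing excess bandwidth

print(adjust_marking_rate(0.5, 2.0, 4.0))   # 50% below target -> 0.75
```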

Fig 13 Program code 3 TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)
    for i = 1 to n
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)
    exit
else    (iwnd(N) == iwnd(N-1)/2)
    for i = 1 to n
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi: marking rate of the i-th flow
Ti: target rate of the i-th flow
Ri: reservation rate of the i-th flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else-if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (the marking information), and so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute. The marking mechanism for this congestion situation appears in the else clause from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to consummate its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and it increases the marking rate by β, called a complement factor, if the throughput doesn't realise the user's reservation rate. In that instance the complement factor is β = 1 - Thr_i / R_i. Otherwise no re-marking operation is performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to the flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth caused by per-flow marking and the unfairness within aggregated flows caused by per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. These re-marked packets are then treated by the RIO queue at the core of the network differently from what the sources anticipate. If re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, for example, the TCP sources, mistaking this for acute congestion, will halve their iwnd size; TMS_I then fails to alleviate the TCP dynamics over the assured service. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will only halve their ownd, which retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides must not change, so the strategy that TMS_II elaborates has to be deliberated. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colours these correspond to red, yellow, and green, respectively. The occasion of re-marking can be divided into three situations. First, if the congestion window has not halved and the throughput measured at the edge of the network doesn't reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves its ownd size. Next, TMS_II turns packets IN-marked at the sources into OUT if the congestion window has not halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of such packets one step below plain OUT, which allows other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has halved and a flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network; when an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT packets one step.
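The mapping from the (mark at source, mark at edge) pair to the three drop precedences can be summarised in a small table; this is a sketch of the rule described above, using the conventional colour naming (IN = green, re-marked = yellow, OUT = red).

```python
# Drop precedence assigned to a packet given its mark at the source and its
# (possibly re-marked) mark at the edge, per the TMS rule above.
DROP_PRECEDENCE = {
    ("IN", "IN"):   "DP0",   # green: dropped last
    ("IN", "OUT"):  "DP1",   # yellow: IN-OUT re-marked packet
    ("OUT", "IN"):  "DP1",   # yellow: OUT-IN re-marked packet
    ("OUT", "OUT"): "DP2",   # red: dropped first
}
DROP_ORDER = ["DP2", "DP1", "DP0"]   # RIO drops red first, green last

print(DROP_PRECEDENCE[("IN", "OUT")])   # -> DP1
```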

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class in an over-provisioned network, TCP flows will reduce their rates on packet drops while UDP flows won't change and will capture the entire excess bandwidth, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile, and thus taken at a disadvantage, while TCP flows adapt to network dynamics through marking rates adjusted by the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2), but lowers the drop precedence of TCP packets by up to the scaling factor relative to them. Therefore TMS tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows would otherwise dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms we have described in the previous chapter. The simulation was done with the network simulator 2 (ns-2.1b8a). For the simulation we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or kept below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
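For reference, the settings of Table 4 can be held in a small parameter table like the following; this is a sketch in Python, and the field names are ours, not the ns-2 API.

```python
# RED (RIO) parameter settings per drop precedence, transcribed from Table 4.
RED_PARAMS = {
    "DP0": {"min_th": 100, "max_th": 400, "p_max": 0.02},  # IN (green)
    "DP1": {"min_th": 100, "max_th": 400, "p_max": 0.05},  # re-marked (yellow)
    "DP2": {"min_th": 50,  "max_th": 400, "p_max": 0.10},  # OUT (red)
}
# DP2 starts dropping at a shorter queue (min_th = 50) and with a higher
# maximum probability, so OUT packets are discarded first under load.
print(RED_PARAMS["DP2"]["p_max"])   # -> 0.1
```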

First of all, we investigate the simulation results of the Adequate-Marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 to S10-D20, each with a target rate of 3Mbps; the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10s, followed by two flows, flow6 and flow7, at t = 120s, and the last three flows, flow8 through flow10, at t = 220s, which stop at t = 350s. First, the interval from t = 10s to t = 110s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5Mbps; marking is initially performed, but only temporarily, for sharing the excess bandwidth. Second, in the run time from t = 120s to 210s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps each; marking in this interval is practically carried out so as to realise the reservation rates of the users. Next, grave congestion appears from t = 220s to t = 350s: all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the other periods. Last, in the interval from t = 330s to 400s, TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which exhibits somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15: marking rate (Mbps) versus simulation time (s) for flows 1-10; a) Time Sliding Window Two Color Marker, b) Two Markers System]

Fig. 15. Adequate-Marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic features through the governing of the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM doesn't adequately regulate the marking rate and lags behind the changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as the increment, and S1 doesn't fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 3.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment was performed in the run time from t = 0s to 500s. Figure 17 presents the results of how well the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without marking rates adequate to the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking when the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Figure 16: total throughput of S1 and S2 and marking rate of S1 (Mbps) versus simulation time (s); a) Time Sliding Window Two Color Marker, b) Two Markers System]

Fig. 16. Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.


[Figure 17: achieved bandwidth (Mbps) of flows 1-10 for aggregate flows 1-6; a) Time Sliding Window Two Color Marker, b) Two Markers System]

Fig. 17. Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the number of the simulation performed.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying TCP sources in a differentiated services network.

Table 5. The average throughput of aggregate flows as RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous chapter. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rates of the sources are fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we make use of the parameters of Section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing the excess bandwidth according to the link utilisation. For the experiment we elaborated the strategy by increasing the TCP flows with a target rate of 1Mbps from source 1 to 4, while UDP is kept constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ(%) = (\sum_i T_i / C) \times 100    (9)

where Ti and C represent the target rate of a flow and the capacity of a bottleneck link in the network, respectively. The interval of ρ = 100(%) through 200(%), as shown in Figure 19, represents the change of the bottleneck-link capacity, viz. an under-provisioned state; in the figure, the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW(%) = (ExBW / T) \times 100    (10)
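Expressions (9) and (10) can be computed directly; the values below are hypothetical but match the scale of this experiment (six 1Mbps targets on an 8Mbps bottleneck).

```python
# Link utilisation (9) and excess-bandwidth share (10), both in percent.

def link_utilisation(target_rates, capacity):
    return 100.0 * sum(target_rates) / capacity        # eq. (9)

def excess_bw_percent(throughput, target):
    return 100.0 * (throughput - target) / target      # eq. (10)

print(link_utilisation([1.0] * 6, 8.0))   # 6 Mbps of targets on 8 Mbps -> 75.0
print(excess_bw_percent(1.5, 1.0))        # 0.5 Mbps above a 1 Mbps target -> 50.0
```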

Therefore an excess bandwidth of 0(%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.


a) Time Sliding Window Two Color Marker [plot: excess bandwidth (%) versus link utilisation (%), with curves for TCP (average) and UDP (average)]

b) Two Markers System [plot: excess bandwidth (%) versus link utilisation (%), with curves for TCP (average) and UDP (average)]

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis signifies the link utilisation as a percentage

However, since the UDP sources inject traffic into the network without any penalty, they achieve their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume the following: (i) UDP flows in TMS are only dealt with OUT marking (DP2-red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore the flow is marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is percentage-wise almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and above. Both traffic types achieve their target rates in the over-provisioned network and both suffer the same level of service degradation in the under-provisioned network.
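The threshold rule described above can be sketched as follows; this is our own illustrative reading of the text (the data structure and function name are assumptions), comparing a flow's marking rate against the average marking rate E[mi] over all flows:

```python
def wants_more_bandwidth(marking_rates, i):
    """Sketch of the TMS edge rule: flow i is judged to demand more
    bandwidth when its marking rate exceeds the average E[m_i]."""
    avg = sum(marking_rates) / len(marking_rates)  # threshold E[m_i]
    return marking_rates[i] > avg

# Hypothetical per-flow marking rates reported by the source modules.
rates = [0.9, 0.4, 0.5, 0.2]
print(wants_more_bandwidth(rates, 0))  # True: flow 0 marks above the average
print(wants_more_bandwidth(rates, 3))  # False: flow 3 marks below the average
```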

Next we present the simulation results concerning the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows whether the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. The average throughput of all the flows in both schemes is similar. First we consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows barely accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3 Mbps, owing to the different marking strategy applied to UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that the bandwidth in the under-provisioned network is distributed more fairly (about 0.5 Mbps each) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently the UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

a) Time Sliding Window Two Color Marker [plot: bandwidth (Mbps) versus aggregate flows 1-4, with curves for TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases]

b) Two Markers System [plot: bandwidth (Mbps) versus aggregate flows 1-4, with curves for TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases]

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved bandwidth in Mbps and the horizontal axis signifies the aggregate flows


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues arise in this context and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to changes in the network, so it marks packets in proportion to the reservation rates the flows desire to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy, when resources in the diffserv network are over-provisioned. Consequently it mitigates the unfairness of per-aggregation marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduces TCP impacts over the assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, the TCP flows could fairly share the excess bandwidth that UDP had almost dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows couldn't achieve their target rates, they fairly experienced degradation in proportion to their target bandwidth.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W Feng, D Kandlur, D Saha and K Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc ICNP'98, Austin, TX, Oct (1998) 108-117

[2] W Feng, D Kandlur, D Saha and K Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc NOSSDAV'97, St Louis, MO, May (1997) 279-290

[3] S Floyd and V Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol 3, no 4, Aug (1995) 365-386

[4] J Heinanen F Baker W Weiss and J Wroclawski Assured Forwarding PHB Group RFC2597 June (1999)

[5] K Nichols, S Blake, F Baker and D Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC2474, Dec (1998)

[6] S Blake D Black M Carlson E Davies Z Wang and W Weiss An Architecture for Differentiated Services RFC2475 Dec (1998)

[7] V Jacobson and M Karels, Congestion Avoidance and Control, in Proc SIGCOMM'88, Stanford, CA, Aug (1988) 314-329

[8] I Yeom and A L N Reddy, Realizing Throughput Guarantees in a Differentiated Service Network, in Proc ICMCS'99, Florence, Italy, June (1999) 372-376

[9] N Seddigh, B Nandy, P Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc GLOBECOM'99, Rio De Janeiro, Dec (1999)

[10] C Gbaguidi, H J Einsiedler, P Hurley, W Almesberger and J Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech Report SSC1998020, http://sscwww.epfl.ch, May (1998)

[11] J Ibanez, K Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug (1998)

[12] S H Lee, S J Seok, S J Lee, S K Youm and C H Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc QofIS'01, LNCS 2156, Coimbra, Sept (2001) 207-221


[13] I B H de A Alves, J F de Rezende, L F M de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc GLOBECOM'00, vol 1, San Francisco, Nov (2000) 445-449

[14] N Seddigh, B Nandy, P Pieda, Study of TCP and UDP Interaction for AF PHB, IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug (1999)

[15] S H Lee, S J Seok, S J Lee and C H Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc MMNS'01, LNCS 2216, Chicago, Oct (2001) 198-203

[16] A Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc HPSR'01, Dallas, May (2001) 190-196

[17] Y Bernet, J Binder, S Blake, M Carlson, B E Carpenter, S Keshav, E Davies, B Ohlman, Z Wang, W Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb (1999)

[18] S Blake, D Black, M Carlson, E Davies, Z Wang and W Weiss, An Architecture for Differentiated Services, RFC 2475, Dec (1998)

[19] M Vanhala Differentiated Services-architecture S-38128 Teletekniikan erikoistyo (1999)

[20] V Jacobson, K Nichols and K Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov (1998)

[21] V Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990)

[22] V Paxson, End-to-End Internet Packet Dynamics, in Proc of ACM SIGCOMM, Sept (1997)

[23] M Mathis, J Mahdavi, S Floyd and A Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct (1996)

[24] S Floyd and T Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb (1999)

[25] L S Brakmo, S W O'Malley and L L Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proceedings of ACM SIGCOMM, Oct (1994) 24-35

[26] S Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, Vol 24, No 5, Oct (1994) 10-23

[27] S Floyd and V Jacobson, Random Early Detection Gateways for Congestion Avoidance, IEEE/ACM Transactions on Networking, Vol 1, No 4, Aug (1993) 397-413

[28] A Feroz, A Rao and S Kalyanaraman, TCP-friendly Traffic Marker for IP Differentiated Services, in Proc IWQoS'2000, Pittsburgh, PA, June (2000) 35-48



definitions. The PHBs intended for wide usage are to be standardized, and a few have already been proposed. Fig 4 illustrates how microflows are aggregated into PHBs and how the PHBs can be handled in routers.

A customer is entitled to the services for which he has made an agreement with a service provider. The formed Service Level Agreements (SLAs) specify the forwarding service a customer's traffic should receive. A customer in this context refers to, e.g., an organization or a DS-capable upstream domain. A subset of an SLA is the Traffic Conditioning Agreement (TCA), which specifies in detail how the customer's traffic is policed in traffic conditioners to comply with the SLA. A TCA contains traffic classifier rules and any corresponding traffic profiles and metering, marking, discarding and/or shaping rules which are applied to traffic streams selected by a traffic classifier.

In short, diffserv is a scalable architecture in which per-microflow or per-customer state information is not carried along IP datagrams' paths.

2.2.1 Concepts in diffserv

Traffic conditioning in DS domains is carried out in boundary nodes, i.e. nodes that connect two domains. By conditioning only in boundary nodes, ingress and egress traffic is shaped to comply with the free resources in the target network and scalability is obtained. A traffic conditioner is an entity that may contain meters, markers, droppers and shapers. These devices meter the ingress traffic and, based on the result, possibly re-mark packets with a new DSCP, drop excessive packets or delay packets in order to bring the traffic into compliance with a traffic profile.

Forwarding behaviours (i.e. services) in diffserv are determined by the codepoint value in the DS field of the IPv4 or IPv6 header. The DS field in the IPv4 header is the TOS octet and in the IPv6 header the Traffic Class octet. Each DSCP maps to a PHB. PHBs are defined to permit a reasonably granular means of allocating buffer and bandwidth resources at each node among competing traffic streams. The PHBs don't solely define services, but they are the building blocks.
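A DSCP-to-PHB mapping can be pictured as a simple lookup table; the sketch below is illustrative only (the table contents combine codepoints recommended later in this chapter, and the fallback-to-Default rule from the PHB discussion):

```python
# Hypothetical DSCP -> PHB table for one node, configured by an administrator.
DSCP_TO_PHB = {
    0b000000: "Default",  # best-effort
    0b101110: "EF",       # Expedited Forwarding (recommended codepoint)
    0b001010: "AF11",     # Assured Forwarding class 1, low drop precedence
}

def phb_for(dscp):
    # A codepoint without a specified local policy maps to the Default PHB.
    return DSCP_TO_PHB.get(dscp, "Default")

print(phb_for(0b101110))  # EF
print(phb_for(0b111111))  # Default (no mapping configured)
```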

A domain which implements diffserv as defined in [18] is called DS-capable. A DS-capable domain has DS ingress and DS egress nodes: the former handle traffic entering the domain, the latter handle traffic leaving the domain. In practice the division is only logical; DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

In the following we discuss how a network that supports diffserv functions.

Fig 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e. the interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node in a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unreachable. Non-DS-capable domains do not employ differentiated services but rather the best-effort service and possibly other service provisioning approaches.

Fig 5 A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the case where the network is not DS-capable.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to that value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to a level justified for H5. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The presented backbone DS router always performs policing on incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next, the traffic is forwarded to the leaf router close to H2 and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be easily introduced and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels. On the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (policing profile). Packets submitted in excess of this profile may not be allotted the requested service level. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet and in IPv6 the Traffic Class octet acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another; their usage and the queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, these widely used values (as PHBs) for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling considering Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
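Since bits 0-5 are the six most significant bits of the octet, extracting the DSCP (and the legacy precedence) is a pair of shift-and-mask operations; a minimal sketch (function names are ours):

```python
def dscp_from_tos(tos_octet):
    """DS codepoint: bits 0-5 of the TOS/Traffic Class octet,
    i.e. the six most significant bits; the last two bits are ignored."""
    return (tos_octet >> 2) & 0x3F

def precedence(tos_octet):
    """Legacy IPv4 precedence: bits 0-2, the three most significant bits."""
    return (tos_octet >> 5) & 0x07

tos = 0b10111000
print(bin(dscp_from_tos(tos)))  # 0b101110 -> the EF codepoint
print(precedence(tos))          # 5
```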

What traffic conditioning basically means is treating traffic differentially so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but this is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e. service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, the DS field and the protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
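The two classifier types can be contrasted in a few lines; the sketch below is illustrative (the packet representation and field names are assumptions, not part of the diffserv specification):

```python
def ba_classify(packet):
    """Behaviour Aggregate classifier: decides on the DS codepoint alone."""
    return packet["dscp"]

def mf_classify(packet, rules):
    """Multi-Field classifier: matches several header fields against rules;
    each rule is (field-value dict, aggregate name)."""
    for rule, aggregate in rules:
        if all(packet.get(k) == v for k, v in rule.items()):
            return aggregate
    return "default"

pkt = {"dscp": 46, "src": "10.0.0.1", "dst_port": 80, "proto": "tcp"}
rules = [({"src": "10.0.0.1", "proto": "tcp"}, "gold")]
print(ba_classify(pkt))         # 46
print(mf_classify(pkt, rules))  # gold
```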

Fig 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
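One common way to realise a meter feeding a marker or dropper is a token bucket; the following is a sketch under that assumption (the class, its parameters and the rate/burst values are illustrative, not prescribed by the architecture):

```python
class TokenBucketMeter:
    """Token-bucket meter: a packet is in profile while tokens remain;
    the in/out verdict then drives a marker, shaper or dropper (Fig 7)."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0       # token refill rate in bytes/second
        self.tokens = float(burst_bytes) # bucket starts full
        self.burst = float(burst_bytes)
        self.last = 0.0

    def conforms(self, size_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return True   # in profile: keep the codepoint
        return False      # out of profile: re-mark, delay, or drop

meter = TokenBucketMeter(rate_bps=8000, burst_bytes=1500)
print(meter.conforms(1500, now=0.0))  # True: the burst allowance covers it
print(meter.conforms(1500, now=0.1))  # False: only ~100 bytes refilled
```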

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic into conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. In any case, ingress traffic can't be assumed to conform to the TCA, and policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited number of conditioning rules (i.e. some, but not all, of the TC components in Fig 7) may be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers receive the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above; a single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized PHBs, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may use a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig 6) is limited and room is left for future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. The PHB specifications themselves give behavioural characteristics instead of implementation guidelines; this leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of the negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented in the previous paragraph. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when implemented, the specifications must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility; the Default PHB exists for that. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore any traffic that doesn't conform to its profile can easily be either re-marked to the Default PHB or discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.
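The xxx000 pattern can be checked mechanically: a Class Selector codepoint is one whose three least significant bits are zero, which yields exactly eight codepoints. A small sketch (function name is ours):

```python
def is_class_selector(dscp):
    """Class Selector codepoints have the form xxx000
    (the low three bits of the 6-bit codepoint are zero)."""
    return (dscp & 0b000111) == 0

# Enumerate the eight Class Selector codepoints, 000000 through 111000.
selectors = [cs for cs in range(64) if is_class_selector(cs)]
print([format(cs, "06b") for cs in selectors])
```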

Fig 8 Usage of DS-field in Class Selector PHB

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node the level of forwarding assurance of an IP packet thus depends on:

- How much forwarding resources have been allocated to the AF class that the packet belongs to. Packets in one class are forwarded independently of the others and within the service rate (bandwidth) that has been configured for the class.

- The current load of the AF class and, in case of congestion within the class, the drop precedence of the packet. Packets with a higher drop precedence value are preferentially discarded.

Table 1 The four AF classes and the related three drop precedence values

                               Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)        001010    010010    011010    100010
Medium Drop Precedence (2)     001100    010100    011100    100100
High Drop Precedence (3)       001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
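The codepoints in Table 1 follow a regular pattern: the class number occupies bits 0-2 and the drop precedence bits 3-4, so the codepoint for AFnm works out to 8n + 2m. A quick check against the table (the helper function is ours):

```python
def af_codepoint(n, m):
    """Codepoint for AFnm: class n in 1..4, drop precedence m in 1..3.
    Class in the top three bits, drop precedence in the next two,
    i.e. dscp = 8*n + 2*m."""
    assert 1 <= n <= 4 and 1 <= m <= 3
    return (n << 3) | (m << 1)

print(format(af_codepoint(1, 1), "06b"))  # 001010 -> AF11, matches Table 1
print(format(af_codepoint(4, 3), "06b"))  # 100110 -> AF43, matches Table 1
```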

One major group of services is the one requiring assured bandwidth with low loss, low latency, and low jitter on an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video, or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency, and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specification of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the task of the traffic conditioners at the network boundary to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed a rate that has been specified for the service the packets belong to. It is also required in the EF PHB specification that a network administrator must be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are all for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

There are some things common to both examples. First of all, policers need to be configured at the traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus experiences reduced latency. Quantitative performance measures cannot be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for the BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritization of AF13 over AF11 is done, is up to the provider.

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3. TCA specifications for the Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B, and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and, respectively, point A to point C. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps; excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both of the egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic in case both of the leased lines cross the same router.
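The worst-case provisioning arithmetic for this SLA can be sketched as follows; the policer table and its field names are illustrative, not taken from the thesis:

```python
# Hypothetical MF policer entries for ingress point A, mirroring Table 3.
policers = [
    {"codepoint": "EF", "egress": "B", "rate_kbps": 500, "excess": "discard"},
    {"codepoint": "EF", "egress": "C", "rate_kbps": 250, "excess": "discard"},
]

def worst_case_ef_load(policers):
    """EF bandwidth a core router must carry if every leased line crosses it."""
    return sum(p["rate_kbps"] for p in policers)

# worst_case_ef_load(policers) -> 750 (Kbps), the provisioning figure above
```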

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies that focus on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet employing the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. The authors, via simulation work, presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the state used for the marking [13]. A marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregate state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have faults. First, since the marking module in per-flow marking gives each flow a fixed fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth: "idle" flows waste their shares while preventing "active" flows from increasing. Second, per-aggregation marking can introduce unfairness among the aggregated flows, because the marking module marks the aggregate without any information about individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, located at the source (Two Markers System-I, TMS_I) and at the edge (Two Markers System-II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9. The "Two Markers System" framework

First, the TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, the TMS_I selects the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked, and so the TMS_I decreases TCP impacts in the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) keeps up the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. The TMS_I also adequately marks packets to realise the reservation rate of the source it is integrated with, and each host pours its packets into the edge of the network. The information about the marked packets is used in allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to properly mark packets. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed upon with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and then the marker re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore the key element of the fairness strategy is the marking rate (i.e., the information about a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows toward the total target rate of all the flows, based on the marking probabilities from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig. 10. The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rates. Let R_i denote the reservation rate of a flow i, i = 1, 2, ..., n, and C_j represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is its reservation rate (R_i), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (T_i) at the edge of a network is the reservation rate (R_i) plus a variable amount, the fairness component (X_mi) of the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability point of view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = \sum_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at the bottleneck links is then given by

    E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i    (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

    E = C - R = C - \sum_{i=1}^{n} R_i    (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the information about reservation rates received at an ingress point becomes R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}, where k, k = 1, 2, ..., l, denotes the k-th ingress point. The excess bandwidth is given by

    E = C - R = C - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}    (3)
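As a toy numerical check of eq. (1); the link capacities and reservations below are invented purely for illustration:

```python
# Two bottleneck links of 10 and 5 Mbps; three flows reserving 3, 4, 2 Mbps.
C = sum([10e6, 5e6])        # C = sum_j C_j = 15 Mbps
R = sum([3e6, 4e6, 2e6])    # R = sum_i R_i =  9 Mbps
E = C - R                   # eq. (1): excess bandwidth left to share
print(E)                    # -> 6000000.0
```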


Here the l-th ingress point has to be aware of the reservation rates of as many as l-1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System could therefore be implemented for the various SLA scopes, but we assume that the scope of a service in this thesis covers the first three cases, excluding the last for the reason above. A flow's target rate at the edge of the network is given by

    T_i = R_i + X_mi \times E = R_i + X_mi (C - \sum_i R_i)    (4)

where X_mi is proportional to the probability of marking at the source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of X_mi is as follows. First, the probability of marking for each flow i is the number of marked packets (N_mi) divided by the throughput (Thr_i), or equivalently the In-profile window divided by the congestion window:

    m_i = N_mi / Thr_i = iwnd / cwnd    (5)

where iwnd and cwnd represent the window of the IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] of all the flows at the edge of a network is

    E[m_i] = (1/n) \sum_{i=1}^{n} m_i = (1/n) \sum_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) \sum_{i=1}^{n} Mark_{p,i}    (6)

where Mark_p also denotes the probability of marking in the TMS_I algorithm of Section 3.3. Finally, X_mi is obtained as the ratio between (5) and (6); that is, X_mi, given in (7), represents a relative value against the datum point E[m_i]. The fraction X_mi varies from 0 to 1 depending upon m_i and E[m_i]. Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
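How the edge could compute the fairness components X_mi = m_i / Σm_i and the resulting target rates of eq. (4) can be sketched as follows (function and variable names are ours):

```python
def fairness_components(marking_rates):
    """X_mi = m_i / (n * E[m]) = m_i / sum(m); the components sum to 1."""
    total = sum(marking_rates)
    return [m / total for m in marking_rates]

def edge_target_rates(reservations, marking_rates, capacity):
    """Eq. (4): T_i = R_i + X_mi * E, with E = C - sum(R)."""
    excess = capacity - sum(reservations)
    return [r + x * excess
            for r, x in zip(reservations, fairness_components(marking_rates))]

# Two flows reserving 1 and 2 Mbps on a 6 Mbps link, marking rates 0.25/0.75:
# targets are 1 + 0.25*3 = 1.75 and 2 + 0.75*3 = 4.25; they sum to C = 6.
```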

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) consists of two windows in this algorithm [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate as high as the bandwidth the source wants guaranteed.

    X_mi = m_i / (n E[m_i]) = m_i / \sum_{i=1}^{n} m_i    (7)


Fig. 11. Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking, Mark_p(init)? It is generally determined by the fraction between the measured throughput (Thr) and the target throughput (T_i = R_i), as given in (8) below,

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri)
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        else
            ownd = cwnd
            if (ownd < ossthresh)
                ownd += 1
            else
                ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


where Mark_p(init) represents the initial marking rate that the user wants in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, a TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, the TMS_I increases the iwnd size by as much as the marking rate needed; otherwise it increases the ownd size, so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e., iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd, so an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.
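The per-ACK update described above (and flattened in Program code 1) can be sketched as follows; this is our minimal reading of the pseudocode, with fractional increments per acknowledged packet:

```python
def tms1_on_ack(iwnd, ownd, markp, thr, r_i, issthresh, ossthresh):
    """One ACK-clocked update of the two-window scheme (sketch of Program code 1)."""
    cwnd = iwnd + ownd
    if thr < r_i:                       # below reservation: grow iwnd faster
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                               # at/above reservation: release IN share
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return iwnd, ownd, cwnd, iwnd / cwnd   # last value is the new Markp
```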

The else clause in the twelfth line of Program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also addresses a secondary TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network connection is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate better than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking it for an acute congestion

    Mark_p(init) = 1 - Thr / R_i    (8)


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because the TMS_I recognizes packet losses and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.

Fig. 12. Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd/2
        ownd = ossthresh
    else
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

iwnd: In-profile window; ownd: Out-of-profile window; cwnd: congestion window;
Markp: probability of marking; Thr: throughput of a flow; Ri: reservation rate
of the i-th flow; Ti: target rate of the i-th flow; issthresh: threshold of
iwnd; ossthresh: threshold of ownd.


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of an IN-marked packet indicates acute congestion in the network, and then the congestion window is halved. Therefore the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which has no distinctive mechanism for reducing the congestion window.
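This loss handling can be sketched as below; note that the original pseudocode only resets the thresholds on an IN-marked loss, so shrinking the windows down to those thresholds is our assumption:

```python
def tms1_on_triple_dupack(iwnd, ownd, markp, in_marked_loss):
    """Window reduction on a third duplicate ACK (sketch of Program code 2)."""
    cwnd = iwnd + ownd
    if not in_marked_loss:            # OUT loss: temporary congestion,
        ossthresh = ownd / 2          # halve only the out-of-profile window
        ownd = ossthresh
    else:                             # IN loss: acute congestion, halve cwnd
        cwnd = cwnd / 2
        issthresh = markp * cwnd
        ossthresh = cwnd - issthresh
        iwnd, ownd = issthresh, ossthresh   # assumed: windows track thresholds
    return iwnd, ownd
```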

The goals of the TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services and to inform the TMS_II of the probability of marking (Mark_p) that is needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point in the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that includes the user's information is able to alleviate the problem of unfairness within aggregated flows under per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to fairly share the bottleneck bandwidth. In this case it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which is used in the fairness component as stated in Section 3.2. Notice that fairness is defined here as how well the throughputs of all the individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed from the sources.

We assume that an edge router receives information about the reservation rate that each user wants to be assured of, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall expression (4). As stated in Section 3.2, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking at the sources if the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents the increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, i.e., the average marking rate E[m_i], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate; therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and the congestion situation into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the ownd size (or iwnd size) at time N with that at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of Program code 3 describes this case, and it shows the mechanism by which the TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After the congestion decision, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (T_i), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor. In this case the fairness factor α is computed as follows:

    α = 1 - Thr_i / T_i

Otherwise it decreases the marking rate by α, to provide the other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)
    for i = 1 to n
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)
    exit
else    /* iwnd(N) == iwnd(N-1)/2 */
    for i = 1 to n
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi: marking rate of the i-th flow; Ti: target rate of the i-th flow;
Ri: reservation rate of the i-th flow; E: excess bandwidth;
ownd(N): newly measured OUT window; ownd(N-1): OUT window measured at time N-1;
iwnd(N): newly measured IN window; iwnd(N-1): IN window measured at time N-1.
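The TMS_II decision for a single flow can be sketched as follows; taking the absolute value of the fairness factor for the decrease case is our assumption, since the thesis defines α only for the increase case:

```python
def tms2_update(Mi, thr, Ri, Xmi, E, ownd_halved, iwnd_halved):
    """One TMS_II re-marking decision for flow i (sketch of Program code 3)."""
    if not ownd_halved:                 # no congestion: chase Ti = Ri + Xmi*E
        Ti = Ri + Xmi * E
        alpha = abs(1 - thr / Ti)       # fairness factor (abs() is our choice)
        Mi += alpha * Mi if thr < Ti else -alpha * Mi
    elif not iwnd_halved:               # congestion, not acute: no re-marking
        pass
    else:                               # acute congestion: fall back to Ti = Ri
        if thr < Ri:
            beta = 1 - thr / Ri         # complement factor
            Mi += beta * Mi
    return Mi
```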


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion happens in the network but is not serious. The else-if clause in the eighth and ninth lines represents the instance in which the edge's marking rate is equal to the marking probability of the sources (or marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, this means IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to consummate its target rate owing to the repeated reduction of sending rates through the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput does not realise the reservation rate of the user. In such an instance the complement factor is β = 1 - Thr_i / R_i. Otherwise the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and the module then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-marked packets are treated by the RIO queue at the core of a network unlike what the sources anticipate. If re-marked OUT packets (IN-OUT) that were IN at the sources, for example, are dropped at the core at a time of temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size; then TMS_I does not alleviate the TCP dynamics over the assured service well. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve only ownd, which retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but as IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides upon must not change, and the strategy that TMS_II elaborates has to respect this. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to claim the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves its ownd size. Next, TMS_II turns packets IN-marked at the sources into OUT if the congestion window has not been halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of such OUT packets by a step, and so it allows other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network; when an OUT-IN packet is dropped, TMS_I halves its ownd size. It thus tries to compensate for the loss rate occurring at times of congestion by lowering the drop precedence of OUT by a step.
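The three-level precedence ordering above can be sketched as a simple priority table (the names and structure are ours):

```python
# DP0 = IN (green), DP1 = IN-OUT / OUT-IN (yellow), DP2 = OUT (red).
DROP_PRECEDENCE = {"IN": 0, "IN-OUT": 1, "OUT-IN": 1, "OUT": 2}

def drop_order(markings):
    """Sort packet markings so the most drop-preferred (highest DP) come first."""
    return sorted(markings, key=lambda m: DROP_PRECEDENCE[m], reverse=True)

# drop_order(["IN", "OUT", "OUT-IN"]) -> ["OUT", "OUT-IN", "IN"]
```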

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows will not change and will get the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be completed.

UDP flows in the TMS are treated as out-of-profile and so put at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always drops UDP packets marked OUT (red, DP2) preferentially, but lowers the drop precedence of TCP packets by as much as the scaling factors allow. TMS therefore tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows would otherwise dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous section. The simulations were done with the network simulator ns-2 (version 2.1b8a). We used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets and 0.02 for min_in, max_in and Pmax_in, and 50 packets, 400 packets and 0.1 for min_out, max_out and Pmax_out. The RED parameters of the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax

DP0     100      400      0.02

DP1     100      400      0.05

DP2     50       400      0.1
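With the RED parameters listed in Table 4, the per-precedence drop behaviour follows the standard RED curve. A minimal sketch of that curve, as our own illustration (the actual ns-2 RIO implementation differs in detail, e.g. it works on an exponentially averaged queue size and spreads drops using a count variable):

```python
# (min_th, max_th, Pmax) per drop precedence, from Table 4
PARAMS = {
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50, 400, 0.1),
}

def drop_prob(avg_q, dp):
    """RED drop probability for an (averaged) queue length avg_q in packets."""
    min_th, max_th, p_max = PARAMS[dp]
    if avg_q < min_th:
        return 0.0                                  # below min_th: never drop
    if avg_q >= max_th:
        return 1.0                                  # above max_th: force drop
    return p_max * (avg_q - min_th) / (max_th - min_th)  # linear ramp
```

Note how DP2's lower min_th and higher Pmax make OUT-marked packets the first casualties of a growing queue, realising the drop order (i)-(iii) described in section 3.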

First of all, we investigate the simulation results with the Adequate-marking algorithm playing the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps, and the capacity of the bottleneck link set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s), terminating at t = 350 (s). First, the interval from t = 10 (s) through t = 110 (s) represents an over-provisioned network in which five flows compete for the excess bandwidth of 5 Mbps; marking is initially performed, but only temporary marking for sharing the excess bandwidth occurs. Second, from t = 120 (s) to 210 (s), under soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps; marking in this interval is carried out in earnest so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220 (s) to t = 350 (s): all the flows do their best to achieve their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to the congestion remaining after flow8 through flow10 terminate, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs, the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.

While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15 (a), Time Sliding Window Two Color Marker: marking rate (Mbps) of flow1 through flow10 over simulation time 0-400 (s)]

[Figure 15 (b), Two Markers System: marking rate (Mbps) of flow1 through flow10 over simulation time 0-400 (s)]

Fig 15 Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases, as follows: over-provisioned, temporary congestion, and serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic characteristics by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind changes in the network load: slowly rising in response to an increased traffic load and slowly falling in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to an increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when it is detected that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).
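The adaptive behaviour described above can be caricatured as a proportional update of the marking rate. This is a hypothetical sketch: the thesis excerpt does not give the exact update rule, so the proportional form, the clamping, and the names are our assumptions about how α and β might be applied:

```python
def update_marking_rate(rate, achieved, target, alpha=0.5, beta=0.25):
    """Raise the marking rate when throughput is below target, lower it
    when above, scaled by the factors alpha and beta (assumed semantics)."""
    if achieved < target:
        rate += alpha * (target - achieved) / target   # mark more aggressively
    else:
        rate -= beta * (achieved - target) / target    # release excess quickly
    return min(max(rate, 0.0), 1.0)                    # keep it a probability
```

An asymmetric pair (alpha > beta) would make the marker ramp up faster than it backs off, which matches the "fast rises in response to an increased traffic load" behaviour the text attributes to TMS.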

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, i.e. an under-subscribed network. The experiment ran from t = 0 (s) to 500 (s). Figure 17 presents the results, in which the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in the two schemes, shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability whenever the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the marking probability of the sources, all the flows are able to adapt to network dynamics.

[Figure 16 (a), Time Sliding Window Two Color Marker: total throughput of S1 and S2 and marking rate of S1 (Mbps) over simulation time 0-400 (s)]

[Figure 16 (b), Two Markers System: total throughput of S1 and S2 and marking rate of S1 (Mbps) over simulation time 0-400 (s)]

Fig 16 Marking packets through the scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.


[Figure 17 (a), Time Sliding Window Two Color Marker: throughput (Mbps) of flow1 through flow10 for aggregate flows 1-6]

[Figure 17 (b), Two Markers System: throughput (Mbps) of flow1 through flow10 for aggregate flows 1-6]

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flow number.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, the implementation of the TCP-source marking module at the source raises the issue of modifying TCP sources in a differentiated services network.

Table 5 The average throughput of aggregate flows as RTT increases

Aggregate flow (No.)        1        2        3        4        5        6

RTT (ms)                    80       100      130      170      220      280

E[Thr] of TSW2CM (Mbps)     4.8989   4.9703   4.9183   4.9391   4.9247   4.9098

E[Thr] of TMS (Mbps)        5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. We used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined for nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1 Mbps. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy by increasing the number of TCP flows with a target rate of 1 Mbps from source 1 to 4, while UDP is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ), defined as follows [16]:

ρ (%) = ( Σi Ti / C ) × 100        (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link, respectively. The interval from ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = ( ExBW / T ) × 100        (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.
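Equations (9) and (10) can be checked numerically. For the topology above, with six 1 Mbps flows, an 8 Mbps bottleneck gives ρ = 75 % and a 3 Mbps bottleneck ρ = 200 %:

```python
def link_utilisation(targets, capacity):
    """Equation (9): rho (%) = sum(T_i) / C * 100."""
    return sum(targets) / capacity * 100.0

def excess_bw_percent(throughput, target):
    """Equation (10): excess bandwidth as a percentage of the target rate.
    0 % means the flow achieves exactly its target rate."""
    return (throughput - target) / target * 100.0
```

A flow below its target rate thus shows up as a negative percentage in Figure 19, which is why the vertical axis extends down to -100 %.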


[Figure 19 (a), Time Sliding Window Two Color Marker: average excess bandwidth (%) of TCP and UDP flows versus link utilisation 30-190 (%)]

[Figure 19 (b), Two Markers System: average excess bandwidth (%) of TCP and UDP flows versus link utilisation 30-190 (%)]

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation as a percentage.

However, since UDP sources inject traffic into the network without any penalty, they achieve their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents the increase or decrease in demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than others in order to achieve its reservation rate; the flow is therefore marked more heavily than others. The distribution of excess bandwidth in the over-provisioned network, shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is, when congestion occurs, to preferentially drop the UDP and TCP packets marked OUT, which are outside their contracts, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
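The edge-embedded module's threshold test on E[mi] can be sketched in a few lines. This is our illustration; the excerpt does not specify how the average marking rate is computed, so a plain arithmetic mean over the active flows is assumed:

```python
def wants_more_bandwidth(marking_rates, i):
    """A flow i is judged to demand more bandwidth when its marking rate
    exceeds the average marking rate E[m_i] taken over all flows."""
    avg = sum(marking_rates) / len(marking_rates)   # assumed form of E[m_i]
    return marking_rates[i] > avg
```

Flows flagged this way are the ones the edge marks more heavily, steering extra IN-marked (low drop precedence) packets toward sources still short of their reservation rates.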

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows the results, in which the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The setting of the RTT parameters follows subsection 4.1. Also, the average throughput of all the flows in both schemes is similar. First, we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows merely manage to accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3 Mbps, as the different marking strategy is elaborated against the UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly, regardless of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that in the under-provisioned network bandwidth is distributed to each flow more fairly (about 0.5 Mbps) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20 (a), Time Sliding Window Two Color Marker: throughput (Mbps) of TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases]

[Figure 20 (b), Two Markers System: throughput (Mbps) of TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases]

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this area, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, because UDP flows consume more than their share.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and to allocate the bandwidth of a bottleneck link fairly to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the marking probability according to changes in the network, so it marks packets in accordance with the reservation rates the users wish to guarantee. It also notifies TMS_II, the edge-embedded marking module, of the marking rates to be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, when resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to flows with a view to proportional fairness, in what is called the marking rate-based fairness strategy. Consequently, it mitigates the unfairness of per-aggregation marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput of TCP flows in a RIO-based diffserv network. As expected, the simulation results showed that the Two Markers System reduces TCP impacts over the assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP would otherwise dominate and achieve their target rates. Next, in an under-provisioned network, although none of the flows could achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.


[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. 1998.

[19] M. Vanhala, "Differentiated Services architecture," S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.



division is only logical. DS ingress and egress nodes are boundary nodes that connect the domain to another DS domain or to a domain that is not DS-capable.

2.2.2 Diffserv network

In the following we discuss how a network that supports diffserv functions.

Fig 5 shows three domains which are interconnected by a backbone network. Two of the domains are DS-capable, i.e. their interior and boundary nodes support forwarding based on DSCP values. Introducing a non-DS-compliant node into a DS-capable domain may result in service degradation, and the required service characteristics (e.g. low delay) may become unattainable. Non-DS-capable domains do not employ differentiated services but rather the best-effort service and possibly other service provisioning approaches.

Fig 5 A non-DS-capable and two DS-capable domains connected to a backbone network

Nodes in a DS domain, e.g. domain 1 in Fig 5, are supposed to employ uniform sets of PHBs on which the services can be built. It is to be noted that the PHBs need not be the same among separate domains; instead, mappings of PHBs are agreed between parties, so that traffic from a domain that has certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS domain 1 in Fig 5 above, starts generating traffic, there are certain implications for the functioning of the network that differ from the case where the network is non-DS-capable.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to that value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream to a justified level for H5. The destination of the traffic from H5 is, say, host H2 in DS domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The backbone DS router presented here always polices incoming traffic, as it cannot trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view, and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be presented easily and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels: on the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level the profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (the policing profile). Packets submitted in excess of this profile may not be allotted the requested service level. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain does not match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet and in IPv6 the Traffic Class octet act as the DS field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS field, the differences are such that deployment of diffserv does not cause serious impacts on existing applications.

Fig 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another; their usage and the queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling considering Delay, Throughput or Reliability; backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs as configured by a network administrator.
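Extracting the fields described above from a TOS octet is a matter of bit masking. A small sketch, with the field layout as in the figure (bit 0 taken as the most significant bit of the octet):

```python
def parse_tos(tos):
    """Split an IPv4 TOS octet into the legacy precedence (bits 0-2)
    and the 6-bit DS codepoint (bits 0-5)."""
    precedence = (tos >> 5) & 0x07   # legacy IP precedence field
    dscp = (tos >> 2) & 0x3F         # DS codepoint used by diffserv
    return precedence, dscp
```

For example, a TOS octet of 0xB8 carries DSCP 46 (the codepoint assigned to Expedited Forwarding), which a legacy node would read as precedence 5.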

What traffic conditioning basically means is treating traffic differentially so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but it is not required for a network to be DS-capable; when conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is the part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e. service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.

Fig 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
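One common way to implement a meter is a token bucket. The sketch below is illustrative only (class and parameter names are assumptions, not the conditioner specified in this thesis): packets within profile are reported as IN, excess packets as OUT, and a downstream marker or dropper acts on that state information.

```python
class TokenBucketMeter:
    """Minimal token-bucket meter: a packet is 'IN' profile if enough
    tokens have accumulated, otherwise 'OUT' (rate in bytes/second,
    burst in bytes; an illustrative sketch)."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def measure(self, now, pkt_len):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len
            return "IN"   # conforming: marker keeps the profile codepoint
        return "OUT"      # out of profile: re-mark, shape or drop

m = TokenBucketMeter(rate=1000, burst=1500)
assert m.measure(0.0, 1500) == "IN"   # burst fits the bucket
assert m.measure(0.0, 100) == "OUT"   # bucket drained, no time elapsed
assert m.measure(1.0, 1000) == "IN"   # one second refills 1000 tokens
```

The marker, shaper and dropper then differ only in what they do with the OUT verdict: re-mark with a lower-assurance codepoint, delay the packet, or discard it.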

Traffic is conditioned on either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic into conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic cannot be assumed to conform to the TCA, and policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited number of conditioning rules (ie some, but not all, of the TC components in Fig 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above; a single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are wished to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized PHBs, there may exist locally defined PHBs in a network; these may be eg experimental, or they may serve a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig 6) is limited and room is left for future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on eg prioritizing one over another. The PHB specifications themselves give behavioural characteristics instead of implementation guidelines; this leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented in the previous paragraph. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv cannot be taken into use if it does not provide backward compatibility; the Default PHB exists for that. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (ie best-effort service) gets the lowest priority compared to all other PHBs; therefore any traffic that does not conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which do not employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The mere Default PHB is not enough to provide sufficient backward compatibility. The precedence bits (see Fig 6) of the IPv4 TOS field are widely used in existing networks' equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. The same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.
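The eight Class Selector codepoints can be enumerated mechanically; this is a small illustrative sketch (the list name is an assumption):

```python
# The eight Class Selector codepoints take the form 'xxx000': the legacy
# precedence value in the upper three bits, the lower three bits zero.
class_selector_codepoints = [prec << 3 for prec in range(8)]

assert [f"{cp:06b}" for cp in class_selector_codepoints] == [
    "000000", "001000", "010000", "011000",
    "100000", "101000", "110000", "111000",
]
# Codepoint 000000 doubles as the Default (best-effort) PHB; a larger
# precedence value means a higher forwarding priority.
assert class_selector_codepoints[0] == 0b000000
```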

Fig 8 Usage of DS-field in Class Selector PHB

The motivation behind the AF PHB is the need for fixed-bandwidth lines, which especially companies use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets in each of the classes are marked with one of three drop precedence values; the customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(i) how much forwarding resources have been allocated to the AF class that the packet belongs to; packets in one class are forwarded independently of the others and within the service rate (bandwidth) that has been configured for the class,

(ii) the current load of the AF class, and, in case of congestion within the class,

(iii) the drop precedence of the packet; packets with a higher drop precedence value are discarded preferentially.

Table 1 The four AF classes and the related three drop precedence values

                            | Class 1 | Class 2 | Class 3 | Class 4

Low Drop Precedence (1)     | 001010  | 010010  | 011010  | 100010

Medium Drop Precedence (2)  | 001100  | 010100  | 011100  | 100100

High Drop Precedence (3)    | 001110  | 010110  | 011110  | 100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
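The codepoints in Table 1 follow a regular bit layout, which the following sketch reproduces (the function name is an illustrative assumption):

```python
def af_codepoint(n: int, m: int) -> int:
    """DSCP for AF class n (1-4) with drop precedence m (1-3): the class
    number in the upper three bits, the precedence in the next two bits,
    and a zero in the last bit (a sketch reproducing Table 1)."""
    assert 1 <= n <= 4 and 1 <= m <= 3
    return (n << 3) | (m << 1)

# Spot-check Table 1: AF11 = 001010, AF13 = 001110, AF43 = 100110.
assert f"{af_codepoint(1, 1):06b}" == "001010"
assert f"{af_codepoint(1, 3):06b}" == "001110"
assert f"{af_codepoint(4, 3):06b}" == "100110"
```

Note that the class number occupies the legacy precedence bits, so a legacy node still sees four distinct priority levels.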

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are eg a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, an aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are all for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface

b) from ingress interface A to a set of egress interfaces

c) between interface A and some egress interface B

Some things are common to both examples. First of all, policers need to be configured at the traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus provides reduced latency. Quantitative performance measures cannot be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2 TCA specification for BBE service

Codepoint  | Service rate | Applicable egress IF | Handling of excess traffic

AF13 mark  | 1 Mbps       | Any                  | Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3 TCA specifications for Leased Line service

Codepoint | Service rate | Applicable egress IF | Handling of excess traffic

EF mark   | 500 Kbps     | Egress point B       | Discard

EF mark   | 250 Kbps     | Egress point C       | Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps; excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies focused on RED-based differentiated services approaches.

In [1], [2], Feng et al proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. Their results showed that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service; the overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation work, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on the aggregation state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth; in this case "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among the aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System-I, TMS_I) and at the edge (Two Markers System-II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the summation of iwnd and ownd. TMS_I determines the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked whenever congestion occurs in the network, and so TMS_I decreases TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) keeps up the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also adequately marks packets to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to properly mark packets. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and then the marker re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where an increase or decrease of the marking rate represents the level of demand for resources. Therefore the key point of the fairness strategy is the marking rate (ie information about a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows towards the total target rate of all the flows, based on the probability of marking from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10 The design of the Two Markers System architecture

3.2 The crux of fairness strategy

Note that in this system the sources' target rates are different from the edge's target rates. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and Cj represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is a reservation rate (Ri) of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point

(ii) all traffic from ingress point S to a set of egress points

(iii) all traffic between ingress point S and egress point D

(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = \sum_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. Therefore the excess bandwidth at the bottleneck links is given by

E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i    (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C - R = C_1 - \sum_{i=1}^{n} R_i    (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates in order to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}, where k, k = 1, 2, ..., l, represents the kth ingress point, and the excess bandwidth is given by

E = C - R = C_1 - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}    (3)


where the lth ingress point has to be aware of the reservation rates of as many as l-1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore the Two Markers System can be implemented under various SLA scopes, but for the above reason we assume that the scope of a service in this paper includes the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

T_i = R_i + X_{mi} \times E = R_i + X_{mi} (C - \sum R_i)    (4)

where Xmi is proportional to the probability of marking from a source; that is, it is determined by the fraction between the marking rate of each flow and the average marking rate. The development of Xmi is as follows. First, the probability of marking for each flow i is the number of marked packets (Nmi) divided by the throughput Thr_i, or the In-profile window divided by the congestion window:

m_i = \frac{N_{mi}}{Thr_i} = \frac{iwnd}{cwnd}    (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] of all the flows at the edge of the network is

E[m_i] = \frac{1}{n} \sum_{i=1}^{n} m_i = \frac{1}{n} \sum_{i=1}^{n} \frac{iwnd_i}{cwnd_i} = \frac{1}{n} \sum_{i=1}^{n} Mark_{p,i}    (6)

where Mark_p also represents the probability of marking in the TMS_I algorithm of Section 3.3. Finally, Xmi is developed using the fraction between (5) and (6); that is, Xmi represents a relative value against the datum point E[m_i]. The value of Xmi varies from 0 to 1, depending upon m_i and E[m_i]. Consequently, this value stands for the share of the excess bandwidth that each flow requires to achieve its target rate.
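The edge's target-rate computation of equations (4)-(7) can be sketched numerically; the function name and the Kbps units below are illustrative assumptions:

```python
def target_rates(reservations, marking_rates, capacity):
    """Edge target rates T_i = R_i + X_mi * E (equation (4)), where the
    fairness component X_mi = m_i / sum(m_i) (equation (7)) divides the
    excess bandwidth E = C - sum(R_i) in proportion to each flow's
    marking rate m_i (an illustrative sketch; rates in Kbps)."""
    excess = capacity - sum(reservations)   # E, equation (1)
    total_marking = sum(marking_rates)      # n * E[m_i], equation (6)
    return [r + (m / total_marking) * excess
            for r, m in zip(reservations, marking_rates)]

# Two flows reserving 300 and 500 Kbps on a 1000 Kbps bottleneck: the flow
# marking three times as fast claims three quarters of the 200 Kbps excess.
assert target_rates([300, 500], [0.75, 0.25], 1000) == [450, 550]
```

With equal marking rates the excess splits evenly, recovering plain proportional sharing of the spare capacity.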

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) consists of two windows in this algorithm [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate as high as the bandwidth the source wants guaranteed requires.

X_{mi} = \frac{m_i}{n E[m_i]} = \frac{m_i}{\sum_{i=1}^{n} m_i}    (7)


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking (Markp(init))? It is generally determined by the fraction between the measured throughput (Thr) and the target throughput (Ti = Ri), as follows:

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh) iwnd += Markp
        else                  iwnd += 1/(2*cwnd)
        if (ownd < ossthresh) ownd += (1 - Markp)
        else                  ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh) iwnd -= (1 - Markp)
            else                  iwnd -= Markp
        } else {
            ownd = cwnd
        }
        if (ownd < ossthresh) ownd += 1
        else                  ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


Mark_p(init) = 1 - \frac{R_i}{Thr}    (8)

where it represents the initial rate at which the user wants packets marked in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed; otherwise it increases the ownd size, so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, ie iwnd and ownd, they act like the one window of the traditional congestion algorithm: the summation of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a decrement of ownd to the same degree. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to grope for more bandwidth becoming available on the path. Note that the summation of the increase rates of the two window sizes while digging out resources is 1/cwnd.

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among the other sources' TCP traffic and best-effort traffic, and also considers the second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the connection is congested. If the throughput measured at a source, for instance, exceeds its reservation rate and iwnd exists, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and keep IN-marked packets from being immoderately dropped at the core of the network. In fact, the TCP-source marking module (TMS_I) keeps up the marking rate better than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so a source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.
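The window-opening step of program code 1 can be transliterated into runnable form; this is a sketch (the function name and the return convention are assumptions), useful for checking that the two windows really do sum to the congestion window after each update:

```python
def tms1_on_ack(cwnd, markp, thr, R, issthresh, ossthresh):
    """Window opening of TMS_I on each new ACK (a Python transliteration
    of program code 1; thr = measured throughput, R = reservation rate).
    Returns the new congestion window and marking probability."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if thr < R:
        # Below the reservation rate: grow both windows quickly while
        # under threshold, linearly (1/(2*cwnd) each) above it.
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:
        # Reservation met: shrink iwnd and give the excess back via ownd.
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd

# A flow marking half its packets and still below its reservation
# grows both windows, leaving the marking probability unchanged:
new_cwnd, new_markp = tms1_on_ack(10.0, 0.5, thr=0.8, R=1.0,
                                  issthresh=20.0, ossthresh=20.0)
assert new_cwnd == 11.0 and new_markp == 0.5
```

Once the measured throughput exceeds the reservation rate, the same call shrinks iwnd and grows ownd, so the marking probability drifts downward exactly as the else clause of program code 1 prescribes.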

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd / 2
        ownd = ossthresh
    } else {
        cwnd = cwnd / 2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

iwnd: In-profile window

ownd: Out-of-profile window

cwnd: congestion window

Markp: the probability of marking

Thr: throughput of a flow

Ri: reservation rate of the ith flow

Ti: target rate of the ith flow

issthresh: threshold of iwnd

ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half of its size whenever an OUT packet loss occurs; this indicates temporary congestion of the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion in the network, and then the congestion window is reduced to half of its size. Therefore the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which has no distinctive mechanism for reducing the congestion window.
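The window-closing rule of program code 2 can likewise be sketched in runnable form (the function name is an assumption; the thresholds are omitted here, since the next ACK re-derives the windows from cwnd and Markp):

```python
def tms1_on_triple_dupack(cwnd, markp, out_marked_loss):
    """Window closing of TMS_I on a third duplicate ACK (a transliteration
    of program code 2). An OUT-marked loss halves only ownd (temporary
    congestion); an IN-marked loss halves the whole cwnd (acute
    congestion). Returns the new congestion window."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if out_marked_loss:
        ownd = ownd / 2        # ossthresh = ownd/2; ownd = ossthresh
        cwnd = iwnd + ownd     # only the OUT window shrinks
    else:
        cwnd = cwnd / 2        # IN-marked loss: conventional halving
    return cwnd

# cwnd = 10, Markp = 0.5: an OUT loss costs only a quarter of the window,
# while an IN loss costs half of it.
assert tms1_on_triple_dupack(10.0, 0.5, True) == 7.5
assert tms1_on_triple_dupack(10.0, 0.5, False) == 5.0
```

This makes concrete why IN-marked losses are the expensive ones, and hence why the marking module tries to avoid over-marking.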

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (MarkP) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall the numerical expression (7). We deal with the usage of the datum point by the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that carries the information of the user is able to alleviate the problem of unfairness within aggregated flows introduced by per-aggregation marking.

3.4 Marking rate-based Fairness strategy: the Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy, based on the marking rates of the sources, in order to share the bottleneck bandwidth fairly. To accomplish this strategy, called the marking rate-based fairness strategy, it must cooperate closely with the TCP-source marking module. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which are used in the fairness component as stated in Section 2.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants guaranteed from the sources.

We assume that an edge router receives information on the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from that of the source; recall numerical expression (4). As stated in Section 2.2, if the network is over-provisioned, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking at the sources; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at a source represents an increase or decrease in its demand for bandwidth, the number of IN-marked packets is used as the criterion of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, i.e. the average marking rate E[mi], the flow wants more bandwidth than the others in order to achieve its reservation rate; it is therefore marked IN more often and given a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes both the marking rate-based fairness policy and congestion situations into consideration.
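The edge target rates of the form Ti = Ri + Xmi·E can be sketched as below. This is a hedged illustration: the thesis states only that flows marking more packets IN receive a larger share of the excess bandwidth E, so the proportional normalisation of the shares Xmi by the reported marking rates is an assumption.

```python
def target_rates(reservations, marking_rates, capacity):
    """Per-flow edge target rates T_i = R_i + Xm_i * E (assumed Xm_i:
    each source's marking rate normalised over all sources)."""
    excess = capacity - sum(reservations)      # E, the excess bandwidth
    if excess <= 0:
        # Under-provisioned: targets fall back to the reservation rates.
        return list(reservations)
    total_m = sum(marking_rates)
    return [r + (m / total_m) * excess
            for r, m in zip(reservations, marking_rates)]
```

For two flows reserving 3 Mbps each on a 9 Mbps link, a source marking twice as aggressively would receive twice the share of the 3 Mbps excess.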

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to detect a congestion situation, the other is to ensure coherence with the Adequate-marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with its size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of Program code 3 describes this case, and shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding that there is no congestion, it measures whether the throughput of a flow i (Thr_i) realises its target rate (Ti), viz. the crux of the fairness strategy. If the throughput does not reach the target rate, the marker increases the marking rate by α, called the fairness factor. In this case the fairness factor α is computed as follows:


α = 1 - Thr_i / Ti

Otherwise it decreases the marking rate by α, to provide other flows with the chance of sharing the excess bandwidth.

Fig 13 Program code 3: TMS_II algorithm

    if (ownd(N) != ownd(N-1)/2)
        for i = 1 to n
            Ti = Ri + (Xmi * E)
            if (Thr_i < Ti)
                Mi += α * Mi
            else
                Mi -= α * Mi
    else if (iwnd(N) != iwnd(N-1)/2)
        exit
    else    /* iwnd(N) == iwnd(N-1)/2 */
        for i = 1 to n
            Ti = Ri
            if (Thr_i < Ti)
                Mi += β * Mi
            else
                exit

    Mi: marking rate of the i-th flow    Ti: target rate of the i-th flow
    Ri: reservation rate of the i-th flow   E: excess bandwidth
    ownd(N): newly measured OUT window   ownd(N-1): OUT window measured at time N-1
    iwnd(N): newly measured IN window    iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 while the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else if clause in the eighth and ninth lines represents the case in which the edge's marking rate equals the marking probability of the sources (the marking information), so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, in the tenth through sixteenth lines. Note that if the total throughput exceeds the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to reach its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore, to compensate for the loss rate, TMS_II compares the measured throughput with the reservation rate and increases the marking rate by β, called the complement factor, if the throughput does not realise the user's reservation rate. In this instance the complement factor is β = 1 - Thr_i / Ri. Otherwise no re-marking operation is performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth to the flows fairly, based on the marking rate of each source. We can therefore say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
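The adjustment logic of Program code 3 (Figure 13) can be rendered as a small executable sketch. The branch semantics follow the text above; taking the fairness factor as the absolute deviation |1 - Thr_i/Ti| in the decrease branch is an assumption, since the thesis only defines α for the increase case.

```python
def tms2_adjust(M, Thr, T, R, ownd_halved, iwnd_halved):
    """Update the per-flow marking rates M in place and return them.

    ownd_halved / iwnd_halved - whether the OUT/IN windows were observed
    to halve between measurements (the congestion test in the text)."""
    for i in range(len(M)):
        if not ownd_halved:
            # (i) no congestion: steer each flow toward its edge target T_i.
            alpha = abs(1.0 - Thr[i] / T[i])          # fairness factor
            M[i] += alpha * M[i] if Thr[i] < T[i] else -alpha * M[i]
        elif not iwnd_halved:
            # (ii) congestion, but not acute: edge and source marking rates
            # agree, so no re-marking is required.
            pass
        else:
            # (iii) acute congestion: the target collapses to R_i and the
            # complement factor beta tops up flows below their reservation.
            beta = 1.0 - Thr[i] / R[i]                # complement factor
            if Thr[i] < R[i]:
                M[i] += beta * M[i]
    return M
```

A flow at half its target thus has its marking rate increased by 50 %, while a flow already above target is marked down proportionally.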

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process: when it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets turn into OUT. The re-marked packets are then treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size; TMS_I then no longer alleviates the TCP dynamics over assured service well. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve only the ownd, which retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked OUT at the edge of the network but IN at the source; an OUT-IN packet is the counterpart. We assume that the drop precedence the source decides must not be overridden outright, so the strategy TMS_II elaborates has to be deliberate. The order of dropping packets is: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not meet the target rate, TMS_II re-marks packets that were OUT at the sources as IN, so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves its ownd size. Next, TMS_II turns packets marked IN at the sources into OUT if the congestion window has not been halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT, and OUT. By lowering the drop precedence of such packets one step, it allows other flows to share the excess bandwidth; RIO at the core drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT packets as IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core. When an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way it tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT packets one step.
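The mapping from the (source mark, edge mark) pair to a drop precedence, and the resulting drop order, can be sketched as a lookup table. The dictionary form is illustrative, not the thesis's data structure.

```python
# Codepoint for each (mark at source, mark at edge) combination.
DROP_PRECEDENCE = {
    ("IN",  "IN"):  "DP0",   # green: dropped last
    ("IN",  "OUT"): "DP1",   # yellow: IN-OUT, re-marked down at the edge
    ("OUT", "IN"):  "DP1",   # yellow: OUT-IN, re-marked up at the edge
    ("OUT", "OUT"): "DP2",   # red: dropped first
}

def drop_order(packets):
    """Sort (source_mark, edge_mark) pairs so the most droppable come first,
    mirroring the RIO order OUT -> OUT-IN/IN-OUT -> IN."""
    rank = {"DP2": 0, "DP1": 1, "DP0": 2}
    return sorted(packets, key=lambda p: rank[DROP_PRECEDENCE[p]])
```

Because the sort is stable, packets within the same precedence keep their arrival order.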

3.6 The issue of interaction between TCP and UDP flows

In this section we consider the interaction between TCP and UDP flows in the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows will not change, and will take the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, making it impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile, putting them at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP packets marked OUT (red, DP2), but lowers the drop precedence of TCP packets by as much as the scaling factors allow. It thereby tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows had almost entirely dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.
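The asymmetric treatment described above reduces to a simple edge rule, sketched here with illustrative names: UDP traffic is pinned to the highest drop precedence, while TCP keeps whatever precedence its (re-)marking produced.

```python
def edge_codepoint(protocol, marked_in):
    """Assign a drop-precedence codepoint at the edge (toy rule).

    UDP is always out-of-profile (DP2, red); TCP packets keep the
    precedence implied by their IN/OUT marking state."""
    if protocol == "UDP":
        return "DP2"
    return "DP0" if marked_in else "DP2"
```

Under RIO this guarantees that, when congestion starts, UDP packets are among the first candidates for discard.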


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous chapter. The simulations were done with the network simulator ns-2 (version 2.1b8a). We used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps (or fixed below it), so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters for the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

          min_th   max_th   Pmax
    DP0     100      400    0.02
    DP1     100      400    0.05
    DP2      50      400    0.1
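The parameters of Table 4 plug into the standard RED drop rule: no drops while the average queue is below min_th, a drop probability rising linearly to Pmax at max_th, and certain drop beyond max_th. A per-precedence sketch (the table encoding is illustrative):

```python
RED_PARAMS = {               # dp: (min_th, max_th, Pmax), from Table 4
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50,  400, 0.10),
}

def drop_probability(dp, avg_queue):
    """Standard RED drop probability for a given drop precedence."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    # Linear ramp from 0 at min_th up to Pmax just below max_th.
    return p_max * (avg_queue - min_th) / (max_th - min_th)
```

The lower min_th and higher Pmax of DP2 mean red packets start being dropped earlier and more aggressively than DP1 or DP0 packets at the same queue length.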

First of all, we investigate the simulation results of the Adequate-marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s, which stop at t = 350 s. The interval from t = 10 s to t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps; marking is performed only temporarily, for sharing the excess bandwidth. Second, from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps; marking in this interval is carried out chiefly to realise the reservation rate of each user. Next, grave congestion appears from t = 220 s to t = 350 s: all the flows do their best to meet their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but then marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

(Two plots omitted: bandwidth in Mbps versus simulation time in s, for flows 1-10.)

a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig 15 Adequate-marking algorithm: the vertical axis represents the marking rate and the horizontal axis the simulation time. The network condition changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind changes in the network load: rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of resources by the amount of the increment, and S1 does not share the bandwidth of the bottleneck link fairly with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load, in order to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet-marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in Section 3.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, i.e. an under-subscribed network. The experiment runs from t = 0 s to 500 s. Figure 17 shows the results: the average throughputs of all individual flows of the aggregated traffic against their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of the individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking, provided the total throughput is less than the capacity of the network. As

(Two plots omitted: total throughput and marking rate in Mbps versus simulation time in s, for S1 and S2.)

a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig 16 Marking packets through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time.


(Two plots omitted: achieved bandwidth in Mbps for flows 1-10, over six simulation runs.)

a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig 17 Throughput under the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the simulation run.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of the aggregate flows as RTT increases

    Aggregate flow (No.)         1        2        3        4        5        6
    RTT (ms)                    80      100      130      170      220      280
    E[Thr] of TSW2CM (Mbps)  4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
    E[Thr] of TMS (Mbps)     5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In this experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for nodes 7 through 10; constant-rate UDP packet streams originate at sources 5 and 6 and are destined for nodes 11 and 12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rates of the sources are fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at a rate of 1 Mbps. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of Section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increase the number of TCP flows, with a target rate of 1 Mbps each, from sources 1 to 4, while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take account of network dynamics. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σi Ti / C) × 100    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 % to 200 % shown in Figure 19 reflects the change in the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)

Therefore an excess bandwidth of 0 % denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at a network-load saturation point of 80 %, as shown in the figure.
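Equations (9) and (10) transcribe directly into code; for instance, six 1 Mbps targets on an 8 Mbps bottleneck give ρ = 75 %, and a flow receiving 1.5 Mbps against a 1 Mbps target sits at +50 % excess bandwidth.

```python
def link_utilisation(target_rates, capacity):
    """Equation (9): aggregate target rate over bottleneck capacity, in %."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(throughput, target_rate):
    """Equation (10): a flow's excess bandwidth as a % of its target rate.
    Negative values mean the flow fell short of its target."""
    return 100.0 * (throughput - target_rate) / target_rate
```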


(Two plots omitted: average excess bandwidth in % versus link utilisation in %, for TCP and UDP.)

a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis the link utilisation in percent.

However, since UDP sources send traffic into the network without any penalty, they meet their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume the following: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and given a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside their contracts, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisations of ρ = 60 % and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic against their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The RTT parameters follow the setup of Section 4.1, and the average throughputs of all the flows in both schemes are similar to each other. First, consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows merely manage to accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. In Figure 20 (b), on the other hand, the TCP flows fairly share the excess bandwidth of about 0.3 Mbps, thanks to the marking strategy that discriminates against UDP flows; all the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their probabilities of marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network, because the UDP flows transmit constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that in the under-provisioned network the bandwidth is distributed more fairly (about 0.5 Mbps each) than in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

(Two plots omitted: per-flow bandwidth in Mbps for TCP1-TCP4 and UDP1-UDP2, in the over-provisioned and under-provisioned networks.)

a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved bandwidth against the target rate, and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under variable SLAs. Two important issues have arisen in this setting, and much research has been done on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate, owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the overeating of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and allocate the bandwidth of a bottleneck link to each flow fairly with respect to the target rates. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the probability of marking according to the changes in the network, so it marked packets only as much as the reservation rates the users wanted guaranteed. It also notified TMS_II, the edge-embedded marking module, of the marking rates to be used for marking the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and, if the resources in the diffserv network were over-provisioned, allocated the bandwidth of the bottleneck link to the flows with a view to proportional fairness, the marking rate-based fairness strategy. Consequently, it mitigated the problems of unfairness under per-aggregation marking and of inefficient utilisation under per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced the TCP impacts over assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also analysed the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP had almost dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W Feng, D Kandlur, D Saha and K Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc ICNP '98, Austin, TX, Oct (1998) 108-117

[2] W Feng, D Kandlur, S Saha and K Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc NOSSDAV '97, St Louis, MO, May (1997) 279-290

[3] S Floyd and V Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol 3, no 4, Aug (1995) 365-386

[4] J Heinanen, F Baker, W Weiss and J Wroclawski, Assured Forwarding PHB Group, RFC 2597, June (1999)

[5] K Nichols, S Blake, F Baker and D Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec (1998)

[6] S Blake, D Black, M Carlson, E Davies, Z Wang and W Weiss, An Architecture for Differentiated Services, RFC 2475, Dec (1998)

[7] V Jacobson and M Karels, Congestion Avoidance and Control, in Proc SIGCOMM '88, Stanford, CA, Aug (1988) 314-329

[8] I Yeom and A L N Reddy, Realizing Throughput Guarantees in a Differentiated Services Network, in Proc ICMCS '99, Florence, Italy, June (1999) 372-376

[9] N Seddigh, B Nandy, P Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc GLOBECOM '99, Rio de Janeiro, Dec (1999)

[10] C Gbaguidi, H J Einsiedler, P Hurley, W Almesberger and J Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech Report SSC/1998/020, http://sscwww.epfl.ch, May (1998)

[11] J Ibanez, K Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug (1998)

[12] S H Lee, S J Seok, S J Lee, S K Youm and C H Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc QofIS '01, LNCS 2156, Coimbra, Sept (2001) 207-221

[13] I B H de A Alves, J F de Rezende, L F M de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc GLOBECOM '00, vol 1, San Francisco, Nov (2000) 445-449

[14] N Seddigh, B Nandy, P Pieda, Study of TCP and UDP Interaction for AF PHB, IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug (1999)

[15] S H Lee, S J Seok, S J Lee and C H Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc MMNS '01, LNCS 2216, Chicago, Oct (2001) 198-203

[16] A Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc HPSR '01, Dallas, May (2001) 190-196

[17] Y Bernet, J Binder, S Blake, M Carlson, B E Carpenter, S Keshav, E Davies, B Ohlman, Z Wang, W Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb (1999)

[18] S Blake, D Black, M Carlson, E Davies, Z Wang and W Weiss, An Architecture for Differentiated Services, RFC 2475, Aug (1998)

[19] M Vanhala, Differentiated Services architecture, S-38.128 special assignment in telecommunications technology (1999)

[20] V Jacobson, K Nichols and K Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov (1998)

[21] V Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990)

[22] V Paxson, End-to-End Internet Packet Dynamics, in Proc of ACM SIGCOMM, Sept (1997)

[23] M Mathis, J Mahdavi, S Floyd and A Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct (1996)

[24] S Floyd and T Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb (1999)

[25] L S Brakmo, S W O'Malley and L L Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proceedings of ACM SIGCOMM, Oct (1994) 24-35

[26] S Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, Vol 24, No 5, Oct (1994) 10-23

[27] S Floyd and V Jacobson, Random Early Detection Gateways for Congestion Avoidance, IEEE/ACM Transactions on Networking, Vol 1, No 4, Aug (1993) 397-413

[28] A Feroz, A Rao and S Kalyanaraman, TCP-friendly Traffic Marker for IP Differentiated Services, in Proc IWQoS 2000, Pittsburgh, PA, June (2000) 35-48



sets of PHBs on which the services can be built. Note that the PHBs need not be the same among separate domains. Instead, mappings of PHBs are agreed between parties, so that traffic from a domain with certain PHB characteristics is re-marked to an equivalent PHB in another DS domain. Bilateral human agreement is needed, at least in the early deployment of diffserv. As some host, say H5 in DS-domain 1 in Fig. 5 above, starts generating traffic, the network behaves in certain ways differently than it would if it were not DS-capable.

First, the nearby leaf router receives packets from H5. The router checks the DSCP value in the headers of the received packets and reads the PHB corresponding to that value. The router verifies that the host is entitled to the PHB and that the traffic is within the limits of the PHB. The router may re-mark the DSCP if it has been configured to do so for all traffic from H5, or it may re-mark packets in order to force the traffic stream down to a level justified for H5. The destination of the traffic from H5 is, say, host H2 in DS-domain 2. Based on its routing table, the leaf router forwards the traffic to the domain's sole boundary router. The boundary router's traffic conditioner meters the traffic and possibly shapes it into compliance with the agreement it has made with a boundary router in the backbone network. The presented backbone DS router always polices incoming traffic, as it can't trust upstream routers to have already done so. When packets are pre-marked and conditioned in the upstream domain, potentially fewer classification and traffic-conditioning rules need to be supported downstream.

The backbone router forwards the traffic to the boundary router of DS-domain 2, which in turn polices the traffic to comply with the service agreement with the backbone network. Next the traffic is forwarded to the leaf router close to H2, and finally to H2. How successfully the requested QoS is achieved depends greatly on how well the mappings of PHBs are performed at inter-domain boundaries and how similar the traffic-conditioning rules are. When functioning correctly, the usage of diffserv should result in the sought-after provisioning of the network's resources for each of the PHBs.

2.2.3 Service Realizations

Services are based on demands from the customers' side; they are feasible from the business point of view; and they are technically realizable. When speaking of the future of data communication, it is essential that new services can be introduced easily and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels: on the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (a policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enable them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig. 6. Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS field. In IPv4 the TOS octet, and in IPv6 the Traffic Class octet, acts as the DS field. Even though the usage of the TOS and Traffic Class octets differs from that of the DS field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig. 6 shows the coding of the TOS field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values (or PHBs) for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling with respect to Delay, Throughput or Reliability. Backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
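As a concrete illustration of this bit layout (a sketch of my own, not taken from the thesis; the function names are hypothetical), extracting and inserting the 6-bit codepoint amounts to a two-bit shift:

```python
def dscp_from_tos(tos: int) -> int:
    # Bits 0-5 of the TOS octet (most-significant bit first) form the
    # 6-bit DS codepoint; the two low-order bits are not part of it.
    return (tos >> 2) & 0x3F

def tos_from_dscp(dscp: int) -> int:
    # Place a 6-bit codepoint into the high six bits of the TOS octet,
    # leaving the remaining two bits zero.
    return (dscp & 0x3F) << 2
```

For example, a TOS octet of 0b10111000 carries the codepoint 0b101110 (the EF codepoint mentioned later).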

What traffic conditioning basically means is treating traffic differentially, so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but it is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is the part of a network node that takes the node's ingress packets as its input and places the packets in its output in an order that best satisfies the forwarding (i.e. service) requirements set for the packets, while using the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig. 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.
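The difference between the two classifier types can be sketched as follows (a minimal illustration under my own assumptions; the rule representation and function names are hypothetical, not from any particular diffserv implementation):

```python
def ba_classify(dscp: int, phb_table: dict) -> str:
    # BA classifier: the DS codepoint alone selects the behaviour aggregate.
    return phb_table.get(dscp, "best-effort")

def mf_classify(packet: dict, rules: list) -> str:
    # MF classifier: the first rule whose every header field matches wins.
    for fields, aggregate in rules:
        if all(packet.get(name) == value for name, value in fields.items()):
            return aggregate
    return "best-effort"
```

A BA classifier needs only a codepoint-to-aggregate table, while an MF classifier walks an ordered rule list over several header fields.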

Fig. 7. A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
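A meter of this kind is commonly realized with a token bucket. The following is an illustrative sketch under my own assumptions (class and parameter names are mine); a marker would re-mark packets the meter flags as out of profile, while a dropper would discard them:

```python
class TokenBucket:
    """Token-bucket meter sketch: tokens accumulate at the profile rate,
    capped at the burst size; a conforming packet consumes its size in tokens."""

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes   # start with a full bucket
        self.last = 0.0

    def in_profile(self, size: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True    # conforming: keep the original mark
        return False       # out of profile: re-mark or drop
```

A bucket of 1000 bytes/s with a 1000-byte burst admits a full-burst packet immediately, rejects the next byte, and admits another 500 bytes after half a second.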

Traffic is conditioned on either end of a boundary link. Boundary nodes may refer to the established SLA to determine whether it is their responsibility to condition the ingress/egress traffic into conformance with the appropriate TCA, or whether that responsibility lies with the node at the other end of the boundary link. However, ingress traffic can't be assumed to conform to the TCA; policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic-conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited set of conditioning rules (i.e. some, but not all, of the TC components in Fig. 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours required for implementing the services, and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioral characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized ones, there may exist locally defined PHBs in a network. These PHBs may be e.g. experimental, or they may serve a local purpose, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig. 6) is limited, and room is left for future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on e.g. prioritizing one over another. PHB specifications themselves give behavioral characteristics instead of implementation guidelines, which leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented earlier. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility; the Default PHB exists for that purpose. It provides the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore any traffic that doesn't conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as receiving the following kind of service: the network will deliver as many of these packets as possible, as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig. 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.
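The eight xxx000 codepoints follow directly from placing the three precedence bits above three zero bits; a small sketch (my own illustration, not from the thesis):

```python
# The eight Class Selector codepoints take the form xxx000: the IPv4
# precedence value in the three high bits, three zero bits below.
class_selector_codepoints = [precedence << 3 for precedence in range(8)]
```

Precedence 0 thus yields 000000 (the Default PHB codepoint) and precedence 5 yields 101000.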

Fig. 8. Usage of DS field in Class Selector PHB

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

- How much forwarding resources have been allocated to the AF class that the packet belongs to. Packets in one class are forwarded independently from the others, within the service rate (bandwidth) that has been configured for the class.
- What the current load of the AF class is.
- In case of congestion within the class, what the drop precedence of the packet is. Packets with a higher drop precedence value are preferentially discarded.

Table 1. The four AF classes and the related three drop precedence values

                               Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)        001010    010010    011010    100010
Medium Drop Precedence (2)     001100    010100    011100    100100
High Drop Precedence (3)       001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
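The codepoints in Table 1 follow a regular pattern: class n occupies the three high bits, drop precedence m the next two, and the low bit is zero. A sketch of my own (the function name is hypothetical):

```python
def af_codepoint(af_class: int, drop_precedence: int) -> int:
    # AFnm per Table 1: class n (1-4) in the three high bits of the
    # 6-bit codepoint, drop precedence m (1-3) shifted up one bit.
    if not (1 <= af_class <= 4 and 1 <= drop_precedence <= 3):
        raise ValueError("AF classes are 1-4, drop precedences 1-3")
    return (af_class << 3) | (drop_precedence << 1)
```

For example, AF11 yields 001010 and AF43 yields 100110, matching the table's corners.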

One major group of services is that requiring assured bandwidth with low loss, low latency and low jitter on an end-to-end connection. Such services are e.g. a point-to-point connection or a virtual leased line carrying time-sensitive data: speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that at every transit node the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. It is also required in the EF PHB specification that a network administrator must be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

Some things are common to both examples. First of all, policers need to be configured at traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus provides reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability, up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3. TCA specifications for Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both the egress points B and C. The policers are of MF type, as classification of packets is based on both the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.
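The 750 Kbps figure is simply the sum of the two EF service rates from Table 3; a minimal sketch of this provisioning arithmetic (the data layout and function name are my own):

```python
# The two TCAs of Table 3 for ingress point A: (codepoint, rate in Kbps, egress).
tcas = [("EF", 500, "B"), ("EF", 250, "C")]

def shared_link_rate_kbps(tcas):
    # A router traversed by both leased lines must be provisioned
    # for the sum of their EF service rates.
    return sum(rate for _, rate, _ in tcas)
```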

2.3 Related Work

Recently there has been a considerable amount of simulation work on marking and dropping policies, focused on RED-based differentiated services approaches.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, where the control engine, the so-called packet marking engine (PME), is integrated with the host; and source-transparent marking, where the PME is potentially external to the user. Their results showed that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [11], by means of simulation, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees.

In [8], Yeom and Reddy proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Feroz et al. proposed a TCP-friendly marker, designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets, in order to take TCP dynamics into account and to adapt fairness for sharing the bandwidth of a network's bottleneck link, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for marking [13]. A marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have drawbacks. First, since the marking module in per-flow marking gives each flow a fixed fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth; in this case, "idle" flows waste their shares while preventing "active" flows from increasing. Second, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9. The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I selects the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked, and so TMS_I decreases TCP impacts in the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) maintains the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also marks packets so as to realise the reservation rate of the source it is integrated with, and each host pours its packets into the edge of the network. Information about the marked packets is used to allocate the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I can be not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner (as shown in Fig. 10), monitors, at the edge of the network, the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and then the marker re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore the key element of the fairness strategy is the marking rate (i.e. the information on a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows against the total target rate of all the flows, based on the marking probability from the sources. That is, TMS_II adjusts the marking probability so as to allocate to each flow as much of the resources as it requires.

Fig. 10. The design of the Two Markers System architecture

3.2 The crux of fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of flow i, i = 1, 2, ..., n, and Cj represent the capacity of bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is a reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, the fairness component (Xmi), drawn from the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is $C = \sum_{j=1}^{m} C_j$, where $j$ denotes a bottleneck link of the network. The excess bandwidth at these bottleneck links is then given by (1).

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to $m = 1$, and so the excess bandwidth is given by (2).

The last case of service scope, tied to the receiver, however requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates in order to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes $R = \sum_{k=1}^{l}\sum_{i=1}^{n} R_{ik}$, where $k$, $k = 1, 2, \ldots, l$, represents the $k$-th ingress point, and the excess bandwidth is given by (3).

$$E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i \qquad (1)$$

$$E = C - R = C - \sum_{i=1}^{n} R_i \qquad (2)$$

$$E = C - R = C - \sum_{k=1}^{l}\sum_{i=1}^{n} R_{ik} \qquad (3)$$


Here the $l$-th ingress point has to be aware of the reservation rates of as many as $l-1$ other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System can therefore be implemented under various SLA scopes, but for the reason above we assume in this paper that the scope of a service covers the first three cases and excludes the last. A flow's target rate at the edge of the network is given by (4),
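As a concrete illustration of the excess-bandwidth formulas (1)-(3) for the three admissible SLA scopes, consider the following sketch. The function names are ours, not from the thesis.

```python
# Sketch of the excess-bandwidth formulas (1)-(3); names are hypothetical.

def excess_bw_multi_link(link_caps, reservations):
    # Eq. (1): E = sum_j C_j - sum_i R_i over the m bottleneck links
    return sum(link_caps) - sum(reservations)

def excess_bw_edge_to_edge(capacity, reservations):
    # Eq. (2): edge-to-edge SLA, m = 1, so E = C - sum_i R_i
    return capacity - sum(reservations)

def excess_bw_receiver_scope(capacity, per_ingress_reservations):
    # Eq. (3): E = C - sum_k sum_i R_ik over all l ingress points
    return capacity - sum(sum(rs) for rs in per_ingress_reservations)

# A 20 Mbps bottleneck with five 3 Mbps reservations leaves 5 Mbps excess.
print(excess_bw_edge_to_edge(20.0, [3.0] * 5))  # -> 5.0
```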

where Xmi is proportional to the probability of marking from a source; that is, it is determined by the fraction between the marking rate of each flow and the average marking rate. The development of Xmi is as follows. First, the probability of marking for each flow i is the number of marked packets (Nmi) divided by the throughput Thri, or equivalently the In-profile window divided by the congestion window, as in (5),

where iwnd and cwnd represent the window of IN-marked packets and the congestion window respectively. Next, the average marking rate E[mi] of all the flows at the edge of the network is given by (6),

where Markp also denotes the probability of marking in the TMS_I algorithm of Section 3.3. Finally, Xmi is obtained from the fraction between (5) and (6); that is, Xmi

$$T_i = R_i + X_{mi} \times E = R_i + X_{mi}\left(C - \sum_i R_i\right) \qquad (4)$$

$$m_i = \frac{N_{mi}}{Thr_i} = \frac{iwnd}{cwnd} \qquad (5)$$

$$E[m_i] = \frac{1}{n}\sum_{i=1}^{n} m_i = \frac{1}{n}\sum_{i}^{n}\left(\frac{iwnd}{cwnd}\right)_i = \frac{1}{n}\sum_{i}^{n}\left(Mark_p\right)_i \qquad (6)$$


represents a relative value against the datum point E[mi], as in (7). The fraction Xmi varies from 0 to 1 depending upon mi and E[mi].

Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
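The target-rate computation of expressions (4)-(7) can be sketched as follows. This is our own illustration under the stated definitions; the function names are hypothetical.

```python
# Sketch of the fairness component (7) and the edge target rate (4).

def fairness_components(marking_rates):
    # Eq. (7): X_mi = m_i / sum_i m_i  (equivalently m_i / (n * E[m_i]))
    total = sum(marking_rates)
    return [m / total for m in marking_rates]

def edge_target_rates(reservations, capacity, marking_rates):
    # Eq. (4): T_i = R_i + X_mi * E, with E = C - sum_i R_i
    E = capacity - sum(reservations)
    X = fairness_components(marking_rates)
    return [R + x * E for R, x in zip(reservations, X)]

# Two flows reserving 3 Mbps each on a 10 Mbps link; the first marks
# twice as often, so the 4 Mbps excess is split 2:1 between them.
T = edge_target_rates([3.0, 3.0], 10.0, [0.5, 0.25])
print(T)
```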

3.3 Adequate-Marking strategy: the Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources. That is to say, the TCP-source marking module keeps the marking rate up to the bandwidth the source wants to guarantee.

$$X_{mi} = \frac{m_i}{n\,E[m_i]} = \frac{m_i}{\sum_{i}^{n} m_i} \qquad (7)$$


Fig. 11. Program code 1: TMS_I algorithm (cwnd open).

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking (Markp(init))? It is determined by the fraction between the measured throughput (Thr) and the target throughput (Ti = Ri), as in (8):

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri)
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


where the expression represents the initial marking rate the user needs in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed. Otherwise it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.
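Under our reading of the listing in Figure 11 (in particular taking "12cwnd" as 1/(2*cwnd), consistent with the 1/cwnd total increase stated above), the per-ACK update can be rendered as the following executable sketch. The function name and sample values are ours.

```python
# Executable sketch of Program code 1 (Fig. 11): per-ACK window update.

def tms1_on_ack(cwnd, markp, thr, R, issthresh, ossthresh):
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if thr < R:                              # below reservation rate: grow both
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                                    # target met: release the IN share
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd                 # new cwnd and marking probability

# Below target: cwnd grows by one packet while the marking rate is preserved.
cwnd, markp = tms1_on_ack(cwnd=10.0, markp=0.5, thr=2.0, R=3.0,
                          issthresh=8.0, ossthresh=8.0)
print(cwnd, round(markp, 3))  # -> 11.0 0.5
```

Once the throughput exceeds the reservation rate, the same update shifts weight from iwnd to ownd, so the marking probability falls.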

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers the second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of the network when the network is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd exists, TMS_I decreases the iwnd size appropriately according to the threshold, in order to allocate the excess bandwidth to the other flows and keep IN-marked packets from being immoderately dropped at the core of the network. In fact, the TCP-source marking module (TMS_I) maintains the marking rate more properly than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking it for an acute congestion

$$Mark_p(init) = 1 - \frac{Thr}{R_i} \qquad (8)$$


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.

Fig. 12. Program code 2: TMS_I algorithm (cwnd close).

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd/2
        ownd = ossthresh
    else
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the i-th flow
Ti: target rate of the i-th flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half its size whenever an OUT packet loss occurs; this indicates temporary congestion of the network. The loss of IN-marked packets, on the other hand, indicates acute congestion in the network, and then the whole congestion window is reduced to half its size. The modified TCP congestion algorithm thus compensates for the loss rates caused by the conventional congestion algorithm, which has no such distinctive mechanism for reducing the congestion window.
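Under our reading of the listing in Figure 12 (taking the "ewnd" in the garbled listing as ownd), the loss reaction can be sketched executably as follows; the function name and sample values are ours.

```python
# Executable sketch of Program code 2 (Fig. 12): reaction to the third
# duplicate ACK. OUT loss halves only ownd; IN loss halves the whole cwnd.

def tms1_on_3dupack(cwnd, markp, out_loss):
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if out_loss:                       # temporary congestion
        ossthresh = ownd / 2
        ownd = ossthresh
        cwnd = iwnd + ownd             # implied: cwnd is the sum of the windows
    else:                              # IN-marked loss: acute congestion
        cwnd = cwnd / 2
        issthresh = markp * cwnd
        ossthresh = cwnd - issthresh
    return cwnd, ossthresh

print(tms1_on_3dupack(10.0, 0.5, out_loss=True))   # -> (7.5, 2.5)
print(tms1_on_3dupack(10.0, 0.5, out_loss=False))  # -> (5.0, 2.5)
```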

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Markp) needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of this datum point in the edge-embedded marking module in Subsection 3.4. Consequently, a strategy providing a datum point that carries the user's information is able to alleviate the unfairness introduced within aggregated flows under per-aggregation marking.

3.4 Marking rate-based fairness strategy: the Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to share the bottleneck bandwidth fairly. In this case it must cooperate closely with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates that is used in the fairness component, as stated in Subsection 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each source wants to guarantee.

We assume that an edge router receives the information on the reservation rate of which each user wants to be assured, and contracts with the service provider (or bandwidth


broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall expression (4). As stated in Subsection 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the network is over-provisioned; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, namely the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. It is therefore marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a situation of congestion, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a situation of congestion is judged by comparing the iwnd size (or ownd size) at time N with that at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case and shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding on congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called the fairness factor. In this case the fairness factor α is computed as


$\alpha = 1 - Thr_i / T_i$. Otherwise it decreases the marking rate by α, to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm.

if (ownd(N) != ownd(N-1)/2)                  // no congestion
    for i = 1 to n
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)             // congestion, but not acute
    exit
else                                          // iwnd(N) == iwnd(N-1)/2: acute
    for i = 1 to n
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi: marking rate of the i-th flow
Ti: target rate of the i-th flow
Ri: reservation rate of the i-th flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (or marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to achieve its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput does not realise the reservation rate of the user. In that instance the complement factor β is

computed analogously: $\beta = 1 - Thr_i / R_i$. Otherwise the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to the flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
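The two re-marking rules of program code 3 can be sketched as below, with the fairness factor α = 1 - Thr_i/T_i and the complement factor β = 1 - Thr_i/R_i. This is our own illustration; the function names are hypothetical.

```python
# Sketch of the TMS_II re-marking adjustments (Fig. 13).

def adjust_marking(M, thr, T):
    # No congestion: steer each flow toward its edge target rate T_i.
    alpha = 1 - thr / T
    return M + alpha * M if thr < T else M - abs(alpha) * M

def compensate_marking(M, thr, R):
    # Acute congestion: the target falls back to R_i; only increase.
    if thr < R:
        beta = 1 - thr / R
        return M + beta * M
    return M

print(adjust_marking(0.5, thr=2.0, T=4.0))      # below target -> 0.75
print(compensate_marking(0.5, thr=5.0, R=4.0))  # target met -> unchanged 0.5
```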

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-born packets are then treated by the RIO queue at the core of the network unlike what the sources anticipate. If, for example, the re-born OUT-marked packets (IN-OUT), which were IN at the sources, are dropped at the core in a time of temporary congestion, the TCP sources, mistaking it for acute congestion, will halve their


iwnd size. In that case TMS_I does not alleviate the TCP dynamics over the assured service well. On the other hand, if the re-born IN-marked packets (OUT-IN), sent as OUT by the sources, are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve only ownd, which retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is marked OUT at the edge of the network but was IN at the source, and an OUT-IN packet is the counterpart. We assume that the drop precedence the source decides must not change, and the strategy TMS_II elaborates has to respect this. The order of dropping packets is: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green respectively. The occasion of re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not complete the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to secure the remaining resources. In this instance the three drop precedences IN, OUT-IN, and OUT flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is dropped, the TCP source halves its ownd size. Next, TMS_II turns packets that were IN-marked at the sources into OUT if the congestion window has not been halved and the flow already accomplishes its target rate; the drop precedences then consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by one step and so allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow does not yet achieve its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves its ownd size. It thus tries to compensate for the loss rate occurring in times of congestion by lowering the drop precedence of OUT by one step.
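The three drop precedences and the RIO drop order above can be captured in a small sketch (our own illustration; the names are hypothetical):

```python
# The three drop precedences of Section 3.5: re-marked packets keep a middle
# (yellow) precedence so a source can still tell whether the lost packet
# left it as IN or OUT.

DROP_PRECEDENCE = {        # lower value = dropped later
    "IN": 0,               # DP0, green
    "IN-OUT": 1,           # DP1, yellow: IN at source, re-marked OUT at edge
    "OUT-IN": 1,           # DP1, yellow: OUT at source, re-marked IN at edge
    "OUT": 2,              # DP2, red
}

def drop_order(packets):
    # RIO-style ordering: the highest precedence value is discarded first.
    return sorted(packets, key=lambda p: DROP_PRECEDENCE[p], reverse=True)

print(drop_order(["IN", "OUT-IN", "OUT"]))  # -> ['OUT', 'OUT-IN', 'IN']
```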

3.6 The issue of interaction between TCP and UDP flows

In this section we consider the interaction between TCP and UDP flows over the


same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class in an over-provisioned network, TCP flows will reduce their rates on packet drops while UDP flows will not change and will take the entire excess bandwidth, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it is then impossible for all target rates to be completed.

UDP flows in the TMS are treated as out-of-profile, to their disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2), but lowers the drop precedence of TCP packets below them by the scaling factors. It thereby tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows had almost dominated. Consequently TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous section. The simulations were done with the network simulator ns-2 (version 2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System.

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set a queue weight of 0.002 for both schemes.

Table 4. RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1

First of all, we investigate the simulation results of the Adequate-Marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 to S10-D20, each with a target rate of 3Mbps; the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s, which stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network in which five flows compete for the excess bandwidth of 5Mbps; marking is performed initially, but only temporary marking for sharing the excess bandwidth occurs. Second, at runtime from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps each; marking in this interval is practically carried out so as to realise the reservation rates of the users. Next, grave congestion appears from t = 220 s to t = 350 s: all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to the congestion remaining after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

Fig. 15. Adequate-marking algorithm: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. Each panel plots the marking rates of flows 1-10; the vertical axis represents the marking rate (Mbps) and the horizontal axis the simulation time (s). The network state changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind the changes in the network load: slowly rising in response to an increased traffic load and slowly falling in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in Subsection 3.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment runs from t = 0 s to 500 s. Figure 17 shows how well the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to the probability of its marking when the total throughput is less than the capacity of the network. As


stated before, since TMS regulates the drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

Fig. 16. Marking packets through the scaling factors: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. Each panel plots S1 (total throughput), S2 (total throughput), and S1 (marking rate); the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time (s).


Fig. 17. Throughput for the marking rate-based fairness strategy: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. Each panel plots flows 1-10; the vertical axis represents the achieved throughput (Mbps) and the horizontal axis the aggregate flow (1-6).

The Two Markers System showed improved performance for the several parameters we used in simulation. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5. The average throughput of aggregate flows as RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulations we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams, generated by infinite file transfers, originate at sources 1 through 4 and are destined to nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is, the capacity of the bottleneck link changes while the target rates of the sources are fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of Section 4.1.


Fig. 18. Simulation topology II using the Two Markers System.

First of all, we investigate the simulation results on sharing the excess bandwidth according to the link utilisation. For this experiment we increase the TCP flows with target rates of 1Mbps from sources 1 to 4 while UDP is constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take account of network dynamics. We use the link utilisation (ρ) as follows [16]:

$$\rho\,(\%) = \frac{\sum_i T_i}{C} \times 100 \qquad (9)$$

where Ti and C represent the target rate of a flow and the capacity of a bottleneck link in the network respectively. The interval from ρ = 100% through 200%, as shown in Figure 19, represents the change of the capacity of the bottleneck link, viz. an under-provisioned state; in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

$$Excess\ BW\,(\%) = \frac{ExBW}{T} \times 100 \qquad (10)$$

Therefore an excess bandwidth of 0% denotes the target rate of each flow. First, as shown in Figure 19 (a), we examine TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
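Expressions (9) and (10) can be illustrated numerically as follows; this is our own sketch, with hypothetical function names.

```python
# Sketch of the link-utilisation (9) and excess-bandwidth (10) metrics.

def link_utilisation(targets, capacity):
    # Eq. (9): rho(%) = sum_i T_i / C * 100
    return 100.0 * sum(targets) / capacity

def excess_bw_percent(throughput, target):
    # Eq. (10): excess bandwidth received, as a percentage of the target rate
    return 100.0 * (throughput - target) / target

print(link_utilisation([1.0] * 6, 8.0))   # 6 Mbps reserved on 8 Mbps -> 75.0
print(excess_bw_percent(1.25, 1.0))       # 25% above target -> 25.0
```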


Fig. 19. Sharing excess bandwidth according to the link utilisation: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. Each panel plots the TCP average and the UDP average; the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis the link utilisation as a percentage.

However, since the UDP sources send traffic into the network without any penalty, they complete their target rates in all conditions and consume all the excess bandwidth in the

49

under-provisioned state The degradation UDP flows experience is only diminishment of excess bandwidth they receive In the Two Markers System we assume as fallows (i) UDP flows in TMS are only dealt with OUT marking (DP2-red) (ii) the magnitude of marking rate represents increase or decrease in demand for bandwidth If the number of marked packets for example exceeds the threshold value namely the average marking rate E[mi] the edge-embedded marking module considers that the flow wants more bandwidth than others in order to achieve its reservation rate Therefore the flow is marked more and has a higher target rate than others The distribution of excess bandwidth in the over-provisioned network as shown Figure 19 (b) is fair rather than in the Figure 19 (a) That is to say the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT which are outside of their contract when congestion occurs rather than OUT-IN or IN-OUT The excess bandwidth in the Figure 19 (b) is percentage-wise almost equally divided between TCP and UDP flows in the link utilisation ρ = 60 () and over Both traffic types achieve their target rates in the over-provisioned network and both suffer the same level of service degradation in the under-provisioned network
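Assumptions (i) and (ii) can be condensed into a small decision sketch. This is a hypothetical simplification of the edge-embedded marking module (the real TMS_II marks probabilistically; the function name and return labels are ours):

```python
def tms_priority(is_udp, marking_rate, avg_marking_rate):
    """Direction of the TMS_II marking decision for one flow."""
    if is_udp:
        return "DP2 (red, OUT)"  # assumption (i): UDP traffic only receives OUT marking
    if marking_rate > avg_marking_rate:
        # assumption (ii): a marking rate above the average E[m_i] signals that the
        # flow demands more bandwidth to reach its reservation, so mark it IN more
        return "more IN"
    return "less IN"
```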

Next, we present the simulation results concerning the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows whether the average throughputs of all individual flows of the aggregated traffic realise their target rates in over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. The average throughput of all the flows in both schemes is similar. First, we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows barely accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3 Mbps, since TMS applies a different marking strategy to UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network, because the UDP flows transmit constantly, irrespective of congestion. That is, UDP gains unfairly at the expense of TCP


flows. However, Figure 20 (b) shows that the bandwidth in the under-provisioned network is distributed more fairly (about 0.5 Mbps per flow) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the bandwidth (Mbps) of flows TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases against the aggregate flow.]

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved bandwidth (Mbps), and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this area, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize a user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to a fairness problem in allocating the excess bandwidth between the two kinds of flows, due to the overconsumption of UDP flows.

The strongest point of the Two Markers System proposed in this paper is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow with respect to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to changes in the network, so it marks packets in proportion to the reservation rates the sources wish to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy. Consequently, it mitigates the problem of unfairness in per-aggregation marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP had previously dominated, and achieve their target rates. Next, in an under-provisioned network, although all the flows couldn't achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidth.

In the near future, we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June (1999).

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. (1998).

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. (1998).

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999).

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May (1998).

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998).

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999).

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May (2001) 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999).

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. (1998).

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö (1999).

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998).

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990).

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. (1997).

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. (1996).

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999).

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. of ACM SIGCOMM, Oct. (1994) 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. (1994) 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. (1993) 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48.



data communication, it is essential that new services can be introduced easily and that the already implemented ones are scalable. Differentiated Services tries to make all this possible. The diffserv architecture can be thought of as a framework within which service providers can offer their customers a range of network services that are differentiated in terms of performance and pricing.

Service realizations in diffserv can be thought of as consisting of three levels: on the first level are the DSCP values, on the second the PHBs specified by the DSCP values, and on the third level are profiles. Customer and provider typically negotiate a profile at each service level describing the rate at which traffic can be submitted (policing profile). Packets submitted in excess of this profile may not be allotted the service level requested. The value to the customer comes from the services that are obtained from the usage of these three levels.

PHBs are merely building blocks for services. Service providers combine PHB implementations with traffic conditioners, provisioning strategies and billing models, which enables them to offer services to their customers. Providers and customers negotiate agreements with respect to the services to be provided at each customer/provider boundary. These take the form of Service Level Agreements (SLAs).

End-to-end services can be constructed by the concatenation of domain services and their associated customer-provider SLAs for each of the domains which the service traffic has to cross. However, difficulties easily arise, as the providers can freely decide what services are available in their networks, which in turn results in situations where the possible service level in some domain doesn't match the service level in the originating host's domain. This will be a major issue in the development of diffserv in the future.

Fig 6 Type of Service field in IPv4

Diffserv uses a 6-bit header field, the DS-field. In IPv4 the TOS octet and in IPv6 the Traffic Class octet acts as the DS-field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS-field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig 6 shows the coding of the TOS-field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner to the TOS-field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another. Their usage and queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling considering Delay, Throughput or Reliability; backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally intended is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS-field make up the DS codepoint. Its value is in turn mapped to one or more PHBs, as configured by a network administrator.
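The bit layout of Fig 6 can be expressed directly. A small Python sketch (our own, using the EF codepoint 101110 from a later section as the example value):

```python
def precedence_bits(tos_octet):
    # bits 0-2 of the TOS octet (its three most significant bits)
    return (tos_octet >> 5) & 0b111

def ds_codepoint(tos_octet):
    # bits 0-5 make up the DS codepoint; the two least significant bits are unused
    return (tos_octet >> 2) & 0b111111

octet = 0b10111000      # DS field carrying the EF codepoint
ds_codepoint(octet)     # 0b101110 == 46
precedence_bits(octet)  # 0b101 == 5 (a legacy node would read precedence 5)
```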

What traffic conditioning basically means is treating traffic differentially so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but this is not required for a network to be DS-capable. When conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in such an order as best satisfies the forwarding (i.e. service) requirements set for the packets, while using the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.

Fig 7 A traffic conditioner and its functional components
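The distinction between the two classifier types can be sketched as follows (illustrative Python; the field names and rule format are our own, not from the thesis):

```python
def ba_classify(packet):
    # BA classifier: the behaviour aggregate is determined by the DS codepoint alone
    return packet["dscp"]

def mf_classify(packet, rules):
    # MF classifier: the first rule whose header fields all match selects the aggregate
    for match, aggregate in rules:
        if all(packet.get(field) == value for field, value in match.items()):
            return aggregate
    return "best-effort"

pkt = {"dscp": 46, "src": "10.0.0.1", "dst": "10.0.0.2", "proto": 17, "dport": 5060}
rules = [({"src": "10.0.0.1", "dport": 5060}, "voice-aggregate")]
ba_classify(pkt)         # 46
mf_classify(pkt, rules)  # 'voice-aggregate'
```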

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
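A meter is commonly realized as a token bucket. The sketch below (our own minimal illustration, not a conditioner from the thesis) reports whether each packet is in or out of the profile, which is the state a marker or dropper would then act on:

```python
class TokenBucketMeter:
    """Minimal token-bucket meter: rate in bytes/s, depth (burst size) in bytes."""
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth
        self.tokens, self.last = depth, 0.0

    def measure(self, size, now):
        # refill tokens for the elapsed time, capped at the bucket depth
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return "in"   # conforming: keeps the in-profile codepoint
        return "out"      # excess: candidate for re-marking, shaping or dropping

meter = TokenBucketMeter(rate=1000, depth=1500)
meter.measure(1500, 0.0)  # 'in': the bucket starts full
meter.measure(1500, 0.5)  # 'out': only 500 bytes of tokens have accumulated
```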

Traffic is conditioned on either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic to conformance with the appropriate TCA, or whether that is the responsibility of the node at the other end of the boundary link. In any case, ingress traffic can't be assumed to conform to the TCA, and policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore, a limited number of conditioning rules (i.e. some but not all of the TC components in Fig 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers get the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, as well as negotiating the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above. A single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are intended to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized ones, there may exist locally defined PHBs in a network. These PHBs may be, e.g., experimental, or they may use a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS-field (see Fig 6) is limited and room is left for its future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impacts of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. In the PHB specifications themselves, behavioural characteristics are given instead of implementation guidelines; this leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of the negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented in a previous paragraph. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility; the Default PHB exists for that. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs. Therefore, any traffic that doesn't conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The mere Default PHB is not enough to provide sufficient backward compatibility. The precedence bits (see Fig 6) of the IPv4 TOS-field are widely used in existing networks' equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig 8 Usage of DS-field in Class Selector PHB
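The eight Class Selector codepoints of the form xxx000 can be enumerated mechanically (an illustrative one-liner of our own):

```python
# Three precedence bits followed by three zero bits give the eight
# Class Selector codepoints, 000000 (Default PHB) through 111000.
cs_codepoints = [format(precedence << 3, "06b") for precedence in range(8)]
# ['000000', '001000', '010000', '011000', '100000', '101000', '110000', '111000']
```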

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on: (i) how much forwarding resources have been allocated to the AF class that the packet belongs to — packets in one class are forwarded independently from the others and within the service rate (bandwidth) that has been configured for the class; (ii) what the current load of the AF class is; and (iii) in case of congestion within the class, what the drop precedence of the packet is — packets with a higher drop precedence value are preferentially discarded.

Table 1 The four AF classes and the related three drop precedence values

                             Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)      001010    010010    011010    100010
Medium Drop Precedence (2)   001100    010100    011100    100100
High Drop Precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
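The codepoints in Table 1 follow a regular pattern: class n occupies the three high bits and drop precedence m the next two, i.e. the binary form nnnmm0. A small sketch (our own helper, not part of the thesis):

```python
def af_codepoint(n, m):
    """DSCP for AF class n (1-4) with drop precedence m (1-3): binary 'nnnmm0'."""
    assert 1 <= n <= 4 and 1 <= m <= 3
    return format((n << 3) | (m << 1), "06b")

af_codepoint(1, 1)  # '001010' (AF11, low drop precedence, as in Table 1)
af_codepoint(2, 2)  # '010100' (AF22)
af_codepoint(4, 3)  # '100110' (AF43, high drop precedence)
```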

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter on an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data: speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specification of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. It is also required in the EF PHB specification that a network administrator must be able to configure this rate in diffserv nodes. The codepoint 101110 has been recommended for the EF PHB.
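The "arrival rate below departure rate" condition is what keeps EF queues empty. A toy backlog recursion (our own illustration; units are arbitrary volumes per fixed interval) makes this visible:

```python
def ef_backlog(arrivals, departure_per_interval):
    """Evolve the queue backlog of an EF aggregate over fixed intervals:
    backlog grows only when arrivals exceed the configured departure rate."""
    backlog = 0.0
    for arrived in arrivals:
        backlog = max(0.0, backlog + arrived - departure_per_interval)
    return backlog

ef_backlog([400, 450, 480], 500)  # 0.0: bounded arrivals, no standing queue
ef_backlog([600, 700, 400], 500)  # 200.0: overload leaves a 200-unit backlog
```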

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are all for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface;
b) from ingress interface A to a set of egress interfaces;
c) between interface A and some egress interface B.

There are some common things in both of the examples. First of all, policers need to be configured at traffic ingress points. Secondly, the used PHBs need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus provides reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2 TCA specification for BBE service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark   1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritizing of AF13 over AF11 is done, is up to the provider.
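The handling rule of Table 2 amounts to a two-way policing decision. A hypothetical sketch (the function and the aggregate-rate check are our simplification of a per-packet policer):

```python
def bbe_police(measured_rate_mbps, service_rate_mbps=1.0):
    """Table 2: traffic within the 1 Mbps profile keeps the AF13 mark,
    excess traffic is re-marked with the AF11 mark."""
    return "AF13" if measured_rate_mbps <= service_rate_mbps else "AF11"

bbe_police(0.8)  # 'AF13': within the subscribed rate
bbe_police(1.4)  # 'AF11': excess traffic is re-marked
```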

This is a quantitative service which emulates the traditional leased line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3 TCA specifications for Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C, respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps; excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both the egress points B and C. The policers are of the MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.

2.3 Related Work

Recently, there have been a number of simulation studies on marking and dropping policies focused on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [11], through simulation work, Ibanez and Nichols showed that TCP didn't realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees.

In [8], Yeom and Reddy proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and maintain optimal spacing between IN and OUT packets. Via simulation work, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to provide fairness in sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have drawbacks. First, since the marking module in per-flow marking gives each flow a fixed fraction of the target rate of the aggregated traffic, it can lead to an inefficient utilization of the reserved bandwidth: "idle" flows waste their shares while preventing "active" flows from increasing theirs. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.
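The two strategies, and their respective drawbacks, can be contrasted with a small sketch. All rates here are illustrative assumptions, not values from the thesis.

```python
# Illustrative contrast of the two marking strategies.
# Per-flow marking splits the aggregate target rate evenly across flows,
# so an idle flow's share is wasted; per-aggregation marking compares only
# the aggregate rate with the profile, so it cannot tell flows apart.

def per_flow_marks(sends_bps, aggregate_target_bps):
    share = aggregate_target_bps / len(sends_bps)   # fixed fraction per flow
    return [min(s, share) for s in sends_bps]       # IN-marked rate per flow

def per_aggregation_marks(sends_bps, aggregate_target_bps):
    total = sum(sends_bps)
    frac = min(1.0, aggregate_target_bps / total)   # one marking fraction
    return [s * frac for s in sends_bps]            # same fraction for all

sends = [0, 1_000_000, 3_000_000]   # one idle flow, two active flows
pf = per_flow_marks(sends, 3_000_000)         # idle flow wastes 1 Mbps of IN
pa = per_aggregation_marks(sends, 3_000_000)  # shares track sending rate only
```

Per-flow marking leaves 1 Mbps of the reserved rate unused because the idle flow cannot hand its share to the active flows; per-aggregation marking uses the whole profile but distributes it blindly by sending rate.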

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System-I, TMS_I) and at the edge (Two Markers System-II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9. The "Two Markers System" framework

First, TMS_I carries out two main roles. One is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. TMS_I selects the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked whenever congestion occurs in the network, and so TMS_I decreases TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) maintains the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used to allocate the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed upon with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key point of the fairness strategy is the marking rate (i.e., the information on a flow's demand for bandwidth) of traffic flows from users. TMS_II re-marks (or marks) aggregated flows toward the total target rate of all the flows, based on the marking probabilities from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig. 10. The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, …, n, and let Cj represent the capacity of a bottleneck link j, j = 1, 2, …, m, in the network. The target rate of each source is a reservation rate (Ri) of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of the network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should also consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability viewpoint of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = Σ_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at these bottleneck links is then given by

E = C − R = Σ_{j=1}^{m} C_j − Σ_{i=1}^{n} R_i    (1)

In the edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C − R = C − Σ_{i=1}^{n} R_i    (2)

The last case of service scope, tied to the receiver, however requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates in order to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik, where k, k = 1, 2, …, l, represents the k-th ingress point. The excess bandwidth is given by

E = C − R = C − Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik    (3)


where the l-th ingress point has to be aware of the reservation rates of as many as l − 1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore, although the Two Markers System can be implemented under various SLA scopes, in this paper we assume that the scope of a service covers the first three cases and excludes the last, for the reason above. A flow's target rate at the edge of the network is given by

T_i = R_i + X_mi × E = R_i + X_mi (C − Σ_{i=1}^{n} R_i)    (4)

where X_mi is proportional to the probability of marking at a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of X_mi is as follows. First, the marking probability of each flow i is the number of marked packets (N_mi) divided by the throughput (Thr_i), or equivalently the In-profile window divided by the congestion window:

m_i = N_mi / Thr_i = iwnd / cwnd    (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] of all the flows at the edge of the network is

E[m_i] = (1/n) Σ_{i=1}^{n} m_i = (1/n) Σ_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1}^{n} Mark_p(i)    (6)

where Mark_p also denotes the probability of marking in the TMS_I algorithm of Section 3.3. Finally, X_mi is obtained from the ratio between (5) and (6); that is, X_mi represents a relative value against the datum point E[m_i]:

X_mi = m_i / (n E[m_i]) = m_i / Σ_{i=1}^{n} m_i    (7)

The value of X_mi varies from 0 to 1 depending upon m_i and E[m_i]. Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
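As a numerical sanity check of the fairness component, here is a small sketch with assumed rates. Since the X_mi sum to 1, the edge target rates of (4) sum exactly to the bottleneck capacity.

```python
# Numerical sketch of the fairness component with assumed values:
# three flows with reservation rates R_i and marking rates m_i = iwnd/cwnd.

R = [1_000_000, 2_000_000, 3_000_000]       # reservation rates (bps), assumed
m = [0.2, 0.4, 0.6]                         # marking probabilities from TMS_I
C = 10_000_000                              # bottleneck capacity (bps)

E = C - sum(R)                              # excess bandwidth, eq. (1)
Em = sum(m) / len(m)                        # average marking rate, eq. (6)
X = [mi / (len(m) * Em) for mi in m]        # fairness component, eq. (7)
T = [Ri + Xi * E for Ri, Xi in zip(R, X)]   # edge target rates, eq. (4)
```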

3.3 Adequate-Marking strategy: the Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies differ from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate in line with the bandwidth the source wants guaranteed.



Fig. 11. Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking, Mark_p(init)? It is determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as follows.

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        } else {
            ownd = cwnd
            if (ownd < ossthresh)
                ownd += 1
            else
                ownd += 1/cwnd
        }
    }
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd


Mark_p(init) = 1 − Thr/R_i    (8)

where Mark_p(init) represents the initial rate at which the user wants packets marked in order to realise the target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I marks packets proportionally according to whether or not the measured throughput (Thr) at the TCP source reaches its target rate. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed. Otherwise it increases the ownd size, so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e., iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.
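The per-ACK bookkeeping described above can be transliterated into a runnable sketch. Variable names follow the pseudocode of program code 1; the threshold values in the example call are assumptions.

```python
# Python transliteration of the window update in program code 1
# (variable names follow the pseudocode; thresholds in the call are assumed).

def on_ack(cwnd, markp, thr, Ri, issthresh, ossthresh):
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if thr < Ri:                       # below reservation: grow both windows
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    elif iwnd > 0:                     # target met: hand back IN share
        iwnd -= (1 - markp) if iwnd < issthresh else markp
    else:                              # no IN share left: plain TCP behaviour
        ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd           # new cwnd and marking probability

cwnd, markp = on_ack(10.0, 0.5, thr=2.0e6, Ri=3.0e6,
                     issthresh=20.0, ossthresh=20.0)
```

With the throughput below the reservation rate, both windows grow by their marking shares, so cwnd grows by one segment per ACK round while Mark_p stays balanced.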

The outer else clause of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also addresses a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network connection is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate better than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet losses and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.

Fig. 12. Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd/2
        ownd = ossthresh
    } else {
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

    iwnd: In-profile window            ownd: Out-of-profile window
    cwnd: congestion window            Markp: the probability of marking
    Thr: throughput of a flow          Ri: reservation rate of the i-th flow
    Ti: target rate of the i-th flow   issthresh: threshold of iwnd
    ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half of its size whenever an OUT packet loss occurs; this indicates temporary congestion of the network. On the other hand, the loss of IN-marked packets indicates acute congestion in the network, and then the congestion window is reduced to half of its size. Therefore the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which has no distinctive mechanism for reducing the congestion window.
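The loss response of program code 2 can likewise be sketched; the threshold arguments in the calls are illustrative.

```python
# Python transliteration of program code 2: reaction to a third duplicate ACK
# (threshold arguments are illustrative assumptions).

def on_triple_dupack(cwnd, markp, out_loss, issthresh, ossthresh):
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if out_loss:                  # OUT loss: temporary congestion,
        ossthresh = ownd / 2      # halve only the out-of-profile window
        ownd = ossthresh
    else:                         # IN loss: acute congestion,
        cwnd = cwnd / 2           # halve the whole congestion window
        issthresh = markp * cwnd
        ossthresh = cwnd - issthresh
        iwnd, ownd = issthresh, ossthresh
    return iwnd + ownd, issthresh, ossthresh

cwnd_out, _, _ = on_triple_dupack(10.0, 0.5, out_loss=True,
                                  issthresh=20.0, ossthresh=20.0)
cwnd_in, _, _ = on_triple_dupack(10.0, 0.5, out_loss=False,
                                 issthresh=20.0, ossthresh=20.0)
```

An OUT loss costs only a quarter of the total window here, while an IN loss halves it, which is exactly the asymmetry the text describes.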

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) that is needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall the numerical expression (7). We deal with the usage of the datum point in the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that includes the user's information is able to alleviate the problem of unfairness introduced among aggregated flows by per-aggregation marking.

3.4 Marking rate-based fairness strategy: the Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources, in order to share the bottleneck bandwidth fairly. In this case it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which is used in the fairness component as stated in Section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants guaranteed, from the sources.

We assume that an edge router receives information on the reservation rate of which each user wants to be assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall the numerical expression (4). As stated in Section 3.2, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking from the sources if the condition of the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy that is elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source, for example, exceeds the threshold value, that is, the average marking rate E[m_i], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and the congestion situation into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, and the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at measurement N with the iwnd size (or ownd size) at measurement N−1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.
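The window-halving test described above can be sketched as:

```python
# Sketch of the congestion test: the edge compares the newly measured
# windows with half of the previous measurement.

def network_state(iwnd_n, iwnd_prev, ownd_n, ownd_prev):
    if ownd_n != ownd_prev / 2:
        return "no congestion"
    if iwnd_n != iwnd_prev / 2:
        return "congestion, not acute"
    return "serious congestion"
```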

First, if the ownd size at measurement N is not equal to half of the ownd size at measurement N−1, congestion has not occurred in the network. The first if clause of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding that there is no congestion, it again measures whether or not the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy. If the throughput does not reach the target rate, the marker increases the marking rate by as much as α, called the fairness factor. In this case the fairness factor is computed as α = 1 − Thr_i/T_i. Otherwise it decreases the marking rate by as much as α, to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm

    if (ownd(N) != ownd(N-1)/2) {
        for (i = 1 to n) {
            Ti = Ri + Xmi * E
            if (Thr_i < Ti)
                Mi += α * Mi
            else
                Mi -= α * Mi
        }
    } else if (iwnd(N) != iwnd(N-1)/2) {
        exit
    } else {   /* iwnd(N) == iwnd(N-1)/2 */
        for (i = 1 to n) {
            Ti = Ri
            if (Thr_i < Ti)
                Mi += β * Mi
            else
                exit
        }
    }

    Mi: marking rate of the i-th flow
    Ti: target rate of the i-th flow
    Ri: reservation rate of the i-th flow
    E: excess bandwidth
    ownd(N): newly measured OUT window
    ownd(N-1): OUT window measured at time N-1
    iwnd(N): newly measured IN window
    iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at measurement N is equal to half of the ownd size at measurement N−1, and the iwnd size at measurement N is not equal to half of the iwnd size at measurement N−1, congestion has occurred in the network but is not serious. The else if clause of program code 3 represents this instance: the edge's marking rate is equal to the marking probability of the sources (or marking information), and so the re-marking process is not required.

Finally, if the iwnd size at measurement N is equal to half of the iwnd size at measurement N−1, IN-marked packets are being dropped in the core of the network and the congestion is acute.

The marking mechanism for this congestion situation appears in the final else clause of program code 3. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to attain its target rate owing to the repeated reduction of sending rates through the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by as much as β, the so-called complement factor, if the throughput does not realise the user's reservation rate. In this instance the complement factor is β = 1 − Thr_i/R_i. Otherwise the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of the reserved bandwidth under per-flow marking and the unfairness among aggregated flows under per-aggregation marking.
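Putting the three cases of this section together, the re-marking decision of program code 3 can be sketched as follows. The use of abs() for the decrease case is an assumption, since the thesis defines α only for the increase case.

```python
# A minimal sketch of the TMS_II re-marking decision (program code 3).
# abs() in the fairness factor is an assumption for the decrease branch.

def remark(Mi, thr, Ri, Ti, ownd_halved, iwnd_halved):
    if not ownd_halved:                 # case (i): no congestion
        alpha = abs(1 - thr / Ti)       # fairness factor
        return Mi * (1 + alpha) if thr < Ti else Mi * (1 - alpha)
    if not iwnd_halved:                 # case (ii): congestion, not acute
        return Mi                       # edge marking already matches sources
    if thr < Ri:                        # case (iii): acute congestion
        beta = 1 - thr / Ri             # complement factor
        return Mi * (1 + beta)
    return Mi

m_fair = remark(0.5, thr=2.0e6, Ri=3.0e6, Ti=4.0e6,
                ownd_halved=False, iwnd_halved=False)
m_acute = remark(0.5, thr=1.5e6, Ri=3.0e6, Ti=3.0e6,
                 ownd_halved=True, iwnd_halved=True)
```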

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-marked packets are treated by the RIO queue at the core of a network differently from what the sources anticipate. If, for example, the re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core in a time of temporary congestion, the TCP sources, mistaking this for acute congestion, will halve the iwnd size; in that case TMS_I does not alleviate the TCP dynamics over the assured service well. On the other hand, if the re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve only ownd, which retards the removal of the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but as IN at the source, and an OUT-IN packet is its counterpart. We assume that the drop precedence the source decides must not change, and that the strategy TMS_II elaborates has to be deliberate. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors, these correspond to red, yellow, and green, respectively. The occasion of re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not attain the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN, so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves the ownd size. Next, TMS_II turns packets that were IN-marked at the sources into OUT if the congestion window has not been halved and the flow has already accomplished its target rate; the drop precedences then consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by one step, and so allows other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. TMS thus tries to compensate for the loss rate occurring in times of congestion by lowering the drop precedence of OUT by one step.
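The precedence assignment can be sketched as:

```python
# Sketch of the three drop precedences above: the colour of a packet
# follows the pair (mark at the source, mark at the edge).

def drop_precedence(source_mark, edge_mark):
    if source_mark == "IN" and edge_mark == "IN":
        return "DP0"              # green: dropped last
    if source_mark == "OUT" and edge_mark == "OUT":
        return "DP2"              # red: dropped first
    return "DP1"                  # yellow: re-marked IN-OUT or OUT-IN packets
```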

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indications through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not change, and will thus obtain the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be attained.

UDP flows in the TMS are treated as out-of-profile, and are thus placed at a disadvantage, while TCP flows adapt to network dynamics by having their marking rate adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP packets, marked as OUT (red, DP2), and lowers the drop precedence of TCP packets by as much as the scaling factors allow. Therefore it tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows would otherwise almost entirely dominate. Consequently, TMS compensates for the grave performance degradation that such network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present simulation results for the two marking algorithms we have described in the previous chapter. The simulation was done with the network simulator 2 (ns-2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: a non-oversubscribed and an oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or is kept below the bottleneck capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. The sources are all TCP Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2      50      400      0.1
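Under standard RED behaviour, the drop probability for each precedence rises linearly from 0 at min_th to Pmax at max_th; a sketch using the Table 4 settings:

```python
# RED drop probability for each precedence, rising linearly from 0 at
# min_th to Pmax at max_th (average queue size in packets).

def red_drop_prob(avg, min_th, max_th, p_max):
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return p_max * (avg - min_th) / (max_th - min_th)

TABLE4 = {"DP0": (100, 400, 0.02), "DP1": (100, 400, 0.05),
          "DP2": (50, 400, 0.1)}
p_dp2 = red_drop_prob(225, *TABLE4["DP2"])    # halfway between 50 and 400
```

With the same average queue size, DP2 packets see a drop probability five times that of DP0, which realises the intended dropping order.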

First of all, we investigate the simulation results for the Adequate-Marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s; the last three stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps; the marking operation is performed initially, but afterwards only temporary marking for sharing the excess bandwidth occurs. Second, in the interval from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps each; marking in this interval is practically carried out so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220 s to t = 350 s; all the flows do their best to attain their target rates but fail, and the marking rate in this period is higher than in the other periods. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to the congestion remaining after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15(a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15(b) shows that the number of marked packets needed to accomplish the target rate is smaller than in the case of Figure 15(a). In both graphs the lower flow curves of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15(b) mark each packet in compliance with the changing condition of the network, those in Figure 15(a) do not.

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig. 15. Adequate-Marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The network situation changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic features, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16(a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate behind the changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased traffic load. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as the increment of S2, and S1 does not share the bandwidth of the bottleneck link fairly with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16(b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load, to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, more packets are marked; however, packet marking is turned off quickly when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16(a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, i.e. an under-subscribed network. The experiment ran from t = 0 s to 500 s. Figure 17 shows that the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, their average throughputs in both schemes, shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without the marking rate being adapted to the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the marking probability reported by the sources, all the flows are able to adapt to network dynamics.
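A minimal sketch of what marking rate-based sharing could look like (the function and its proportional-split rule are our illustration, not the exact TMS algorithm): each flow first receives its reservation, and the excess capacity is divided in proportion to the reported marking rates.

```python
def share_bandwidth(reservations, marking_rates, capacity):
    """Hypothetical marking rate-based fairness split: every flow gets
    its reservation, and the excess capacity is divided in proportion
    to each source's marking rate."""
    excess = capacity - sum(reservations)          # over-provisioned: > 0
    total_m = sum(marking_rates)
    return [r + excess * m / total_m
            for r, m in zip(reservations, marking_rates)]
```

With two flows reserving 5 Mbps each on a 13 Mbps link, a flow marking twice as fast as the other would receive 2 Mbps of the 3 Mbps excess.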

[Plot: Bandwidth (Mbps) vs. simulation time 0-400 s; curves: S1 (total throughput), S2 (total throughput), S1 (marking rate)]

a) Time Sliding Window Two Color Marker

[Plot: Bandwidth (Mbps) vs. simulation time 0-400 s; curves: S1 (total throughput), S2 (total throughput), S1 (marking rate)]

b) Two Markers System

Fig. 16. Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.


[Plot: Bandwidth (Mbps) of flow1 through flow10 for aggregate flows 1-6]

a) Time Sliding Window Two Color Marker

[Plot: Bandwidth (Mbps) of flow1 through flow10 for aggregate flows 1-6]

b) Two Markers System

Fig. 17. Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flows.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5. The average throughput of the aggregate flows as RTT increases

Aggregate flow (No.)        1       2       3       4       5       6
RTT (ms)                   80     100     130     170     220     280
E[Thr] of TSW2CM (Mbps)  4.8989  4.9703  4.9183  4.9391  4.9247  4.9098
E[Thr] of TMS (Mbps)     5.2908  5.2207  5.2373  5.1944  5.2763  5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulations we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios, an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1 Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig. 18. Simulation topology II using the Two Markers System

First of all, we investigate the simulation results for the issue of sharing excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy by increasing the number of TCP flows with a target rate of 1 Mbps from sources 1 to 4, while each UDP flow is constant at 1 Mbps, and by turning the capacity of the bottleneck link from 8 Mbps into 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ = (Σi Ti / C) × 100 (%)    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval of ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW = (ExBW / T) × 100 (%)    (10)

Therefore, excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
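Equations (9) and (10) can be checked with a few lines of code; the helper names are ours, and the example numbers are the scenario of this section (six 1 Mbps flows, bottleneck capacity 8 Mbps or 3 Mbps).

```python
def link_utilisation(target_rates_mbps, capacity_mbps):
    """Eq. (9): utilisation rho, the sum of target rates over the
    bottleneck capacity, expressed as a percentage."""
    return 100.0 * sum(target_rates_mbps) / capacity_mbps

def excess_bw_percent(achieved_mbps, target_mbps):
    """Eq. (10): the excess bandwidth a flow receives (achieved minus
    target), expressed as a percentage of its target rate T."""
    return 100.0 * (achieved_mbps - target_mbps) / target_mbps
```

Six flows with 1 Mbps targets give ρ = 75% on the 8 Mbps link and ρ = 200% on the 3 Mbps link; a flow achieving exactly its target rate has an excess bandwidth of 0%.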


[Plot: Excess bandwidth (%) vs. link utilization (%); curves: TCP (average), UDP (average)]

a) Time Sliding Window Two Color Marker

[Plot: Excess bandwidth (%) vs. link utilization (%); curves: TCP (average), UDP (average)]

b) Two Markers System

Fig. 19. Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation as a percentage.

However, since UDP sources inject traffic into the network without any penalty, they achieve their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only dealt the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and given a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is, when congestion occurs, to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is percentage-wise almost equally divided between TCP and UDP flows for link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network and both suffer the same level of service degradation in the under-provisioned network.
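The two assumptions above can be sketched as a marking rule at the edge. The function and the exact comparison against the average marking rate E[m] are our illustration of the described behaviour, not the thesis's implementation:

```python
def edge_drop_precedence(proto, m_i, marking_rates):
    """Sketch of the edge-embedded marking rule: UDP aggregates are
    always marked OUT (DP2, red), while a TCP aggregate whose source
    marking rate m_i exceeds the average E[m] is treated as demanding
    bandwidth for its reservation and marked IN."""
    if proto == "udp":
        return "OUT"                       # dropped first under congestion
    e_m = sum(marking_rates) / len(marking_rates)   # average marking rate
    return "IN" if m_i > e_m else "OUT"
```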

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows the results that the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. The average throughput of all the flows in both schemes is again similar. First we consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3 Mbps, owing to the different marking strategy applied against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that in the under-provisioned network the bandwidth is distributed fairly (about 0.5 Mbps per flow) compared with Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the sensitive TCP flows, as shown in Figure 20 (b).

[Plot: Bandwidth (Mbps) of TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases]

a) Time Sliding Window Two Color Marker

[Plot: Bandwidth (Mbps) of TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases]

b) Two Markers System

Fig. 20. Throughput according to network conditions: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this context and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the overeating of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and to fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the marking probability according to changes in the network, so that it marks packets in proportion to the reservation rates the sources wish to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy, when resources in the diffserv network are over-provisioned. Consequently, it mitigates both the unfairness of per-aggregation marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. As expected, the simulation results showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP had almost dominated and achieve its target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proceedings of ACM SIGCOMM, Oct. 1994, 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," ACM/IEEE Transactions on Networking, vol. 1, no. 4, Aug. 1993, 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, 35-48.



Traffic Class octet acts as the DS-field. Even though the usage of the TOS and Traffic Class octets differs from the usage of the DS-field, the differences are such that deployment of diffserv doesn't cause serious impacts on existing applications.

Fig. 6 shows the coding of the TOS-field in IPv4. The coding of the Traffic Class octet in IPv6 is not specified, but it is intended to be used in a similar manner as the TOS-field in IPv4. The precedence bits 0-2 are used for marking the importance of one IPv4 datagram over another; their usage and queuing requirements in IPv4 routers are specified in [5]. Diffserv supports, with limitations, the widely used values or PHBs for backward compatibility. The 'D', 'T' and 'R' bits have been specified to be marked in IPv4 datagrams that need special handling considering Delay, Throughput or Reliability; backward compatibility with these bits will not be maintained in diffserv. A node that interprets the precedence bits as they were originally supposed to be is called a legacy node. The value of the unused bits is ignored when employing diffserv.

The bits 0-5 in the TOS-field make up the DS codepoint. Its value is in turn mapped to one or more PHBs as configured by a network administrator.
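A small sketch of codepoint extraction and lookup, assuming the DSCP occupies the six most significant bits of the octet; the table entries use the standardized recommended values as examples, but a real mapping is configured per network.

```python
def dscp_from_tos(tos_octet: int) -> int:
    """Extract the 6-bit DS codepoint: bits 0-5 (the six most
    significant bits) of the former TOS octet; the remaining two
    bits are ignored by diffserv."""
    return (tos_octet >> 2) & 0x3F

# An administrator-configured codepoint-to-PHB table (example values).
PHB_TABLE = {
    0b000000: "Default (best-effort)",
    0b101110: "EF",
    0b001010: "AF11",
}

def phb_for(tos_octet: int) -> str:
    # A codepoint without a configured mapping falls back to Default.
    return PHB_TABLE.get(dscp_from_tos(tos_octet), "Default (best-effort)")
```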

What traffic conditioning basically means is treating traffic differentially so that traffic entering a DS domain conforms to the rules specified in a TCA, in accordance with the domain's service provisioning policy. In diffserv, conditioning happens mostly in boundary nodes (routers). Conditioning can also be employed in interior nodes, but it is not required for a network to be DS-capable; when conditioning in an interior node is required, only a subset of the functionality of a boundary node may be needed. Naturally, interior nodes need to support forwarding based on the DSCP values.

A traffic conditioner is a part of a network node that takes the node's ingress packets as its input and places the packets in its output in such an order that best satisfies the forwarding (i.e. service) requirements set for the packets and uses the network's resources in the best possible way. From the service point of view, traffic conditioners are the only elements needed for services to function; how they are maintained is merely a question of administration and network management. A traffic conditioner and its functional components are shown in Fig. 7.

Packet classifiers select packets from a traffic stream based on the content of some portion of the packet header. The BA (Behaviour Aggregate) classifier classifies packets based on the DS codepoint only, while the MF (Multi-Field) classifier selects packets based on the values of one or more header fields, such as source and destination addresses and port numbers, the DS field and the protocol ID. Classification is done for separating traffic aggregates from each other so that the amount of resources allotted to an aggregate is not exceeded.
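The two classifier types can be sketched as follows; the packet representation (a plain dictionary) and the rule format are our own illustration.

```python
def ba_classify(pkt, dscp_to_aggregate):
    """Behaviour Aggregate classifier: looks at the DS codepoint only."""
    return dscp_to_aggregate.get(pkt["dscp"], "default")

def mf_classify(pkt, rules):
    """Multi-Field classifier: matches several header fields at once;
    rules is an ordered list of (field-match dict, aggregate name)."""
    for match, aggregate in rules:
        if all(pkt.get(field) == value for field, value in match.items()):
            return aggregate
    return "default"
```

For example, a BA classifier needs only `{46: "ef-aggregate"}` to steer EF-marked packets, while an MF rule such as `({"src": "10.0.0.1", "dst_port": 80}, "gold")` selects on addresses and ports.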

Fig. 7. A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets with a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
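A meter and a marker can be sketched together around a token bucket, a common way to express a traffic profile; the class, its parameter values and the marker policy below are illustrative, not mandated by diffserv.

```python
class TokenBucketMeter:
    """Meters packets against a profile of `rate` bytes/s with a burst
    allowance of `burst` bytes."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def conforms(self, size, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True        # in profile
        return False           # out of profile

def condition(pkt_size, now, meter, in_dscp, out_dscp):
    """A marker driven by the meter's state: out-of-profile packets are
    re-marked to a different codepoint instead of being dropped."""
    return in_dscp if meter.conforms(pkt_size, now) else out_dscp
```

A dropper would simply discard instead of returning `out_dscp`; a shaper would queue the packet until enough tokens accumulate.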

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA to determine whether it is their responsibility to condition the ingress/egress traffic to conformance with the appropriate TCA, or whether it is the responsibility of the node at the other end of the boundary link. However, ingress traffic cannot be assumed to conform to the TCA, and policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited number of conditioning rules (i.e. some, but not all, of the TC components in Fig. 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers have the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, as well as negotiating the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on their similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above; a single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are wished to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the Default PHB (best-effort service) or discarding. In addition to standardized ones, there may exist locally defined PHBs in a network; these PHBs may be experimental, for example, or they may use a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS-field (see Fig. 6) is limited and room is left for its future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impacts of one group's requirements on the others'. Resource allocation for individual PHBs within a group can be based, for example, on prioritizing one over another. In the PHB specifications themselves, behavioural characteristics are given instead of implementation guidelines; this leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following; these PHBs provide the basis for the service examples presented below. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when implemented, the specifications must be met.

Diffserv can't be taken into use if it doesn't provide backward compatibility; the Default PHB exists for that purpose. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists, and it gets the lowest priority compared to all other PHBs. Therefore any traffic that doesn't conform to its profile can easily be either re-marked to the Default PHB or discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The mere Default PHB is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS-field are widely used in existing networks' equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector codepoints (see Fig. 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.
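The eight Class Selector codepoints can be enumerated mechanically; a small sketch (helper name is ours):

```python
def is_class_selector(dscp: int) -> bool:
    """Class Selector codepoints have the form xxx000: the three
    low-order bits of the 6-bit DSCP are zero."""
    return 0 <= dscp <= 0b111111 and (dscp & 0b000111) == 0

# The eight Class Selector codepoints, in increasing priority.
class_selectors = [c for c in range(64) if is_class_selector(c)]
```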

Fig. 8. Usage of the DS-field in the Class Selector PHB

The motivation behind the AF PHB is the need for fixed-bandwidth lines, which companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values; the customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

- how much forwarding resources have been allocated to the AF class that the packet belongs to (packets in one class are forwarded independently from the others and within the service rate, i.e. bandwidth, that has been configured for the class);
- the current load of the AF class;
- in case of congestion within the class, the drop precedence of the packet (packets with a higher drop precedence value are preferentially discarded).

Table 1. The four AF classes and the related three drop precedence values

                             Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)      001010    010010    011010    100010
Medium Drop Precedence (2)   001100    010100    011100    100100
High Drop Precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
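The codepoints in Table 1 follow a simple bit layout, which can be expressed compactly (the helper name is ours): the class number occupies the three high-order bits of the DSCP and the drop precedence the next two, with the last bit zero.

```python
def af_dscp(n: int, m: int) -> int:
    """Recommended DSCP for class AFnm: class number n (1-4) in the
    three high-order bits, drop precedence m (1-3) in the next two,
    and the final bit zero."""
    if not (1 <= n <= 4 and 1 <= m <= 3):
        raise ValueError("AF classes are 1-4, drop precedences 1-3")
    return (n << 3) | (m << 1)

# Reproduces Table 1, e.g. AF23:
print(format(af_dscp(2, 3), "06b"))  # 010110
```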

One major group of services is that requiring assured bandwidth with low loss, low latency and low jitter on an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, an aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed a rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator must be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

There are some common elements in both examples. First of all, policers need to be configured at traffic ingress points. Secondly, the used PHBs need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus experiences reduced latency. Quantitative performance measures cannot be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.

The second example is a quantitative service which emulates a traditional leased line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3. TCA specifications for Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. Customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.
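The ingress policing described above can be sketched as a simple token bucket per egress point. The class name, burst depth and time units below are illustrative assumptions, not from the thesis; only the service rates come from Table 3.

```python
# Sketch (hypothetical helper, not the thesis's implementation) of the
# leased-line policers of Table 3: one token bucket per egress point.
# Traffic within the service rate keeps its EF mark; excess is discarded.

class TokenBucketPolicer:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps        # service rate from the TCA
        self.burst = burst_bits     # maximum bucket depth (assumed)
        self.tokens = burst_bits    # start with a full bucket
        self.last = 0.0             # time of last update, seconds

    def conforms(self, now, packet_bits):
        # Refill tokens for the elapsed interval, capped at the depth.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True     # within profile: forward with EF mark
        return False        # excess: discard, per Table 3

# One policer per egress point, as in Table 3.
policer_B = TokenBucketPolicer(rate_bps=500_000, burst_bits=10_000)
policer_C = TokenBucketPolicer(rate_bps=250_000, burst_bits=10_000)
```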

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies focused on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. Their results showed that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregate state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults, as follows. First, since the marking module in per-flow marking gives each flow a fixed fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth: "idle" flows waste their shares while preventing "active" flows from increasing theirs. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates or different levels of congestion experienced by individual TCP flows within the network.
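The contrast between the two categories can be illustrated with a small sketch; the function names and rate units are hypothetical, and the logic is only a stylized rendering of the two faults described above.

```python
# Illustrative sketch (hypothetical names) of the two marking categories.
# Per-flow marking pre-divides the aggregate target rate among flows, so
# an idle flow's share is wasted; per-aggregation marking meters only the
# aggregate, so active flows can absorb an idle flow's share.

def per_flow_in_marked(target_rate, send_rates):
    # Each flow may mark at most target_rate/n as IN, even if others idle.
    share = target_rate / len(send_rates)
    return [min(r, share) for r in send_rates]

def per_aggregation_in_marked(target_rate, send_rates):
    # Flows are metered as one aggregate: IN tokens are consumed in
    # proportion to each flow's share of the aggregate sending rate.
    total = sum(send_rates)
    if total <= target_rate:
        return list(send_rates)
    return [r * target_rate / total for r in send_rates]

# Two active flows and one idle flow, aggregate target 3 Mbps:
rates = [2.0, 2.0, 0.0]  # Mbps
# per-flow: the idle flow wastes its 1 Mbps share -> [1.0, 1.0, 0.0]
# per-aggregation: active flows split the full 3 Mbps -> [1.5, 1.5, 0.0]
```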

Two Markers System takes the above problems into consideration

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9. The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the component of the fairness strategy, viz. the marking rate, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I selects the window (iwnd or ownd) and reduces its size according to whether the dropped packets were IN-marked or OUT-marked, and so TMS_I decreases TCP impacts in the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) maintains the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used to allocate the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. For this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore the key element of the fairness strategy is the marking rate (i.e., the information on a flow's demand for bandwidth) of traffic flows from users. TMS_II re-marks (or marks) aggregated flows toward the total target rate of all the flows, based on the marking probabilities from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig. 10. The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rates. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and Cj the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is the reservation rate (Ri) of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability viewpoint of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is $C = \sum_{j=1}^{m} C_j$, where $j$ denotes a bottleneck link of the network. Therefore the excess bandwidth at the bottleneck links is given by

$E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i \qquad (1)$

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to $m = 1$, and so the excess bandwidth is

$E = C - R = C - \sum_{i=1}^{n} R_i \qquad (2)$

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term $R$: an ingress point has to know the other ingress points' reservation rates in order to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes $R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}$, where $k$, $k = 1, 2, \ldots, l$, represents the $k$th ingress point, and the excess bandwidth is given by

$E = C - R = C - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik} \qquad (3)$
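Expressions (1)-(3) can be transcribed directly into code; the helper names below are assumptions for illustration, not from the thesis.

```python
# Sketch of the excess-bandwidth expressions (1)-(3): E is the network
# capacity minus the aggregate reservation, where the capacity term
# depends on the SLA scope. Function names are hypothetical.

def excess_ingress_scope(link_capacities, reservations):
    # Scopes (i)-(ii): C = sum_j C_j over the m bottleneck links, eq. (1).
    return sum(link_capacities) - sum(reservations)

def excess_edge_to_edge(capacity, reservations):
    # Scope (iii): a single bottleneck link, m = 1, eq. (2).
    return capacity - sum(reservations)

def excess_egress_scope(capacity, reservations_per_ingress):
    # Scope (iv): R = sum_k sum_i R_ik over all l ingress points, eq. (3).
    return capacity - sum(sum(r) for r in reservations_per_ingress)
```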


Here the $l$th ingress point has to be aware of the reservation rates of as many as $l-1$ other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System could therefore be implemented for variable SLA scopes, but we assume in this thesis that the scope of a service includes the first three cases, excluding the last for the reason above. A flow's target rate at the edge of the network is given by

$T_i = R_i + X_{mi} \times E = R_i + X_{mi}\left(C - \sum_i R_i\right) \qquad (4)$

where $X_{mi}$ is proportional to the probability of marking at the source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of $X_{mi}$ is as follows. First, the probability of marking for each flow $i$ is the number of marked packets ($N_{mi}$) divided by the throughput ($Thr_i$), or equivalently the In-profile window divided by the congestion window:

$m_i = \frac{N_{mi}}{Thr_i} = \frac{iwnd}{cwnd} \qquad (5)$

where $iwnd$ and $cwnd$ represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate $E[m_i]$ of all the flows at the edge of the network is

$E[m_i] = \frac{1}{n}\sum_{i=1}^{n} m_i = \frac{1}{n}\sum_{i=1}^{n} \frac{iwnd_i}{cwnd_i} = \frac{1}{n}\sum_{i=1}^{n} Mark_{p,i} \qquad (6)$

where $Mark_p$ also represents the probability of marking in the TMS_I algorithm of section 3.3. Finally, $X_{mi}$ is developed from the ratio between (5) and (6); that is, $X_{mi}$


represents a relative value against the datum point $E[m_i]$, as expressed in (7). The value of $X_{mi}$ varies from 0 to 1 depending upon $m_i$ and $E[m_i]$.

Consequently, this value stands for the share of the excess bandwidth that each flow requires to meet its target rate.
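The fairness computation of (4)-(7) can be sketched as follows; the function names and example numbers are illustrative assumptions, not from the thesis.

```python
# Sketch of the fairness computation in (4)-(7): each flow's marking
# probability m_i = iwnd_i/cwnd_i (eq. 5), its normalised share
# X_mi = m_i / sum(m) (eq. 7), and its edge target rate
# T_i = R_i + X_mi * E (eq. 4). Names are hypothetical.

def fairness_shares(iwnds, cwnds):
    # m_i per eq. (5); X_mi per eq. (7), which equals m_i / (n * E[m_i]).
    m = [i / c for i, c in zip(iwnds, cwnds)]
    total = sum(m)
    return [mi / total for mi in m]

def edge_target_rates(reservations, shares, capacity):
    # Eq. (4): T_i = R_i + X_mi * (C - sum_i R_i).
    excess = capacity - sum(reservations)
    return [r + x * excess for r, x in zip(reservations, shares)]

# Two flows with marking rates 0.5 and 0.25, reservations 3 Mbps each,
# bottleneck capacity 9 Mbps (so the excess bandwidth E is 3 Mbps):
shares = fairness_shares(iwnds=[8, 4], cwnds=[16, 16])
targets = edge_target_rates([3.0, 3.0], shares, capacity=9.0)
```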

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, which is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where marking policies differ from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate at the level needed for the bandwidth the source wants guaranteed.

$X_{mi} = \frac{m_i}{n\,E[m_i]} = \frac{m_i}{\sum_{i=1}^{n} m_i} \qquad (7)$


Fig. 11. Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking, Markp(init)? It is determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as follows:

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= 1 - Markp
            else
                iwnd -= Markp
        } else {
            ownd = cwnd
        }
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd


$Mark_p(init) = 1 - \frac{Thr}{R_i} \qquad (8)$

where Markp(init) represents the initial marking rate the user needs in order to realise the target rate.

The window size increases or decreases according to the measured throughput relative to the reservation rate. That is, TMS_I marks packets proportionally, according to whether the throughput (Thr) measured at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by the additional marking rate needed; otherwise it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e., iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window grows above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The outer else clause of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network connection is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate better than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window is decreased less often than with the established edge marker.
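Program code 1 can be transcribed into a runnable sketch; the function signature and the assumption that window sizes are in packets are ours, not the thesis's.

```python
# A literal Python sketch of program code 1 (the TMS_I update on each
# new ACK). Window sizes are in packets; thr and ri are the measured
# and reserved rates. The 1/(2*cwnd)-style terms follow the listing.

def tms1_on_ack(cwnd, markp, thr, ri, issthresh, ossthresh):
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if thr < ri:                     # below reservation: grow iwnd
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                            # reservation met: release iwnd
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    markp = iwnd / cwnd              # new marking probability
    return cwnd, markp
```

For example, a flow below its reservation with cwnd = 10 and Markp = 0.5 grows both sub-windows, leaving the marking probability unchanged while cwnd grows to 11.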

Fig. 12. Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd/2
        ownd = ossthresh
    } else {
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

    iwnd: In-profile window
    ownd: Out-of-profile window
    cwnd: congestion window
    Markp: the probability of marking
    Thr: throughput of a flow
    Ri: reservation rate of the ith flow
    Ti: target rate of the ith flow
    issthresh: threshold of iwnd
    ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion in the network, and then the congestion window is halved. The modified TCP congestion algorithm therefore compensates for the loss rates caused by the conventional congestion algorithm, which has no distinct mechanism for reducing the congestion window.
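Program code 2 admits a similarly compact sketch. Returning only the new congestion window is our simplification; the listing also re-splits the thresholds by Markp.

```python
# Sketch of program code 2: on the third duplicate ACK, an OUT-marked
# loss (temporary congestion) halves only ownd, while an IN-marked loss
# (acute congestion) halves the whole congestion window.

def tms1_on_loss(cwnd, markp, out_marked_loss):
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if out_marked_loss:
        # ossthresh = ownd/2; ownd = ossthresh in the listing
        ownd = ownd / 2
        return iwnd + ownd      # cwnd shrinks by half of ownd only
    return cwnd / 2             # acute congestion: halve cwnd
```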

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (MarkP) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point by the edge-embedded marking module in section 3.4. Consequently, the strategy of providing a datum point that carries the user's information is able to alleviate the unfairness within aggregated flows introduced by per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of sources in order to share bottleneck bandwidth fairly. For this it closely cooperates with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which is used in the fairness component as stated in section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represent the target rates the source flows want to guarantee.

We assume that an edge router receives information on the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall expression (4). As stated in section 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the network is over-provisioned; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents the increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, namely the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate; it is therefore marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregated flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to detect a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with that at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The first if clause of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After deciding that there is no congestion, it measures whether the throughput of flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by the fraction α, called the fairness factor, computed as

$\alpha = 1 - \frac{Thr_i}{T_i}$

Otherwise it decreases the marking rate by α, to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2) {
    for i = 1 to n {
        Ti = Ri + Xmi * E
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
    }
} else if (iwnd(N) != iwnd(N-1)/2) {
    exit
} else {    /* iwnd(N) == iwnd(N-1)/2 */
    for i = 1 to n {
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit
    }
}

    Mi: marking rate of the ith flow
    Ti: target rate of the ith flow
    Ri: reservation rate of the ith flow
    E: excess bandwidth
    ownd(N): newly measured OUT window
    ownd(N-1): OUT window measured at time N-1
    iwnd(N): newly measured IN window
    iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1, and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else if clause of program code 3 represents this instance: the edge's marking rate is equal to the marking probability of the sources (the marking information), and so no re-marking is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the final else clause of program code 3. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to reach its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by the fraction β, the so-called complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor is

$\beta = 1 - \frac{Thr_i}{R_i}$

Otherwise the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
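The TMS_II re-marking of program code 3, with the fairness factor α and complement factor β, can be sketched as follows; the signature is hypothetical, and using a signed α to cover both the increase and decrease cases is our simplification.

```python
# Sketch of program code 3: the edge marker compares consecutive window
# samples to classify the congestion state, then scales each flow's
# marking rate by the fairness factor alpha or the complement factor beta.

def tms2_remark(ownd_n, ownd_prev, iwnd_n, iwnd_prev,
                marks, thrs, resv, shares, capacity):
    excess = capacity - sum(resv)
    new_marks = list(marks)
    if ownd_n != ownd_prev / 2:                 # (i) no congestion
        for i, m in enumerate(marks):
            ti = resv[i] + shares[i] * excess   # edge target rate, eq. (4)
            alpha = 1 - thrs[i] / ti            # signed fairness factor
            new_marks[i] = m + alpha * m        # raise or lower marking rate
    elif iwnd_n != iwnd_prev / 2:               # (ii) congestion, not acute
        pass                                    # edge marks already consistent
    else:                                       # (iii) acute congestion
        for i, m in enumerate(marks):
            if thrs[i] < resv[i]:
                beta = 1 - thrs[i] / resv[i]    # complement factor
                new_marks[i] = m + beta * m
    return new_marks
```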

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets turn into OUT. These re-marked packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking this for acute congestion, will halve the iwnd size, and TMS_I then no longer alleviates the TCP dynamics over the assured service. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve ownd, which retards the removal of the congestion.

For this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but as IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides must not change, and the strategy that TMS_II elaborates has to respect this. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not achieve the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to claim the remaining resources. In this instance three drop precedences, IN, OUT-IN and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves its ownd size. Next, TMS_II turns packets that were IN-marked at the sources into OUT if the congestion window has not been halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT and OUT; TMS_II lowers the drop precedence of such packets by one step below IN, and so allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network; when an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT packets one step.
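The precedence assignment in this section can be stated as a small lookup; the function name is illustrative, and the color mapping assumes the conventional DP0 = green, DP2 = red convention.

```python
# Sketch of the three drop precedences of section 3.5: the precedence a
# packet receives in the core depends jointly on the source mark and the
# edge re-mark; re-marked packets (IN-OUT or OUT-IN) take the middle level.

def drop_precedence(source_mark, edge_mark):
    if source_mark == "IN" and edge_mark == "IN":
        return "DP0"   # green: dropped last
    if source_mark != edge_mark:
        return "DP1"   # yellow: IN-OUT or OUT-IN
    return "DP2"       # red: OUT at both source and edge, dropped first
```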

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows will not change theirs and will take the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be achieved.

UDP flows in the TMS are treated as out-of-profile, and are thereby placed at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2), but lowers the drop precedence of TCP packets relative to them by up to the scaling factor. It therefore tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows would otherwise almost entirely dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present simulation results for the two marking algorithms described in the previous chapter. The simulation was done with the network simulator 2 (ns-2, version 2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or fixed below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values 100 packets, 400 packets and 0.02 for min_in, max_in and Pmax_in, and 50 packets, 400 packets and 0.1 for min_out, max_out and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

        min_th    max_th    Pmax
DP0     100       400       0.02
DP1     100       400       0.05
DP2     50        400       0.1
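The per-drop-precedence profiles in Table 4 follow the classic RED drop curve. A minimal sketch (the function name and dictionary layout are ours; the linear ramp between the thresholds is the standard RED rule):

```python
# Drop precedence -> (min_th, max_th, Pmax), as in Table 4.
RED_PARAMS = {
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50,  400, 0.1),
}

def drop_probability(avg_queue, dp):
    """Classic RED early-drop probability for the given drop precedence."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0                    # below min_th: no early drops
    if avg_queue >= max_th:
        return 1.0                    # above max_th: force drop
    # linear ramp from 0 to Pmax between the thresholds
    return p_max * (avg_queue - min_th) / (max_th - min_th)
```

Note how DP2 (red) both starts dropping earlier (min_th of 50 packets) and ramps to a higher Pmax, so out-of-profile packets are the first casualties of congestion.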

First of all, we investigate the simulation results with the Adequate-marking algorithm serving as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3Mbps; the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s); the latter stop at t = 350 (s). First, the interval from t = 10 (s) through t = 110 (s) represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5Mbps; marking initially takes place, but only temporary marking for sharing the excess bandwidth. Second, from t = 120 (s) to 210 (s), under soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps; marking in this interval is practically carried out so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220 (s) to t = 350 (s): all the flows do their best to reach their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to the congestion remaining after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs, the flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet of a flow in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15: marking-rate traces of flow1 through flow10; (a) Time Sliding Window Two Color Marker, (b) Two Markers System]

Fig. 15. Adequate-marking algorithm: the vertical axis represents the marking rate (Mbps) and the horizontal axis the simulation time (s). The network condition changes dynamically as the number of flows increases: over-provisioned, temporary congestion, then serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic characteristics by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate and lags behind changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to an increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, more packets are marked; packet-marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results with the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment ran from t = 0 (s) to 500 (s). Figure 17 presents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, their average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate reflecting the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Figure 16: total throughput of S1 and S2 and the marking rate of S1; (a) Time Sliding Window Two Color Marker, (b) Two Markers System]

Fig. 16. Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time (s).


[Figure 17: per-flow throughput of flow1 through flow10 for each aggregate flow; (a) Time Sliding Window Two Color Marker, (b) Two Markers System]

Fig. 17. Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput (Mbps) and the horizontal axis the aggregate flow index.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying TCP sources in a differentiated services network.

Table 5. The average throughput of the aggregate flows as RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulations we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5 and 6 and are destined to nodes 11 and 12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increase the number of TCP flows with a target rate of 1Mbps from source 1 to 4, while each UDP flow is constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σi Ti / C) × 100 (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100 (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
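As a numeric check of Eqs. (9) and (10), using values in the spirit of this setup (six 1Mbps flows, 8Mbps bottleneck; the function names are ours):

```python
def link_utilisation(target_rates, capacity):
    """rho (%) = sum(T_i) / C * 100, as in Eq. (9)."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(throughput, target):
    """Excess BW (%) = (throughput - T) / T * 100, as in Eq. (10);
    0 % means the flow achieved exactly its target rate."""
    return 100.0 * (throughput - target) / target

targets = [1.0] * 6                               # 6 flows, 1 Mbps each
rho = link_utilisation(targets, capacity=8.0)     # 75 %: over-provisioned
surplus = excess_bw_percent(throughput=1.5, target=1.0)   # +50 % excess
```

When the bottleneck shrinks to 3Mbps the same six targets give ρ = 200 (%), the right-hand end of the x-axis in Figure 19.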


[Figure 19: average excess bandwidth of TCP and UDP flows versus link utilisation; (a) Time Sliding Window Two Color Marker, (b) Two Markers System]

Fig. 19. Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation as a percentage.

However, since UDP sources flow traffic into the network without any penalty, they complete their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only marked OUT (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and attains a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contracts, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisations of ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
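The two assumptions above can be sketched as a decision rule at the edge. This is an illustrative reconstruction, not the thesis code: the function name, the threshold comparison against E[m], and the mapping to DP0/DP1 are our assumptions.

```python
# Sketch of the edge-embedded marking decision: UDP is always treated
# as out-of-profile (DP2/red); a TCP flow whose source marking rate m_i
# exceeds the average E[m] is judged to demand more bandwidth and is
# given lower drop precedence. Names and the rule are illustrative.

def edge_drop_precedence(protocol, m_i, marking_rates):
    if protocol == "UDP":
        return "DP2"                                  # red: always OUT
    avg = sum(marking_rates) / len(marking_rates)     # E[m]
    # TCP: above-average demand -> protect with lowest drop precedence
    return "DP0" if m_i > avg else "DP1"

rates = [0.2, 0.4, 0.6]    # marking rates of the active TCP flows
dp = edge_drop_precedence("TCP", 0.6, rates)
```

Because UDP packets can never rise above DP2, the RIO queue sacrifices them first under congestion, which is exactly what produces the fair split of Figure 19 (b).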

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1, and the average throughputs of all the flows in both schemes are similar to each other. First, we consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, thanks to the different marking strategy applied against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probability in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, regardless of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that in the under-provisioned network bandwidth is distributed more fairly (about 0.5Mbps per flow) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20: throughput of TCP1 through TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases; (a) Time Sliding Window Two Color Marker, (b) Two Markers System]

Fig. 20. Throughput according to network conditions: the vertical axis represents the achieved throughput (Mbps) and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this area, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greedy consumption of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and to allocate the bandwidth of a bottleneck link fairly to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to the changes in the network, so it marks packets in proportion to the reservation rates the sources wish to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to the flows in view of proportional fairness, the so-called marking rate-based fairness strategy. Consequently, it mitigates the problem of unfairness in per-aggregation marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. As expected, the simulation results showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP had almost dominated and achieve its target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117

[2] W. Feng, D. Kandlur, D. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June (1999)

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. (1998)

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. (1998)

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999)

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, May (1998)

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998)

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999)

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May (2001) 190-196

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999)

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. (1998)

[19] M. Vanhala, "Differentiated Services architecture," S-38.128 Teletekniikan erikoistyo, (1999)

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998)

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list, April (1990)

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. (1997)

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. (1996)

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999)

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. of ACM SIGCOMM, Oct. (1994) 24-35

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. (1994) 10-23

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," ACM/IEEE Transactions on Networking, vol. 1, no. 4, Aug. (1993) 397-413

[28] A. Feroz, A. Rao and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48


based on the values of one or more header fields, such as the source and destination addresses, port numbers, DS field and protocol ID. Classification is done to separate traffic aggregates from each other, so that the amount of resources allotted to an aggregate is not exceeded.

Fig 7 A traffic conditioner and its functional components

Packet meters measure the temporal properties of the stream of packets selected by a classifier against a traffic profile specified in a TCA. A meter passes state information to a marker/shaper, which shapes the traffic if it is out of profile.

Packet markers receive the incoming packets from a classifier and the state information from a packet meter. If some portion of the incoming packets is out of profile, a marker can re-mark those packets with another codepoint according to the state of the meter. A marker can also be configured to mark all incoming packets to a single codepoint.

Shapers delay some or all of the packets in a traffic stream in order to bring the stream into compliance with a traffic profile. A shaper usually has a finite-size buffer, and packets may be discarded if there is not sufficient buffer space to hold the delayed packets. Droppers police the incoming stream by dropping packets that are out of profile.
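The meter-then-marker pipeline of Fig. 7 can be sketched with a simple token bucket. This is a minimal illustration under our own assumptions (a single-rate profile, the class and function names are ours), not a complete conditioner:

```python
# Minimal token-bucket meter/marker in the spirit of Fig. 7: the meter
# checks arrivals against a profile (rate r, bucket depth b) and the
# marker assigns an out-of-profile codepoint when tokens run out.

class TokenBucketMeter:
    def __init__(self, rate, depth):
        self.rate = rate          # token refill rate, bytes/s
        self.depth = depth        # bucket depth, bytes
        self.tokens = depth       # start with a full bucket
        self.last = 0.0           # timestamp of the previous packet

    def conforms(self, now, pkt_bytes):
        # refill tokens for the elapsed time, capped at the bucket depth
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True           # in profile
        return False              # out of profile

def mark(meter, now, pkt_bytes, in_cp="IN", out_cp="OUT"):
    """Marker: pick a codepoint from the meter's state."""
    return in_cp if meter.conforms(now, pkt_bytes) else out_cp
```

A shaper would instead delay the non-conforming packet until enough tokens accumulate, and a dropper would discard it outright.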

Traffic is conditioned at either end of a boundary link. Boundary nodes may refer to the formed SLA as to whether it is their responsibility to condition the ingress/egress traffic to conformance with the appropriate TCA, or whether that is the responsibility of the node at the other end of the boundary link. However, ingress traffic cannot be assumed to conform to the TCA, and policing must be prepared to enforce the TCA.


In addition to conditioning in boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore a limited number of conditioning rules (i.e., some, but not all, of the TC components in Fig. 7) may need to be applied within the originating host's domain.

A Per-Hop-Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides which services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers receive the services they pay for. Based on these decisions, the provider selects the Per-Hop-Behaviours that are required for implementing the services and negotiates the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on their similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above; a single PHB is a special case of a PHB group. The relationship between PHBs in a group may be in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are intended to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the default PHB (best-effort service) or discarding. In addition to standardized ones, there may exist locally defined PHBs in a network; these PHBs may be, e.g., experimental, or they may use a local service, with the traffic mapped to them kept within the originating network. As the available space in the TOS-field (see Fig. 6) is limited, and room is left for its future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others'. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. PHB specifications themselves give behavioural characteristics instead of implementation guidelines, which leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of the negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity and administrative actions may be needed for interaction with the TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide a wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following; these PHBs provide the basis for the service examples presented in a previous paragraph. As explained above, none of the PHBs are mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv cannot be taken into use if it does not provide backward compatibility; the Default PHB exists for that. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e., best-effort service) gets the lowest priority compared to all other PHBs; therefore any traffic that does not conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which do not employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or locally used PHB, it should be mapped to the Default PHB.

The mere Default PHB is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS-field are widely used in existing networks' equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig. 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig 8 Usage of DS-field in Class Selector PHB
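The xxx000 pattern above can be made concrete with a few lines of bit arithmetic (the function names are ours):

```python
# The three IPv4 precedence bits become the high-order bits of the
# DSCP, giving the eight Class Selector Codepoints of the form xxx000.

def class_selector_codepoint(precedence):
    """Map an IPv4 precedence value (0-7) to its Class Selector DSCP."""
    assert 0 <= precedence <= 7
    return precedence << 3            # xxx000: low three bits are zero

def is_class_selector(dscp):
    """True when a 6-bit DSCP matches the xxx000 pattern."""
    return (dscp & 0b000111) == 0

# The eight codepoints, as 6-bit strings: '000000', '001000', ... '111000'
codepoints = [format(class_selector_codepoint(p), "06b") for p in range(8)]
```

Note that 000000 (precedence 0) is simultaneously the Default PHB codepoint, which is how the backward compatibility described above falls out of the encoding.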

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability, as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values; the customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

- how much forwarding resources have been allocated to the AF class that the packet belongs to (packets in one class are forwarded independently from the others and within the service rate (bandwidth) that has been configured for the class);
- what the current load of the AF class is;
- in case of congestion within the class, what the drop precedence of the packet is (packets with a higher drop precedence value are preferentially discarded).

Table 1. The four AF classes and the related three drop precedence values

                              Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)       001010    010010    011010    100010
Medium Drop Precedence (2)    001100    010100    011100    100100
High Drop Precedence (3)      001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
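The codepoints in Table 1 decompose mechanically into the class and drop-precedence fields: three class bits, then the drop precedence, with the last bit zero, i.e. DSCP = class × 8 + dp × 2. A short sketch (the function name is ours):

```python
# Build the AFnm codepoint of Table 1 from the class number n (1-4)
# and the drop precedence m (1-3): DSCP = (n << 3) | (m << 1).

def af_codepoint(af_class, drop_prec):
    assert 1 <= af_class <= 4 and 1 <= drop_prec <= 3
    return format((af_class << 3) | (drop_prec << 1), "06b")

# Reproduce the medium-drop-precedence row of Table 1 across all classes:
row = [af_codepoint(c, 2) for c in range(1, 5)]
# -> ['001100', '010100', '011100', '100100']
```

Reading a row or column of Table 1 against this formula is a quick consistency check of the table.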

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter on an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the task of the traffic conditioners at the network boundary to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate in diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.
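The EF invariant above, keeping the aggregate's maximum arrival rate below its minimum departure rate at every node, can be illustrated with a toy fluid-queue sketch (the discrete model and the numbers are assumptions for illustration):

```python
def max_backlog(arrival_bps, departure_bps, dt=1.0):
    """Worst-case queue backlog (bits) over a sequence of per-interval arrival rates."""
    q = worst = 0.0
    for a in arrival_bps:
        q = max(0.0, q + (a - departure_bps) * dt)  # fluid queue update
        worst = max(worst, q)
    return worst

# Arrival rate always below the departure rate: no queue ever builds.
print(max_backlog([400_000, 450_000, 480_000], departure_bps=500_000))
# A burst above the service rate creates a transient backlog.
print(max_backlog([600_000, 400_000], departure_bps=500_000))
```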

In the following, two example services are presented, one built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface;
b) from ingress interface A to a set of egress interfaces;
c) between interface A and some egress interface B.

Some things are common to both examples. First of all, policers need to be configured at the traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus sees reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.

Table 2 TCA specification for BBE service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark   1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.
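The BBE marking rule of Table 2 can be sketched with a simple fluid token bucket; the class names and the 10 kb burst allowance are assumptions for illustration, not parameters from the thesis:

```python
class TokenBucket:
    """Fluid token bucket: rate in bits/s, depth in bits (illustrative)."""
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth
        self.tokens, self.last = depth, 0.0

    def conforms(self, now, size_bits):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bits:
            self.tokens -= size_bits
            return True
        return False

def bbe_mark(bucket, now, size_bits):
    # Within the 1 Mbps profile the packet keeps its AF13 mark;
    # excess traffic is re-marked AF11 (Table 2).
    return "AF13" if bucket.conforms(now, size_bits) else "AF11"

tb = TokenBucket(rate=1_000_000, depth=10_000)            # 1 Mbps profile
print([bbe_mark(tb, t / 1000, 8_000) for t in range(5)])  # 8 kb packet every 1 ms
```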

The second example is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate traffic is dropped. Corporate VPNs and IP telephony are two likely applications for this service.

Table 3 TCA specifications for Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are denoted A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.
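The two TCAs of Table 3 amount to a per-egress policer that forwards up to the profile rate and discards the rest; a minimal sketch (the table and function names are hypothetical):

```python
# TCAs at ingress A (Table 3): egress point -> profile rate in bps.
TCA = {"B": 500_000, "C": 250_000}

def police(egress, offered_bps):
    """Split an offered rate into (forwarded, discarded) bps; excess is dropped."""
    rate = TCA[egress]
    return min(offered_bps, rate), max(0, offered_bps - rate)

print(police("B", 600_000))  # 100 kbps over profile is discarded
print(police("C", 200_000))  # under profile: nothing discarded
```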

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic in case both leased lines cross the same router.

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies that focus on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. Their results showed that the proposed scheme decreased TCP impacts on the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets so as to take TCP dynamics into account and adapt fairness for sharing the bandwidth of a network's bottleneck link, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the state used for the marking [13]. A device's marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have drawbacks. First, since the marking module in per-flow marking gives each flow a fixed fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth: "idle" flows waste their shares while preventing "active" flows from increasing. Second, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.
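A toy calculation illustrates the per-flow drawback: splitting an aggregate target equally over the flows wastes the share of an idle flow while capping the active ones (the numbers are invented for illustration):

```python
def per_flow_in_rate(target_aggregate, demands):
    """Per-flow marking sketch: each flow gets an equal slice of the target."""
    share = target_aggregate / len(demands)
    return [min(d, share) for d in demands]

# Flows demand 2, 1.5 and 0 Mbps against a 3 Mbps aggregate target:
# the idle flow's 1 Mbps share is wasted, so only 2 Mbps is marked IN.
rates = per_flow_in_rate(3.0, [2.0, 1.5, 0.0])
print(rates, sum(rates))
```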

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9 The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the component of the fairness strategy, viz. the marking rate, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I selects the window (iwnd or ownd) and reduces its size according to whether the dropped packets were IN-marked or OUT-marked, and so TMS_I decreases TCP impacts in the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) keeps up the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. The key element of the fairness strategy is therefore the marking rate (i.e., the information about a flow's demand for bandwidth) of the traffic from users. TMS_II re-marks (or marks) the aggregated flows toward the total target rate of all the flows, based on the marking probabilities from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow in proportion to what it requires.

Fig. 10 The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rates. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and Cj represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is its reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable amount, the fairness component (Xmi) of the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability viewpoint of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point;
(ii) all traffic from ingress point S to a set of egress points;
(iii) all traffic between ingress point S and egress point D;
(iv) all traffic between any ingress point and egress point D.

For the first two cases of the service scope above, the capacity of the network is C = Σ_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at these bottleneck links is then given by

    E = C - R = Σ_{j=1}^{m} C_j - Σ_{i=1}^{n} R_i        (1)

In an edge-to-edge SLA, namely the third case of the service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

    E = C - R = C - Σ_{i=1}^{n} R_i        (2)

The last case of the service scope, tied to the receiver, however, requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates in order to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik, where k, k = 1, 2, ..., l, denotes the k-th ingress point. The excess bandwidth is then given by

    E = C - R = C - Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik        (3)

where the l-th ingress point has to be aware of the reservation rates of the other l - 1 ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System can thus be implemented under various SLA scopes, but for the reason above we assume in this paper that the scope of a service covers the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

    T_i = R_i + X_mi E = R_i + X_mi (C - Σ_{i=1}^{n} R_i)        (4)

where X_mi is proportional to the probability of marking at the source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of X_mi is as follows. First, the marking probability of each flow i is the number of marked packets (N_mi) divided by the throughput (Thr_i), or equivalently the In-profile window divided by the congestion window:

    m_i = N_mi / Thr_i = iwnd / cwnd        (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] over all the flows at the edge of a network is

    E[m_i] = (1/n) Σ_{i=1}^{n} m_i = (1/n) Σ_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1}^{n} Mark_p,i        (6)

where Mark_p also denotes the probability of marking in the TMS_I algorithm of Section 3.3. Finally, X_mi is obtained from the ratio between (5) and (6):

    X_mi = m_i / (n E[m_i]) = m_i / Σ_{i=1}^{n} m_i        (7)

That is, X_mi represents a relative value against the datum point E[m_i]; it varies from 0 to 1 depending on m_i and E[m_i]. Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
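Equations (2), (4) and (7) combine into a short computation of the edge target rates; a sketch under the assumption of a single bottleneck and invented rates:

```python
def edge_targets(C, R, mark_p):
    """Target rates per Eqs. (2), (4), (7): T_i = R_i + X_mi * (C - sum(R))."""
    E = C - sum(R)                          # excess bandwidth, Eq. (2)
    X = [m / sum(mark_p) for m in mark_p]   # fairness component, Eq. (7)
    return [r + x * E for r, x in zip(R, X)]

# 10 Mbps bottleneck, 2 and 3 Mbps reservations; flow 1 marks twice as hard,
# so it receives two thirds of the 5 Mbps excess.
print(edge_targets(10.0, [2.0, 3.0], [0.8, 0.4]))
```

Note that the targets always sum to the capacity C, since the X_mi sum to 1.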

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) consists of two windows in this algorithm [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies differ from the one studied here. We improve the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate as high as the bandwidth the source wants to have guaranteed.


Fig. 11 Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking, Mark_p(init)? It is determined by the ratio between the measured throughput (Thr) and the target throughput (T_i = R_i), as follows:

    Mark_p(init) = 1 - Thr / R_i        (8)

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    }
    else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        }
        else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd

where Mark_p(init) represents the initial marking rate the user needs in order to realise its target rate.
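The initial marking probability of Eq. (8), read here as Markp(init) = 1 - Thr/R_i, shrinks as the measured throughput approaches the reservation; a sketch (the clamp to [0, 1] is an added assumption for robustness):

```python
def markp_init(thr, R_i):
    """Initial marking probability, Eq. (8): Markp(init) = 1 - Thr / R_i."""
    return min(1.0, max(0.0, 1.0 - thr / R_i))

print(markp_init(0.0, 1.0))   # no throughput yet: mark everything IN
print(markp_init(0.75, 1.0))  # near the reservation: mark only a quarter IN
```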

The window size increases or decreases according to the measured throughput relative to the reservation rate. That is, TMS_I marks packets proportionally, according to whether the throughput (Thr) measured at the TCP source reaches its target rate or not. If the measured throughput doesn't reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed; otherwise it increases the ownd size, so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd, so an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.
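One per-ACK step of Program code 1 can be sketched in a few lines; variable names follow the listing, and the floating-point window model is an assumption:

```python
def tms1_on_ack(iwnd, ownd, issthresh, ossthresh, thr, R, markp):
    """One ACK update of Program code 1 (sketch): grow iwnd while below the
    reservation rate, otherwise yield IN share; cwnd = iwnd + ownd throughout."""
    cwnd = iwnd + ownd
    if thr < R:                                   # below reservation: grow iwnd
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                                         # reservation met: yield iwnd
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return iwnd, ownd, iwnd / cwnd                # new windows and Markp

print(tms1_on_ack(4.0, 4.0, 16, 16, thr=0.5, R=1.0, markp=0.5))
```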

The else branch of Program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also addresses a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network connection is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I decreases the iwnd size appropriately, according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being dropped immoderately at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate better than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size is kept low, and in the end each source doesn't achieve its target rate. On the other hand, because TMS_I recognizes packet losses and determines whether the lost packet was sent as a marked or an unmarked packet, it can regulate the marking rate properly according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window is not decreased as often as with the established edge marker.

Fig. 12 Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd/2
        ownd = ossthresh
    }
    else {
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the i-th flow
Ti: target rate of the i-th flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd

The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After a third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever the loss is of an OUT packet; this indicates temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion in the network, and then the congestion window is halved. The modified TCP congestion algorithm therefore compensates for the loss rates caused by the conventional congestion algorithm, which has no such distinct mechanism for reducing the congestion window.
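The loss handling of Program code 2 in the same sketch style; how the halved cwnd is redistributed between iwnd and ownd is our assumption, not spelled out in the listing:

```python
def tms1_on_3dupack(iwnd, ownd, markp, out_loss):
    """Program code 2 (sketch): an OUT loss (temporary congestion) halves only
    ownd; an IN loss (acute congestion) halves the whole congestion window."""
    if out_loss:
        ossthresh = ownd / 2
        ownd = ossthresh
        issthresh = iwnd
    else:
        cwnd = (iwnd + ownd) / 2
        issthresh = markp * cwnd
        ossthresh = cwnd - issthresh
        iwnd, ownd = issthresh, ossthresh   # redistribution is an assumption
    return iwnd, ownd, issthresh, ossthresh

print(tms1_on_3dupack(6.0, 4.0, 0.5, out_loss=True))
print(tms1_on_3dupack(6.0, 4.0, 0.5, out_loss=False))
```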

The goals of TMS_I are to provide a modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) that is needed to determine the datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the use of this datum point by the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that carries per-user information is able to alleviate the unfairness introduced within aggregated flows by per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to share the bottleneck bandwidth fairly. It must closely cooperate with the TCP-source marking module to accomplish this fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which is used in the fairness component as stated in Section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed from the sources.

We assume that an edge router receives the information about the reservation rate that each user wants to be assured of, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall expression (4). As stated in Section 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the marking probabilities from the sources if the network is over-provisioned; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents the increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, that is, the average marking rate E[m_i], this means that the flow wants more bandwidth than the others in order to achieve its reservation rate; it is therefore marked IN more and gets a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregated flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to take account of congestion situations, the other is to ensure the coherence of the Adequate-Marking strategy. A congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the corresponding size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half the ownd size at time N-1, congestion has not occurred in the network. The first if clause of Program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After deciding about congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput doesn't reach the target rate, the marker increases the marking rate by α, called a fairness factor, computed as

    α = 1 - Thr_i / T_i

Otherwise it decreases the marking rate by α, to give other flows the chance of sharing the excess bandwidth.
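The fairness step can be sketched directly from the α formula; taking the magnitude of α in the decrease branch is our reading of "decreases the marking rate by α":

```python
def tms2_fairness_step(M_i, thr_i, T_i):
    """Scale flow i's marking rate by the fairness factor alpha = 1 - Thr_i/T_i."""
    alpha = abs(1.0 - thr_i / T_i)
    if thr_i < T_i:
        return M_i * (1.0 + alpha)   # below target: mark more IN
    return M_i * (1.0 - alpha)       # above target: release excess share

print(tms2_fairness_step(0.5, thr_i=2.0, T_i=4.0))  # 50% shortfall: mark more
print(tms2_fairness_step(0.5, thr_i=5.0, T_i=4.0))  # 25% over target: mark less
```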

Fig. 13 Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2) {
    for (i = 1; i <= n; i++) {
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
    }
}
else if (iwnd(N) != iwnd(N-1)/2)
    exit
else {    /* iwnd(N) == iwnd(N-1)/2 */
    for (i = 1; i <= n; i++) {
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit
    }
}

Mi: marking rate of the i-th flow
Ti: target rate of the i-th flow
Ri: reservation rate of the i-th flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1

Next, if the ownd size at time N is equal to half the ownd size at time N-1 while the iwnd size at time N is not equal to half the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else-if clause represents this instance: the edge's marking rate is equal to the marking probability of the sources (the marking information), and so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the final else clause. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to consummate its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, a so-called complement factor, if the throughput doesn't realise the user's reservation rate. In this instance the complement factor is

    β = 1 - Thr_i / R_i

Otherwise, no re-marking operation is performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to the flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-marked packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core in a time of temporary congestion, the TCP sources, mistaking this for acute congestion, will halve their iwnd sizes; TMS_I then fails to alleviate the TCP dynamics over the assured service. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve only ownd, which delays the removal of the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked OUT at the edge of the network but IN at the source; an OUT-IN packet is the counterpart. We assume that the drop precedence the source decides must not change, and that the strategy TMS_II elaborates has to respect it. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network doesn't complete the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN, so as to take up the remaining resources. In this instance three drop precedences, IN, OUT-IN and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is removed, the TCP source halves its ownd size. Second, TMS_II turns packets IN-marked at the sources into OUT if the congestion window has not been halved and the flow already accomplishes its target rate, so the drop precedences consist of IN, IN-OUT and OUT. TMS_II lowers the drop precedence of such packets by one step and so allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow doesn't achieve its target rate yet, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network; when an OUT-IN packet is dropped, TMS_I halves its ownd size. In this way TMS tries to compensate for the loss rate occurring in times of congestion by lowering the drop precedence of OUT packets one step.
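The three-precedence drop order of this section, sketched as a sorting key (the string labels are illustrative):

```python
# Drop preference under congestion (Section 3.5): OUT (DP2) is discarded first,
# then the re-marked IN-OUT / OUT-IN packets (DP1), and IN (DP0) last.
DROP_RANK = {"OUT": 0, "OUT-IN": 1, "IN-OUT": 1, "IN": 2}

def drop_order(packets):
    """Return the packet marks in the order RIO would prefer to discard them."""
    return sorted(packets, key=DROP_RANK.__getitem__)

print(drop_order(["IN", "OUT-IN", "OUT", "IN-OUT"]))
```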

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows won't change and will get the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be completed.

UDP flows in the TMS are treated as out-of-profile, placing them at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always drops UDP packets marked OUT (red, DP2) preferentially, while lowering the drop precedence of TCP packets according to the scaling factors. TMS therefore tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows would otherwise dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous section. The simulations were run with the network simulator 2 (ns-2, version 2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that each communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed and the aggregate reservation rate, adjusted by changing the number of flows with a target rate of 3Mbps, stays below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters for TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax

DP0     100      400      0.02

DP1     100      400      0.05

DP2     50       400      0.1
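The parameters in Table 4 translate into per-precedence drop probabilities. A minimal sketch of the RIO drop decision, using the table's (min_th, max_th, Pmax) triples and the standard RED linear ramp (the function and dictionary names are ours):

```python
# (min_th, max_th, Pmax) per drop precedence, from Table 4
RED_PARAMS = {
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50, 400, 0.1),
}

def drop_probability(dp, avg_queue):
    """Linear RED ramp: 0 below min_th, rising to Pmax at max_th,
    forced drop above max_th."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return p_max * (avg_queue - min_th) / (max_th - min_th)

# At the same average queue length, OUT (DP2) packets are dropped
# far more aggressively than IN (DP0) packets:
for dp in ("DP0", "DP1", "DP2"):
    print(dp, round(drop_probability(dp, 250), 4))
```

In an actual RIO queue the average queue length would be an EWMA of the instantaneous length with the weight 0.002 quoted above; the ramp itself is unchanged.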

First of all, we investigate the simulation results with the Adequate-marking algorithm serving as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3Mbps, where the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10s, followed by two flows (flow6 and flow7) at t = 120s, and the last three flows (flow8 through flow10) at t = 220s, which stop at t = 350s. First, the interval from t = 10s to t = 110s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5Mbps; marking is performed initially, and thereafter only temporarily for sharing the excess bandwidth. Second, from t = 120s to t = 210s, under soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps; marking in this interval is carried out in earnest so as to realise the user's reservation rate. Next, grave congestion appears from t = 220s to t = 350s; all the flows do their best to meet their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330s to t = 400s, TMS momentarily over-marks packets owing to the congestion remaining after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the conventional edge marker continuously over-marks its packets, as shown in Figure 15(a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm at the sources, Figure 15(b) shows that the number of packets marked to accomplish the target rate is smaller than in Figure 15(a). The flows in the two graphs demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15(b) mark each packet in accordance with the changing condition of the network, those in Figure 15(a) do not.

[Figure 15: two plots of bandwidth (Mbps) versus simulation time (s), one curve per flow, flow1 through flow10]

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig 15 Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The network condition changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics, achieved by governing the marking rate. S1 is a TCP flow sending indefinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16(a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate, lagging behind changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of resources by the amount of the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16(b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, more packets are marked; however, packet-marking is quickly turned off when the system detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16(a).
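The scaling-factor behaviour described above can be sketched as a simple multiplicative update of the marking rate. The update rule and the example values of α and β below are our illustrative assumptions, not the thesis's exact formula:

```python
def adjust_marking_rate(rate, throughput, target, alpha=1.25, beta=0.8):
    """Scale the marking rate up when the flow is below its target rate
    and down when it meets or exceeds it; the result is clamped to [0, 1]."""
    if throughput < target:
        rate *= alpha   # ask for more bandwidth: mark more packets
    else:
        rate *= beta    # target met: release marking quickly
    return max(0.0, min(1.0, rate))

rate = 0.5
rate = adjust_marking_rate(rate, throughput=2.0, target=3.0)  # below target
print(rate)  # -> 0.625
rate = adjust_marking_rate(rate, throughput=3.4, target=3.0)  # above target
print(rate)  # -> 0.5
```

The multiplicative form is what gives the fast rise and fall the text attributes to TMS; a marker that only averages over a sliding window (as TSW2CM does) necessarily reacts more slowly.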

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, i.e., an under-subscribed network. The experiment ran from t = 0s to t = 500s. Figure 17 shows that the average throughputs of all the individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, shown in Table 5, are similar to one another. Each flow in Figure 17(a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17(b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability whenever the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the marking probability of the sources, all the flows are able to adapt to network dynamics.

[Figure 16: two plots of bandwidth (Mbps) versus simulation time (s), showing S1 (total throughput), S2 (total throughput), and S1 (marking rate)]

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig 16 Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time.


[Figure 17: two plots of bandwidth (Mbps) versus aggregate flow (1-6), one curve per flow, flow1 through flow10]

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flow number.

The Two Markers System showed performance improvements for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of aggregate flows as RTT increases

Aggregate flow (No.)        1        2        3        4        5        6

RTT (ms)                    80       100      130      170      220      280

E[Thr] of TSW2CM (Mbps)     4.8989   4.9703   4.9183   4.9391   4.9247   4.9098

E[Thr] of TMS (Mbps)        5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulations we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that each communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows send traffic at the rate of 1Mbps. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increase the number of TCP flows with a target rate of 1Mbps from sources 1 to 4 while each UDP flow is constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take network dynamics into consideration. We define the link utilisation (ρ) as follows [16]:

ρ = (Σi Ti) / C        (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link, respectively. The interval from ρ = 100% to 200%, as shown in Figure 19, represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100        (10)

Therefore, an excess bandwidth of 0% denotes the target rate of each flow. First, as shown in Figure 19(a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
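Equations (9) and (10) can be checked numerically for the topology of this section. This small sketch uses the 6 × 1Mbps reservation and the 8Mbps/3Mbps bottleneck capacities from the two scenarios above (function names are ours):

```python
def link_utilisation(target_rates, capacity):
    """Equation (9): rho = sum(T_i) / C, expressed here as a percentage."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(achieved, target):
    """Equation (10): excess bandwidth over (or under) the target rate,
    expressed as a percentage of the target rate."""
    return 100.0 * (achieved - target) / target

# Six 1Mbps flows (4 TCP + 2 UDP) over an 8Mbps and then a 3Mbps bottleneck:
print(link_utilisation([1.0] * 6, 8.0))  # -> 75.0   (over-provisioned)
print(link_utilisation([1.0] * 6, 3.0))  # -> 200.0  (under-provisioned)
print(excess_bw_percent(1.5, 1.0))       # -> 50.0   (50% above target)
print(excess_bw_percent(0.5, 1.0))       # -> -50.0  (50% below target)
```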


[Figure 19: two plots of excess bandwidth (%) versus link utilisation (%), showing TCP (average) and UDP (average)]

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis the link utilisation as a percentage.

However, since UDP sources inject traffic into the network without any penalty, they achieve their target rates under all conditions and consume all the excess bandwidth even in the under-provisioned state; the only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume the following: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If the number of marked packets of a flow exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and can achieve a higher rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19(b), is fairer than in Figure 19(a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19(b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisation ρ = 60% and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
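The edge-embedded module's threshold test against the average marking rate E[mi] can be sketched as follows; the function name and the list-based bookkeeping are our illustrative assumptions:

```python
def wants_more_bandwidth(marking_rates, i):
    """Edge-embedded module's heuristic: flow i is considered to be
    demanding more bandwidth when its marking rate exceeds the average
    marking rate E[m_i] over all flows (a sketch of the rule in the text)."""
    avg = sum(marking_rates) / len(marking_rates)
    return marking_rates[i] > avg

rates = [0.2, 0.9, 0.3, 0.4]   # hypothetical per-flow marking rates
print(wants_more_bandwidth(rates, 1))  # -> True  (0.9 > average 0.45)
print(wants_more_bandwidth(rates, 0))  # -> False
```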

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows the results in which the average throughputs of all the individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps, respectively. The setting of the RTT parameters follows subsection 4.1, and the average throughputs of all the flows in both schemes are similar to one another. First, we consider the results of the experiment in the over-provisioned network. In Figure 20(a), while the TCP flows barely accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20(b) fairly share the excess bandwidth of about 0.3Mbps, thanks to the marking strategy that differentiates them from UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20(a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of TCP


flows. However, Figure 20(b) shows that the bandwidth in the under-provisioned network is distributed more fairly (about 0.5Mbps per flow) than in Figure 20(a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20(b).

[Figure 20: two plots of bandwidth (Mbps) versus aggregate flow (1-4), showing TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned networks]

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved throughput, and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this context, and much research has been carried out on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate, owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flow, due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and allocate the bandwidth of a bottleneck link fairly to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the marking probability according to changes in the network, so it marks packets in proportion to the reservation rates the users desire to guarantee. It also notifies TMS_II, the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, the Marking rate-based fairness strategy, when resources in the diffserv network are over-provisioned. Consequently, it mitigates the unfairness of per-aggregation marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduces TCP's impact on the assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also analysed the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP would otherwise dominate and achieve their target rates. Next, in an under-provisioned network, although none of the flows could achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, D. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 special assignment in telecommunications engineering (Teletekniikan erikoistyö), 1999.

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," ACM/IEEE Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS 2000, Pittsburgh, PA, June 2000, pp. 35-48.


In addition to conditioning at boundary nodes, it is recommended that traffic be conditioned as close to the sending host as possible. This way, potentially fewer classification and traffic conditioning rules need to be supported in the downstream DS domain, and the requested service level is more likely to be met. Therefore, a limited number of conditioning rules (i.e., some, but not all, of the TC components in Fig 7) may need to be applied within the originating host's domain.

A Per-Hop Behaviour (PHB) is a description of the externally observable forwarding behaviour of a DS node, applied to a particular DS behaviour aggregate. PHBs provide the means by which a node allocates resources to behaviour aggregates. Similar requirements for packet loss, delay, and jitter, for example, are the factors that make up a behaviour aggregate to which uniform forwarding behaviour is applied.

A provider of differentiated services decides what services are available in his network. The services probably have some common characteristics, so that they can be divided into a few manageable groups. The provider commits himself to providing physical resources so that his customers receive the services they pay for. Based on these decisions, the provider selects the Per-Hop Behaviours that are required for implementing the services, as well as negotiating the Service Level Agreements with his customers. Finally, the PHBs and SLAs are configured into the network's nodes.

PHBs are gathered into PHB groups for manageability and consistency. PHBs are grouped based on similar properties, such as the bandwidth requirements or behavioural characteristics mentioned above; a single PHB is a special case of a PHB group. The relationship between PHBs in a group may be expressed in terms of absolute or relative priority. PHBs are implemented in nodes by means of buffer management and packet scheduling mechanisms that are configured into traffic conditioners.

PHBs that are intended for wide usage are standardized within the IETF. Standardized PHBs have a recommended codepoint value, which is set in packets that are intended to receive the treatment specified by the PHB. Multiple codepoint values may be mapped to a single PHB. Every codepoint in use must be mapped to some PHB, as the treatment for packets without a specified local policy is either mapping to the Default PHB (best-effort service) or discarding. In addition to standardized ones, there may exist locally defined PHBs in a network; these PHBs may be, e.g., experimental, or they may use a local service, and traffic mapped to them is kept within the originating network. As the available space in the TOS field (see Fig 6) is limited, and room is left for future usage, the codepoint mappings can be made freely by network administrators within independent networks. However, this may require re-marking at the network boundary.

A DS-capable network's resources are provisioned to services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation to individual PHBs within a group can be based on, e.g., prioritising one over another. PHB specifications give behavioural characteristics rather than implementation guidelines; this leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of the negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity, and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X% of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following; these PHBs provide the basis for the service examples presented in the previous paragraph. As explained above, none of the PHBs is mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv cannot be taken into use if it does not provide backward compatibility, and the Default PHB exists for this purpose. It provides the best-effort behaviour that current routers perform. The Default PHB is used for packets for which no other agreement exists, and it gets the lowest priority compared to all other PHBs. Therefore, any traffic that does not conform to its profile can easily be either remarked to the Default PHB or discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which do not employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The Default PHB alone is not enough to provide sufficient backward compatibility. The precedence bits (see Fig 6) of the IPv4 TOS field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as the set of Class Selector Codepoints (see Fig 8). Compatibility with the 'D', 'T', and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig 8 Usage of DS-field in Class Selector PHB
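The eight Class Selector Codepoints of Fig 8 follow directly from the xxx000 pattern and can be enumerated rather than memorised (illustrative Python, not part of the thesis):

```python
# The eight Class Selector codepoints: bit patterns xxx000, where the
# leading three bits are the old IPv4 precedence bits.
class_selector_codepoints = [format(p, "03b") + "000" for p in range(8)]
print(class_selector_codepoints)
# -> ['000000', '001000', '010000', '011000', '100000', '101000', '110000', '111000']

def precedence(dscp):
    """Recover the precedence value (0-7) from a class selector codepoint;
    a greater precedence means a higher forwarding priority."""
    return int(dscp[:3], 2)

print(precedence("110000"))  # -> 6
```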

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies, especially, use extensively. In a typical application, a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes with different levels of forwarding assurance and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets of each class are marked with one of three drop precedence values; the customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

- how much forwarding resource has been allocated to the AF class the packet belongs to (packets in one class are forwarded independently of the others, within the service rate, i.e., bandwidth, configured for the class);

- the current load of the AF class; and

- in case of congestion within the class, the drop precedence of the packet (packets with a higher drop precedence value are discarded preferentially).

Table 1 The four AF classes and the related three drop precedence values

                               Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)        001010    010010    011010    100010
Medium Drop Precedence (2)     001100    010100    011100    100100
High Drop Precedence (3)       001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
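The codepoints of Table 1 follow a regular bit layout: the class number occupies the top three bits and the drop precedence the next two, with a trailing zero. A small sketch reproducing the table (the helper name is our own):

```python
def af_dscp(class_n, prec_m):
    """DSCP for AF class n (1-4) with drop precedence m (1-3): bits 'nnn mm 0'."""
    assert 1 <= class_n <= 4 and 1 <= prec_m <= 3
    return (class_n << 3) | (prec_m << 1)

# Reproduce Table 1, e.g. AF11 = 001010, AF23 = 010110, AF43 = 100110.
for n in range(1, 5):
    print(" ".join(f"AF{n}{m}={af_dscp(n, m):06b}" for m in range(1, 4)))
```

Each generated pattern matches the corresponding cell of Table 1.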

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the task of the traffic conditioners at the network boundary to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed the rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.
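The EF invariant stated above, that at every transit node the aggregate's maximum arrival rate stays below its minimum departure rate, can be checked with a trivial sketch (the function name and the rate model are our own assumptions):

```python
def ef_feasible(arrival_rates_bps, departure_rate_bps):
    """EF holds at a node only if the sum of the conditioned aggregates'
    maximum arrival rates stays below the configured minimum departure rate."""
    return sum(arrival_rates_bps) < departure_rate_bps

# Two boundary-conditioned EF aggregates entering one node provisioned at 1 Mbps:
print(ef_feasible([500_000, 250_000], 1_000_000))  # no standing queue forms
```

If the condition fails, a queue can build up and the low-loss, low-jitter guarantee no longer holds.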

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between the interface A and some egress interface B

Both examples have some things in common. First of all, policers need to be configured at traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus obtains reduced latency. Quantitative performance measures cannot be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2 TCA specification for BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.
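The Table 2 policer can be sketched as a token bucket at the 1 Mbps aggregate rate, where conforming packets keep the AF13 mark and excess traffic is re-marked AF11. The class name, burst size and timing model below are illustrative assumptions, not part of the TCA:

```python
class BbeMarker:
    """Token-bucket policer sketch for the BBE TCA of Table 2:
    traffic within 1 Mbps keeps the AF13 mark, excess is re-marked AF11."""
    def __init__(self, rate_bps=1_000_000, burst_bytes=1500):
        self.rate = rate_bps / 8.0        # token refill rate in bytes/second
        self.burst = float(burst_bytes)   # bucket depth in bytes
        self.bucket = float(burst_bytes)  # currently available tokens
        self.last = 0.0                   # timestamp of the previous packet

    def mark(self, pkt_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.bucket = min(self.burst, self.bucket + (now - self.last) * self.rate)
        self.last = now
        if self.bucket >= pkt_bytes:
            self.bucket -= pkt_bytes
            return "AF13"                 # in profile
        return "AF11"                     # excess traffic is re-marked
```

For example, two back-to-back 1500-byte packets at t = 0 exhaust the bucket: the first is marked AF13 and the second AF11.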

The second example is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3 TCA specifications for Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C, respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both of the egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic in case both of the leased lines cross the same router.

2.3 Related Work

Recently there has been a considerable amount of simulation work on marking and dropping policies focused on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. Their results showed that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation work, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to adapt fairness for sharing the bandwidth of a network's bottleneck link, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have faults. First, since the marking module in per-flow marking gives each flow a fixed fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth: "idle" flows waste their shares while preventing "active" flows from increasing. Second, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.
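The inefficiency of per-flow marking can be made concrete with a toy calculation; the function names and the demand figures are illustrative assumptions:

```python
def per_flow_shares(aggregate_target_bps, n_flows):
    """Per-flow marking: each flow gets a fixed 1/n slice of the aggregate
    target, whether or not it is currently active."""
    return [aggregate_target_bps / n_flows] * n_flows

def wasted_share(aggregate_target_bps, demands_bps):
    """Reserved bandwidth that idle flows leave unused under per-flow marking."""
    share = aggregate_target_bps / len(demands_bps)
    return sum(max(share - d, 0) for d in demands_bps)

# Four flows share a 4 Mbps profile; two are idle, so 2 Mbps of the
# reservation is wasted while the active flows stay capped at 1 Mbps each.
print(wasted_share(4_000_000, [3_000_000, 2_500_000, 0, 0]))
```

Per-aggregation marking avoids this waste but, lacking per-flow state, cannot tell which flow deserves the freed share; this is exactly the gap the Two Markers System addresses.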

3.1 The framework of Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets that it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I selects the window (iwnd or ownd) and reduces its size according to whether the dropped packets were IN-marked or OUT-marked, and so TMS_I decreases TCP impacts on the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) keeps up the marking rate more properly than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also adequately marks packets to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to properly mark packets. So to speak, TMS_I can be not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of the network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key element of the fairness strategy is the marking rate (i.e., information on a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows toward the total target rate of all the flows, based on the probability of marking from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10 The design of the Two Markers System architecture

3.2 The crux of fairness strategy

Note that in this system the sources' target rates are different from the edge's target rates. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and let Cj represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is its reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of a network in the first three cases of service scope above is C = \sum_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at the bottleneck links is then given by

E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i    (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C - R = C_1 - \sum_{i=1}^{n} R_i    (2)

The last case of service scope, tied to the receiver, however requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the information on reservation rates received at an ingress point becomes R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}, where k, k = 1, 2, ..., l, represents the k-th ingress point. The excess bandwidth is given by

E = C - R = C_1 - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}    (3)
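The excess-bandwidth computations of equations (1)-(3) reduce to simple sums over link capacities and reservations; a sketch in Python (the function name is our own):

```python
def excess_bandwidth(link_capacities_bps, reservations_bps):
    """E = sum_j C_j - sum_i R_i, as in eq. (1); with a single bottleneck
    (m = 1) this collapses to eq. (2). For the receiver-tied scope of
    eq. (3), reservations_bps would hold R_ik summed over all ingress points."""
    return sum(link_capacities_bps) - sum(reservations_bps)

# One 10 Mbps bottleneck, three sources reserving 2, 3 and 1 Mbps:
print(excess_bandwidth([10_000_000], [2_000_000, 3_000_000, 1_000_000]))  # 4000000
```

A positive E corresponds to an over-provisioned network; a negative E means the reservations exceed the bottleneck capacity.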


Here the l-th ingress point has to be aware of the reservation rates of as many as l-1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System could be implemented under variable SLA scopes, but for the reason above we assume that the scope of a service in this paper includes the first three cases and excludes the last. A flow's target rate at the edge of the network is then given by

T_i = R_i + X_{mi} \times E = R_i + X_{mi} (C - \sum_{i} R_i)    (4)

where Xmi is proportional to the probability of marking at a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. Xmi is developed as follows. First, the probability of marking for each flow i is the number of marked packets (Nmi) divided by the throughput (Thri), or equivalently the In-profile window divided by the congestion window:

m_i = N_{mi} / Thr_i = iwnd / cwnd    (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[mi] of all the flows at the edge of the network is

E[m_i] = \frac{1}{n} \sum_{i=1}^{n} m_i = \frac{1}{n} \sum_{i=1}^{n} \frac{iwnd_i}{cwnd_i} = \frac{1}{n} \sum_{i=1}^{n} Mark_{p,i}    (6)

where Markp also denotes the probability of marking in the TMS_I algorithm of Section 3.3. Finally, Xmi is obtained from the ratio between (5) and (6); that is, Xmi represents a relative value against the datum point E[mi]:

X_{mi} = \frac{m_i}{n E[m_i]} = \frac{m_i}{\sum_{i=1}^{n} m_i}    (7)

The value of Xmi varies from 0 to 1, depending upon mi and E[mi]. Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) consists of two windows in this algorithm [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate up to the bandwidth the source wants to have guaranteed.
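The fairness component Xmi of equations (5)-(7) and the edge target rate Ti of equation (4) can be sketched together in a few lines; the function name and the example flow counts and rates below are illustrative assumptions:

```python
def fairness_components(marking_rates, reservations_bps, excess_bps):
    """Eqs (5)-(7): X_mi = m_i / sum(m_i), then the edge target rate of
    eq (4): T_i = R_i + X_mi * E. marking_rates holds the per-flow m_i
    values (e.g. iwnd/cwnd ratios or IN-marked packet counts)."""
    total = sum(marking_rates)
    x = [m / total for m in marking_rates]
    targets = [r + xi * excess_bps for r, xi in zip(reservations_bps, x)]
    return x, targets

# Three flows whose IN-marked packet counts are 2, 1 and 1, each reserving
# 2 Mbps, sharing 1 Mbps of excess bandwidth:
x, t = fairness_components([2, 1, 1], [2_000_000] * 3, 1_000_000)
print(x)  # [0.5, 0.25, 0.25]
print(t)  # [2500000.0, 2250000.0, 2250000.0]
```

The Xmi values sum to 1, so the whole excess bandwidth E is distributed among the flows in proportion to their marking rates.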


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        } else {
            ownd = cwnd
        }
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking, Markp(init)? It is generally determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as follows:

Mark_p(init) = 1 - Thr / R_i    (8)

where it represents the initial rate that the user wants to have guaranteed in order to realise its target rate.

The window size increases or decreases according to the measured throughput relative to the reservation rate. That is, TMS_I proportionally marks packets according to whether the throughput (Thr) measured at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed. Otherwise, it increases the ownd size so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of Program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network connection is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) keeps up the marking rate more properly than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet losses and determines whether a lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.
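The per-ACK window update of Program code 1 can be transcribed into an executable sketch; the function name, the per-ACK invocation model, and the example parameter values are assumptions of this sketch:

```python
def tms1_ack_update(cwnd, markp, thr, ri, issthresh, ossthresh):
    """Sketch of Program code 1 (Fig 11): split cwnd into iwnd/ownd by the
    marking probability, then grow or shrink the windows so the measured
    throughput (thr) moves toward the reservation rate (ri)."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if thr < ri:
        # Below the reservation: grow both windows, favouring the IN share.
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:
        # Reservation reached: hand the IN share back and grow the OUT share.
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd  # new congestion window and marking probability

# Below reservation (thr < ri): both sub-windows grow, Markp is preserved.
print(tms1_ack_update(10.0, 0.5, 2e6, 3e6, 8.0, 8.0))  # (11.0, 0.5)
```

When the throughput exceeds the reservation rate instead, the same call shrinks iwnd while ownd keeps growing, lowering the resulting marking probability.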

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd/2
        ownd = ossthresh
    } else {
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the i-th flow
Ti: target rate of the i-th flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion in the network, and then the congestion window is halved. Therefore the modified TCP congestion algorithm compensates for the loss rates that are caused by the conventional congestion algorithm, which has no distinctive mechanism for reducing the congestion window.
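The loss reaction of Program code 2 can likewise be sketched; the function name, return shape, and example values are assumptions:

```python
def tms1_loss_update(cwnd, markp, in_packet_lost):
    """Sketch of Program code 2 (Fig 12): on a triple-duplicate ACK, halve
    only ownd for an OUT-marked loss (temporary congestion), but halve the
    whole cwnd for an IN-marked loss (acute congestion)."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if not in_packet_lost:
        # OUT-marked loss: temporary congestion, shrink only the OUT share.
        ownd = ownd / 2                 # ossthresh = ownd/2; ownd = ossthresh
        cwnd = iwnd + ownd
    else:
        # IN-marked loss: acute congestion, halve the whole window.
        cwnd = cwnd / 2
        iwnd = markp * cwnd             # issthresh = Markp * cwnd
        ownd = cwnd - iwnd              # ossthresh = cwnd - issthresh
    return cwnd, iwnd, ownd

print(tms1_loss_update(10.0, 0.4, False))  # OUT loss: (7.0, 4.0, 3.0)
print(tms1_loss_update(10.0, 0.4, True))   # IN loss:  (5.0, 2.0, 3.0)
```

The asymmetry is the point: a lost OUT packet costs only part of the window, so a source that marked adequately is not punished for probing with excess traffic.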

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Markp) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point by the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that carries the user's information is able to alleviate the problem of unfairness within aggregated flows under per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of sources in order to fairly share the bottleneck bandwidth. To accomplish this fairness strategy, called the marking rate-based fairness strategy, it closely cooperates with the TCP-source marking module. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which are used in the fairness component as stated in Section 3.2. Notice that fairness is defined here as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed from the sources.

We assume that an edge router receives information on the reservation rate that each user wants to be assured of, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from that of the source; recall expression (4). As stated in Section 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the condition of the network is over-provisioned; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents the increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy that is elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, that is, the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of Program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding on congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called the fairness factor. In this case the fairness factor α is computed as

α = 1 - Thr_i / T_i

Otherwise, it decreases the marking rate by α to provide other flows with the chance of sharing the excess bandwidth.

Fig 13 Program code 3 TMS_II algorithm

if (ownd(N) != ownd(N-1)/2) {
    for (i = 1 to n) {
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
    }
} else if (iwnd(N) != iwnd(N-1)/2) {
    exit
} else {    /* iwnd(N) == iwnd(N-1)/2 */
    for (i = 1 to n) {
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit
    }
}

Mi: marking rate of the i-th flow
Ti: target rate of the i-th flow
Ri: reservation rate of the i-th flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else-if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (or the marking information), and so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and the congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to reach its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput does not realise the user's reservation rate. In this instance the complement factor β is

β = 1 - Thr_i / R_i

Otherwise, the re-marking operation is not performed.
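The three cases handled by Program code 3, together with the factors α and β, can be sketched as a single per-flow update; the function name and the encoding of the congestion state are assumptions of this sketch:

```python
def tms2_remark(mi, thr, ri, xmi, excess, congestion):
    """Sketch of Program code 3 (Fig 13) for one flow. congestion is None
    (ownd not halved: no congestion), "transient" (ownd halved, iwnd intact)
    or "acute" (iwnd halved). mi is the flow's current marking rate."""
    if congestion is None:
        ti = ri + xmi * excess          # edge target rate, eq (4)
        alpha = abs(1 - thr / ti)       # fairness factor
        return mi + alpha * mi if thr < ti else mi - alpha * mi
    if congestion == "transient":
        return mi                       # marking already matches: no re-marking
    # Acute congestion: fall back to Ti = Ri and compensate by beta.
    if thr < ri:
        beta = 1 - thr / ri             # complement factor
        return mi + beta * mi
    return mi

# A flow at 1 Mbps against a 3 Mbps edge target (Ri = 2 Mbps, Xmi = 0.5,
# E = 2 Mbps) has its marking rate scaled up by alpha = 2/3:
print(tms2_remark(3.0, 1e6, 2e6, 0.5, 2e6, None))
```

The symmetric decrease branch applies the same α magnitude downward when the flow already exceeds its target, which is how freed marking budget is returned to the other flows.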

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then fairly allocates bandwidth to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. These re-born packets are treated by the RIO queue at the core of the network unlike what the sources anticipate. If, for example, re-born OUT-marked packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size; TMS_I then no longer alleviates the TCP dynamics over the assured service well. On the other hand, if re-born IN-marked packets (OUT-IN), sent as OUT by the sources, are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve only ownd, which retards the removal of the congestion.

For this reason TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but as IN at the source; an OUT-IN packet is the counterpart. We assume that the drop precedence the source decides must not change, and the strategy that TMS_II elaborates has to be deliberated accordingly. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow and green, respectively. The occasions of re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves the ownd size. Next, TMS_II turns packets that were IN-marked at the sources into OUT if the congestion window has not been halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT and OUT. TMS_II lowers the drop precedence of OUT by one step and so allows other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN and OUT at the core of the network; when an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS tries to compensate for the loss rate occurring at times of congestion by lowering the drop precedence of OUT by one step.
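The three-level drop ordering described above can be captured in a few lines; the table and helper name are our own encoding of the ordering, not part of the TMS specification:

```python
# Drop-precedence ordering used by TMS (Section 3.5): OUT is discarded first,
# then the re-marked yellow packets (IN-OUT or OUT-IN), then IN.
DROP_ORDER = {"OUT": 0, "IN-OUT": 1, "OUT-IN": 1, "IN": 2}

def first_to_drop(colors):
    """Pick the packet class RIO discards first among those present."""
    return min(colors, key=lambda c: DROP_ORDER[c])

print(first_to_drop(["IN", "OUT-IN", "OUT"]))  # OUT goes first
```

Because the yellow classes sit strictly between OUT and IN, re-marked packets are protected relative to best-effort OUT traffic but never displace original IN packets.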

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows will not change and will get the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile and thereby put at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2) but lowers the drop precedence of TCP packets by as much as the scaling factor allows. It thereby tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows had almost monopolised. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms we have described in the previous section. The simulations were done with the network simulator 2 (ns-2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or held below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in,


and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1

First of all, we investigate the simulation results with the Adequate-marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections from S1-D11 to S10-D20, each with a target rate of 3Mbps, and the capacity of the bottleneck link set to 20Mbps. Five flows from S1 through S5 start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s), which stop at t = 350 (s). First, the interval from t = 10 (s) through t = 110 (s) represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5Mbps; marking is performed from the start, but only temporarily, in order to share the excess bandwidth. Second, in the run time from t = 120 (s) to 210 (s), with soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps; marking in this interval is carried out chiefly to realise the user's reservation rate. Next, grave congestion appears from t = 220 (s) to t = 350 (s): all the flows do their best to meet their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to the congestion remaining after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm at the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in the case of Figure 15 (a). The flows in graphs (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not do so.

[Figure 15 shows two panels of marking-rate curves for flow1 through flow10, with bandwidth (Mbps) on the vertical axis and simulation time 0-400 s on the horizontal axis: a) Time Sliding Window Two Color Marker and b) Two Markers System.]

Fig 15 Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The network condition changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind the changes in the network load: rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) fast in response to an increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when it is detected that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).
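The rise-fast/fall-fast behaviour described above can be sketched as a simple multiplicative update of the marking probability. This is an illustrative reading of the role of α and β, not the thesis's actual update rule; the function name and default values are assumptions.

```python
# Illustrative sketch: adapt a flow's marking probability with scaling
# factors alpha (increase when below target) and beta (decrease when the
# target is already met). Values are hypothetical.

def adjust_marking_rate(rate, throughput, target, alpha=1.5, beta=0.5):
    """Return the new marking probability, clamped to [0, 1]."""
    if throughput < target:            # need more bandwidth: mark more
        return min(1.0, rate * alpha)
    return max(0.0, rate * beta)       # target met: back off quickly
```

With these defaults a flow below its target raises its marking rate by half each update, and one above its target halves it, mirroring the fast response the text attributes to TMS.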

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment ran from t = 0 (s) to 500 (s). Figure 17 presents the results, in which the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, their average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. However, in Figure 17 (b), each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the marking probability of the sources, all the flows are able to adapt to network dynamics.

[Figure 16 shows two panels plotting S1 (total throughput), S2 (total throughput), and S1 (marking rate) in Mbps over the simulation time 0-400 s: a) Time Sliding Window Two Color Marker and b) Two Markers System.]

Fig 16 Marking packets through the scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.


[Figure 17 shows two panels plotting the bandwidth (Mbps) of flow1 through flow10 for aggregate flows 1 through 6: a) Time Sliding Window Two Color Marker and b) Two Markers System.]

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flow index.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of aggregate flows as RTT increases

Aggregate flow (No)        1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System we have described in the previous section. For the simulations we used a network with the configuration shown in Figure 18. In this experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing the excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy by increasing the number of TCP flows with a target rate of 1Mbps from source 1 to 4 while the UDP rate is constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

    ρ = (Σi Ti / C) × 100 (%)    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

    Excess BW = (ExBW / T) × 100 (%)    (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.
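A short worked example of Eqs. (9) and (10), using the scenario's own numbers (six flows with 1Mbps targets over an 8Mbps bottleneck); the function names are illustrative, and the 1.5Mbps achieved throughput is an assumed value:

```python
# Eq. (9): link utilisation as a percentage of bottleneck capacity.
def link_utilisation(targets_mbps, capacity_mbps):
    return 100.0 * sum(targets_mbps) / capacity_mbps

# Eq. (10): excess bandwidth as a percentage of the flow's target rate.
def excess_bw_percent(achieved_mbps, target_mbps):
    return 100.0 * (achieved_mbps - target_mbps) / target_mbps

rho = link_utilisation([1.0] * 6, 8.0)   # six 1 Mbps targets, 8 Mbps link
surplus = excess_bw_percent(1.5, 1.0)    # a flow achieving 1.5 Mbps
```

Here ρ = 75 %, an over-provisioned state, and the flow sits 50 % above its target; a negative result of Eq. (10) would indicate a flow below its target rate.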


[Figure 19 shows two panels plotting the average excess bandwidth (%) of the TCP and UDP flows against link utilization (%) from 30 to 190: a) Time Sliding Window Two Color Marker and b) Two Markers System.]

Fig 19 Sharing the excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation as a percentage.

However, since UDP sources inject traffic into the network without any penalty, they meet their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and is more likely to reach its target rate than the others. The distribution of the excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside their contracts, when congestion occurs, rather than those marked OUT-IN or IN-OUT. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisations of ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
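Assumption (ii) above, the E[mi] threshold test, can be sketched as follows. The function name and the sample rates are hypothetical; the thesis specifies only the comparison against the average marking rate.

```python
# Sketch of the edge-embedded marking rule: a flow whose marking rate
# exceeds the aggregate average E[m_i] is taken to be demanding more
# bandwidth to reach its reservation rate.

def wants_more_bandwidth(marking_rates, i):
    """True if flow i's marking rate exceeds the aggregate average."""
    avg = sum(marking_rates) / len(marking_rates)  # E[m_i]
    return marking_rates[i] > avg

rates = [0.2, 0.5, 0.8, 0.3]  # per-flow marking rates (illustrative)
```

With these sample rates the average is 0.45, so only the flows marking at 0.5 and 0.8 would be treated as demanding extra bandwidth at the edge.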

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the results, in which the average throughputs of all individual flows of the aggregated traffic realise their target rates in over-provisioned and under-provisioned networks with bottleneck link capacities of 8Mbps and 3Mbps, respectively. The setting of the RTT parameters follows the strategy of subsection 4.1, and the average throughputs of all the flows in both schemes are similar to each other. First, we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows merely try to accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, since a different marking strategy is applied against the UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network, because the UDP flows transmit constantly regardless of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that in the under-provisioned network the bandwidth is distributed more fairly among the flows (about 0.5Mbps each) than in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20 shows two panels plotting the bandwidth (Mbps) of TCP1-TCP4 and UDP1-UDP2 in both the over-provisioned and under-provisioned cases, for aggregate flows 1 through 4: a) Time Sliding Window Two Color Marker and b) Two Markers System.]

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved rate relative to the target rate and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this context, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the probability of marking according to the changes in the network, so it marked packets in proportion to the reservation rates the sources desire to guarantee. It also notified TMS_II, the edge-embedded marking module, of the marking rates to be used in marking aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and allocated the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy, when the resources in the diffserv network are over-provisioned. Consequently, it mitigated the problems of unfairness in per-aggregation marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP had almost monopolised and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290.

[3] S. Floyd and V. Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, Assured Forwarding PHB Group, RFC 2597, June (1999).

[5] K. Nichols, S. Blake, F. Baker and D. Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec. (1998).

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Dec. (1998).

[7] V. Jacobson and M. Karels, Congestion Avoidance and Control, in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329.

[8] I. Yeom and A. L. N. Reddy, Realizing Throughput Guarantees in a Differentiated Services Network, in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999).

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May (1998).

[11] J. Ibanez and K. Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998).

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, Study of TCP and UDP Interaction for AF PHB, IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999).

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203.

[16] A. Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc. HPSR'01, Dallas, May (2001) 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999).

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Aug. (1998).

[19] M. Vanhala, Differentiated Services Architecture, S-38.128 Teletekniikan erikoistyö (1999).

[20] V. Jacobson, K. Nichols and K. Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998).

[21] V. Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990).

[22] V. Paxson, End-to-End Internet Packet Dynamics, in Proc. of ACM SIGCOMM, Sept. (1997).

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct. (1996).

[24] S. Floyd and T. Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999).

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proceedings of ACM SIGCOMM, Oct. (1994) 24-35.

[26] S. Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, Vol. 24, No. 5, Oct. (1994) 10-23.

[27] S. Floyd and V. Jacobson, Random Early Detection Gateways for Congestion Avoidance, ACM/IEEE Transactions on Networking, Vol. 1, No. 4, Aug. (1993) 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, A TCP-friendly Traffic Marker for IP Differentiated Services, in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48.


available space in the TOS-field (see Fig 6) is limited and room is left for its future usage, the codepoint mappings can freely be made by network administrators within independent networks. However, this may require re-marking at network boundaries.

A DS-capable network's resources are provisioned for services by allocating a suitable share of resources to each PHB group. Proper allocation necessitates knowledge of the impact of one group's requirements on the others. Resource allocation for individual PHBs within a group can be based on, e.g., prioritizing one over another. PHB specifications give behavioral characteristics rather than implementation guidelines; this leaves room for different implementation mechanisms for a particular PHB group. Traffic conditioners control the usage of resources based on the administratively configured PHB groups and through enforcement of the negotiated TCAs, possibly in interaction with the domain's other nodes and TCs. Special protocols, a control entity, and administrative actions may be needed for interaction with TCs; this is outside the scope of diffserv.

The simplest example of a PHB is one which guarantees a minimal bandwidth allocation of X % of a link to a behaviour aggregate over some reasonable time interval. Although realizable, a PHB ought to provide wider ground for the services that use it. There are currently a few proposed PHBs, which are briefly presented in the following. These PHBs provide the basis for the service examples presented earlier. As explained above, none of the PHBs is mandatory for a node to be considered DS-compliant, but when one is implemented, its specification must be met.

Diffserv cannot be taken into use if it does not provide backward compatibility; the Default PHB exists for that. Its purpose is to provide the best-effort behaviour that current routers perform. The Default PHB is the one used for packets for which no other agreement exists. The Default PHB (i.e. best-effort service) gets the lowest priority compared to all other PHBs; therefore, any traffic that does not conform to its profile can easily either be re-marked to the Default PHB or be discarded.

Traffic that is subjected to the Default PHB can be described as achieving the following kind of service: the network will deliver as many of these packets as possible, and as soon as possible, depending on the prevailing network load and state. A reasonable implementation of this PHB would be a queuing discipline that sends packets of this aggregate whenever the output link is not required to satisfy another PHB. However, to ensure at least some bandwidth for hosts which do not employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardized or local-use PHB, it should be mapped to the Default PHB.

The mere Default PHB is not enough to provide sufficient backward compatibility. The precedence bits (see Fig 6) of the IPv4 TOS-field are widely used in existing network equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig 8 Usage of DS-field in Class Selector PHB
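The xxx000 pattern can be made concrete with a small sketch: the three leading bits of each Class Selector codepoint are the legacy precedence bits, and the trailing three bits are zero. The helper names below are illustrative.

```python
# The eight Class Selector codepoints are the 6-bit patterns xxx000,
# where the three leading bits carry the legacy IPv4 precedence value.

def class_selector_codepoints():
    """Return the eight 'xxx000' codepoints, lowest precedence first."""
    return [format(p << 3, "06b") for p in range(8)]

def precedence(dscp_bits):
    """Extract the 3-bit precedence value from a 6-bit codepoint string."""
    return int(dscp_bits, 2) >> 3
```

The first codepoint, 000000, coincides with the Default PHB, matching the compatibility requirement stated above; a larger extracted precedence means a higher forwarding priority.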

The motivation behind the AF PHB is the need for the fixed-bandwidth lines that companies in particular use extensively. In a typical application a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node the level of forwarding assurance of an IP packet thus depends on:

- how much forwarding resources have been allocated to the AF class that the packet belongs to; packets in one class are forwarded independently from the others and within the service rate (bandwidth) that has been configured for the class;

- what the current load of the AF class is;

- in case of congestion within the class, what the drop precedence of the packet is; packets with a higher drop precedence value are preferentially discarded.

Table 1 The four AF classes and the related three drop precedence values

                             Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)      001010    010010    011010    100010
Medium Drop Precedence (2)   001100    010100    011100    100100
High Drop Precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in network nodes.
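The AFnm codepoints in Table 1 follow a fixed bit layout: three class bits, two drop-precedence bits, and a trailing zero. The helper below reproduces the table from that layout; the function name is an assumption for illustration.

```python
# Build the DSCP for AFnm from the bit layout evident in Table 1:
# bits 5-3 = class n, bits 2-1 = drop precedence m, bit 0 = 0.

def af_codepoint(n, m):
    """Return the 6-bit codepoint string for AFnm (n in 1..4, m in 1..3)."""
    if not (1 <= n <= 4 and 1 <= m <= 3):
        raise ValueError("AF classes are 1-4, drop precedences 1-3")
    return format((n << 3) | (m << 1), "06b")
```

For example, `af_codepoint(1, 1)` reproduces the Table 1 entry 001010 for AF11, and `af_codepoint(4, 3)` gives 100110 for AF43.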

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter on an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues that traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the task of the network-boundary traffic conditioners to bound the rates of traffic aggregates such that at every transit node the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed a rate that has been specified for the service the packets belong to. The EF PHB specification also requires that a network administrator must be able to configure this rate in diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that in general diffserv services are all for unidirectional traffic only, and that they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

Some things are common to both examples. First of all, policers need to be configured at traffic ingress points. Secondly, the PHBs used need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than competing best-effort traffic and thus experiences reduced latency. Quantitative performance measures cannot be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for the BBE service

    Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
    AF13 mark   1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.
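The marking step above can be sketched as a simple token-bucket marker (an illustrative sketch only, not the thesis implementation; the class name, burst size and byte-based accounting are assumptions):

```python
class BBEMarker:
    """Token-bucket marker sketch for the BBE service of Table 2:
    traffic within the 1 Mbps service rate is marked AF13, excess
    traffic is re-marked AF11. Burst size is an assumed parameter."""

    def __init__(self, rate_bps=1_000_000, burst_bytes=10_000):
        self.rate = rate_bps / 8.0      # token refill rate in bytes/s
        self.burst = burst_bytes        # bucket depth in bytes
        self.tokens = burst_bytes       # bucket starts full
        self.last = 0.0                 # arrival time of previous packet (s)

    def mark(self, pkt_bytes, now):
        # Refill tokens for the elapsed interval, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return "AF13"               # in-profile traffic
        return "AF11"                   # excess traffic, re-marked
```

At a sustained input rate above 1 Mbps the bucket drains and packets leave as AF11; after an idle period the bucket refills and packets are again marked AF13.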

The second example is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications for this service.

Table 3. TCA specifications for the Leased Line service

    Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
    EF mark     500 Kbps       Egress point B         Discard
    EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are denoted A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic is discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both leased lines cross the same router.

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies, focused on RED-based differentiated services approaches.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed scheme decreased TCP impacts over the Assured Service. However, they did not consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet employing the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation work, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to adapt fairness for sharing the bandwidth of a bottleneck link, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for marking [13]. A marking module can mark packets (i) based on the state of all individual flows of an aggregate, called per-flow marking, or (ii) based only on aggregate state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults. First, since the marking module in per-flow marking gives each flow a fixed fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth: "idle" flows waste their shares while preventing "active" flows from increasing theirs. Second, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9. The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I selects the window (iwnd or ownd) and reduces its size according to whether the dropped packets were IN-marked or OUT-marked, and so TMS_I decreases TCP impacts in the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) maintains the marking rate more properly than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of the network. TMS_I also adequately marks packets to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used in allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of the network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of the network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. For this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore the key element of the fairness strategy is the marking rate (i.e., the information about a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) aggregated flows towards the total target rate of all the flows, based on the marking probabilities from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig. 10. The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates differ from the edge's target rates. Let R_i denote the reservation rate of a flow i, i = 1, 2, ..., n, and C_j the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is its reservation rate (R_i), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (T_i) at the edge of the network is the reservation rate (R_i) plus a variable value, namely the fairness component (X_mi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability viewpoint of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point;
(ii) all traffic from ingress point S to a set of egress points;
(iii) all traffic between ingress point S and egress point D;
(iv) all traffic between any ingress point and egress point D.

The capacity of a network in the first three cases of service scope above is C = \sum_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at the bottleneck links is then given by

    E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i    (1)

In the edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

    E = C - R = C - \sum_{i=1}^{n} R_i    (2)

The last case of service scope, tied to the receiver, however requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}, where k, k = 1, 2, ..., l, denotes the k-th ingress point. The excess bandwidth is then given by

    E = C - R = C - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}    (3)


Here the l-th ingress point has to be aware of the reservation rates of as many as l - 1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore the Two Markers System can be implemented under various SLA scopes, but for the above reason we assume in this paper that the scope of a service covers the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

    T_i = R_i + X_mi * E = R_i + X_mi (C - \sum_{i} R_i)    (4)

where X_mi is proportional to the probability of marking from a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of X_mi is as follows. First, the probability of marking for each flow i is the number of marked packets (N_mi) divided by the throughput (Thr_i), or equivalently the In-profile window divided by the congestion window:

    m_i = N_mi / Thr_i = iwnd / cwnd    (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] of all the flows at the edge of the network is

    E[m_i] = (1/n) \sum_{i=1}^{n} m_i = (1/n) \sum_{i=1}^{n} (iwnd/cwnd)_i = (1/n) \sum_{i=1}^{n} Mark_{p,i}    (6)

where Mark_p also represents the probability of marking in the TMS_I algorithm of section 3.3. Finally, X_mi is developed as the ratio between (5) and (6); that is, X_mi


represents a relative value against the datum point E[m_i]. The value of X_mi varies from 0 to 1 depending on m_i and E[m_i]. Consequently this value stands for the excess bandwidth that each flow requires to achieve its target rate.
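Under the definitions above, equations (4)-(7) can be combined into a short numerical sketch (the function name is ours, and rates in Mbps are an assumption for the example):

```python
def edge_target_rates(R, m, C):
    """Sketch of equations (4)-(7): given per-flow reservation rates R[i],
    per-flow marking probabilities m[i] reported by the sources, and the
    bottleneck capacity C, distribute the excess bandwidth E = C - sum(R)
    in proportion to X_mi = m_i / (n * E[m]) = m_i / sum(m)."""
    n = len(R)
    E = C - sum(R)                                # excess bandwidth, eq. (1)/(2)
    avg_m = sum(m) / n                            # average marking rate E[m_i], eq. (6)
    X = [mi / (n * avg_m) for mi in m]            # fairness component X_mi, eq. (7)
    T = [Ri + Xi * E for Ri, Xi in zip(R, X)]     # edge target rate T_i, eq. (4)
    return E, X, T
```

Note that the edge target rates sum to the capacity C, so the whole excess bandwidth is distributed in proportion to the reported marking rates.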

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some earlier work [2, 8] has focused on similar issues in networks where the marking policies differ from the one studied here. We improve the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate as high as the bandwidth the source wants to have guaranteed.

    X_mi = m_i / (n E[m_i]) = m_i / \sum_{i=1}^{n} m_i    (7)


Fig. 11. Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking, Mark_p(init)? It is determined by the ratio between the measured throughput (Thr) and the target throughput (T_i = R_i), as given in equation (8).

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri)
        if (iwnd < issthresh)   iwnd += Markp
        else                    iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)   ownd += (1 - Markp)
        else                    ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)   iwnd -= (1 - Markp)
            else                    iwnd -= Markp
        else
            ownd = cwnd
        if (ownd < ossthresh)   ownd += 1
        else                    ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd
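A runnable Python transcription of one ACK update step may clarify the listing (a sketch: the fractional increment terms are read as 1/(2*cwnd) and 1/cwnd, and the function name is ours):

```python
def tms1_on_ack(cwnd, markp, thr, Ri, issthresh, ossthresh):
    """One TMS_I update on a newly acknowledged segment (Program code 1).
    Returns the updated (cwnd, markp)."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if thr < Ri:
        # Below the reservation rate: grow the IN-marked share.
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:
        # Reservation met: release IN-marked share to other traffic.
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    markp = iwnd / cwnd
    return cwnd, markp
```

Note that above both thresholds the two windows together grow by 1/cwnd per ACK, matching the conventional congestion-avoidance increase.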


Here Mark_p(init) represents the initial rate the user wants to have guaranteed in order to realise the target rate.

The window size increases or decreases according to the measured throughput relative to the reservation rate. That is, TMS_I marks packets in proportion to whether the throughput (Thr) measured at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate requires; otherwise it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd, so an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold, issthresh or ossthresh, it increases linearly: congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of Program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also addresses a second TCP dynamic (the so-called boomerang effect), caused by excessive dropping of IN-marked packets at the core of the network when the network is congested. If the throughput measured at a source, for instance, exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate more properly than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion

    Mark_p(init) = 1 - Thr / R_i    (8)


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window is not decreased as often as with the established edge marker.

Fig. 12. Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd / 2
        ownd = ossthresh
    else
        cwnd = cwnd / 2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

Notation:
    iwnd       In-profile window
    ownd       Out-of-profile window
    cwnd       congestion window
    Markp      probability of marking
    Thr        throughput of a flow
    Ri         reservation rate of the i-th flow
    Ti         target rate of the i-th flow
    issthresh  threshold of iwnd
    ossthresh  threshold of ownd
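Program code 2 can likewise be transcribed into a runnable form (a sketch: the listing does not state how the windows are reassigned after cwnd is halved, so we assume iwnd + ownd = cwnd is restored from the new thresholds, and the function name is ours):

```python
def tms1_on_triple_dupack(cwnd, markp, out_loss, issthresh, ossthresh):
    """TMS_I response to a third duplicate ACK in a row (Program code 2).
    An OUT-marked loss signals temporary congestion (halve ownd only);
    an IN-marked loss signals acute congestion (halve cwnd).
    Returns (cwnd, issthresh, ossthresh)."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if out_loss:
        # Temporary congestion: only the Out-of-profile window shrinks.
        ossthresh = ownd / 2
        ownd = ossthresh
    else:
        # Acute congestion: halve the whole congestion window.
        cwnd = cwnd / 2
        issthresh = markp * cwnd
        ossthresh = cwnd - issthresh
        iwnd, ownd = issthresh, ossthresh   # assumed window reassignment
    cwnd = iwnd + ownd
    return cwnd, issthresh, ossthresh
```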


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After a third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion in the network, and then the congestion window is halved. The modified TCP congestion algorithm therefore compensates for the loss rates caused by the conventional congestion algorithm, which lacks a mechanism for distinguishing the reasons for reducing the congestion window.

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall numerical expression (7). We deal with the use of this datum point by the edge-embedded marking module in section 3.4. Consequently, the strategy of providing a datum point that includes the user's information is able to alleviate the problem of unfairness within aggregated flows under per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to share the bottleneck bandwidth fairly. It should closely cooperate with the TCP-source marking module to accomplish this fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which are used in the fairness component as stated in section 3.2. Notice that fairness is defined here as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represent the target rates each flow wants guaranteed by the sources.

We assume that an edge router receives information about the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets target rates different from those of the sources; recall numerical expression (4). As stated in section 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the network is over-provisioned; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents an increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source, for example, exceeds the threshold value, namely the average marking rate E[m_i], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more heavily as IN and gets a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to detect a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with that at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of Program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding that there is no congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (T_i), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by as much as the fairness factor alpha, computed as

    \alpha = 1 - Thr_i / T_i

Otherwise it decreases the marking rate by as much as alpha, to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm

    if (ownd(N) != ownd(N-1)/2)            // no congestion
        for i = 1 to n
            Ti = Ri + Xmi * E
            if (Thr_i < Ti)   Mi += alpha * Mi
            else              Mi -= alpha * Mi
    else if (iwnd(N) != iwnd(N-1)/2)       // congestion, but not acute
        exit
    else                                   // iwnd(N) == iwnd(N-1)/2: acute congestion
        for i = 1 to n
            Ti = Ri
            if (Thr_i < Ti)   Mi += beta * Mi
            else              exit

Notation:
    Mi          marking rate of the i-th flow
    Ti          target rate of the i-th flow
    Ri          reservation rate of the i-th flow
    E           excess bandwidth
    ownd(N)     newly measured OUT window
    ownd(N-1)   OUT window measured at time N-1
    iwnd(N)     newly measured IN window
    iwnd(N-1)   IN window measured at time N-1
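A runnable transcription of Program code 3 (a sketch: using |1 - Thr_i/T_i| in the decrease branch is our reading of the fairness factor, and the function name is ours):

```python
def tms2_remark(M, R, Xm, E, Thr, ownd_N, ownd_prev, iwnd_N, iwnd_prev):
    """One TMS_II re-marking pass (Program code 3). Adjusts each flow's
    marking rate M[i] by the fairness factor alpha when there is no
    congestion, or by the complement factor beta = 1 - Thr_i/R_i under
    acute congestion. Returns the new marking rates."""
    M = list(M)
    if ownd_N != ownd_prev / 2:
        # No congestion: share the excess bandwidth E fairly.
        for i in range(len(M)):
            T = R[i] + Xm[i] * E                 # edge target rate, eq. (4)
            alpha = abs(1 - Thr[i] / T)          # fairness factor
            M[i] += alpha * M[i] if Thr[i] < T else -alpha * M[i]
    elif iwnd_N != iwnd_prev / 2:
        pass                                     # congestion, not acute: no re-marking
    else:
        # Acute congestion: fall back to the reservation rates.
        for i in range(len(M)):
            if Thr[i] < R[i]:
                beta = 1 - Thr[i] / R[i]         # complement factor
                M[i] += beta * M[i]
    return M
```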


Next, if the ownd size at time N is equal to half of the ownd size at time N-1, while the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else-if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (the marking information), and so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for the congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to reach its target rate, owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by as much as the so-called complement factor beta if the throughput does not realise the reservation rate of the user. In that instance the complement factor is

    \beta = 1 - Thr_i / R_i

Otherwise the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets turn into OUT. The re-marked packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If re-marked OUT packets (IN-OUT) that were IN at the sources, for example, are dropped at the core at a time of temporary congestion, the TCP sources, mistaking this for acute congestion, will halve the iwnd size; TMS_I then does not alleviate the TCP dynamics over the assured service well. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve ownd, which retards the relief of the congestion.

For this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked OUT at the edge of the network but IN at the source, and an OUT-IN packet is the counterpart. We assume that the drop precedence the source decides must not be changed outright, and the strategy TMS_II elaborates has to respect this. The order of dropping packets is: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN, so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves the ownd size. Second, TMS_II turns packets IN-marked at the sources into OUT if the congestion window has not been halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT and OUT. TMS_II lowers the drop precedence of these packets by one step, and so it allows other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN and OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS tries to compensate for the loss rate occurring at times of congestion by lowering the drop precedence of OUT packets by one step.
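The mapping from the source's mark and the edge's re-mark to the three drop precedences can be stated compactly (an illustrative sketch with assumed string labels):

```python
def drop_precedence(source_mark, edge_mark):
    """Sketch of the three drop precedences of section 3.5: the source's
    mark combined with the edge's (possibly re-marked) codepoint selects
    DP0 (green), DP1 (yellow) or DP2 (red); RIO drops red first, then
    yellow, then green."""
    if source_mark == "IN" and edge_mark == "IN":
        return 0            # DP0, green: IN at source and edge
    if source_mark == "OUT" and edge_mark == "OUT":
        return 2            # DP2, red: OUT at source and edge
    return 1                # DP1, yellow: IN-OUT or OUT-IN re-marked packets
```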

3.6 The issue of interaction between TCP and UDP flows

In this section we consider the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP flows are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not change theirs and will take the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile, placing them at a disadvantage, while TCP flows adapt to network dynamics as their marking rates are adjusted through the scaling factors alpha and beta. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2), but lowers the drop precedence of TCP packets by as much as the scaling factors allow. Therefore TMS tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows would otherwise dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present simulation results for the two marking algorithms described in the previous chapter. The simulation was done with the network simulator 2 (ns-2.1b8a). For the simulation we used a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: a non-oversubscribed and an oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or fixed below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP Reno sources (unless specified otherwise). We compare the two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values 100 packets, 400 packets and 0.02 for min_in, max_in and Pmax_in, and 50 packets, 400 packets and 0.1 for min_out, max_out and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
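With the Table 4 settings, each drop precedence level applies the standard RED drop-probability ramp to a weighted average of the queue length. A sketch of the per-DP drop decision, assuming the usual linear RED profile (count-based correction and "gentle" mode omitted for brevity):

```python
RED_PARAMS = {          # per-DP (min_th, max_th, Pmax) from Table 4
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50, 400, 0.1),
}

def ewma_avg(prev_avg, qlen, wq=0.002):
    """Exponentially weighted average queue length (queue weight 0.002)."""
    return (1 - wq) * prev_avg + wq * qlen

def drop_prob(avg_q, dp):
    """Linear RED drop probability for one drop-precedence level."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_q < min_th:
        return 0.0              # below min_th: never drop
    if avg_q >= max_th:
        return 1.0              # above max_th: always drop
    return p_max * (avg_q - min_th) / (max_th - min_th)
```

At an average queue of 250 packets, for instance, DP0 packets face only a 1 % drop probability while DP2 packets already sit well up their steeper ramp.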

First of all, we investigate the simulation results with the Adequate-marking algorithm playing the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s; the last three stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps; marking initially takes place, but only temporary marking for sharing the excess bandwidth. Second, from t = 120 s to 210 s, under soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps each; marking in this interval is practically carried out so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220 s to t = 350 s: all the flows do their best to reach their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to the congestion remaining after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the conventional edge marker continuously over-marks its packets, as shown in Figure 15 (a), which exhibits somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in Figure 15 (a). The flows in the two graphs demonstrate a striking contrast between excess marking and Adequate-marking: while all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.


a) Time Sliding Window Two Color Marker


b) Two Markers System

Fig. 15. Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic characteristics by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate: it lags behind the changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load, so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet-marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).
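The behaviour just described can be sketched as a simple multiplicative adjustment of the marking probability: scale it up by α when measured throughput falls below the target, and back off by β once the target is met. The exact update rule and the values of α and β below are illustrative assumptions; the thesis does not fix their form at this point.

```python
def adjust_marking_rate(p_mark, throughput, target, alpha=1.25, beta=0.8):
    """Scale the marking probability toward achieving the target rate.

    - below target: mark more aggressively (multiply by alpha > 1)
    - at/above target: back off quickly (multiply by beta < 1)
    The probability is clamped to [0, 1].
    """
    if throughput < target:
        p_mark *= alpha
    else:
        p_mark *= beta
    return min(max(p_mark, 0.0), 1.0)
```

Called once per measurement interval, this makes the marking rate rise fast when S2 turns on and steals bandwidth, and fall fast again when S2 turns off.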

Finally, we present the simulation results with the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, i.e., an under-subscribed network. The experiment ran from t = 0 s to 500 s. Figure 17 presents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without marking rates adequate to the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability whenever the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the marking probabilities of the sources, all the flows are able to adapt to network dynamics.


a) Time Sliding Window Two Color Marker


b) Two Markers System

Fig. 16. Marking packets through the scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.



a) Time Sliding Window Two Color Marker


b) Two Markers System

Fig. 17. Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flow number.

The Two Markers System showed improved performance for the several parameters we used in simulation. However, implementing TCP-source marking modules at the sources raises the issue of modifying TCP sources in a differentiated services network.

Table 5. The average throughput of each aggregate flow as RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous chapter. For the simulations we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams, generated by infinite file transfers, originate at sources 1 through 4 and are destined to nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1 Mbps. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increase the number of TCP flows with a target rate of 1 Mbps each from sources 1 through 4, while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ = (Σ Ti / C) × 100 (%)    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 (%) to 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW = (ExBW / T) × 100 (%)    (10)

Therefore, an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.
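Equations (9) and (10) translate directly into code; a small sketch using the scenario's numbers:

```python
def link_utilisation(targets_bps, capacity_bps):
    """rho (%) = sum of target rates over bottleneck capacity, Eq. (9)."""
    return 100.0 * sum(targets_bps) / capacity_bps

def excess_bw_percent(achieved_bps, target_bps):
    """Excess bandwidth as a percentage of the target rate, Eq. (10)."""
    return 100.0 * (achieved_bps - target_bps) / target_bps
```

With the six 1 Mbps flows of this experiment, the 8 Mbps bottleneck gives ρ = 75 % (over-provisioned) and the 3 Mbps bottleneck gives ρ = 200 % (under-provisioned); a flow achieving 1.5 Mbps against a 1 Mbps target has an excess bandwidth of 50 %.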



a) Time Sliding Window Two Color Marker


b) Two Markers system

Fig. 19. Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation in percent.

However, since the UDP sources pour traffic into the network without any penalty, they achieve their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If the number of marked packets of a flow exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and given higher priority toward its target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisations of ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
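The two assumptions above can be sketched as a drop-precedence assignment at the edge: UDP flows always get DP2 (red), and TCP flows marking above the average marking rate E[m] get the most protection. The mapping and the threshold rule below are our illustrative reading of the text, not a specification from the thesis.

```python
def assign_drop_precedence(flows):
    """Map each flow to a drop precedence at the edge (sketch).

    flows: dict name -> (protocol, marking_rate in [0, 1])
    UDP is always DP2 (red, out-of-profile); TCP flows marking
    above the average TCP marking rate get DP0, the rest DP1.
    """
    tcp_rates = [r for proto, r in flows.values() if proto == "tcp"]
    avg = sum(tcp_rates) / len(tcp_rates) if tcp_rates else 0.0
    dp = {}
    for name, (proto, rate) in flows.items():
        if proto == "udp":
            dp[name] = "DP2"      # always out-of-profile
        elif rate > avg:
            dp[name] = "DP0"      # demands more bandwidth: protect most
        else:
            dp[name] = "DP1"
    return dp
```

Combined with the RIO queue dropping DP2 first, this is what lets TCP flows reclaim excess bandwidth from UDP under congestion.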

Next we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic against their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The setting of the RTT parameters follows subsection 4.1, and the average throughput of all the flows in both schemes is similar to each other. First, we consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3 Mbps, thanks to the different marking strategy applied against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network, because the UDP flows are transmitted constantly, regardless of congestion; that is, UDP unfairly gains at the expense of the TCP flows. However, Figure 20 (b) shows that the bandwidth in the under-provisioned network is distributed more fairly (about 0.5 Mbps each) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).


a) Time Sliding Window Two Color Marker


b) Two Markers System

Fig. 20. Throughput according to network conditions: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under variable SLA scopes. Two important issues have arisen in this context, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the overeating of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the marking probability according to the changes in the network, so it marks packets in proportion to the reservation rates the sources desire to guarantee. It also notifies TMS_II, the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, when resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy. Consequently, it mitigates the unfairness of per-aggregation marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP had almost dominated and achieve its target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117.

[2] W. Feng, D. Kandlur, D. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June (1999).

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. (1998).

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. (1998).

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999).

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May (1998).

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998).

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221.


[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999).

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May (2001) 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999).

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. (1998).

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyo, (1999).

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998).

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990).

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. (1997).

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. (1996).

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999).

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proceedings of ACM SIGCOMM, Oct. (1994) 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, Vol. 24, No. 5, Oct. (1994) 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, Vol. 1, No. 4, Aug. (1993) 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48.



ensure at least some bandwidth for hosts which don't employ diffserv, some resources may need to be reserved for Default behaviour aggregates.

The recommended codepoint for the Default PHB is the bit pattern 000000; the value 000000 must map to a PHB that meets the specifications of the Default PHB. The codepoint chosen for Default behaviour is compatible with existing practice. When a codepoint is not mapped to a standardised or local-use PHB, it should be mapped to the Default PHB.

The mere Default PHB is not enough to provide sufficient backward compatibility. The precedence bits (see Fig. 6) of the IPv4 TOS field are widely used in existing networks' equipment and must therefore be supported by diffserv. The greater the value of the precedence bits, the higher the priority of the packet. This same method is applied in the Class Selector PHB, so that the bit patterns xxx000, eight in all, are reserved as a set of Class Selector Codepoints (see Fig. 8). Compatibility for the 'D', 'T' and 'R' bits is not provided. The Class Selector PHB requirements on codepoint 000000 are compatible with those listed for the Default PHB above. Forwarding of each of the eight priority classes is done separately.

Fig 8 Usage of DS-field in Class Selector PHB
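The eight Class Selector codepoints and their precedence ordering can be checked mechanically: the precedence is simply the top three bits of the 6-bit DSCP, and the bottom three bits are zero. A small sketch of the mapping (illustrative helper functions, not router code):

```python
def class_selector_codepoints():
    """The eight Class Selector codepoints 'xxx000' (precedence 0-7)."""
    return [format(p, "03b") + "000" for p in range(8)]

def precedence(dscp):
    """IP precedence encoded in the top three bits of a 6-bit DSCP string."""
    return int(dscp, 2) >> 3
```

Note that the codepoint with precedence 0 is 000000, which is exactly why its handling must stay compatible with the Default PHB.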

The motivation behind the AF PHB is the need for fixed-bandwidth lines, which companies especially use extensively. In a typical application a company uses the Internet to interconnect its geographically distributed sites and wants an assurance that IP packets within this intranet are forwarded with high probability as long as the aggregate traffic from each site does not exceed the subscribed information rate (profile). Packets that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node the level of forwarding assurance of an IP packet thus depends on:

- how much forwarding resources have been allocated to the AF class that the packet belongs to; packets in one class are forwarded independently from the others and within the service rate (bandwidth) that has been configured for the class;

- what the current load of the AF class is and, in case of congestion within the class, what the drop precedence of the packet is; packets with a higher drop precedence value are discarded preferentially.

Table 1. The four AF classes and the related three drop precedence values

                             Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)      001010    010010    011010    100010
Medium Drop Precedence (2)   001100    010100    011100    100100
High Drop Precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.
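The codepoints in Table 1 follow a regular pattern: the class number n occupies the top three bits, the drop precedence m the next two, and the last bit is zero. A sketch encoding AFnm (values checkable against Table 1; the function name is our own):

```python
def af_codepoint(n, m):
    """DSCP bit pattern for AF class n (1-4) with drop precedence m (1-3)."""
    if not (1 <= n <= 4 and 1 <= m <= 3):
        raise ValueError("AF class must be 1-4, drop precedence 1-3")
    # 3 bits of class, 2 bits of drop precedence, trailing zero
    return format(n, "03b") + format(m, "02b") + "0"
```

For example, AF11 yields 001010 and AF43 yields 100110, matching the first and last entries of Table 1.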

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are e.g. a point-to-point connection or a virtual leased line carrying time-sensitive data: speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed a rate that has been specified for the service the packets belong to. It is also required in the EF PHB specification that a network administrator must be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are all for unidirectional traffic only, and they apply to traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface;
b) from ingress interface A to a set of egress interfaces;
c) between the interface A and some egress interface B.

There are some common things for both the examples. First of all, policers need to be configured at the traffic ingress points. Secondly, the used PHBs need to be implemented in the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority over the competing best-effort traffic and thus provides reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for BBE service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark   1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritising of AF13 over AF11 is done, is up to the provider.

This is a quantitative service which emulates a traditional leased line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3. TCA specifications for Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and, respectively, point A to point C. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both the egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the ways from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.

2.3 Related Work

Recently there has been a good deal of simulation work on marking and dropping policies focused on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [11], through simulation work, Ibanez and Nichols showed that TCP did not realise the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterised quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Yeom and Reddy proposed and evaluated several schemes to improve the realisation of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Feroz et al. proposed a TCP-friendly marker designed to protect small windows from drops and maintain optimal spacing between IN and OUT packets. The authors, via simulation work, presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to accommodate TCP dynamics and provide fairness in sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt to the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have faults. First, since the marking module in per-flow marking gives each flow a fixed fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth: in this case "idle" flows waste their shares while preventing "active" flows from increasing. Second, per-aggregation marking can introduce unfairness among aggregated flows because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.
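The trade-off between the two strategies can be illustrated with a small sketch (Python; the helper names and the equal-share per-flow policy are illustrative assumptions, not code from this thesis):

```python
# Illustrative contrast between per-flow and per-aggregation marking
# of an aggregate with a 6 Mbps target rate.

def per_flow_in_rates(flow_rates, aggregate_target):
    """Per-flow marking: each flow owns a fixed share of the aggregate
    target, so an idle flow's share is wasted."""
    share = aggregate_target / len(flow_rates)
    return [min(rate, share) for rate in flow_rates]

def per_aggregation_in_rates(flow_rates, aggregate_target):
    """Per-aggregation marking: IN tokens are consumed against the
    aggregate profile with no per-flow state, so early flows can
    starve later ones."""
    marked, remaining = [], aggregate_target
    for rate in flow_rates:
        granted = min(rate, remaining)
        marked.append(granted)
        remaining -= granted
    return marked

rates = [3.0, 3.0, 3.0, 0.0]   # three active flows and one idle flow (Mbps)
print(per_flow_in_rates(rates, 6.0))         # [1.5, 1.5, 1.5, 0.0]: 1.5 Mbps wasted
print(per_aggregation_in_rates(rates, 6.0))  # [3.0, 3.0, 0.0, 0.0]: flow 3 starved
```

The first result shows the idle flow wasting its reserved share under per-flow marking; the second shows the unfairness among active flows under per-aggregation marking.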

Two Markers System takes the above problems into consideration

3.1 The framework of Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays


different roles in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I selects the window (iwnd or ownd) and reduces its size according to whether the dropped packets were IN-marked or OUT-marked, and so TMS_I decreases TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) maintains the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours its packets into the edge of the network. Information about the marked packets is used to allocate the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to


classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. For this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key element of the fairness strategy is the marking rate (i.e., information about a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows toward the total target rate of all the flows, based on the marking probabilities from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10 The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let R_i denote the reservation rate of a flow i, i = 1, 2, ..., n, and C_j represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is a


reservation rate (R_i), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (T_i) at the edge of a network is the reservation rate (R_i) plus a variable value, namely the fairness component (X_mi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement Scope, SLA Scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is \(C = \sum_{j=1}^{m} C_j\), where j denotes a bottleneck link of the network. Therefore the excess bandwidth at this bottleneck link is given by

\[ E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i \qquad (1) \]

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

\[ E = C - R = C - \sum_{i=1}^{n} R_i \qquad (2) \]

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the information of reservation rates received at an ingress point becomes \(R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}\), where k, k = 1, 2, ..., l, represents the kth ingress point. The excess bandwidth is given by

\[ E = C - R = C - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik} \qquad (3) \]
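Equations (1)-(3) can be sketched as straightforward sums (Python; function names are illustrative, not from the thesis):

```python
# Excess bandwidth E under the SLA scopes discussed above.

def excess_multi_link(link_caps, reservations):
    """Eq. (1): E = sum_j C_j - sum_i R_i."""
    return sum(link_caps) - sum(reservations)

def excess_edge_to_edge(capacity, reservations):
    """Eq. (2): single bottleneck (m = 1), E = C - sum_i R_i."""
    return capacity - sum(reservations)

def excess_receiver_scope(capacity, reservations_per_ingress):
    """Eq. (3): E = C - sum_k sum_i R_ik over all l ingress points."""
    return capacity - sum(sum(r) for r in reservations_per_ingress)

# Five flows reserving 3 Mbps each on a 20 Mbps bottleneck leave 5 Mbps excess.
print(excess_edge_to_edge(20.0, [3.0] * 5))                        # 5.0
print(excess_receiver_scope(20.0, [[3.0, 3.0], [3.0, 3.0, 3.0]]))  # 5.0
```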


where the lth ingress point has to be aware of the reservation rates of as many as l − 1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System can therefore be implemented under various SLA scopes, but for the reason above we assume in this paper that the scope of service covers the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

\[ T_i = R_i + X_{mi} \times E = R_i + X_{mi} \left( C - \sum_{i=1}^{n} R_i \right) \qquad (4) \]

where X_mi is proportional to the probability of marking at a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of X_mi is as follows. First, the marking probability of each flow i is the number of marked packets (N_mi) divided by the throughput (Thr_i), or equivalently the In-profile window divided by the congestion window:

\[ m_i = \frac{N_{mi}}{Thr_i} = \frac{iwnd}{cwnd} \qquad (5) \]

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] over all the flows at the edge of the network is

\[ E[m_i] = \frac{1}{n} \sum_{i=1}^{n} m_i = \frac{1}{n} \sum_{i=1}^{n} \left( \frac{iwnd}{cwnd} \right)_i = \frac{1}{n} \sum_{i=1}^{n} (Mark_p)_i \qquad (6) \]

where Mark_p also represents the probability of marking in the TMS_I algorithm of Section 3.3. Finally, X_mi is developed by using the ratio between (5) and (6); that is, X_mi

\[ X_{mi} = \frac{m_i}{n E[m_i]} = \frac{m_i}{\sum_{i=1}^{n} m_i} \qquad (7) \]

represents a relative value against the datum point E[m_i]. The fraction X_mi varies from 0 to 1 depending upon m_i and E[m_i].

Consequently, this value stands for the share of excess bandwidth that each flow requires to complete its target rate.

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate as high as the bandwidth the source wants to have guaranteed.


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking, Mark_p(init)? It is generally determined by the ratio between the measured throughput (Thr) and the target throughput (T_i = R_i), as follows:

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri)
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd
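The update rules of program code 1 can be transcribed into a small executable sketch (Python; variable names follow the listing, the fractional window increments are taken literally, and this is an illustration rather than the thesis implementation):

```python
def tms1_on_ack(cwnd, markp, thr, ri, issthresh, ossthresh):
    """One ACK-driven update of the two windows (program code 1).
    Returns the new (cwnd, Markp); windows are in packet units."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if thr < ri:                 # below reservation: grow iwnd aggressively
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                        # reservation met: release the IN share
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd     # Markp = iwnd / cwnd

cwnd, markp = tms1_on_ack(10.0, 0.5, thr=2.0, ri=3.0,
                          issthresh=20.0, ossthresh=20.0)
print(cwnd, markp)               # 11.0 0.5: both windows grew by Markp shares
```

Note how, below the reservation rate, iwnd and ownd grow by Markp and (1 − Markp) respectively, so the recomputed Markp stays aligned with the demand for IN-marked bandwidth.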


\[ Mark_p(init) = 1 - \frac{Thr}{R_i} \qquad (8) \]

where Mark_p(init) represents the initial marking rate the user requires in order to realise its target rate.

The window sizes increase or decrease according to the measured throughput relative to the reservation rate. That is, TMS_I marks packets proportionally, according to whether the throughput (Thr) measured at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate requires. Otherwise it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for additional bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also addresses a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when a connection of the network is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate better than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking it for an acute congestion



situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd/2
        ownd = ossthresh
    else
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the ith flow
Ti: target rate of the ith flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half its size whenever an OUT packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of an IN-marked packet indicates acute congestion in the network, and then the congestion window is reduced to half its size. Therefore the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which has no distinctive mechanism for reducing the congestion window.
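A minimal executable transcription of program code 2 (Python; it assumes the OUT-loss branch leaves cwnd = iwnd + ownd, and that the listing's "ewnd" is a typo for ownd — both assumptions, not statements from the thesis):

```python
def tms1_on_triple_dupack(cwnd, markp, in_packet_lost, issthresh, ossthresh):
    """Window reduction of program code 2: an OUT loss halves only ownd
    (temporary congestion); an IN loss halves cwnd (acute congestion).
    Returns the new (cwnd, issthresh, ossthresh)."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if not in_packet_lost:       # OUT-marked packet lost
        ossthresh = ownd / 2
        ownd = ossthresh
        cwnd = iwnd + ownd
    else:                        # IN-marked packet lost
        cwnd = cwnd / 2
        issthresh = markp * cwnd
        ossthresh = cwnd - issthresh
    return cwnd, issthresh, ossthresh

print(tms1_on_triple_dupack(16.0, 0.25, False, 8.0, 8.0))  # (10.0, 8.0, 6.0)
print(tms1_on_triple_dupack(16.0, 0.25, True, 8.0, 8.0))   # (8.0, 2.0, 6.0)
```

The example shows the asymmetry: an OUT loss costs only the out-of-profile share (16 → 10), while an IN loss halves the whole window (16 → 8).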

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p), which is needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point by the edge-embedded marking module in Section 3.4. Consequently, the strategy of providing a datum point that includes the user's information can alleviate the problem of unfairness within aggregated flows under per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to fairly share the bottleneck bandwidth. To accomplish this fairness strategy, called the marking rate-based fairness strategy, it closely cooperates with the TCP-source marking module. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which are used in the fairness component as stated in Section 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants guaranteed from the sources.

We assume that an edge router receives information about the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall expression (4). As stated in Section 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the condition of the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents the increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, namely the average marking rate E[m_i], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the corresponding size at time N−1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N−1, congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After the congestion decision, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (T_i), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor. In this case the fairness factor α is computed as follows:


\[ \alpha = 1 - \frac{Thr_i}{T_i} \]

Otherwise it decreases the marking rate by α, to provide other flows with the chance of sharing the excess bandwidth.

Fig 13 Program code 3 TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)
    for i = 1 to n
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)
    exit
else (iwnd(N) == iwnd(N-1)/2)
    for i = 1 to n
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi: marking rate of the ith flow
Ti: target rate of the ith flow
Ri: reservation rate of the ith flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): previously measured OUT window
iwnd(N): newly measured IN window
iwnd(N-1): previously measured IN window


Next, if the ownd size at time N is equal to half of the ownd size at time N−1, and the iwnd size at time N is not equal to half of the iwnd size at time N−1, congestion has occurred in the network but is not serious. The else if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (or marking information), and so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N−1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to achieve its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, a so-called complement factor, if the throughput does not realise the user's reservation rate. In this instance the complement factor β is

computed analogously:

\[ \beta = 1 - \frac{Thr_i}{R_i} \]

Otherwise the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then fairly allocates bandwidth to the flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
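The computation TMS_II performs in the no-congestion case — edge target rates from expression (4) with the fairness component of (7) — can be sketched as follows (Python; the helper name is illustrative, not thesis code):

```python
# Edge target rates T_i = R_i + X_mi * E, where X_mi = m_i / sum_i m_i
# and E = C - sum_i R_i is the excess bandwidth of the bottleneck.

def edge_target_rates(reservations, marking_rates, capacity):
    excess = capacity - sum(reservations)
    total_m = sum(marking_rates)
    return [r + (m / total_m) * excess
            for r, m in zip(reservations, marking_rates)]

# Two flows reserving 2 Mbps each on a 6 Mbps bottleneck; the first
# marks twice as aggressively and so receives twice the excess share.
print(edge_target_rates([2.0, 2.0], [0.6, 0.3], 6.0))  # ~[3.33, 2.67] Mbps
```

By construction the target rates sum to the bottleneck capacity, so the excess bandwidth is fully distributed in proportion to the sources' marking rates.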

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets turn into OUT. The re-marked packets are then treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size, and TMS_I then cannot fully alleviate the TCP dynamics over the Assured Service. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve only ownd, which retards the removal of the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but as IN at the source; an OUT-IN packet is the counterpart. We assume that the drop precedence the source decides must not be changed outright, so the strategy TMS_II elaborates has to be deliberate. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasion of re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to claim the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is dropped, the TCP source halves the ownd size. Second, TMS_II turns packets that were IN-marked at the sources into OUT if the congestion window has not been halved and the flow has already accomplished its target rate; the drop precedences then consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of such OUT packets by one step, allowing other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT packets one step.
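The mapping from source and edge markings to the three drop precedences can be sketched as follows (Python; an illustrative helper, not thesis code):

```python
def drop_precedence(source_mark, edge_mark):
    """Map (source mark, edge mark) to the drop precedences of this
    section: IN/IN -> DP0 (green), re-marked IN-OUT or OUT-IN -> DP1
    (yellow), OUT/OUT -> DP2 (red). Higher DP is dropped first."""
    if source_mark == edge_mark:
        return 0 if source_mark == "IN" else 2
    return 1   # re-marked in either direction gets the middle precedence

print(drop_precedence("IN", "OUT"))   # 1: IN-OUT keeps a middle precedence
print(drop_precedence("OUT", "OUT"))  # 2: plain OUT is dropped first
```

Keeping re-marked packets at DP1 is what prevents the source from misreading an edge re-marking decision as the opposite kind of congestion.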

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows will not change theirs and will take the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it then becomes impossible for all target rates to be achieved.

UDP flows in TMS are treated as out-of-profile and are thereby placed at a disadvantage, while TCP flows adapt to network dynamics through marking rates adjusted by the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2), but lowers the drop precedence of TCP packets by the scaling factors relative to them. TMS therefore tries to guarantee the target rates of the TCP flows by allocating to them the excess bandwidth that the UDP flows would otherwise dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms we have described in the previous chapter. The simulation was done with the network simulator ns-2 (version 2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or is kept below the bottleneck capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280ms. The sources are all TCP Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
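With the parameters of Table 4, the per-precedence RED drop probability (the classic linear ramp between min_th and max_th) can be sketched as follows (Python; an illustrative helper, not the ns-2 implementation):

```python
# Drop probability for each precedence of Table 4 as a function of the
# average queue length (packets).

RED_PARAMS = {               # DP: (min_th, max_th, Pmax)
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50, 400, 0.1),
}

def drop_probability(avg_queue, dp):
    min_th, max_th, pmax = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return pmax * (avg_queue - min_th) / (max_th - min_th)

# At an average queue of 250 packets, OUT packets (DP2) are dropped
# far more aggressively than IN packets (DP0).
for dp in ("DP0", "DP1", "DP2"):
    print(dp, round(drop_probability(250, dp), 4))  # 0.01, 0.025, 0.0571
```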

First of all, we investigate the simulation results for the Adequate-Marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3Mbps; the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10s, followed by two flows, flow6 and flow7, at t = 120s, and the last three flows, flow8 through flow10, at t = 220s; the latter stop at t = 350s. First, the interval from t = 10s to t = 110s represents an over-provisioned network, in which the five flows compete for the excess bandwidth of 5Mbps; marking for the reservation rates is hardly needed, but temporary marking for sharing the excess bandwidth is performed. Second, at runtime from t = 120s to 210s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps each; marking in this interval is practically carried out so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220s to t = 350s: all the flows do their best to complete their target rates but fail, and marking in this period is more excessive than in the others. Last, in the interval from t = 330s to 400s, TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15: two panels, (a) Time Sliding Window Two Color Marker and (b) Two Markers System; vertical axis: bandwidth (Mbps), horizontal axis: simulation time (s), flows 1 through 10.]

Fig 15 Adequate-Marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic features, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate and lags behind the changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is turned off quickly when it is detected that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally we present the simulation results with marking rate-based fairness algorithm described in the subsection 24 We set reservation rate of each flow to 5Mbps and the capacity of bottleneck link to 53Mbps namely under-subscribed network The experiment performed in run time from t =0 (s) to 500 (s) Figure 8 represents the results that the average throughputs of all individual flows of aggregated traffic realise their target rates with the parameters above Although the RTT of aggregate flows is differently set up the average throughput of them in both schemes as shown Table 1 is similar to each other Each flow in Figure 17 (a) often fails to achieve their target rates because it receives bandwidth without adequate marking rates about the changed network load That is to say TSW2CM marks packets based on only the traffic profile without any knowledge about individual flows However in Figure 17 (b) each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to the probability of his marking if the total throughput is less than the capacity of a network As


stated before, since TMS regulates the drop precedence in proportion to the marking probability of the sources, all the flows are able to adapt to network dynamics.

[Figure 16 (a): Time Sliding Window Two Color Marker — total throughput of S1 and S2 and marking rate of S1 (Mbps) versus simulation time (s).]

[Figure 16 (b): Two Markers System — total throughput of S1 and S2 and marking rate of S1 (Mbps) versus simulation time (s).]

Fig 16 Marking packets through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis signifies the simulation time.


[Figure 17 (a): Time Sliding Window Two Color Marker — achieved bandwidth (Mbps) of flows 1-10 within each aggregate flow.]

[Figure 17 (b): Two Markers System — achieved bandwidth (Mbps) of flows 1-10 within each aggregate flow.]

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput, and the horizontal axis signifies the aggregate flow number.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, an implementation of TCP source marking


modules at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of each aggregate flow as RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to destinations 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to destinations 11-12. We carried out two scenarios, an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1 Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For the experiment we elaborated the strategy by increasing the number of TCP flows with a target rate of 1 Mbps from source 1 to 4, while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ = (Σ_{i} Ti / C) × 100 (%)    (9)

where Ti and C represent the target rate of a flow and the capacity of a bottleneck link in the network, respectively. The interval of ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW = (ExBW / T) × 100 (%)    (10)

Therefore, excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM behind the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.
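Equations (9) and (10) can be computed directly; the sketch below assumes the experiment's hypothetical reading of six 1 Mbps targets over a 3 Mbps or 8 Mbps bottleneck:

```python
def link_utilisation(target_rates, capacity):
    """Equation (9): rho = (sum of target rates / bottleneck capacity) * 100 (%)."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(excess_bw, target_rate):
    """Equation (10): excess bandwidth a flow receives, as % of its target rate."""
    return 100.0 * excess_bw / target_rate

# Four 1 Mbps TCP targets plus two 1 Mbps UDP targets:
rho_under = link_utilisation([1.0] * 6, 3.0)   # 200 %: under-provisioned
rho_over = link_utilisation([1.0] * 6, 8.0)    # 75 %: over-provisioned
```

A flow reporting 50 % by equation (10) is thus receiving half of its target rate again on top of the target itself, while 0 % means it achieves the target exactly.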


[Figure 19 (a): Time Sliding Window Two Color Marker — average excess bandwidth (%) of TCP and UDP flows versus link utilisation (%).]

[Figure 19 (b): Two Markers System — average excess bandwidth (%) of TCP and UDP flows versus link utilisation (%).]

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis signifies the link utilisation as a percentage.

However, since UDP sources pour traffic into the network without any penalty, they achieve their target rates under all conditions and consume all the excess bandwidth in


the under-provisioned state. The only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only dealt with OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and gets a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows for link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
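The edge-side threshold test described above, comparing a flow's marking rate against the average marking rate E[m], can be sketched as:

```python
def wants_more_bandwidth(m_i, marking_rates):
    """Edge-side decision sketch: a flow whose marking rate m_i exceeds
    the average marking rate E[m] over all flows is treated as demanding
    more bandwidth to reach its reservation, and is marked more at the
    edge.  The strict-inequality form of the test is an assumption."""
    avg = sum(marking_rates) / len(marking_rates)   # E[m] over all flows
    return m_i > avg
```

A flow below the average is treated as already satisfied, so the edge does not mark it beyond its profile.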

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the extent to which the average throughputs of all individual flows of the aggregated traffic realise their target rates in over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. Also, the average throughput of all the flows in both schemes is similar to each other. First, we take into consideration the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3 Mbps, as TMS elaborates a different marking strategy against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, regardless of congestion. That is, UDP gains unfairly at the expense of TCP


flows. However, Figure 20 (b) shows that the bandwidth in the under-provisioned network is distributed more fairly (about 0.5 Mbps per flow) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20 (a): Time Sliding Window Two Color Marker — bandwidth (Mbps) of TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases.]

[Figure 20 (b): Two Markers System — bandwidth (Mbps) of TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases.]

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved throughput, and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this area, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize a user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greedy behaviour of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-Marking algorithm, properly adjusted the marking probability according to the changes of the network, so it marked packets in proportion to the reservation rate the source wants to guarantee. It also notified TMS_II, called the edge-embedded marking module, of the marking rates that would be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and allocated the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy, when resources in the diffserv network were over-provisioned. Consequently, it mitigated the problem of unfairness in per-aggregation marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly


share the excess bandwidth that UDP had almost dominated, and achieve their target rates. Next, in an under-provisioned network, although all the flows couldn't achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290.

[3] S. Floyd and V. Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, Vol. 3, No. 4, Aug. (1995) 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, Assured Forwarding PHB Group, RFC 2597, June (1999).

[5] K. Nichols, S. Blake, F. Baker and D. Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec. (1998).

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Dec. (1998).

[7] V. Jacobson and M. Karels, Congestion Avoidance and Control, in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329.

[8] I. Yeom and A. L. N. Reddy, Realizing Throughput Guarantees in a Differentiated Service Network, in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999).

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech. Report SSC/1998/020, May (1998).

[11] J. Ibanez and K. Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998).

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221.


[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc. GLOBECOM'00, Vol. 1, San Francisco, Nov. (2000) 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, Study of TCP and UDP Interaction for AF PHB, IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999).

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203.

[16] A. Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc. HPSR'01, Dallas, May (2001) 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999).

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Aug. (1998).

[19] M. Vanhala, Differentiated Services Architecture, S-38.128 Teletekniikan erikoistyö (1999).

[20] V. Jacobson, K. Nichols and K. Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998).

[21] V. Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990).

[22] V. Paxson, End-to-End Internet Packet Dynamics, in Proc. of ACM SIGCOMM, Sept. (1997).

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct. (1996).

[24] S. Floyd and T. Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999).

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proceedings of ACM SIGCOMM, Oct. (1994) 24-35.

[26] S. Floyd, TCP and Explicit Congestion Notification, Computer Communication


Review, Vol. 24, No. 5, Oct. (1994) 10-23.

[27] S. Floyd and V. Jacobson, Random Early Detection Gateways for Congestion Avoidance, IEEE/ACM Transactions on Networking, Vol. 1, No. 4, Aug. (1993) 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, A TCP-friendly Traffic Marker for IP Differentiated Services, in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48.



that are out of profile are forwarded with lower probability. Irrespective of whether packets belonging to the same microflow are in or out of profile, it is important that they are not reordered.

The AF PHB group provides four classes of different levels of forwarding assurances and resources (buffer space and bandwidth) for IP packets received from a customer DS domain. Packets from each of the classes are marked with one of three drop precedence values. The customer or the originating DS domain does the marking in accordance with the customer's subscription. In a DS node, the level of forwarding assurance of an IP packet thus depends on:

(1) how much forwarding resources have been allocated to the AF class that the packet belongs to: packets in one class are forwarded independently from the others and within the service rate (bandwidth) that has been configured for the class;

(2) the current load of the AF class; and

(3) in case of congestion within the class, the drop precedence of the packet: packets with a higher drop precedence value are preferentially discarded.

Table 1 The four AF classes and the related three drop precedence values

                             Class 1   Class 2   Class 3   Class 4
Low Drop Precedence (1)      001010    010010    011010    100010
Medium Drop Precedence (2)   001100    010100    011100    100100
High Drop Precedence (3)     001110    010110    011110    100110

The AF service classes presented in Table 1 are referred to as AFnm, where 'n' marks the number of the class (1-4 currently) and 'm' the drop precedence value (1-3 currently). Implementation of the AF PHB group sets certain requirements for packet queuing in the network's nodes.

One major group of services is the one requiring assured bandwidth with low loss, low latency and low jitter in an end-to-end connection. Such services are, e.g., a point-to-point connection or a virtual leased line carrying time-sensitive data, speech, video or some combination of them. The EF PHB offers this type of service through DS domains.

Loss, latency and jitter are all due to the queues traffic experiences while transiting the


network. By configuring nodes to meet the specifications of EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, the aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed a rate that has been specified for the service the packets belong to. It is also required in the EF PHB specification that a network administrator must be able to configure this rate into diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.
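The EF provisioning condition above reduces to a simple inequality per node. The sketch below assumes, for illustration, that the conditioned EF streams arriving at a node can be summed into the aggregate's maximum arrival rate:

```python
def ef_provisioning_ok(arrival_rates_kbps, departure_rate_kbps):
    """EF PHB requirement at a node, sketched: the EF aggregate's maximum
    arrival rate must stay below the configured minimum departure rate,
    so queues (and hence loss, latency and jitter) stay negligible."""
    return sum(arrival_rates_kbps) < departure_rate_kbps

# Hypothetical check: two boundary-conditioned EF streams of 500 and
# 250 Kbps through a node configured for a 1000 Kbps EF departure rate.
assert ef_provisioning_ok([500, 250], 1000)
assert not ef_provisioning_ok([500, 250], 700)
```

This is exactly the check a boundary traffic conditioner enforces implicitly by policing each EF stream before it enters the domain.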

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are all for unidirectional traffic only, and they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between the interface A and some egress interface B

There are some common things for both the examples. First of all, policers need to be configured at traffic ingress points. Secondly, the used PHBs need to be implemented at core network equipment. Better-than-best-effort (BBE) traffic has a higher priority over the competing best-effort traffic and thus provides reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2 TCA specification for BBE service

Codepoint    Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark    1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritizing of AF13 over AF11 is done, is up to the provider.

This is a quantitative service which emulates the traditional leased line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3 TCA specifications for Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF mark     500 Kbps       Egress point B         Discard
EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. Customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. The EF PHB is used for both of them, with service rates 500 Kbps and 250 Kbps. Excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both the egress points B and C The policers are of MF-type as classification of packets is based on the codepoint value and the destination The routers along the ways from A to B and A to C

24

need to be provisioned to carry up to 750 Kbps of traffic in case both of the leased lines cross the same router

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies that focus on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [11], through the medium of simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses being experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Yeom and Reddy proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Feroz et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. The authors, via simulation work, presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A device's marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to an inefficient utilization of the reserved bandwidth: "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

Two Markers System takes the above problems into consideration

3.1 The framework of Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays


different roles to achieve the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the summation of iwnd and ownd. TMS_I determines the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked whenever congestion in the network occurs, and so TMS_I decreases TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) maintains the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also adequately marks packets to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used to allocate the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to properly mark packets. So to speak, TMS_I can be not only a marker used to mark packets in order to


classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and then the marker re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key point of the fairness strategy is the marking rate (i.e., the information on a flow's demand for bandwidth) of traffic flows from users. TMS_II re-marks (or marks) aggregated flows for the total target rate of all the flows, based on the marking probabilities of the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10 The design of the Two Markers System architecture

3.2 The crux of fairness strategy

Note that in this system the sources' target rates are different from the edge's target rates. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and Cj the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is a


reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of a network in the first three cases of service scope above is C = Σ_{j=1..m} Cj, where j denotes a bottleneck link of the network. Therefore the excess bandwidth at the bottleneck links is given by

E = C - R = Σ_{j=1..m} Cj - Σ_{i=1..n} Ri    (1)

In the edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C - R = C1 - Σ_{i=1..n} Ri    (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation rate term (R): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the reservation rate information received at an ingress point becomes R = Σ_{k=1..l} Σ_{i=1..n} Rik, where k, k = 1, 2, ..., l, represents the kth ingress point. The excess bandwidth is given by

E = C - R = C1 - Σ_{k=1..l} Σ_{i=1..n} Rik    (3)


Here the lth ingress point has to be aware of the reservation rates of as many as l-1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore the Two Markers System can be implemented under variable SLA scopes, but we assume that the scope of a service in this thesis includes the first three cases, excluding the last for the reason above. A flow's target rate at the edge of the network is given by

where Xmi is proportional to the probability of marking at a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate, as in (4). The development of Xmi is as follows. First, the probability of marking for each flow i is the number of marked packets (Nmi) divided by the throughput Thri, or the In-profile window divided by the congestion window, as in (5),

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[mi] of all the flows at the edge of a network is given by (6),

where Markp also represents the probability of marking in the TMS_I algorithm of Section 3.3. Finally, Xmi is developed using the ratio between (5) and (6); that is, Xmi

T_i = R_i + X_mi × E = R_i + X_mi (C - Σ_i R_i)        (4)

m_i = N_mi / Thr_i = iwnd / cwnd        (5)

E[m_i] = (1/n) Σ_{i=1}^{n} m_i = (1/n) Σ_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1}^{n} Mark_p,i        (6)


represents a relative value against the datum point E[mi]. The fraction Xmi varies from 0 to 1 depending upon mi and E[mi]. Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where marking policies differ from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of sources; that is to say, the TCP-source marking module keeps the marking rate at the level of bandwidth the source wants guaranteed.

X_mi = m_i / (n · E[m_i]) = m_i / Σ_{i=1}^{n} m_i        (7)
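The development of m_i, E[m_i], and X_mi in Eqs. (5)-(7) can be sketched numerically; the window values below are hypothetical:

```python
# Numerical sketch of Eqs. (5)-(7); window sizes are hypothetical examples.

def marking_rate(iwnd, cwnd):
    """Eq. (5): fraction of the congestion window sent as IN-marked packets."""
    return iwnd / cwnd

def fairness_components(rates):
    """Eq. (7): X_mi = m_i / (n * E[m_i]) = m_i / sum_i m_i."""
    total = sum(rates)                      # n * E[m_i], by Eq. (6)
    return [m / total for m in rates]

rates = [marking_rate(8, 16), marking_rate(4, 16), marking_rate(2, 8)]
x = fairness_components(rates)
print(x)        # [0.5, 0.25, 0.25]
print(sum(x))   # 1.0 -- the X_mi partition the excess bandwidth E
```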


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking (Markp(init))? It is generally determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as given in (8).

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri)
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)
                iwnd -= 1 - Markp
            else
                iwnd -= Markp
        else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd


where Markp(init) represents the initial marking rate that the user wants in order to realise its target rate:

Mark_p(init) = 1 - Thr / R_i        (8)

The window size increases or decreases according to the measured throughput relative to the reservation rate. That is, TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput doesn't reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed; otherwise it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd, so an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of Program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also addresses a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network is congested. If the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, for instance, TMS_I decreases the iwnd size according to its threshold in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, the TCP-source marking module (TMS_I) maintains the marking rate more appropriately than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker is fairly effective in keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source doesn't achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.
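For illustration, the ACK-driven update of Program code 1 can be paraphrased as a runnable sketch; the numeric example is invented, and this mirrors the pseudocode rather than a validated TCP implementation:

```python
def tms1_on_ack(cwnd, mark_p, thr, r_i, issthresh, ossthresh):
    """One ACK-driven update of TMS_I's split windows (after Program code 1)."""
    iwnd = mark_p * cwnd               # In-profile share of the window
    ownd = cwnd - iwnd                 # Out-of-profile share
    if thr < r_i:                      # reservation not met: grow both windows
        iwnd += mark_p if iwnd < issthresh else 1.0 / (2.0 * cwnd)
        ownd += (1.0 - mark_p) if ownd < ossthresh else 1.0 / (2.0 * cwnd)
    else:                              # reservation met: hand the IN share back
        if iwnd > 0:
            iwnd -= (1.0 - mark_p) if iwnd < issthresh else mark_p
        else:
            ownd = cwnd
        ownd += 1.0 if ownd < ossthresh else 1.0 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd           # new cwnd and new Markp

# Below reservation: both windows grow, marking probability is preserved
print(tms1_on_ack(10.0, 0.5, 2.0, 3.0, 8.0, 8.0))   # (11.0, 0.5)
```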

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd/2
        ownd = ossthresh
    else
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the ith flow
Ti: target rate of the ith flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half of its size whenever an OUT packet loss occurs; this indicates temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion in the network, and then the congestion window is reduced to half of its size. Therefore the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which lacks a distinctive mechanism for reducing the congestion window.
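The halving behaviour of Program code 2 can be sketched in the same style; note that resetting iwnd and ownd to their thresholds in the acute branch is our assumption, since the pseudocode leaves it implicit:

```python
def tms1_on_3dupack(cwnd, mark_p, in_marked_loss):
    """Window reduction on the third duplicate ACK (after Program code 2)."""
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if not in_marked_loss:             # OUT loss: temporary congestion
        ossthresh = ownd / 2.0
        ownd = ossthresh               # only the Out-of-profile window shrinks
    else:                              # IN loss: acute congestion, halve cwnd
        cwnd = cwnd / 2.0
        issthresh = mark_p * cwnd
        ossthresh = cwnd - issthresh
        iwnd = issthresh               # assumed reset to the new thresholds
        ownd = ossthresh
    return iwnd + ownd                 # the new congestion window

print(tms1_on_3dupack(10.0, 0.5, in_marked_loss=False))  # 7.5: only ownd halved
print(tms1_on_3dupack(10.0, 0.5, in_marked_loss=True))   # 5.0: cwnd halved
```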

The goals of TMS_I are to provide a modified congestion control that alleviates TCP dynamics over AF services and to inform TMS_II of the probability of marking (MarkP) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall the numerical expression (7). We deal with the usage of the datum point in the edge-embedded marking module in the next subsection. Consequently, this strategy of providing a datum point that carries the user's information is able to alleviate the unfairness introduced within aggregated flows by per-aggregation marking.

3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of sources in order to share the bottleneck bandwidth fairly. To accomplish this fairness strategy, called the marking rate-based fairness strategy, it must cooperate closely with the TCP-source marking module. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates that are used in the fairness component, as stated earlier. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants guaranteed from the sources.

We assume that an edge router receives the information of the reservation rate that each user wants assured and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of a network. We also consider two network conditions: the over-provisioned network and the under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall the numerical expression (4). As stated earlier, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking from the sources if the condition of the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source exceeds the threshold value, that is the average marking rate E[mi], it means that the flow wants more bandwidth than others in order to achieve its reservation rate; therefore it is marked more as IN and has a higher target rate than others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion but not acute, and (iii) serious congestion.

First, if the ownd size at time N isn't equal to half of the ownd size at time N-1, this represents that congestion has not occurred in the network. The if clause in the first through seventh lines of Program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding on congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput doesn't reach the target rate, the marker increases the marking rate by as much as α, called the fairness factor. In this case the fairness factor α is computed as follows:


α = 1 - Thr_i / T_i. Otherwise it decreases the marking rate by as much as α to provide other flows with the chance of sharing the excess bandwidth.

Fig 13 Program code 3 TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)
    for (i = 1 to n)
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)
    exit
else    /* iwnd(N) == iwnd(N-1)/2 */
    for (i = 1 to n)
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi: marking rate of the ith flow
Ti: target rate of the ith flow
Ri: reservation rate of the ith flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): measured OUT window at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): measured IN window at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N isn't equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else-if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (the marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, this means IN-marked packets are being dropped in the core of the network and congestion is acute. The marking mechanism for this congestion situation appears in the else clause from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to consummate its target rate owing to the repeated reduction of sending rates through the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by as much as β, the so-called complement factor, if the throughput doesn't realise the reservation rate of the user. In this instance the complement factor β is

given by β = 1 - Thr_i / R_i. Otherwise the re-marking operation is not performed.

In short, sources inform the edge-embedded marking module of an adequate marking rate; it then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
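A compact sketch of the TMS_II adjustment follows, using our reading of the factors, α = 1 - Thr_i/T_i and β = 1 - Thr_i/R_i; the congestion classification is passed in as a label, and all rates are invented for the example:

```python
# Sketch of the TMS_II marking-rate adjustment (Program code 3).

def tms2_adjust(m_i, thr_i, t_i, r_i, congestion):
    """Return flow i's new marking rate for a given congestion label."""
    if congestion == "none":                 # window was not halved
        alpha = 1.0 - thr_i / t_i            # fairness factor
        if thr_i < t_i:
            return m_i + alpha * m_i         # under target: mark more
        return m_i - abs(alpha) * m_i        # over target: release share
    if congestion == "acute" and thr_i < r_i:
        beta = 1.0 - thr_i / r_i             # complement factor
        return m_i + beta * m_i
    return m_i                               # soft congestion: no re-marking

# A flow at 2 Mbps against a 3 Mbps target is scaled up by alpha = 1/3
print(tms2_adjust(0.6, 2.0, 3.0, 3.0, "none"))   # roughly 0.8
```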

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-born packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If the re-born OUT-marked packets (IN-OUT), which were IN at the sources, are dropped at the core during temporary congestion, for example, TCP sources, mistaking this for acute congestion, will halve the iwnd size; in that case TMS_I does not alleviate the TCP dynamics over the assured service well. On the other hand, if the re-born IN-marked packets (OUT-IN), sent as OUT by the sources, are removed due to grave congestion, TCP sources, mistaking this for temporary congestion, will halve ownd, which retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked OUT at the edge of the network but IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides must not change, and that the strategy TMS_II elaborates has to be deliberated. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colours these correspond to red, yellow, and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window is not reduced to half of its size and the throughput measured at the edge of the network doesn't complete the target rate, TMS_II re-marks packets marked OUT at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves its ownd size. Next, TMS_II turns packets marked IN at the sources into OUT if the congestion window is not reduced to half of its size and the flow has already accomplished its target rate; the drop precedences then consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of such packets by one step and so allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window is equal to half of its size and the flow doesn't achieve its target rate yet, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS_II tries to compensate for the loss rate occurring at the time of congestion by lowering the drop precedence of OUT packets by one step.
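The drop-precedence bookkeeping described above can be sketched as follows; the string encoding of the marks is purely illustrative:

```python
# Sketch of the three TMS drop precedences (Section 3.5).
# Lower number = more protected (dropped later).
DROP_PRECEDENCE = {"IN": 0, "IN-OUT": 1, "OUT-IN": 1, "OUT": 2}

def remark(source_mark, edge_mark):
    """Combine the source's mark with the edge's re-marking decision."""
    if source_mark == edge_mark:
        return source_mark                   # DP0 (IN) or DP2 (OUT)
    return f"{source_mark}-{edge_mark}"      # DP1: IN-OUT or OUT-IN

def drop_order(packets):
    """RIO drops highest precedence first: OUT, then the DP1 marks, then IN."""
    return sorted(packets, key=lambda p: DROP_PRECEDENCE[p], reverse=True)

print(remark("OUT", "IN"))                   # 'OUT-IN'
print(drop_order(["IN", "OUT-IN", "OUT"]))   # ['OUT', 'OUT-IN', 'IN']
```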

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows over the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows won't change and will get the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it is impossible for all target rates to be completed.

UDP flows in the TMS are dealt with as out-of-profile, to be taken at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the view of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2) but lowers the drop precedence of TCP packets by the scaling factor relative to them. Therefore it tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows had almost dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous chapter. The simulation was done with the network simulator-2 (NS-2.1b8a). For the simulation we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps (or fixed below it), so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM the routers use RED with values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in,


and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters for the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
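For reference, a minimal sketch of how the Table 4 settings shape the RED drop-probability curve (a standard linear RED profile; queue averaging and packet handling are omitted):

```python
# Illustrative RED drop-probability curve using the Table 4 settings
# (min_th, max_th in packets; Pmax).

RED_PARAMS = {          # per drop precedence, from Table 4
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50, 400, 0.10),
}

def drop_probability(avg_qlen, dp):
    """Linear RED profile: 0 below min_th, Pmax at max_th, 1 above."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return p_max * (avg_qlen - min_th) / (max_th - min_th)

# At the same average queue length, DP2 is dropped most aggressively
for dp in ("DP0", "DP1", "DP2"):
    print(dp, round(drop_probability(225, dp), 4))
```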

First of all we investigate the simulation results of the Adequate-marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, from S1-D11 to S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s, which stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps; marking initially operates only as temporary marking for sharing the excess bandwidth. Second, at runtime from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the bandwidth of 3 Mbps; marking in this interval is practically carried out so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220 s to t = 350 s: all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the other intervals. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but then properly marks them again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a); it shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is fewer than in the case of Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure: marking rate (Mbps) of flows 1-10 versus simulation time (s), two panels]

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig 15 Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM doesn't adequately regulate the marking rate behind the changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 doesn't fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to an increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet-marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results of the marking rate-based fairness algorithm described in subsection 3.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, namely an under-subscribed network. The experiment ran from t = 0 s to 500 s. Figure 17 presents the results, showing that the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without adequate marking rates for the changed network load. That is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. However, in Figure 17 (b) each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As


stated before, since TMS regulates the drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Figure: throughput and marking rate (Mbps) of S1 and S2 versus simulation time (s), two panels]

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig 16 Marking packets through the scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.


[Figure: achieved bandwidth (Mbps) of flows 1-10 for each aggregate flow, two panels]

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flow number.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, the implementation of TCP-source marking modules at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of aggregate flows as RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous sections. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to sinks 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to sinks 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1 Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of Section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For the experiment we elaborated the strategy with increasing TCP flows with a target rate of 1 Mbps from sources 1 to 4, while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σ Ti / C) × 100        (9)

where Ti and C represent the target rate of a flow and the capacity of a bottleneck link in the network, respectively. The interval from ρ = 100 % to 200 % shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100        (10)

Therefore an excess bandwidth of 0 % denotes the target rate of each flow. First, as shown in Figure 19 (a), we deliberate on TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.
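A quick worked check of Eqs. (9) and (10) with this scenario's numbers (six 1 Mbps targets and an 8 Mbps bottleneck):

```python
def link_utilisation(targets, capacity):
    """Eq. (9): rho(%) = 100 * sum(T_i) / C."""
    return 100.0 * sum(targets) / capacity

def excess_bw_percent(throughput, target):
    """Eq. (10): achieved bandwidth above the target, as % of the target."""
    return 100.0 * (throughput - target) / target

print(link_utilisation([1.0] * 6, 8.0))   # 75.0
print(excess_bw_percent(1.5, 1.0))        # 50.0
```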


[Figure: average excess bandwidth (%) of TCP and UDP flows versus link utilisation (%), two panels]

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation as a percentage.

However, since the UDP sources flow traffic into the network without any penalties, they complete their target rates in all conditions and consume all the excess bandwidth in


the under-provisioned state. The degradation the UDP flows experience is only a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only dealt with OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than others in order to achieve its reservation rate; therefore the flow is marked more and has a higher target rate than others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is percentage-wise almost equally divided between TCP and UDP flows at link utilisation ρ = 60 % and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results concerning the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is the TSW2CM, the other is the marking strategy of TMS. Figure 20 shows to what extent the average throughputs of all individual flows of the aggregated traffic realise their target rates in over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. The average throughput of all the flows in both schemes is also similar to each other. First, we consider the results of the experiment in an over-provisioned network. In Figure 20(a), while TCP flows only just accomplish their target rates of 1 Mbps, UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, TCP flows in Figure 20(b) fairly share the excess bandwidth of about 0.3 Mbps, owing to the different marking strategy applied to UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20(a) often fails to achieve its target rate in the under-provisioned network, because UDP flows are transmitted constantly, irrespective of congestion. That is, UDP gains unfairly at the expense of TCP flows. However, Figure 20(b) shows that bandwidth in the under-provisioned network is distributed more fairly (about 0.5 Mbps per flow) than in Figure 20(a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the sensitive TCP flows, as shown in Figure 20(b).

[Figure 20 contains two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, plotting the bandwidth (Mbps, 0 to 2) of flows TCP1-TCP4 and UDP1-UDP2 under over-provisioned and under-provisioned conditions against aggregate flows 1-4.]

Fig. 20. Throughput according to network conditions: the vertical axis represents the target rate, and the horizontal axis signifies aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under variable SLA scopes. In this context two important issues have arisen, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize the user's target rate, owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greedy consumption of UDP flows.

The strongest point of the Two Markers System proposed in this paper is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the probability of marking according to the changes of the network, so it marked packets in proportion to the reservation rates the sources desire to guarantee. It also notified TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocated the bandwidth of the bottleneck link to the flows in view of proportional fairness, the so-called marking rate-based fairness strategy. Consequently, it mitigated the problem of unfairness in per-aggregation marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP had almost dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows couldn't achieve their target rates, they fairly experienced degradation in proportion to their target bandwidth.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows according to the variable parameters of RIO.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, S. Saha, and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy, and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, "Study of TCP and UDP Flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.


network. By configuring nodes to meet the specifications of the EF PHB, traffic aggregates see no (or very small) queues and therefore have a well-defined minimum departure rate; in other words, aggregates are independent of the intensity of other traffic at the node. Queues arise when the (short-term) traffic arrival rate exceeds the departure rate at some node. It is the network boundary traffic conditioners' task to bound the rates of traffic aggregates such that, at every transit node, each aggregate's maximum arrival rate is less than that aggregate's minimum departure rate.

The departure rate of an aggregate's packets from any diffserv node must equal or exceed a rate that has been specified for the service the packets belong to. It is also required in the EF PHB specification that a network administrator must be able to configure this rate into the diffserv nodes. Codepoint 101110 has been recommended for the EF PHB.

In the following, two example services are presented, one of which is built on the AF PHB group and the other on the EF PHB. It must be borne in mind that, in general, diffserv services are for unidirectional traffic only, and that they are for traffic aggregates, not individual microflows. Another important aspect is the scope of a service, which refers to the topological extent over which the service is offered. For example, a provider offers a service to one of its customers, and the traffic from the customer enters the provider's network at ingress interface A. The service may then be applied to all traffic in one of the following ways:

a) from ingress interface A to any egress interface
b) from ingress interface A to a set of egress interfaces
c) between interface A and some egress interface B

There are some things common to both examples. First of all, policers need to be configured at the traffic ingress points. Secondly, the PHBs used need to be implemented at the core network equipment. Better-than-best-effort (BBE) traffic has a higher priority than the competing best-effort traffic and thus sees reduced latency. Quantitative performance measures can't be given for BBE, as it is clearly a qualitative service and depends on how resources are provisioned. The scope of the service applies from a given ingress interface to any egress interface.


Table 2. TCA specification for BBE service

  Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
  AF13 mark   1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used for implementing both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how AF13 is prioritized over AF11, is up to the provider.
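The profile check of Table 2 can be sketched with a simple token-bucket meter (an illustrative sketch, not an implementation from this thesis; the 1 Mbps rate comes from Table 2, while the bucket depth and the class/method names are assumed):

```python
# Sketch of the BBE marker of Table 2: traffic within the 1 Mbps profile
# keeps the AF13 mark; excess traffic is re-marked to AF11.
RATE_BPS = 1_000_000      # service rate from Table 2 (1 Mbps)
BUCKET_BYTES = 10_000     # assumed bucket depth

class BBEMarker:
    def __init__(self, rate_bps=RATE_BPS, bucket=BUCKET_BYTES):
        self.rate = rate_bps / 8.0   # token rate in bytes per second
        self.bucket = bucket
        self.tokens = bucket         # bucket starts full
        self.last = 0.0

    def mark(self, now, size):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.bucket, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:      # in-profile: keep the AF13 mark
            self.tokens -= size
            return "AF13"
        return "AF11"                # excess: re-marked

m = BBEMarker()
marks = [m.mark(now=0.0, size=1500) for _ in range(10)]
```

With a burst of ten 1500-byte packets, the first six fit into the assumed bucket and keep AF13; the remaining four are re-marked AF11.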

The second example is a quantitative service which emulates a traditional leased line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3. TCA specifications for Leased Line service

  Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
  EF mark     500 Kbps       Egress point B         Discard
  EF mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. The customer attachment points are represented as A, B and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. The EF PHB is used for both of them, with service rates of 500 Kbps and 250 Kbps; excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both the egress points B and C. The policers are of MF type, as the classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both of the leased lines cross the same router.
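The MF policers of Table 3 can be sketched as follows (illustrative only; the per-egress rates come from Table 3, while the classifier keys, function names, and measurement interval are assumptions):

```python
# Sketch of the ingress-A policers of Table 3: packets are classified on
# (codepoint, destination); traffic above the contracted rate is discarded.
POLICERS = {
    ("EF", "B"): 500_000,   # bits per second toward egress point B
    ("EF", "C"): 250_000,   # bits per second toward egress point C
}

def police(packets, interval_s=1.0):
    """Pass or drop packets observed over one measurement interval."""
    # Byte budget per (codepoint, destination) class for the interval.
    budgets = {k: rate * interval_s / 8 for k, rate in POLICERS.items()}
    passed, dropped = [], []
    for codepoint, dest, size in packets:
        key = (codepoint, dest)
        if key in budgets and budgets[key] >= size:
            budgets[key] -= size
            passed.append((codepoint, dest, size))
        else:
            dropped.append((codepoint, dest, size))   # excess: discard
    return passed, dropped

# 100 EF packets of 1000 bytes toward B: only 62 fit into 500 kbit/s over 1 s.
pkts = [("EF", "B", 1000)] * 100
ok, drop = police(pkts)
```

The same classifier keyed on destination is what makes these MF (multi-field) rather than BA (behavior-aggregate) policers.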

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies that focus on a RED-based differentiated services approach.

In [1] and [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [11], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service couldn't provide consistent rate guarantees.

In [8], Yeom and Reddy proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed proportionally to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Feroz et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation work, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A module of a device can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on the aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults, as follows. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to an inefficient utilization of the reserved bandwidth; in this case, "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among the aggregated flows, because the marking module marks the aggregated flows without any information on the individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.
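The per-flow inefficiency can be seen in a small numeric sketch (hypothetical rates, not taken from this thesis's simulations): with a 6 Mbps aggregate target split evenly among three flows, an idle flow strands its share under per-flow marking, while per-aggregation marking uses the whole profile:

```python
# Per-flow marking: each flow gets an equal fraction of the aggregate target,
# so an idle flow's share is wasted. Per-aggregation marking: the profile is
# applied to the aggregate as a whole.
offered = {"flow1": 3.0, "flow2": 3.0, "flow3": 0.0}  # offered rates (Mbps); flow3 idle
aggregate_target = 6.0                                # Mbps

share = aggregate_target / len(offered)               # 2.0 Mbps per flow
in_per_flow = {f: min(rate, share) for f, rate in offered.items()}
in_per_aggregation = min(sum(offered.values()), aggregate_target)

print(sum(in_per_flow.values()))   # 2 Mbps of the profile stranded by flow3
print(in_per_aggregation)          # full 6 Mbps profile used
```

Only 4 of the 6 Mbps are IN-marked under per-flow marking, which is exactly the "idle flows waste their shares" fault described above.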

The Two Markers System takes the above problems into consideration.

3.1 The framework of Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, which are located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays


different roles to achieve the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9. The "Two Markers System" framework.

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the summation of iwnd and ownd. TMS_I determines the window (iwnd or ownd) to reduce, and reduces its size, according to whether the dropped packets are IN-marked or OUT-marked whenever congestion occurs in the network, and so TMS_I decreases TCP impacts in the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) keeps up the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also adequately marks packets to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in TMS_II of the traffic conditioner, which helps the edge-embedded marking module to properly mark packets. So to speak, TMS_I can be not only a marker used to mark packets in order to


classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and then the marker re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key point of the fairness strategy is the marking rate (i.e., the information on a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows toward the total target rate of all the flows, based on the marking probabilities from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig. 10. The design of the Two Markers System architecture.

3.2 The crux of fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, …, n, and Cj represent the capacity of a bottleneck link j, j = 1, 2, …, m, in the network. The target rate of each source is its reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = Σ_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. Therefore, the excess bandwidth at the bottleneck links is given by

    E = C − R = Σ_{j=1}^{m} C_j − Σ_{i=1}^{n} R_i        (1)

In the edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

    E = C − R = C_1 − Σ_{i=1}^{n} R_i        (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus, the reservation-rate information received at an ingress point becomes R = Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik, where k, k = 1, 2, …, l, represents the kth ingress point. The excess bandwidth is given by

    E = C − R = C_1 − Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik        (3)


where the lth ingress point has to be aware of the reservation rates of as many as l−1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. The Two Markers System can therefore be implemented under variable SLA scopes, but for the above reason we assume that the scope of a service in this paper covers the first three cases, excluding the last. A flow's target rate at the edge of the network is given by

    T_i = R_i + X_mi × E = R_i + X_mi (C − Σ_{i=1}^{n} R_i)        (4)

where X_mi is proportional to the probability of marking at a source; that is, it is determined by the fraction between the marking rate of each flow and the average marking rate. The development of X_mi is as follows. First, the probability of marking for each flow i is the number of marked packets (N_mi) divided by the throughput (Thr_i), or the In-profile window divided by the congestion window:

    m_i = N_mi / Thr_i = iwnd / cwnd        (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] over all the flows at the edge of the network is

    E[m_i] = (1/n) Σ_{i=1}^{n} m_i = (1/n) Σ_{i=1}^{n} (iwnd/cwnd)_i = (1/n) Σ_{i=1}^{n} Mark_p,i        (6)

where Mark_p also denotes the probability of marking in the TMS_I algorithm of Section 3.3. Finally, X_mi is developed by using the fraction between (5) and (6); that is, X_mi represents a relative value against the datum point E[m_i], and varies from 0 to 1 depending upon m_i and E[m_i]:

    X_mi = m_i / (n E[m_i]) = m_i / Σ_{i=1}^{n} m_i        (7)

Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, which is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker; the Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate up to the bandwidth the source wants to guarantee.


Fig. 11. Program code 1: TMS_I algorithm (cwnd open).

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking, Mark_p(init)? It is generally determined by the fraction between the measured throughput (Thr) and the target throughput (T_i = R_i), as given in expression (8) below.

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd

    if (Thr < Ri)
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)
                iwnd -= 1 - Markp
            else
                iwnd -= Markp
        else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd

    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


    Mark_p(init) = 1 − R_i / Thr        (8)

where Mark_p(init) represents the initial marking rate with which the user wants to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I proportionally marks packets according to whether the throughput (Thr) measured at the TCP source reaches its target rate or not. If the measured throughput doesn't reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed; otherwise, it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e., iwnd and ownd, they act like the one window of the traditional congestion algorithm: the summation of iwnd and ownd is cwnd. Therefore, an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to grope for more bandwidth becoming available on the path. Note that the summation of the increasing rates of the two window sizes while digging out resources is 1/cwnd.

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers the second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network is congested. If the throughput measured at a source, for instance, exceeds its reservation rate and iwnd exists, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) keeps up the marking rate better than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source doesn't achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.
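As a cross-check of program code 1, here is a runnable Python transcription (a sketch: it assumes that juxtaposition in the pseudocode denotes multiplication and that "12cwnd" and "1cwnd" denote 1/(2·cwnd) and 1/cwnd; the function and variable names are illustrative):

```python
def on_ack(cwnd, mark_p, thr, r_i, issthresh, ossthresh):
    """One per-ACK update of the two-window congestion control of TMS_I."""
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if thr < r_i:                     # below reservation: grow the IN window
        iwnd += mark_p if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - mark_p) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                             # at/above reservation: release IN share
        if iwnd > 0:
            iwnd -= (1 - mark_p) if iwnd < issthresh else mark_p
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd          # new cwnd and new marking probability

# A flow below its reservation opens cwnd while keeping its marking probability:
cwnd, mark_p = on_ack(cwnd=10.0, mark_p=0.5, thr=0.8, r_i=1.0,
                      issthresh=20.0, ossthresh=20.0)
```

With both windows under their thresholds, iwnd and ownd each grow by mark_p and 1 − mark_p, so cwnd grows by one packet per ACK, as in slow start.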

Fig. 12. Program code 2: TMS_I algorithm (cwnd close).

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd

    if (OUT-marked packet loss)
        ossthresh = ownd / 2
        ownd = ossthresh
    else
        cwnd = cwnd / 2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

Variables:
    iwnd       In-profile window
    ownd       Out-of-profile window
    cwnd       congestion window
    Markp      the probability of marking
    Thr        throughput of a flow
    Ri         reservation rate of the ith flow
    Ti         target rate of the ith flow
    issthresh  threshold of iwnd
    ossthresh  threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion of the network. On the other hand, the loss of an IN-marked packet indicates acute congestion in the network, and then the congestion window is halved. The modified TCP congestion algorithm therefore compensates for the loss rates caused by the conventional congestion algorithm, which lacks a distinctive mechanism for reducing the congestion window.

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) that is needed to determine the new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall numerical expression (7). We deal with the usage of this datum point in the edge-embedded marking module in subsection 3.4. Consequently, the strategy of providing a datum point that includes the user's information is able to alleviate the problem of unfairness within aggregated flows in per-aggregation marking.

3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to fairly share the bottleneck bandwidth. In this case it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II should measure the probability of marking and compute the expected value of the marking rates, which are used in the fairness component as stated in subsection 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represent the target rates each flow wants to guarantee, from the sources.
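The computation this meter performs can be sketched from expressions (4)-(7) (a numeric illustration with hypothetical rates, not the thesis's simulation values):

```python
# Sketch of the marking rate-based fairness computation of TMS_II:
# each flow's share of the excess bandwidth is proportional to its
# marking rate m_i relative to the average E[m_i] (expressions (4)-(7)).
C = 8.0                       # bottleneck capacity (Mbps), over-provisioned
R = [1.0, 1.0, 1.0, 1.0]      # reservation rates R_i (Mbps)
m = [0.8, 0.6, 0.4, 0.2]      # marking rates m_i = iwnd/cwnd reported by TMS_I

E = C - sum(R)                              # excess bandwidth, expression (1)
n = len(m)
E_m = sum(m) / n                            # average marking rate, expression (6)
X = [mi / (n * E_m) for mi in m]            # fairness components, expression (7)
T = [Ri + Xi * E for Ri, Xi in zip(R, X)]   # edge target rates, expression (4)

print(T)  # shares of the 4 Mbps excess proportional to each m_i
```

The fairness components sum to 1, so the edge target rates sum exactly to the bottleneck capacity: flows that mark more (higher demand) receive a proportionally larger slice of the excess.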

We assume that an edge router receives information on the reservation rate that each user wants to be assured of, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall the numerical expression (4). As stated in subsection 2.2, if the network is over-provisioned, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents an increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, that is, the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. It is therefore marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.
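Under these assumptions the edge's target-rate computation of expression (4) reduces to a one-liner; the names below are ours, not the thesis notation:

```python
# Ti = Ri + Xmi * E when the network is over-provisioned (the excess
# bandwidth E is shared in proportion to the datum point Xmi); Ti = Ri
# otherwise.
def target_rate(r_i: float, x_mi: float, excess_bw: float,
                over_provisioned: bool = True) -> float:
    return r_i + x_mi * excess_bw if over_provisioned else r_i
```

With a 3Mbps reservation, a datum point of 0.2 and 5Mbps of excess bandwidth, the edge target rate becomes 4Mbps in the over-provisioned case.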

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other is to ensure the coherence of the Adequate-marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion that is not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case and shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After the decision on congestion, it measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor. In this case the fairness factor α is computed as follows:

α = 1 − Thr_i / Ti. Otherwise the marker decreases the marking rate by α, to provide other flows with the chance of sharing the excess bandwidth.

Fig 13 Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)              /* no congestion */
    for i = 1 to n
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)         /* congestion, but not acute */
    exit
else                                     /* iwnd(N) == iwnd(N-1)/2: acute congestion */
    for i = 1 to n
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi        : marking rate of the i-th flow
Ti        : target rate of the i-th flow
Ri        : reservation rate of the i-th flow
E         : excess bandwidth
ownd(N)   : newly measured OUT window
ownd(N-1) : OUT window measured one interval earlier
iwnd(N)   : newly measured IN window
iwnd(N-1) : IN window measured one interval earlier


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion happens in the network but is not serious. The else-if clause in the eighth and ninth lines represents the instance in which the edge's marking rate is equal to the marking probability of the sources (or marking information), so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute. The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to consummate its target rate owing to the repeated reduction of sending rates through the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, a so-called complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor β is

computed as β = 1 − Thr_i / Ri. Otherwise no re-marking operation is performed.
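Assuming the fairness factor is α = 1 − Thr_i/Ti and the complement factor is β = 1 − Thr_i/Ri (our reading of the formulas), the marking-rate update of program code 3 can be sketched as follows; the absolute value in the decrease branch is our assumption, and the function name is hypothetical:

```python
# Sketch of the TMS_II marking-rate update.  The scaling factor is
# |1 - Thr_i / rate_i|, where rate_i is the target rate Ti in the fairness
# branch or the reservation rate Ri in the complement branch.

def updated_marking_rate(m_i: float, thr_i: float, rate_i: float) -> float:
    factor = abs(1.0 - thr_i / rate_i)
    if thr_i < rate_i:
        return m_i * (1.0 + factor)   # below target: mark more (Mi += factor*Mi)
    return m_i * (1.0 - factor)       # at or above target: mark less
```

With m_i = 2 and a flow at half its target (thr_i = 1, rate_i = 2), the factor is 0.5 and the marking rate rises to 3.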

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then fairly allocates bandwidth to flows based on the marking rate of each source. We can therefore say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets into OUT. These re-marked packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size; TMS_I therefore does not alleviate the TCP dynamics over the assured service well. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed owing to grave congestion, the TCP sources, mistaking it for temporary congestion, will only halve the ownd, which retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked OUT at the edge of the network but IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides must not change, so the strategy that TMS_II elaborates has to be deliberate. The order of dropping packets is: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow and green respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window is not reduced to half of its size and the throughput measured at the edge of the network does not complete the target rate, TMS_II re-marks packets that were OUT at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is removed, the TCP source halves its ownd size. Next, TMS_II turns packets that were IN at the sources into OUT if the congestion window is not reduced to half of its size and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT and OUT. TMS_II lowers the drop precedence of OUT by one step and so allows other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window is equal to half of its size and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN and OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. By lowering the drop precedence of OUT by one step, it tries to compensate for the loss rate occurring at the time of congestion.
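The three drop precedences can be summarised in a small mapping; this sketch assumes a simple string encoding of the marks and is only illustrative:

```python
# DP0 = IN (green), DP2 = OUT (red); any re-marked packet (IN->OUT or
# OUT->IN) gets the middle precedence DP1 (yellow).  RIO drops DP2 first,
# then DP1, then DP0.

def drop_precedence(source_mark: str, edge_mark: str) -> int:
    """Map (mark at source, mark at edge) to a drop precedence 0/1/2."""
    if source_mark == edge_mark:
        return 0 if source_mark == "IN" else 2   # marking unchanged
    return 1                                     # re-marked either way
```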

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows over the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP flows are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not change and will capture the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. The TCP flows will also be penalised more than the non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be completed.

UDP flows in the TMS are dealt with as out-of-profile, to be placed at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the view of the core of the network, the RIO queue always preferentially drops UDP flows marked OUT (red, DP2) but lowers the drop precedence of TCP packets, relative to them, by the scaling factor. It therefore tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows had almost dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms we have described in the previous section. The simulation was done with the network simulator 2 (ns-2.1b8a). For the sake of simulation we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM the routers use RED with the values of 100 packets, 400 packets and 0.02 for min_in, max_in and Pmax_in, and 50 packets, 400 packets and 0.1 for min_out, max_out and Pmax_out. The RED parameters for the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
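Under the RIO discipline each drop precedence applies RED's linear drop-probability ramp with its own thresholds; a sketch using the Table 4 values (queue lengths in packets) might look like this:

```python
# Linear RED ramp: no drops below min_th, probability rising to Pmax at
# max_th, forced drop beyond max_th.
def red_drop_prob(avg_q: float, min_th: float, max_th: float, p_max: float) -> float:
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return p_max * (avg_q - min_th) / (max_th - min_th)

# Table 4 parameters per drop precedence: (min_th, max_th, Pmax)
RIO_PARAMS = {"DP0": (100, 400, 0.02), "DP1": (100, 400, 0.05), "DP2": (50, 400, 0.1)}
```

At an average queue of 250 packets, for instance, DP2 packets see a strictly higher drop probability than DP1, and DP1 higher than DP0.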

First of all we investigate the simulation results of the Adequate-marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, from S1-D11 to S10-D20, each with a target rate of 3Mbps; the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s), which sink at t = 350 (s). First, the interval from t = 10 (s) to t = 110 (s) represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5Mbps; marking is initially performed, but only temporarily, for sharing the excess bandwidth. Second, in the run time from t = 120 (s) to 210 (s), with soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps; marking in this interval is practically carried out so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220 (s) to t = 350 (s); all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to congestion after flow8 through flow10 sink, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which exhibits somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is fewer than in the case of Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15: marking-rate curves of flow1 through flow10 over 0-400 s of simulation time; vertical axis: bandwidth (Mbps, 0-6); panels: a) Time Sliding Window Two Color Marker, b) Two Markers System]

Fig 15 Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic features through governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate behind the changes in the network load, slowly rising in response to an increased traffic load and slowly falling in response to a decreased traffic load. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as the increment of S2, and S1 also does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 rapidly responds to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; packet-marking is, however, quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment was performed in the run time from t = 0 (s) to 500 (s). Figure 17 presents the results, showing how the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without adequate marking rates for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking when the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Figure 16: total throughput of S1 and S2 and marking rate of S1 over 0-400 s of simulation time; vertical axis: bandwidth (Mbps, 0-8); panels: a) Time Sliding Window Two Color Marker, b) Two Markers System]

Fig 16 Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.


[Figure 17: per-flow bandwidth (Mbps, 0-7) of flow1 through flow10 for aggregate flows 1-6; panels: a) Time Sliding Window Two Color Marker, b) Two Markers System]

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flows.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying TCP sources in a differentiated services network.

Table 5 The average throughput of each aggregate flow as RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the sake of simulation we used a network with the configuration shown in Figure 18. In this experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for nodes 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined for nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy by increasing the TCP flows, each with a target rate of 1Mbps, from source 1 to 4, while each UDP flow is constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σ Ti / C) × 100    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 (%) to 200 (%), as shown in Figure 19, represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
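Expressions (9) and (10) translate directly into code; this sketch assumes ExBW means achieved throughput minus target rate, with both quantities expressed as percentages:

```python
def link_utilisation(targets, capacity):
    """rho (%) = sum(Ti) / C * 100, as in expression (9)."""
    return 100.0 * sum(targets) / capacity

def excess_bw_percent(throughput, target):
    """Excess BW (%) = (Thr - T) / T * 100; 0 % means the flow exactly meets T."""
    return 100.0 * (throughput - target) / target
```

For the first scenario of section 4.2 (six 1Mbps targets over an 8Mbps bottleneck) this gives ρ = 75 %.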


[Figure 19: average excess bandwidth (%) of TCP and UDP flows versus link utilisation (30-190%); panels: a) Time Sliding Window Two Color Marker, b) Two Markers System]

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation as a percentage.

However, since UDP sources inject traffic into the network without any penalty, they achieve their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is percentage-wise almost equally divided between the TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows how the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps respectively. The setting of the RTT parameters follows the strategy of section 4.1, and the average throughputs of all the flows in both schemes are again similar to each other. First we take into consideration the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, since the marking strategy elaborated for them differs from that for UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their probability of marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that the bandwidth in the under-provisioned network is distributed fairly to each flow (about 0.5Mbps), unlike in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, the UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20: per-flow bandwidth (Mbps, 0-2) of TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned networks; panels: a) Time Sliding Window Two Color Marker, b) Two Markers System]

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under variable SLAs. Two important issues have arisen here, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, owing to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the probability of marking according to the changes of the network, so it marked packets in accordance with the reservation rates the users desire to guarantee. It also notified TMS_II, called the edge-embedded marking module, of the marking rates to be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocated the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy. Consequently, it mitigated the problems of unfairness under per-aggregation marking and of inefficient utilisation under per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP had almost dominated and achieve its target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W Feng D Kandlur D Saha andt K Shin Adaptive Packet Marking for Providing Differentiated Services in the Internet In Proc ICNPrsquo98 Austin TX Oct (1998) 108-117

[2] W Feng D Kandlur S Saha and K Shin Understanding TCP Dynamics in an Integrated Services Internet in Proc NOSSDAVrsquo97 St Louis MO May (1997) 279-290

[3] S Floyd and V Jacobson Like-sharing and Resource Management Models for Packet Networks IEEEACM Transactions on Networking vol 3 no 4 Aug (1995) 365-386

[4] J Heinanen F Baker W Weiss and J Wroclawski Assured Forwarding PHB Group RFC2597 June (1999)

[5] K Nichols S Blake F Baker and D Black Definition of the Differentiated Service Field (DS Field) in the Ipv4 and Ipv6 Headers RFC2474 Dec (1998)

[6] S Blake D Black M Carlson E Davies Z Wang and W Weiss An Architecture for Differentiated Services RFC2475 Dec (1998)

[7] V Jacobson and M Karels Congestion Avoidance and Control in Proc SIGCOMMrsquo88 Stanford CA Aug (1998) 314-329

[8] I Yeom and A L N Reddy Realizing Throughput Guarantees in a Differentiated Service Network in Proc ICMCSrsquo99 Florence Italy June (1999) 372-376

[9] N Seddigh B Nandy P Pieda Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network in Proc GLOBECOMrsquo99 Rio De Janeiro Dec (1999)

[10] C Gbaguidi H J Einsiedler P Hurley W Almesberger and J Hubaux A Survey of Differentiated Services Proposals for the Internet SSC Tech Report SSC1998020 httpsscwwwepflch May (1998)

[11] J Ibanez K Nichols Preliminary Simulation Evaluation of an Assured Service IETF Internet Draft draft-ibanez-diffserv-assured-eval-00txt Aug (1998)

[12] S H Lee S J Seok S J Lee S K Youm and C H Kang Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network In Proc QofISrsquo01 LNCS 2156 Coimbra Sept (2001) 207-221

54

[13] IBHdeA Alves JF de Rezende LFM de Moraes Evaluating Fairness in Aggregated Traffic Marking in Proc GLOBECOMrsquo00 vol1 San Francisco Nov (2000) 445-449

[14] N Seddigh B Nandy P Pieda Study of TCP and UDP Interaction for AF PHB IETF draft draft-nsbnpp-diffserv-udptcpaf-01pdf Aug (1999)

[15] S H Lee S J Seok S J Lee and C H Kang Study of TCP and UDP flows in a Differentiated Services Network In Proc MMNSrsquo01 LNCS 2216 Chicago Oct (2001) 198-203

[16] A Kolarov Study of the TCPUDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks in Proc HPSRrsquo01 Dallas May (2001) 190-196

[17] Y Bernet J Binder S Blake M Carlson B E Carpenter S Keshav E Davies B Ohlman Z Wang W Weiss A Framework for Differentiated Services IETF Internet Draft draft-ietf-diffserv-framework-02txt Feb (1999)

[18] S Blake D Black M Carlson E Davies Z Wang and W Weiss An Architecture for Aifferentiated Services RFC 2475 Aug (1998)

[19] M Vanhala Differentiated Services-architecture S-38128 Teletekniikan erikoistyo (1999)

[20] V Jacobson K Nichols and K Poduri An Expedited Forwarding PHB IETF Internet Draft draft-ietf-diffserv-phb-ef-01txt Nov (1998)

[21] V Jacobson Modified TCP Congestion Avoidance Algorithm end2end-interest mailing list (ftpftpeelblgovemailvanj90apr30txt) April (1990)

[22] V Paxson End-to-End Interent Packet Dynamics In Proc Of ACM SIGCOMM Sept (1997)

[23] M Mathis J Mahdavi S Flyod and A Romanow TCP Selective Acknowledgment Options RFC 2018 Oct (1996)

[24] S Flyod and T Henderson The NewReno Modification to TCPrsquos Fast Recovery Algorithm IETF Internet Draft draft-ietf-tcpimpl-newreno-02txt Feb (1999)

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proc. of ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, Vol. 24, No. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, Random Early Detection Gateways for Congestion Avoidance, IEEE/ACM Transactions on Networking, Vol. 1, No. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, A TCP-friendly Traffic Marker for IP Differentiated Services, in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.



Table 2. TCA specification for BBE service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
AF13 mark   1 Mbps         Any                    Re-marked with AF11 mark

BBE can be constructed as specified in Table 2. A provider offers the service with a 1 Mbps aggregate rate. Traffic within the 1 Mbps limit is directed to the AF13 PHB, and excess traffic is re-marked to the AF11 PHB. One of the prerequisites for AF, preserving the original order of packets, is met when only one queue is used to implement both the AF11 and AF13 PHBs. The provisioning of the PHBs, and how the prioritization of AF13 over AF11 is done, is up to the provider.
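As a minimal sketch of the Table 2 TCA, the ingress policing step can be modelled as a token bucket filled at the 1 Mbps service rate: conforming packets keep the AF13 mark, and excess traffic is re-marked to AF11. This sketch is not from the thesis; the class and parameter names (BBEPolicer, rate_bps, burst_bits) are invented for illustration.

```python
# Illustrative token-bucket policer for the BBE TCA of Table 2 (hypothetical).
class BBEPolicer:
    def __init__(self, rate_bps=1_000_000, burst_bits=100_000):
        self.rate = rate_bps          # committed service rate (1 Mbps)
        self.capacity = burst_bits    # bucket depth, in bits
        self.tokens = burst_bits
        self.last = 0.0               # time of the last packet (seconds)

    def mark(self, now, pkt_bits):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bits:   # in-profile: spend tokens, keep AF13
            self.tokens -= pkt_bits
            return "AF13"
        return "AF11"                 # excess traffic is re-marked

p = BBEPolicer()
# 8000-bit packets every 1 ms = 8 Mbps offered load against a 1 Mbps profile.
marks = [p.mark(t * 0.001, 8_000) for t in range(40)]
```

With an 8 Mbps offered load, the early packets conform (AF13) until the bucket drains, after which most packets are re-marked AF11 at the 1 Mbps refill rate.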

This is a quantitative service which emulates a traditional leased-line service. It promises to deliver traffic with very low latency and very low drop probability up to a negotiated rate; above this rate, traffic is dropped. Corporate VPNs and IP telephony are two likely applications to use this service.

Table 3. TCA specifications for Leased Line service

Codepoint   Service rate   Applicable egress IF   Handling of excess traffic
EF-Mark     500 Kbps       Egress point B         Discard
EF-Mark     250 Kbps       Egress point C         Discard

This example considers a customer with three geographically dispersed networks interconnected via a single provider network. Customer attachment points are represented as A, B, and C. Table 3 shows the TCAs for attachment point A, which are included in a single SLA. The customer has two leased lines established, interconnecting point A to point B and point A to point C respectively. The EF PHB is used for both, with service rates of 500 Kbps and 250 Kbps; excess traffic will be discarded.

The provider needs to configure the policers at ingress point A for both egress points B and C. The policers are of MF type, as classification of packets is based on the codepoint value and the destination. The routers along the paths from A to B and from A to C need to be provisioned to carry up to 750 Kbps of traffic, in case both leased lines cross the same router.

23 Related Work

Recently there have been a number of simulation studies on marking and dropping policies focused on RED-based differentiated services approaches.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host; and source-transparent marking, in which the PME is potentially external to the user. Their results showed that the proposed scheme decreased TCP impacts over the Assured Service. They did not, however, consider a marking scheme for aggregated flows.

In [9], through simulation work, Ibanez and Nichols showed that TCP did not realize the minimum assured rate over the Assured Service. The overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterized quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Ikjun and Narasimha proposed and evaluated several schemes to improve the realization of throughput guarantees in an Internet that employs the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Feroz et al. proposed a TCP-friendly marker designed to protect small windows from drops and to maintain optimal spacing between IN and OUT packets. Via simulation, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to account for TCP dynamics and to adapt fairness for sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the state used for the marking [13]. A marking module can mark packets (i) based on the state of all individual flows of an aggregate, called per-flow marking, or (ii) based only on aggregate state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have drawbacks. First, since the marking module in per-flow marking gives each flow a fraction of the target rate of the aggregated traffic, it can lead to inefficient utilization of the reserved bandwidth: "idle" flows waste their shares while preventing "active" flows from increasing. Second, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregate without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

The Two Markers System takes the above problems into consideration.

31 The framework of Two Markers System

What is the Two Markers System (TMS)? The system has two marking modules, one located at the source (Two Markers System_I, TMS_I) and one at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays a different role in achieving the reservation rate, or the target rate, over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9. The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to control network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that maintains two windows, distinguished by the packets it marks [2]: an In-profile window (iwnd) and an Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. Whenever congestion occurs in the network, TMS_I selects the window (iwnd or ownd) to reduce according to whether the dropped packets were IN-marked or OUT-marked, and thereby decreases TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) maintains the marking rate better than a source-transparent PME, because throughput against the reservation rate can be measured more exactly at the source than at the edge of the network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours its packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module mark packets properly. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of the network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and the marker then re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where an increase or decrease of the marking rate represents the level of demand for resources. Therefore the key element of the fairness strategy is the marking rate (i.e., the information about a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows toward the total target rate of all the flows, based on the marking probabilities from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig. 10. The design of the Two Markers System architecture

32 The crux of fairness strategy

Note that in this system the sources' target rates are different from the edge's target rates. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and Cj the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is its reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of the network is the reservation rate (Ri) plus a variable value, the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point; (ii) all traffic from an ingress point to a set of egress points; (iii) all traffic between ingress point S and egress point D; (iv) all traffic between any ingress point and egress point D.

The capacity of the network in the first three cases of service scope above is C = \sum_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. The excess bandwidth at the bottleneck links is then given by

E = C - R = \sum_{j=1}^{m} C_j - \sum_{i=1}^{n} R_i.    (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C - R = C_1 - \sum_{i=1}^{n} R_i.    (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates in order to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}, where k, k = 1, 2, ..., l, represents the k-th ingress point. The excess bandwidth is given by

E = C - R = C - \sum_{k=1}^{l} \sum_{i=1}^{n} R_{ik}.    (3)


Here the l-th ingress point has to be aware of the reservation rates of as many as l-1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore, although the Two Markers System can be implemented under various SLA scopes, we assume for the above reason that the scope of service in this thesis includes the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

T_i = R_i + X_{mi} \times E = R_i + X_{mi} (C - \sum_i R_i),    (4)

where Xmi is proportional to the probability of marking at a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of Xmi is as follows. First, the probability of marking for each flow i is the number of marked packets (Nmi) divided by the throughput (Thri), or equivalently the In-profile window divided by the congestion window:

m_i = N_{mi} / Thr_i = iwnd_i / cwnd_i,    (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[mi] of all the flows at the edge of the network is

E[m_i] = \frac{1}{n} \sum_{i=1}^{n} m_i = \frac{1}{n} \sum_{i=1}^{n} (iwnd_i / cwnd_i) = \frac{1}{n} \sum_{i=1}^{n} Mark_{p,i},    (6)

where Markp also represents the probability of marking in the TMS_I algorithm of Section 33. Finally, Xmi is developed using the ratio between (5) and (6); that is, Xmi


represents a relative value against the datum point E[mi]. The fraction Xmi varies from 0 to 1 depending upon mi and E[mi]:

X_{mi} = \frac{m_i}{n E[m_i]} = \frac{m_i}{\sum_{i=1}^{n} m_i}.    (7)

Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
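Plugging made-up numbers into expressions (1) and (4)-(7) shows how the fairness components distribute the excess bandwidth; this is an illustrative sketch, not code from the thesis, and the concrete rates and window sizes are invented.

```python
# Numerical sketch of expressions (1) and (4)-(7) with invented values.
# Three flows share one bottleneck link; m_i = iwnd_i / cwnd_i as in (5).
C = 20.0                                    # bottleneck capacity (Mbps)
R = [3.0, 3.0, 3.0]                         # reservation rates R_i
iwnd = [10.0, 20.0, 30.0]                   # In-profile windows
cwnd = [40.0, 40.0, 40.0]                   # congestion windows

E = C - sum(R)                              # excess bandwidth, eq. (1) with m = 1
m = [iw / cw for iw, cw in zip(iwnd, cwnd)]         # marking probabilities, eq. (5)
Em = sum(m) / len(m)                        # average marking rate, eq. (6)
X = [mi / (len(m) * Em) for mi in m]        # fairness components, eq. (7)
T = [Ri + Xi * E for Ri, Xi in zip(R, X)]   # edge target rates, eq. (4)

assert abs(sum(X) - 1.0) < 1e-9             # the shares of E sum to one
assert abs(sum(T) - C) < 1e-9               # the targets exhaust the capacity
```

The flow with the highest marking probability (flow 3) receives the largest share of the 11 Mbps excess, which is exactly the proportional behaviour expression (7) is designed to produce.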

33 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) consists of two windows in this algorithm [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate at as much as the bandwidth the source wants guaranteed.



Fig. 11. Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking, Markp(init)? It is determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as given in expression (8) below.

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri)
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)
                iwnd -= 1 - Markp
            else
                iwnd -= Markp
        else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd


This represents the initial rate that the user wants guaranteed in order to realise the target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I marks packets proportionally, according to whether the measured throughput (Thr) at the TCP source reaches its target rate. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed; otherwise, it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also addresses a second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to other flows and keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate better than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective at keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion

Mark_p(init) = 1 - Thr / R_i.    (8)


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window is not decreased as often as with the established edge marker.
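Under the stated assumptions, the cwnd-open update of program code 1 can be transcribed into a runnable sketch; this is a hypothetical Python port, with the state dictionary, function name, and the initial Thr value invented for illustration.

```python
# Hypothetical Python transcription of program code 1 (cwnd open).
def tms1_ack(s, thr):
    """Update the two-window state on a new ACK; thr is the measured throughput."""
    iwnd = s["markp"] * s["cwnd"]
    ownd = s["cwnd"] - iwnd
    if thr < s["R"]:                       # below reservation: grow iwnd
        iwnd += s["markp"] if iwnd < s["issthresh"] else 1 / (2 * s["cwnd"])
        ownd += (1 - s["markp"]) if ownd < s["ossthresh"] else 1 / (2 * s["cwnd"])
    else:                                  # at/above reservation: release iwnd
        if iwnd > 0:
            iwnd -= (1 - s["markp"]) if iwnd < s["issthresh"] else s["markp"]
        else:
            ownd = s["cwnd"]
        ownd += 1 if ownd < s["ossthresh"] else 1 / s["cwnd"]
    s["cwnd"] = iwnd + ownd
    s["markp"] = iwnd / s["cwnd"]          # marking probability fed to TMS_II
    return s

R = 3.0
state = {"cwnd": 10.0, "R": R,
         "markp": 1 - 2.0 / R,             # Markp(init) per eq. (8), assuming Thr = 2.0
         "issthresh": 32.0, "ossthresh": 32.0}
state = tms1_ack(state, thr=2.0)           # an under-achieving flow grows both windows
```

For a flow below its reservation rate, each ACK enlarges iwnd by Markp and ownd by 1 - Markp, so the total window grows by one packet per ACK, as in standard slow start.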

Fig. 12. Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd/2
        ownd = ossthresh
    else
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

    iwnd       In-profile window
    ownd       Out-of-profile window
    cwnd       congestion window
    Markp      probability of marking
    Thr        throughput of a flow
    Ri         reservation rate of the i-th flow
    Ti         target rate of the i-th flow
    issthresh  threshold of iwnd
    ossthresh  threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half its size whenever an OUT packet loss occurs; this indicates temporary congestion of the network. On the other hand, the loss of IN-marked packets indicates acute congestion in the network, and then the congestion window is reduced to half its size. Therefore the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which lacks a distinctive mechanism for reducing the congestion window.
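The loss handling of program code 2 admits a similarly small sketch; again this is a hypothetical Python port with invented names, in which only ownd shrinks on an OUT loss while an IN loss halves the whole window and re-splits it by Markp.

```python
# Hypothetical Python transcription of program code 2 (cwnd close).
def tms1_loss(s, in_marked_loss):
    """React to the third duplicate ACK; in_marked_loss says which color was lost."""
    iwnd = s["markp"] * s["cwnd"]
    ownd = s["cwnd"] - iwnd
    if not in_marked_loss:                 # OUT loss: temporary congestion
        s["ossthresh"] = ownd / 2          # halve only the Out-of-profile window
        ownd = s["ossthresh"]
        s["cwnd"] = iwnd + ownd
    else:                                  # IN loss: acute congestion
        s["cwnd"] = s["cwnd"] / 2          # halve the whole congestion window
        s["issthresh"] = s["markp"] * s["cwnd"]
        s["ossthresh"] = s["cwnd"] - s["issthresh"]
    return s

s = {"cwnd": 20.0, "markp": 0.5, "issthresh": 32.0, "ossthresh": 32.0}
s = tms1_loss(s, in_marked_loss=False)     # only ownd shrinks: cwnd 20 -> 15
```

An OUT loss thus costs the flow only its out-of-profile share, so the in-profile portion that realises the reservation rate survives the temporary congestion.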

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Markp) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point in the edge-embedded marking module in Section 34. Consequently, the strategy of providing a datum point that includes the user's information is able to alleviate the unfairness within aggregated flows introduced by per-aggregation marking.

34 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to share the bottleneck bandwidth fairly. To do so, it closely cooperates with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which is used in the fairness component as stated in Section 32. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets from the sources, which represents the target rate each flow wants guaranteed.

We assume that an edge router receives information about the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall expression (4). As stated in Section 32, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the network is over-provisioned; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents an increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, namely the average marking rate E[mi], the flow wants more bandwidth than the others in order to achieve its reservation rate; it is therefore marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with that at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case, showing the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After deciding on congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy. If the throughput does not reach the target rate, the marker increases the marking rate by as much as α, called the fairness factor. In this case the fairness factor α is computed as follows:

α = 1 - Thr_i / T_i.

Otherwise, it decreases the marking rate by as much as α, to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm

    if (ownd(N) != ownd(N-1)/2)
        for i = 1 to n
            Ti = Ri + (Xmi * E)
            if (Thr_i < Ti)
                Mi += α * Mi
            else
                Mi -= α * Mi
    else if (iwnd(N) != iwnd(N-1)/2)
        exit
    else    /* iwnd(N) == iwnd(N-1)/2 */
        for i = 1 to n
            Ti = Ri
            if (Thr_i < Ti)
                Mi += β * Mi
            else
                exit

    Mi         marking rate of the i-th flow
    Ti         target rate of the i-th flow
    Ri         reservation rate of the i-th flow
    E          excess bandwidth
    ownd(N)    newly measured OUT window
    ownd(N-1)  OUT window measured at time N-1
    iwnd(N)    newly measured IN window
    iwnd(N-1)  IN window measured at time N-1


Next, if the ownd size at time N is equal to half the ownd size at time N-1 and the iwnd size at time N is not equal to half the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else if clause in the eighth and ninth lines represents the case in which the edge's marking rate is equal to the marking probability of the sources (or marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to achieve its target rate owing to the repeated reduction of sending rates through the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by as much as β, a so-called complement factor, if the throughput does not realise the user's reservation rate. In this instance the complement factor β is

given by β = 1 - Thr_i / R_i. Otherwise, the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to the flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
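The decision logic of program code 3, with the fairness factor α = 1 - Thr_i/T_i and the complement factor β = 1 - Thr_i/R_i, can be sketched as follows. This is a hypothetical Python port: the thesis gives α only for the increase branch, so the sketch reuses the same magnitude for the decrease branch, and all names and numbers are invented.

```python
# Hypothetical sketch of program code 3: per-flow re-marking at the edge.
def tms2_remark(M, thr, R, X, E_bw, ownd_halved, iwnd_halved):
    """M: marking rates; thr: measured throughputs; R: reservations;
    X: fairness components; E_bw: excess bandwidth; the *_halved flags
    are the congestion signals derived from the window measurements."""
    if not ownd_halved:                        # no congestion: share the excess
        for i in range(len(M)):
            T = R[i] + X[i] * E_bw             # edge target rate, eq. (4)
            alpha = abs(1 - thr[i] / T)        # fairness factor
            M[i] += alpha * M[i] if thr[i] < T else -alpha * M[i]
    elif not iwnd_halved:                      # congestion, not acute: keep rates
        pass
    else:                                      # acute: fall back to reservations
        for i in range(len(M)):
            if thr[i] < R[i]:
                beta = 1 - thr[i] / R[i]       # complement factor
                M[i] += beta * M[i]
    return M

M = tms2_remark([0.5, 0.5], thr=[2.0, 4.0], R=[3.0, 3.0],
                X=[0.5, 0.5], E_bw=2.0, ownd_halved=False, iwnd_halved=False)
```

In this example both flows have an edge target rate of 4 Mbps; the flow measured at 2 Mbps has its marking rate raised by α = 0.5, while the flow already at its target is left unchanged.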

35 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-marked packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking this for acute congestion, will halve the


iwnd size. In that case TMS_I does not alleviate the TCP dynamics over the assured service well. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve only ownd, which retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked OUT at the edge of the network but IN at the source; an OUT-IN packet is its counterpart. We assume that the drop precedence the source decides must not change, and that the strategy TMS_II elaborates has to be deliberate. The order of dropping packets is: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasion of re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not meet the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN, so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is removed, the TCP source halves the ownd size. Next, TMS_II turns packets IN-marked at the sources into OUT if the congestion window has not been halved and a flow has already accomplished its target rate; the drop precedences then consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by one step, and so allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and a flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, and OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. By lowering the drop precedence of OUT one step, TMS tries to compensate for the loss rate occurring during congestion.
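The three drop precedences and their source/edge marking combinations can be tabulated in a small sketch (illustrative only; the table and names are not from the thesis):

```python
# Illustrative mapping from (source mark, edge mark) to drop precedence and
# color. RIO drops red (OUT) first, then yellow (re-marked), then green (IN).
DP = {("IN", "IN"):   ("DP0", "green"),
      ("IN", "OUT"):  ("DP1", "yellow"),   # IN-OUT packet
      ("OUT", "IN"):  ("DP1", "yellow"),   # OUT-IN packet
      ("OUT", "OUT"): ("DP2", "red")}

drop_order = ["DP2", "DP1", "DP0"]          # first dropped ... last dropped

assert DP[("OUT", "IN")] == ("DP1", "yellow")
assert drop_order.index("DP2") < drop_order.index("DP0")
```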

36 The issue of interaction between TCP and UDP flows

In this section we consider the interaction between TCP and UDP flows over the


same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows will not, and will grab the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it is impossible for all target rates to be met.

UDP flows in TMS are treated as out-of-profile, putting them at a disadvantage, while TCP flows adapt to network dynamics by having their marking rates adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP packets marked as OUT (red, DP2), but lowers the drop precedence of TCP packets by as much as the scaling factor allows. Therefore TMS tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows would otherwise dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

41 TCP dynamics over Assured Services in a diffserv network

In this section we present simulation results for the two marking algorithms described in the previous chapter. The simulations were done with the network simulator ns-2 (version 2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows, with target rates of 3 Mbps or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280 ms. The sources are all TCP Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in,

41

and 50 packets 400 packets and 01 for min_out max_out and Pmax_out RED parameters in the TMS show in Table 1 and we set a queue weight of both schemes 0002

Table 4 RED parameter settings in the TMS

        min_th (packets)   max_th (packets)   Pmax
DP0        100                400             0.02
DP1        100                400             0.05
DP2         50                400             0.10
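As a rough illustration of how the Table 4 parameters differentiate the drop precedences, the per-precedence RED drop curve can be sketched as follows. This is a simplified sketch: the real RIO queue works on an exponentially weighted average queue length (queue weight 0.002, as set above), and the function name is ours, not from ns-2.

```python
# Simplified RED drop-probability curve per drop precedence (Table 4 values).
RED_PARAMS = {          # dp: (min_th, max_th, p_max)
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50, 400, 0.10),
}

def drop_probability(avg_q: float, dp: str) -> float:
    """Drop probability for a packet of precedence dp at average queue avg_q."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_q < min_th:
        return 0.0                      # below min_th: never drop
    if avg_q >= max_th:
        return 1.0                      # at or above max_th: drop everything
    # linear ramp from 0 to p_max between the two thresholds
    return p_max * (avg_q - min_th) / (max_th - min_th)
```

For example, at an average queue of 250 packets, DP2 packets see a drop probability of about 0.057 while DP0 packets see only 0.01, so OUT-marked traffic is dropped preferentially.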

First of all, we investigate the simulation results with the Adequate-marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 to S10-D20, each with a target rate of 3Mbps; the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s), which stop at t = 350 (s). First, the interval from t = 10 (s) to t = 110 (s) represents an over-provisioned network in which five flows compete for the excess bandwidth of 5Mbps; marking initially occurs, but only temporary marking for sharing the excess bandwidth. Second, from t = 120 (s) to 210 (s), under soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps each; marking in this interval is carried out so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220 (s) to t = 350 (s): all the flows strive for their target rates but fail, and the marking rate in this period is more excessive than in the other intervals. Last, in the interval from t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to the congestion remaining after flow8 through flow10 stop, but then marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the conventional edge marker continuously over-marks its packets, as shown in Figure 15 (a), which exhibits somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm at the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in Figure 15 (a). In both graphs, the flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing conditions of the network, those in Figure 15 (a) do not.

[Figure 15 plots: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. Bandwidth (Mbps) of flow1 through flow10 versus simulation time (s).]

Fig 15 Adequate-marking algorithm: the vertical axis represents the marking rates, and the horizontal axis the simulation time. The state of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic characteristics by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind changes in the network load: slowly rising in response to an increased traffic load and slowly falling in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; packet-marking is quickly turned off, however, when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results with the marking rate-based fairness algorithm described earlier. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment ran from t = 0 (s) to 500 (s). Figure 17 presents the results: the average throughputs of all the individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without a marking rate adequate to the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. However, in Figure 17 (b) each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As


stated before, since TMS regulates the drop precedence in proportion to the marking probability of the sources, all the flows are able to adapt to network dynamics.

[Figure 16 plots: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. Total throughput and marking rate of S1 and S2 (Mbps) versus simulation time (s).]

Fig 16 Marking packets through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time.


[Figure 17 plots: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. Bandwidth (Mbps) of flow1 through flow10 for aggregate flows 1 through 6.]

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput, and the horizontal axis the aggregate flow index.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of the aggregate flows as RTT increases

Aggregate flow (No.)        1       2       3       4       5       6
RTT (ms)                   80     100     130     170     220     280
E[Thr] of TSW2CM (Mbps)  4.8989  4.9703  4.9183  4.9391  4.9247  4.9098
E[Thr] of TMS (Mbps)     5.2908  5.2207  5.2373  5.1944  5.2763  5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous chapter. We used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for destinations 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined for destinations 11-12. We carried out two scenarios, an over-provisioned and an under-provisioned network; that is, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP sources send traffic at the rate of 1Mbps. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of Section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on sharing the excess bandwidth according to the link utilisation. For this experiment we increase the number of TCP flows with a target rate of 1Mbps from sources 1 to 4 while each UDP flow is constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ = (Σi Ti / C) × 100 (%)    (9)

where Ti and C represent the target rate of a flow and the capacity of a bottleneck link in the network, respectively. The interval from ρ = 100 (%) to 200 (%) shown in Figure 19 reflects the change in the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW = (ExBW / T) × 100 (%)    (10)

Therefore, an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall below their target rates at the saturation point (80 %) of network load, as the figure shows.
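In concrete terms, Eqs. (9) and (10) amount to the following computation; the function names are ours, and the sample numbers are taken from this experiment's setup.

```python
def link_utilisation(targets, capacity):
    """Eq. (9): rho = sum(T_i) / C, expressed as a percentage."""
    return 100.0 * sum(targets) / capacity

def excess_bw_percent(achieved, target):
    """Eq. (10): a flow's excess bandwidth as a percentage of its target rate."""
    return 100.0 * (achieved - target) / target

# Six flows with 1 Mbps targets over an 8 Mbps bottleneck: rho = 75 % (over-
# provisioned); shrinking the bottleneck to 3 Mbps gives rho = 200 %.
rho_over = link_utilisation([1.0] * 6, 8.0)
rho_under = link_utilisation([1.0] * 6, 3.0)
```

A flow that achieves 1.5 Mbps against a 1 Mbps target then sits at +50 % excess bandwidth, matching the vertical axis of Figure 19.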


[Figure 19 plots: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. Average excess bandwidth (%) of TCP and UDP versus link utilization (%).]

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis the link utilisation in percent.

However, since the UDP sources pour traffic into the network without any penalty, they achieve their target rates under all conditions and consume all the excess bandwidth even in the under-provisioned state; the only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only marked OUT (DP2-red); (ii) the magnitude of the marking rate represents the increase or decrease in demand for bandwidth. If the number of marked packets of a flow exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and obtains a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contracts, when congestion occurs, rather than OUT-IN or IN-OUT. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
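The threshold rule described here, comparing a flow's marking rate against the average E[mi], can be sketched as follows. This is an illustrative sketch; the thesis does not give this exact pseudocode, and the function name is ours.

```python
def wants_more_bandwidth(marking_rates, i):
    """A flow whose marking rate m_i exceeds the average E[m_i] (Eq. 6) is
    judged to be demanding more bandwidth to reach its reservation rate, so
    the edge-embedded marker (TMS_II) marks more of its packets IN."""
    avg = sum(marking_rates) / len(marking_rates)   # E[m_i]
    return marking_rates[i] > avg
```

Flows below the average keep their current share, so the rule redistributes IN markings toward the flows furthest from their reservation rates.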

Next we present the simulation results concerning the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all the individual flows of the aggregated traffic against their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps, respectively. The RTT parameters follow the strategy of Section 4.1, and again the average throughputs of all the flows in both schemes are similar to each other. First we consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, thanks to the different marking strategy applied against UDP flows; all the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network, because the UDP flows transmit constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP


flows. However, Figure 20 (b) shows that in the under-provisioned network the bandwidth is distributed fairly (about 0.5Mbps per flow), unlike in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, the UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20 plots: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. Bandwidth (Mbps) of TCP1 through TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases.]

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved throughput, and the horizontal axis the aggregate flow index.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this area, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise a user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greed of UDP flows.

The strongest point of the Two Markers System proposed in this paper is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the probability of marking according to the changes in the network, so it marked packets in proportion to the reservation rate each source desired to guarantee. It also notified TMS_II, the edge-embedded marking module, of the marking rates to be used for marking the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and allocated the bandwidth of the bottleneck link to the flows in view of proportional fairness, the so-called marking rate-based fairness strategy, when resources in the diffserv network were over-provisioned. Consequently, it mitigated the problem of unfairness in per-aggregation marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, the TCP flows could fairly


share the excess bandwidth that UDP had almost dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June (1999).

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. (1998).

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. (1998).

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999).

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, May (1998).

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998).

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999).

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May (2001) 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999).

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. (1998).

[19] M. Vanhala, "Differentiated Services architecture," S-38.128 Teletekniikan erikoistyö, (1999).

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998).

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list, April (1990).

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. (1997).

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. (1996).

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999).

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proceedings of ACM SIGCOMM, Oct. (1994) 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. (1994) 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. (1993) 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48.



need to be provisioned to carry up to 750 Kbps of traffic in case both of the leased lines cross the same router

2.3 Related Work

Recently there have been a number of simulation studies on marking and dropping policies focused on a RED-based differentiated services approach.

In [1], [2], Feng et al. proposed adaptive priority marking schemes focused on a TCP-friendly mechanism in the RIO scheme. The authors presented two different marking mechanisms: source-integrated marking, in which the control engine, the so-called packet marking engine (PME), is integrated with the host, and source-transparent marking, in which the PME is potentially external to the user. They showed results indicating that the proposed schemes decreased TCP impacts over the Assured Service; however, they did not consider a marking scheme for aggregated flows.

In [11], through simulation work, Ibanez and Nichols showed that TCP did not realise the minimum assured rate over the Assured Service; the overall performance was affected by the bursty losses experienced by the OUT packets. They concluded that it was ambiguous how the Assured Service could be characterised quantitatively for TCP applications, and that an Assured Service could not provide consistent rate guarantees.

In [8], Yeom and Reddy proposed and evaluated several schemes to improve the realisation of throughput guarantees in an Internet employing the differentiated services architecture. They showed that the excess bandwidth of a network was not distributed in proportion to target rates, and that the impact of differences in RTTs could be reduced by the aggregation of sources and fair sharing of bandwidth at the edge routers.

In [28], Azeem et al. proposed a TCP-friendly marker designed to protect small windows from drops and maintain optimal spacing between IN and OUT packets. Via simulation, the authors presented results showing that the TCP-friendly marker reduced the degree of token over-provisioning required to provide guarantees.


CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies for marking packets that take TCP dynamics into account and adapt fairness for sharing the bandwidth of a bottleneck link, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt to the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for the marking [13]. A device's marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. Both strategies, however, have faults. First, since the marking module in per-flow marking gives each flow a fixed fraction of the target rate of the aggregated traffic, it can lead to inefficient utilisation of the reserved bandwidth: "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among the aggregated flows, because the marking module marks them without any information on individual flows; the unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.

Two Markers System takes the above problems into consideration

3.1 The framework of the Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, located at the source (Two Markers System_I, TMS_I) and at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays


different roles in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to cope with network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, distinguished by the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the sum of iwnd and ownd. TMS_I selects the window (iwnd or ownd) and reduces its size according to whether the dropped packets were IN-marked or OUT-marked whenever congestion occurs in the network, and so TMS_I decreases TCP impacts in the underlying AF services. It was shown in [2] that a source-integrated packet marking engine (PME) keeps up the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also marks packets adequately to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to mark packets properly. So to speak, TMS_I can be not only a marker used to mark packets in order to


classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of the network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or the marking information) from the sources, and the marker then re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. Here we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or the Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key input of the fairness strategy is the marking rate (i.e., the information on a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows toward the total target rate of all the flows based on the marking probabilities of the sources; that is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10 The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates differ from the edge's target rates. Let Ri denote the reservation rate of a flow i, i = 1, 2, ..., n, and Cj represent the capacity of a bottleneck link j, j = 1, 2, ..., m, in the network. The target rate of each source is a


reservation rate (Ri), which the user communicates to the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) of the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point;
(ii) all traffic from ingress point S to a set of egress points;
(iii) all traffic between ingress point S and egress point D;
(iv) all traffic between any ingress point and egress point D.

The capacity of a network in the first three cases of service scope above is C = Σ_{j=1..m} Cj, where j denotes a bottleneck link of the network. The excess bandwidth at the bottleneck links is then given by

E = C − R = Σ_{j=1..m} Cj − Σ_{i=1..n} Ri    (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C − R = C1 − Σ_{i=1..n} Ri    (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the information on reservation rates received at an ingress point becomes R = Σ_{k=1..l} Σ_{i=1..n} Rik, where k, k = 1, 2, ..., l, represents the k-th ingress point, and the excess bandwidth is given by

E = C − R = Σ_{j=1..m} Cj − Σ_{k=1..l} Σ_{i=1..n} Rik    (3)
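As a sanity check, Eqs. (1)-(3) reduce to one small computation over the bottleneck capacities and the reservation rates. The sketch below is illustrative code, not from the thesis; it accepts either a flat list of per-flow rates Ri (Eqs. 1-2) or a list of per-ingress lists Rik (Eq. 3).

```python
def excess_bandwidth(capacities, reservations):
    """E = sum_j C_j - R, where R is the total reservation rate.
    `reservations` may be flat numbers (Eqs. 1-2; use a single capacity
    for the edge-to-edge case) or per-ingress lists (Eq. 3)."""
    total_r = sum(sum(r) if isinstance(r, (list, tuple)) else r
                  for r in reservations)
    return sum(capacities) - total_r

# Eq. (2): one 20 Mbps bottleneck and five 3 Mbps reservations leave 5 Mbps.
e = excess_bandwidth([20.0], [3.0] * 5)
```

A negative result then corresponds to the under-provisioned (oversubscribed) scenarios simulated in Chapter 4.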


Where lth ingress point has to be aware of reservation rates as many as l-1 ingress points While such a service is theoretically possible it appears to be a mismatch with the more usual specification of Differentiated Services which governs the quality with which traffic is sent rather than received Therefore Two Markers System is able to be implement in variable SLA scope but we assume that the scope of a service in the paper includes the first three cases except the last with the reason above A flowrsquos target rate in the edge of the network is given by

Where Xmi is proportional to the probability of marking from a source That is it is determined depending upon fraction between the marking rate of each flow and the average marking rate (4) Development for Xmi is as follows First the probability of marking for each flow i is the number of the marked packets (Nmi) by throughput Thri or In-profile window by congestion window

Here iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] of all the flows at the edge of a network is given by equation (6).

Here Mark_p also represents the probability of marking in the TMS_I algorithm of section 3.3. Finally, X_mi is obtained as the ratio between (5) and (6), as shown in equation (7); that is, X_mi

T_i = R_i + X_mi × E = R_i + X_mi (C - Σ_i R_i)    (4)

m_i = N_mi / Thr_i = iwnd / cwnd    (5)

E[m_i] = (1/n) Σ_{i=1}^{n} m_i = (1/n) Σ_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1}^{n} Mark_p,i    (6)


represents a relative value against the datum point E[m_i]. The fraction X_mi varies from 0 to 1 depending upon m_i and E[m_i].

Consequently, this value stands for the share of the excess bandwidth that each flow requires to reach its target rate.
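As a sketch of equations (4)-(7), the datum point X_mi and the resulting edge target rates can be computed as below; the helper names are hypothetical:

```python
def marking_fractions(m):
    # Eq. (7): X_mi = m_i / (n * E[m_i]) = m_i / sum(m_i) -- each flow's
    # marking rate relative to the average marking rate E[m_i].
    total = sum(m)
    return [mi / total for mi in m]

def edge_target_rates(reservations, marking_rates, capacity):
    # Eq. (4): T_i = R_i + X_mi * E with E = C - sum(R_i), so the whole
    # excess bandwidth is split in proportion to the marking fractions.
    excess = capacity - sum(reservations)
    return [r + x * excess
            for r, x in zip(reservations, marking_fractions(marking_rates))]
```

Note that the X_mi sum to 1, so the edge target rates always sum to the capacity C in an over-provisioned network.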

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of sources; that is to say, the TCP-source marking module keeps the marking rate as high as the bandwidth the source wants to guarantee.

X_mi = m_i / (n E[m_i]) = m_i / Σ_{i=1}^{n} m_i    (7)


Fig. 11 Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking (Mark_p(init))? It is generally determined by the ratio between the measured throughput (Thr) and the target throughput (T_i = R_i), as in equation (8).

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri)
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)
                iwnd -= 1 - Markp
            else
                iwnd -= Markp
        else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd


Here R_i represents the initial rate that the user wants to guarantee in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed. Otherwise, it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window grows above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.
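The per-ACK window update of program code 1 can be transcribed as the following executable sketch (a Python rendering of the pseudocode, not the ns-2 implementation; the 1/(2*cwnd) increments reflect the stated combined congestion-avoidance growth of 1/cwnd):

```python
def tms1_on_ack(cwnd, mark_p, thr, r_i, issthresh, ossthresh):
    # Split cwnd into In-profile and Out-of-profile windows.
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if thr < r_i:
        # Below the reservation rate: grow both windows, by Markp /
        # (1 - Markp) in slow start or 1/(2*cwnd) each in avoidance.
        iwnd += mark_p if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - mark_p) if ownd < ossthresh else 1 / (2 * cwnd)
    else:
        # Target met: shift capacity from iwnd toward ownd so other
        # flows can share the excess bandwidth.
        if iwnd > 0:
            iwnd -= (1 - mark_p) if iwnd < issthresh else mark_p
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd   # new cwnd and new Markp
```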

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers the second TCP dynamic (the so-called boomerang effect), which is caused by excessively dropping IN-marked packets at the core of a network when the network is congested. If the throughput measured at a source, for instance, exceeds its reservation rate and iwnd is nonzero, TMS_I properly decreases the iwnd size according to the threshold in order to allocate the excess bandwidth to the other flows and keep IN-marked packets from being immoderately dropped at the core of the network. In fact, the TCP-source marking module (TMS_I) maintains the marking rate more properly than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking it for an acute congestion

Mark_p(init) = 1 - Thr/R_i    (8)


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognises packet losses and determines whether a lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window is not decreased as often as with the established edge marker.

Fig. 12 Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd/2
        ownd = ossthresh
    else
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the ith flow
Ti: target rate of the ith flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of an IN-marked packet indicates acute congestion in the network, and then the congestion window is halved. Therefore the modified TCP congestion algorithm compensates for the loss rates that are caused by the conventional congestion algorithm, which lacks a distinctive mechanism for reducing the congestion window.
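The loss reaction of program code 2 can likewise be sketched; this is an illustrative transcription, with the threshold line read as ownd/2 (the original's "ewnd" taken as a typo for ownd):

```python
def tms1_on_triple_dupack(cwnd, mark_p, in_marked_loss):
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if not in_marked_loss:
        # OUT-marked loss: temporary congestion -- halve only ownd
        # (ossthresh = ownd/2; ownd = ossthresh).
        ownd = ownd / 2
        cwnd = iwnd + ownd
    else:
        # IN-marked loss: acute congestion -- halve the whole cwnd and
        # re-derive both thresholds from the new window.
        cwnd = cwnd / 2
        issthresh = mark_p * cwnd
        ossthresh = cwnd - issthresh
    return cwnd
```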

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services and to inform TMS_II of the probability of marking (Mark_p) that is needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall expression (7). We deal with the usage of the datum point by the edge-embedded marking module in subsection 3.4. Consequently, the strategy providing a datum point that includes user information is able to alleviate the problem of unfairness within aggregated flows under per-aggregation marking.

3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of sources in order to share bottleneck bandwidth fairly. To accomplish this fairness strategy, called the marking rate-based fairness strategy, it must closely cooperate with the TCP-source marking module. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates that are used in the fairness component, as stated in subsection 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to guarantee from the sources.

We assume that an edge router receives information on the reservation rate that each user wants assured and contracts with the service provider (or bandwidth


broker) for the bottleneck link bandwidth of a network. We also consider two network conditions: the over-provisioned network and the under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall the numerical expression (4). As stated in subsection 3.2, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking from the sources if the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy that is elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source, for example, exceeds the threshold value, that is, the average marking rate E[m_i], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to detect a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding on congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (T_i), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called the fairness factor. In this case the fairness factor α is computed as follows:


α = 1 - Thr_i/T_i. Otherwise it decreases the marking rate by α to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13 Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)
    for i = 1 to n
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)
    exit
else  /* iwnd(N) == iwnd(N-1)/2 */
    for i = 1 to n
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi: marking rate of the ith flow
Ti: target rate of the ith flow
Ri: reservation rate of the ith flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (the marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to consummate its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor β is computed as β = 1 - Thr_i/R_i. Otherwise the re-marking operation is not performed.

In short, sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
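The per-flow re-marking decision of program code 3, combined with the fairness factor α and complement factor β, can be sketched as follows. The boolean arguments stand for the window comparisons ownd(N) == ownd(N-1)/2 and iwnd(N) == iwnd(N-1)/2, and taking |1 - Thr_i/T_i| for the decreasing branch is our reading, since the thesis reuses α there:

```python
def tms2_remark(ownd_halved, iwnd_halved, thr, r_i, t_i, m_i):
    if not ownd_halved:
        # No congestion: marking rate-based fairness strategy around
        # the edge target rate T_i = R_i + X_mi * E.
        alpha = abs(1 - thr / t_i)          # fairness factor
        return m_i + alpha * m_i if thr < t_i else m_i - alpha * m_i
    if not iwnd_halved:
        # Soft congestion: edge marking already matches the sources,
        # so no re-marking is required.
        return m_i
    # Acute congestion: the target collapses to the reservation rate.
    if thr < r_i:
        beta = 1 - thr / r_i                # complement factor
        return m_i + beta * m_i
    return m_i
```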

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-marked packets are treated by the RIO queue at the core of a network differently from what the sources anticipate. If re-marked OUT packets (IN-OUT) that were IN at the sources, for example, are dropped at the core at a time of temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the


iwnd size. Therefore TMS_I does not alleviate the TCP dynamics over the assured service well. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve ownd, which retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is marked as OUT at the edge of the network but was IN at the source, and OUT-IN is the counterpart. We assume that the drop precedence the source decides must not change, and the strategy that TMS_II elaborates has to be deliberated. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window is not halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to claim the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network and packets are dropped in the order OUT, OUT-IN, IN. When an OUT-IN packet is discarded, the TCP source halves its ownd size. Next, TMS_II turns packets that were IN-marked at the sources into OUT if the congestion window is not halved and the flow has already accomplished its target rate; the drop precedences then consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by a step and so allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN; TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window is halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. It thus tries to compensate for the loss rate occurring at the time of congestion by lowering the drop precedence of OUT by a step.
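The mark pairs and drop order of this section can be captured in a small sketch (the function and mark names are ours; RIO drops higher precedence values first):

```python
def drop_precedence(source_mark, edge_mark):
    # Map (mark at source, mark at edge) to a drop precedence:
    # DP0 = IN (green), DP1 = IN-OUT or OUT-IN (yellow), DP2 = OUT (red).
    if source_mark == "IN" and edge_mark == "IN":
        return 0
    if source_mark != edge_mark:
        return 1
    return 2

# Packets are dropped in decreasing precedence order:
# DP2 before DP1 before DP0.
```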

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows will not change and will capture the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile, putting them at a disadvantage, while TCP flows adapt to network dynamics through marking rates adjusted by the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2) but lowers the drop precedence of TCP packets by the scaling factors relative to them. Therefore it tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows had almost dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms we described in the previous section. The simulation was done with the network simulator-2 (ns-2.1b8a). For the simulation we used a network with the configuration shown in Figure 14.

Fig. 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed and the aggregate reservation rate is adjusted by changing the number of flows, with target rates of 3Mbps or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in,


and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
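Each drop precedence in Table 4 parameterises a standard RED dropping curve; a sketch of that curve (generic RED, not code from the thesis or ns-2) is:

```python
# Table 4 settings: (min_th, max_th, Pmax) per drop precedence.
RED_PARAMS = {0: (100, 400, 0.02), 1: (100, 400, 0.05), 2: (50, 400, 0.1)}

def red_drop_prob(avg_q, min_th, max_th, p_max):
    # Piecewise-linear RED curve: no drops below min_th, forced drops at
    # max_th and above, linear ramp up to Pmax in between.
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return p_max * (avg_q - min_th) / (max_th - min_th)
```

With these settings an average queue of 250 packets drops DP2 (OUT) packets over four times as often as DP0 (IN) packets.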

First of all, we investigate the simulation results of the Adequate-Marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3Mbps, where the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s; the last three stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5Mbps; marking is initially performed only temporarily, for sharing the excess bandwidth. Second, at runtime from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the bandwidth of 3Mbps; marking in this interval is practically carried out so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220 s to t = 350 s; all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is smaller than in Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

a) Time Sliding Window Two Color Marker
[plot: marking rate (Mbps, 0-6) of flow1 through flow10 vs. simulation time (s, 0-400)]

b) Two Markers System
[plot: marking rate (Mbps, 0-6) of flow1 through flow10 vs. simulation time (s, 0-400)]

Fig. 15 Adequate-Marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate and lags behind changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) fast in response to an increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet-marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 3.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment ran from t = 0 s to 500 s. Figure 17 represents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking when the total throughput is less than the capacity of the network. As


stated before, since TMS regulates drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

a) Time Sliding Window Two Color Marker
[plot: S1 total throughput, S2 total throughput, and S1 marking rate (Mbps, 0-8) vs. simulation time (s, 0-400)]

b) Two Markers System
[plot: S1 total throughput, S2 total throughput, and S1 marking rate (Mbps, 0-8) vs. simulation time (s, 0-400)]

Fig. 16 Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time.


a) Time Sliding Window Two Color Marker
[bar plot: throughput (Mbps, 0-7) of flow1 through flow10 for aggregate flows 1-6]

b) Two Markers System
[bar plot: throughput (Mbps, 0-7) of flow1 through flow10 for aggregate flows 1-6]

Fig. 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput, and the horizontal axis the aggregate flow index.

The Two Markers System showed improved performance for the several parameters we used in the simulation. However, the implementation of the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of the aggregate flows as RTT increases

Aggregate flow (No.)       1       2       3       4       5       6
RTT (ms)                   80      100     130     170     220     280
E[Thr] of TSW2CM (Mbps)    4.8989  4.9703  4.9183  4.9391  4.9247  4.9098
E[Thr] of TMS (Mbps)       5.2908  5.2207  5.2373  5.1944  5.2763  5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined for nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig. 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increase the TCP flows, with target rates of 1Mbps, from source 1 to 4 while UDP is constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σ_i T_i / C) × 100    (9)

Here T_i and C represent the target rate of a flow and the capacity of a bottleneck link in the network, respectively. The interval from ρ = 100% through 200% shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)

Therefore an excess bandwidth of 0% denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
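Equations (9) and (10) amount to the following sketch (the function names are ours):

```python
def link_utilisation(target_rates, capacity):
    # Eq. (9): rho(%) = sum(T_i) / C * 100.
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(throughput, target):
    # Eq. (10): a flow's excess bandwidth ExBW = throughput - target,
    # expressed as a percentage of its target rate T; 0% means the flow
    # exactly meets its target.
    return 100.0 * (throughput - target) / target
```

The six 1Mbps reservations of this experiment give ρ = 75% on the 8Mbps bottleneck and ρ = 200% on the 3Mbps one.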


a) Time Sliding Window Two Color Marker
[plot: average excess bandwidth (%, -100 to 200) of TCP and UDP flows vs. link utilization (%, 30-190)]

b) Two Markers System
[plot: average excess bandwidth (%, -100 to 200) of TCP and UDP flows vs. link utilization (%, 30-190)]

Fig. 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis the link utilisation as a percentage.

However, since UDP sources push traffic into the network without any penalty, they complete their target rates under all conditions and consume all the excess bandwidth in the


under-provisioned state. The only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only given OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[m_i], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisations of ρ = 60% and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks with bottleneck link capacities of 8Mbps and 3Mbps, respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. The average throughputs of all the flows in both schemes are again similar to each other. First, we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows merely try to accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, thanks to the different marking strategy applied against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their probability of marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of TCP

50

flows However Figure 20 (b) shows that each flow in under-provisioned network is fairly distributed (about 05Mbps) rather than in the Figure 20 (a) by dint of dealing with UDP packets as out-of-profile Consequently UDP flows are penalised more than sensitive TCP flows as shown Figure 20 (b)

[Figure 20: bandwidth (Mbps, vertical axis, 0 to 2) versus aggregate flows 1-4 (horizontal axis), with curves for TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases. a) Time Sliding Window Two Color Marker; b) Two Markers System]

Fig. 20. Throughput according to network conditions; the vertical axis represents the target rate and the horizontal axis signifies the aggregate flows


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under variable SLA scopes. In this context two important issues have arisen, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, because UDP flows consume a disproportionate share.

The strongest point of the Two Markers System proposed in this paper is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow for their respective target rates. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to the changes of the network, so it marks packets in proportion to the reservation rates the sources desire to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy, if resources in the diffserv network are over-provisioned. Consequently, it mitigates the problems of unfairness under per-aggregation marking and of inefficient utilisation under per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP had almost dominated, and achieve its target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June (1999)

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. (1998)

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. (1998)

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Service Network," in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376

[9] N. Seddigh, B. Nandy, P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999)

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May (1998)

[11] J. Ibanez, K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998)

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221

[13] I. B. H. de A. Alves, J. F. de Rezende, L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449

[14] N. Seddigh, B. Nandy, P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999)

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May (2001) 190-196

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999)

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. (1998)

[19] M. Vanhala, "Differentiated Services-architecture," S-38.128 Teletekniikan erikoistyo (1999)

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998)

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990)

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. (1997)

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. (1996)

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999)

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proceedings of ACM SIGCOMM, Oct. (1994) 24-35

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, Vol. 24, No. 5, Oct. (1994) 10-23

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, Vol. 1, No. 4, Aug. (1993) 397-413

[28] A. Feroz, A. Rao and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48



CHAPTER 3 TWO MARKERS SYSTEM

In this chapter we focus on several strategies used to mark packets in order to take TCP dynamics into account and to achieve fairness in sharing the bottleneck link bandwidth of a network, and we propose a modified marking scheme, the so-called Two Markers System (TMS): the first marker (TMS_I) is located at the sources of a network to adapt to the TCP congestion control algorithm, and the second marker (TMS_II) is placed at the edge so as to fairly mark the aggregated flows. Before delving into the details of the theory and implementation of the Two Markers System, however, let us first take the broader view and consider what types of marking might be offered.

Marking strategies can be classified into two categories based on the standards used for marking [13]. A device's marking module can mark packets (i) based on the state of all individual flows of an aggregation, called per-flow marking, or (ii) based only on aggregation state, without any information about individual flows, called per-aggregation marking. However, both strategies have faults. First, since the marking module in per-flow marking gives each flow a fixed fraction of the target rate of the aggregated traffic, it can lead to an inefficient utilization of the reserved bandwidth: "idle" flows waste their shares while preventing "active" flows from increasing. Next, per-aggregation marking can introduce unfairness among aggregated flows, because the marking module marks the aggregated flows without any information on individual flows. The unfairness can be caused by different link bandwidths, different round-trip times, different target rates, or different levels of congestion experienced by individual TCP flows within the network.
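The contrast between the two faults can be made concrete with a small sketch (the rates and target below are purely illustrative, not taken from the thesis):

```python
# Sketch (hypothetical numbers): per-flow marking wastes IN tokens on idle
# flows, while per-aggregation marking ignores individual sending rates.

def per_flow_in_rate(target, sending_rates):
    """Each flow gets an equal fraction of the aggregate target,
    capped by what it actually sends (idle flows waste their share)."""
    share = target / len(sending_rates)
    return [min(share, r) for r in sending_rates]

def per_aggregation_in_rate(target, sending_rates):
    """IN marking applied to the aggregate: each packet is marked IN with
    probability target/total, regardless of which flow sent it."""
    total = sum(sending_rates)
    p_in = min(1.0, target / total)
    return [r * p_in for r in sending_rates]

rates = [3.0, 0.1, 3.0, 0.1]      # two active flows, two nearly idle (Mbps)
target = 4.0                      # aggregate reservation (Mbps)

per_flow = per_flow_in_rate(target, rates)
per_agg = per_aggregation_in_rate(target, rates)
print(sum(per_flow), per_flow)    # idle flows leave IN tokens unused
print(sum(per_agg), per_agg)      # full target used, but split unevenly
```

With these numbers, per-flow marking leaves nearly half the 4 Mbps reservation unused, while per-aggregation marking spends it all but without regard to each flow's own contract.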

The Two Markers System takes the above problems into consideration.

3.1 The framework of Two Markers System

What is the Two Markers System (TMS)? This system has two marking modules, one located at the source (Two Markers System_I, TMS_I) and one at the edge (Two Markers System_II, TMS_II) of a differentiated services network, and each marking module plays


different roles in achieving the reservation rate or the target rate over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig. 9. The "Two Markers System" framework

First, TMS_I carries out two main roles. One is to cope with network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the Two-window TCP of TMS_I is the sum of iwnd and ownd. TMS_I determines the window (iwnd or ownd) to reduce, and reduces its size, according to whether the dropped packets were IN-marked or OUT-marked whenever congestion occurs in the network, and so TMS_I decreases the TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) maintains the marking rate better than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also adequately marks packets to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used for allocating the bandwidth (or excess bandwidth) to aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to properly mark packets. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of the network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile that they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or marking information) from the sources, and then the marker re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore the key element of the fairness strategy is the marking rate (i.e., information about the demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) aggregated flows toward the total target rate of all the flows based on the marking probabilities of the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig. 10. The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let R_i denote the reservation rate of a flow i, i = 1, 2, …, n, and C_j represent the capacity of a bottleneck link j, j = 1, 2, …, m, in the network. The target rate of each source is a reservation rate (R_i) of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (T_i) at the edge of the network is the reservation rate (R_i) plus a variable value, namely the fairness component (X_mi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = Σ_{j=1}^{m} C_j, where j denotes a bottleneck link of the network. Therefore the excess bandwidth at the bottleneck links is given by

E = C − R = Σ_{j=1}^{m} C_j − Σ_{i=1}^{n} R_i    (1)

In the edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

E = C − R = C_1 − Σ_{i=1}^{n} R_i    (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term R: an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik, where k, k = 1, 2, …, l, denotes the kth ingress point. The excess bandwidth is given by

E = C − R = C_1 − Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik    (3)


where the lth ingress point has to be aware of the reservation rates of as many as l − 1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore the Two Markers System can be implemented under variable SLA scopes, but we assume that the scope of service in this paper includes the first three cases, excluding the last for the reason above. A flow's target rate at the edge of the network is given by

T_i = R_i + X_mi · E = R_i + X_mi (C − Σ_{i=1}^{n} R_i)    (4)

where X_mi is proportional to the probability of marking at a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of X_mi is as follows. First, the probability of marking for each flow i is the number of marked packets (N_mi) divided by the throughput (Thr_i), or equivalently the In-profile window divided by the congestion window:

m_i = N_mi / Thr_i = iwnd / cwnd    (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] over all the flows at the edge of the network is

E[m_i] = (1/n) Σ_{i=1}^{n} m_i = (1/n) Σ_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1}^{n} Mark_p,i    (6)

where Mark_p also denotes the probability of marking in the TMS_I algorithm of section 3.3. Finally, X_mi is obtained from the ratio between (5) and (6); that is, X_mi represents a relative value against the datum point E[m_i], as given in (7). The value of X_mi varies from 0 to 1 depending upon m_i and E[m_i]. Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
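Equations (4) through (7) can be checked numerically with a small sketch (the capacity, reservation rates and marking rates below are illustrative values, not from the thesis); since the X_mi sum to 1, the target rates sum exactly to the bottleneck capacity:

```python
# Sketch of eqs (4)-(7): edge target rates derived from source marking rates.
# The marking rates m_i (= iwnd/cwnd at each source) are illustrative values.

def edge_target_rates(C, R, m):
    n = len(R)
    E = C - sum(R)                       # excess bandwidth, eq. (1)
    Em = sum(m) / n                      # average marking rate E[m_i], eq. (6)
    X = [mi / (n * Em) for mi in m]      # X_mi = m_i / (n E[m_i]), eq. (7)
    return [Ri + Xi * E for Ri, Xi in zip(R, X)]   # T_i, eq. (4)

C = 8.0                       # bottleneck capacity (Mbps)
R = [1.0, 1.0, 1.0, 1.0]      # reservation rates
m = [0.8, 0.6, 0.4, 0.2]      # marking probabilities reported by TMS_I

T = edge_target_rates(C, R, m)
print(T)        # flows marking more get a larger share of the 4 Mbps excess
print(sum(T))   # equals C: the excess is fully distributed
```

Here the flow with marking rate 0.8 receives 0.4 of the 4 Mbps excess, twice the share of the flow marking at 0.4, which is exactly the proportional fairness the text describes.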

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) consists of two windows in this algorithm [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate as high as needed for the bandwidth the source wants to guarantee.

X_mi = m_i / (n · E[m_i]) = m_i / Σ_{i=1}^{n} m_i    (7)


Fig. 11. Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking, Mark_p(init)? It is generally determined by the ratio between the measured throughput (Thr) and the target throughput (T_i = R_i), as given in (8).

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd

    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        } else {
            ownd = cwnd
        }
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    }

    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


where Mark_p(init) in (8) represents the initial rate that the user wants to guarantee in order to realise the target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed. Otherwise, it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e., iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to grope for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while digging out resources is 1/cwnd.
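The window rules above can be exercised as a small runnable sketch (a simplification: packet-level detail is abstracted, the thresholds are illustrative, and iwnd is clamped at zero, which the figure leaves implicit):

```python
# Runnable sketch of program code 1: the TMS_I window-opening rules, applied
# once per acknowledged packet.

def tms1_on_ack(cwnd, markp, thr, Ri, issthresh, ossthresh):
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if thr < Ri:                          # below reservation: grow both
        if iwnd < issthresh:              # windows, preserving the ratio
            iwnd += markp
        else:
            iwnd += 1.0 / (2 * cwnd)
        if ownd < ossthresh:
            ownd += 1.0 - markp
        else:
            ownd += 1.0 / (2 * cwnd)
    else:                                 # reservation met: release IN share
        if iwnd > 0:
            if iwnd < issthresh:
                iwnd -= 1.0 - markp
            else:
                iwnd -= markp
            iwnd = max(iwnd, 0.0)         # clamp (implicit in the figure)
        else:
            ownd = cwnd
        if ownd < ossthresh:
            ownd += 1.0
        else:
            ownd += 1.0 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd

# Once the measured throughput exceeds the reservation, the marking
# probability decays toward 0 while the OUT share of cwnd takes over:
cwnd, markp = 10.0, 0.8
for _ in range(50):
    cwnd, markp = tms1_on_ack(cwnd, markp, thr=2.0, Ri=1.0,
                              issthresh=64.0, ossthresh=64.0)
print(round(markp, 3), round(cwnd, 1))
```

Running the loop with thr < Ri instead leaves the marking ratio unchanged while both windows grow, which matches the text: below the reservation the source keeps marking at the rate it needs.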

Mark_p(init) = 1 − Thr / R_i    (8)

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers the second TCP dynamic (the so-called boomerang effect), which is caused by excessively dropping IN-marked packets at the core of a network when the connection is congested. If the throughput measured at a source, for instance, exceeds its reservation rate and iwnd exists, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate better than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of the network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.

Fig. 12. Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd

    if (OUT-marked packet loss) {
        ossthresh = ownd / 2
        ownd = ossthresh
    } else {
        cwnd = cwnd / 2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

    iwnd: In-profile window
    ownd: Out-of-profile window
    cwnd: congestion window
    Markp: the probability of marking
    Thr: throughput of a flow
    Ri: reservation rate of the ith flow
    Ti: target rate of the ith flow
    issthresh: threshold of iwnd
    ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half its size whenever an OUT packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of an IN-marked packet indicates acute congestion in the network, and then the congestion window is reduced to half its size. Therefore the modified TCP congestion algorithm compensates for the loss rates that are caused by the conventional congestion algorithm, which lacks a distinctive mechanism for reducing the congestion window.
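The halving rules of program code 2 can be sketched as runnable code (a simplification; the issthresh/ossthresh bookkeeping is folded into the returned windows, and the numbers in the usage lines are illustrative):

```python
# Runnable sketch of program code 2: TMS_I window closing on the third
# duplicate ACK. Returns (cwnd, iwnd, ownd) after the reduction.

def tms1_on_loss(cwnd, markp, out_marked_loss):
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if out_marked_loss:
        ownd = ownd / 2.0       # temporary congestion: halve only ownd
    else:
        cwnd = cwnd / 2.0       # IN-marked loss: acute congestion, halve cwnd
        iwnd = markp * cwnd     # issthresh = Markp * cwnd
        ownd = cwnd - iwnd      # ossthresh = cwnd - issthresh
    return iwnd + ownd, iwnd, ownd

print(tms1_on_loss(20.0, 0.5, out_marked_loss=True))   # only ownd halved
print(tms1_on_loss(20.0, 0.5, out_marked_loss=False))  # whole cwnd halved
```

An OUT loss costs only a quarter of a half-marked window, while an IN loss halves the whole window, which is exactly the asymmetry the paragraph above describes.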

The goals of TMS_I are to provide a modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) that is needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall numerical expression (7). We deal with the usage of the datum point by the edge-embedded marking module in subsection 3.4. Consequently, the strategy of providing a datum point that includes the user's information is able to alleviate the problem of unfairness within aggregated flows introduced by per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to fairly share the bottleneck bandwidth. In this case it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II should measure the probability of marking and compute the expected value of the marking rates, which is used in the fairness component as stated in subsection 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets from the sources, which represents the target rate each flow wants to have guaranteed.

We assume that an edge router receives information about the reservation rate that each user wants to be assured of, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall numerical expression (4). As stated in subsection 3.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking at the sources if the condition of the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents the increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source, for example, exceeds the threshold value, that is, the average marking rate E[m_i], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to detect a congestion situation, the other is to ensure the coherence of the Adequate-marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with that at time N−1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N−1, congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After the decision on congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (T_i), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor. In this case the fairness factor α is computed as follows:


α = 1 − Thr_i / T_i

Otherwise, it decreases the marking rate by α to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm

    if (ownd(N) != ownd(N-1)/2) {
        for (i = 1 to n) {
            Ti = Ri + (Xmi * E)
            if (Thr_i < Ti)
                Mi += α * Mi
            else
                Mi -= α * Mi
        }
    } else if (iwnd(N) != iwnd(N-1)/2) {
        exit
    } else {    /* iwnd(N) == iwnd(N-1)/2 */
        for (i = 1 to n) {
            Ti = Ri
            if (Thr_i < Ti)
                Mi += β * Mi
            else
                exit
        }
    }

    Mi: marking rate of the ith flow
    Ti: target rate of the ith flow
    Ri: reservation rate of the ith flow
    E: excess bandwidth
    ownd(N): newly measured OUT window
    ownd(N-1): OUT window measured at time N-1
    iwnd(N): newly measured IN window
    iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N−1, and the iwnd size at time N is not equal to half of the iwnd size at time N−1, congestion has happened in the network but is not serious. The else if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (or marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N−1, this means IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to consummate its target rate owing to the repeated reduction of sending rates through the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput does not realise the reservation rate of the user. In such an instance the complement factor β is

β = 1 − Thr_i / R_i

Otherwise, the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and then it allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates the problems of inefficient utilization of the reserved bandwidth under per-flow marking and of unfairness within aggregated flows under per-aggregation marking.
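The three cases of program code 3 can be put together as a runnable sketch (assumptions: the absolute value of α is used when decreasing, which the figure leaves implicit, and the rates in the usage lines are illustrative):

```python
# Runnable sketch of program code 3 (TMS_II re-marking), with the fairness
# factor alpha = 1 - Thr_i/T_i and complement factor beta = 1 - Thr_i/R_i.

def tms2_remark(M, Thr, R, Xm, E, ownd_halved, iwnd_halved):
    """M: marking rates, Thr: measured throughputs, R: reservation rates,
    Xm: fairness components from eq. (7), E: excess bandwidth."""
    if not ownd_halved:                    # (i) no congestion: share excess
        for i in range(len(M)):
            T = R[i] + Xm[i] * E
            alpha = abs(1.0 - Thr[i] / T)
            M[i] += alpha * M[i] if Thr[i] < T else -alpha * M[i]
    elif not iwnd_halved:                  # (ii) mild congestion: no re-marking
        pass
    else:                                  # (iii) acute congestion: T_i = R_i
        for i in range(len(M)):
            if Thr[i] < R[i]:
                M[i] += (1.0 - Thr[i] / R[i]) * M[i]   # complement factor beta
    return M

# Over-provisioned case: flow 0 is under its 2.0 Mbps target, flow 1 is over,
# so flow 0 is marked more and flow 1 less.
M = tms2_remark(M=[0.5, 0.5], Thr=[1.2, 2.4], R=[1.0, 1.0],
                Xm=[0.5, 0.5], E=2.0, ownd_halved=False, iwnd_halved=False)
print(M)
```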

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-born packets are treated by the RIO queue at the core of the network unlike what the sources anticipate. If the re-born OUT-marked packets (IN-OUT), which were IN at the sources, are for example dropped at the core in a time of temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size. In that case TMS_I does not alleviate the TCP dynamics over the assured service well. On the other hand, if the re-born IN-marked packets (OUT-IN), sent as OUT by the sources, are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve only ownd, which retards getting rid of the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but IN at the source, and OUT-IN is the counterpart. We assume that the drop precedence the source decides must not change, and so the strategy that TMS_II elaborates has to be deliberated. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow and green, respectively. The occasion of re-marking can be divided into three situations. First, if the congestion window has not been reduced to half its size and the throughput measured at the edge of the network does not complete the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN. When an OUT-IN packet is discarded, the TCP source halves its ownd size. Next, TMS_II turns packets IN-marked at the sources into OUT if the congestion window has not been reduced to half its size and the flow already accomplishes its target rate; the drop precedences then consist of IN, IN-OUT and OUT. TMS_II lowers the drop precedence of these OUT packets by a step, and so it allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window is equal to half of its former size and a flow does not yet achieve its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. TMS thus tries to compensate for the loss rate occurring in the time of congestion by lowering the drop precedence of OUT by a step.
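The mapping from the source and edge markings to the three drop precedences can be tabulated in a short sketch (colors per the text: IN = green, re-marked = yellow, OUT = red):

```python
# Drop-precedence bookkeeping for TMS re-marking (sketch). A re-marked packet
# (IN-OUT or OUT-IN) lands in the middle precedence, DP1.

COLOR = {0: "green", 1: "yellow", 2: "red"}

def drop_precedence(source_mark, edge_mark):
    if source_mark == "IN" and edge_mark == "IN":
        return 0                     # DP0: dropped last
    if source_mark != edge_mark:
        return 1                     # DP1: IN-OUT or OUT-IN
    return 2                         # DP2: OUT at both source and edge

for s, e in [("IN", "IN"), ("IN", "OUT"), ("OUT", "IN"), ("OUT", "OUT")]:
    dp = drop_precedence(s, e)
    print(f"{s}->{e}: DP{dp} ({COLOR[dp]})")
```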

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP flows are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not change and will capture the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be met.
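The concern above can be illustrated with a toy model (not the ns-2 simulation used later): one AIMD, TCP-like flow and one constant-rate, UDP-like flow share a link, and only the TCP flow backs off on overload. All numbers here are illustrative assumptions.

```python
# Toy model: one AIMD (TCP-like) flow and one constant-rate (UDP-like)
# flow share a link. On overload the TCP flow halves its rate while the
# UDP flow keeps sending, so UDP keeps its share and TCP oscillates
# below an even split of the capacity.

def share_link(capacity, udp_rate, rounds=1000):
    tcp = 1.0
    samples = []
    for _ in range(rounds):
        if tcp + udp_rate > capacity:
            tcp /= 2            # multiplicative decrease on congestion
        else:
            tcp += 0.1          # additive increase
        samples.append(tcp)
    return sum(samples) / len(samples)

avg_tcp = share_link(capacity=10.0, udp_rate=6.0)
# avg_tcp stays well below the 5.0 an even split would give,
# because only the TCP flow responds to congestion.
```

With a 10 Mbps link and a 6 Mbps non-responsive flow, the responsive flow oscillates roughly between 2 and 4 Mbps, never recovering an even share.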

UDP flows in the TMS are dealt with as out-of-profile, putting them at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the view of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2) but lowers the drop precedence of TCP packets by as much as the scaling factor allows. Therefore TMS tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows had almost dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous chapter. The simulations were done with the network simulator 2 (ns-2, version 2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or kept below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets and 0.02 for min_in, max_in and Pmax_in, and 50 packets, 400 packets and 0.1 for min_out, max_out and Pmax_out. The RED parameters for the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
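The per-precedence drop behaviour implied by the Table 4 settings can be sketched as follows. This is only the standard RED drop-probability ramp applied per drop precedence; the real RIO queue averages the queue length with the EWMA weight 0.002, which is omitted here (the average queue size is taken as given), and the function name is illustrative.

```python
# Sketch of the per-precedence RED drop curve using the Table 4 settings.
# (min_th, max_th, Pmax); queue sizes are in packets.

RED_PARAMS = {
    'DP0': (100, 400, 0.02),
    'DP1': (100, 400, 0.05),
    'DP2': (50,  400, 0.10),
}

def drop_prob(avg_queue, dp):
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0                      # no drops below the min threshold
    if avg_queue >= max_th:
        return 1.0                      # force-drop above the max threshold
    # linear ramp between the two thresholds
    return p_max * (avg_queue - min_th) / (max_th - min_th)
```

At any average queue size, DP2 (OUT) packets see the steepest curve and DP0 (IN) the gentlest, which is what protects in-profile traffic during congestion.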

First of all we investigate the simulation results with the Adequate-marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3Mbps; the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s), which stop at t = 350 (s). First, the interval from t = 10 (s) to t = 110 (s) represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5Mbps; marking is initially performed, but only temporarily, for sharing the excess bandwidth. Second, in the run from t = 120 (s) to 210 (s), with soft congestion, seven flows try to achieve the bandwidth of 3Mbps; marking in this interval is practically carried out so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220 (s) to t = 350 (s); all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the other periods. Last, in the interval from t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to the congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs, the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.

42

While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15: marking-rate traces of flow1 through flow10 over simulation time 0-400 s; a) Time Sliding Window Two Color Marker, b) Two Markers System. Vertical axis: bandwidth (Mbps), 0-6.]

Fig 15 Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic features by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate and lags behind the changes in network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet-marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in Section 3.2. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment ran from t = 0 (s) to 500 (s). Figure 17 presents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. However, in Figure 17 (b) each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking if the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.
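The allocation rule behind this fairness strategy (each flow's edge target rate is its reservation plus a marking-rate-weighted share of the excess bandwidth, following equations (4) and (7) of Chapter 3) can be sketched as follows; the function name and inputs are illustrative.

```python
# Sketch of the marking rate-based fairness allocation: each flow's edge
# target rate T_i is its reservation R_i plus a share of the excess
# bandwidth E = C - sum(R), weighted by X_mi = m_i / sum(m), where m_i
# is the marking rate reported by source i.

def edge_target_rates(capacity, reservations, marking_rates):
    excess = capacity - sum(reservations)     # E = C - sum(R_i)
    total_m = sum(marking_rates)
    return [r + (m / total_m) * excess        # T_i = R_i + X_mi * E
            for r, m in zip(reservations, marking_rates)]
```

For example, two flows reserving 3Mbps each on a 10Mbps link split the 4Mbps excess in proportion to their marking rates, and the allocated targets always sum to the link capacity.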

[Figure 16: total throughput of S1 and S2 and the marking rate of S1 over simulation time 0-400 s; a) Time Sliding Window Two Color Marker, b) Two Markers System. Vertical axis: bandwidth (Mbps), 0-8.]

Fig 16 Marking packets through the scaling factor: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis signifies the simulation time.


[Figure 17: per-flow throughput (flow1 through flow10) for aggregate flows 1-6; a) Time Sliding Window Two Color Marker, b) Two Markers System. Vertical axis: bandwidth (Mbps), 0-7.]

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis signifies the aggregate flows.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, the implementation of a TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of each aggregate flow as RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous chapter. For the simulations we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams, generated by infinite file transfers, originate at sources 1 through 4 and are destined for destinations 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined for destinations 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of Section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy by increasing the TCP flows, each with a target rate of 1Mbps, from source 1 to 4, while UDP is constant at 1Mbps, and by changing the capacity of the bottleneck link from 8Mbps to 3Mbps in order to take account of network dynamics. We use the link utilisation (ρ) as follows [16]:

    ρ = ( Σi Ti / C ) × 100 (%)        (9)

where Ti and C represent the target rate of a flow and the capacity of a bottleneck link in the network, respectively. The interval from ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

    Excess BW = ( ExBW / T ) × 100 (%)        (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
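The two metrics of equations (9) and (10) are straightforward to compute; the sketch below restates them as code (function names are illustrative):

```python
# Metrics plotted in Figure 19: link utilisation (9) and per-flow
# excess bandwidth as a percentage of the target rate (10).

def link_utilisation(target_rates, capacity):
    """rho: sum of target rates over bottleneck capacity, in percent."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(throughput, target_rate):
    """Excess bandwidth received, as a percentage of the target rate."""
    return 100.0 * (throughput - target_rate) / target_rate
```

For instance, six 1Mbps targets on a 3Mbps bottleneck give ρ = 200 (%), the right-hand edge of Figure 19, and a flow achieving exactly its target rate sits at 0 (%) excess bandwidth.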


[Figure 19: average excess bandwidth (%) of TCP and UDP versus link utilisation (%), 30-190; a) Time Sliding Window Two Color Marker, b) Two Markers System.]

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis signifies the link utilisation as a percentage.

However, since UDP sources pour traffic into the network without any penalty, they complete their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only dealt with as OUT marking (DP2, red); (ii) the magnitude of the marking rate represents the increase or decrease in demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and receives a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps, respectively. The setting of the RTT parameters follows Section 4.1, and the average throughput of all the flows in both schemes is similar. First we consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, owing to the different marking strategy applied against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their probability of marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that in the under-provisioned network each flow receives a fairer share (about 0.5Mbps) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20: per-flow throughput of TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases; a) Time Sliding Window Two Color Marker, b) Two Markers System. Vertical axis: bandwidth (Mbps), 0-2.]

Fig 20 Throughput according to network conditions: the vertical axis represents the throughput against the target rate, and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen here, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate, owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the overeating of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to the changes in the network, so it marks packets in proportion to the reservation rates users desire to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and allocates the bandwidth of the bottleneck link to the flows in view of proportional fairness, the so-called marking rate-based fairness strategy, if resources in the diffserv network are over-provisioned. Consequently it mitigates the problem of unfairness in per-aggregation marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP had almost dominated and achieve its target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290

[3] S. Floyd and V. Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, Assured Forwarding PHB Group, RFC 2597, June (1999)

[5] K. Nichols, S. Blake, F. Baker and D. Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec. (1998)

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Dec. (1998)

[7] V. Jacobson and M. Karels, Congestion Avoidance and Control, in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329

[8] I. Yeom and A. L. N. Reddy, Realizing Throughput Guarantees in a Differentiated Services Network, in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376

[9] N. Seddigh, B. Nandy and P. Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999)

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May (1998)

[11] J. Ibanez and K. Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998)

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449

[14] N. Seddigh, B. Nandy and P. Pieda, Study of TCP and UDP Interaction for AF PHB, IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999)

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203

[16] A. Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc. HPSR'01, Dallas, May (2001) 190-196

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999)

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Aug. (1998)

[19] M. Vanhala, Differentiated Services architecture, S-38.128 Teletekniikan erikoistyo (1999)

[20] V. Jacobson, K. Nichols and K. Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998)

[21] V. Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990)

[22] V. Paxson, End-to-End Internet Packet Dynamics, in Proc. of ACM SIGCOMM, Sept. (1997)

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct. (1996)

[24] S. Floyd and T. Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999)

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proceedings of ACM SIGCOMM, Oct. (1994) 24-35

[26] S. Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, vol. 24, no. 5, Oct. (1994) 10-23

[27] S. Floyd and V. Jacobson, Random Early Detection Gateways for Congestion Avoidance, ACM/IEEE Transactions on Networking, vol. 1, no. 4, Aug. (1993) 397-413

[28] A. Feroz, A. Rao and S. Kalyanaraman, TCP-friendly Traffic Marker for IP Differentiated Services, in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48


different roles to achieve the reservation rate, or the target rate, over Assured Services in a diffserv network. Each marking module, however, closely cooperates with the other, and each consists of a meter and a marker. Figure 9 illustrates the framework of the "Two Markers System" in a diffserv network.

Fig 9 The "Two Markers System" framework

First, TMS_I carries out two main roles: one is to cope with network dynamics, and the other is to provide the components of the fairness strategy, viz. the marking rates, to TMS_II. TMS_I is integrated with a TCP source that has two windows, identified according to the packets it marks [2]: the In-profile window (iwnd) and the Out-of-profile window (ownd). The congestion window size in the two-window TCP of TMS_I is the summation of iwnd and ownd. Whenever congestion occurs in the network, TMS_I selects the window (iwnd or ownd) and reduces its size according to whether the dropped packets are IN-marked or OUT-marked, and so TMS_I decreases TCP impacts in the underlying AF services. In [2] it was shown that a source-integrated packet marking engine (PME) maintains the marking rate more properly than a source-transparent PME, because the measurement of throughput against the reservation rate is accomplished more exactly at the source than at the edge of a network. TMS_I also adequately marks packets to realise the reservation rate of the source it is integrated with, and each host pours the packets into the edge of the network. Information about the marked packets is used in allocating the bandwidth (or excess bandwidth) to the aggregated flows in the TMS_II of the traffic conditioner, which helps the edge-embedded marking module to properly mark packets. So to speak, TMS_I is not only a marker used to mark packets in order to classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner, as shown in Figure 10, monitors at the edge of the network the traffic flows incoming from users against the profile they have agreed with the service provider. The meter of TMS_II measures the number of marked packets (or the marking information) from the sources, and the marker then re-marks the aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore, the key input of the fairness strategy is the marking rate (i.e., the information about a flow's demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) the aggregated flows toward the total target rate of all the flows, based on the probability of marking from the sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10 The design of the Two Markers System architecture

3.2 The crux of fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, …, n, and Cj the capacity of a bottleneck link j, j = 1, 2, …, m, in the network. The target rate of each source is a reservation rate (Ri) of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, namely the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from ingress point S to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of the network in the first three cases of service scope above is C = Σj=1..m Cj, where j denotes a bottleneck link of the network. Therefore the excess bandwidth at the bottleneck links is given by

    E = C − R = Σj=1..m Cj − Σi=1..n Ri        (1)

In an edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to the case m = 1, and so the excess bandwidth is

    E = C − R = C1 − Σi=1..n Ri        (2)

The last case of service scope, tied to the receiver, however, requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the reservation-rate information received at an ingress point becomes R = Σk=1..l Σi=1..nk Rik, where k, k = 1, 2, …, l, denotes the k-th ingress point. The excess bandwidth is given by

    E = C − R = C1 − Σk=1..l Σi=1..nk Rik        (3)
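Equations (1)-(3) differ only in how the total capacity and total reservation are summed. A small sketch (function name illustrative) covers all three by accepting the bottleneck capacities and the per-ingress reservation lists:

```python
# Sketch of the excess-bandwidth computation E = C - R of (1)-(3).
# For scopes (i)-(ii) pass several bottleneck capacities; for the
# edge-to-edge scope (iii) pass a single capacity; scope (iv) would
# need every ingress point's reservation list, which is exactly the
# information an ingress point normally lacks.

def excess_bandwidth(capacities, reservations_per_ingress):
    C = sum(capacities)                              # sum of C_j
    R = sum(sum(rs) for rs in reservations_per_ingress)  # sum of R_ik
    return C - R
```

For example, one 10Mbps bottleneck and a single ingress reserving 3Mbps for each of three flows leaves 1Mbps of excess bandwidth.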


Where lth ingress point has to be aware of reservation rates as many as l-1 ingress points While such a service is theoretically possible it appears to be a mismatch with the more usual specification of Differentiated Services which governs the quality with which traffic is sent rather than received Therefore Two Markers System is able to be implement in variable SLA scope but we assume that the scope of a service in the paper includes the first three cases except the last with the reason above A flowrsquos target rate in the edge of the network is given by

Where Xmi is proportional to the probability of marking from a source That is it is determined depending upon fraction between the marking rate of each flow and the average marking rate (4) Development for Xmi is as follows First the probability of marking for each flow i is the number of the marked packets (Nmi) by throughput Thri or In-profile window by congestion window

Where iwnd and cwnd represent window of the IN-marked packets and congestion window respectively Next the average marking rate E[mi] of all the flows at the edge of a network is

Where Markp also represents the probability of marking in TMS_I algorithm of the section 33 Finally Xmi is developed by using the fraction between (3) and (4) that is Xmi

)( summinus+=times+= imiimiii RCXREXRT (4)

cwndiwnd

ThrN

mi

mii ==

(5)

sum=

=n

iii m

nmE

1

1][

sum=n

iicwnd

iwndn

)(1

sum=n

iipMark

n)(1

(6)

30

represents a relative value against the datum point E[mi] The fraction of Xmi varies from 0 to 1 depending upon mi and E[mi]

Consequently this value stands for excess bandwidth that each flow requires to complete its target rate

33 Adequate-Marking strategy Two Markers System-I algorithm

In this section we focus on the marking algorithm so called the Adequate-Marking strategy that is implemented in the TCP-source marking module (TMS_I) As stated before congestion window (cwnd) consists of two windows in this algorithm [8] In-profile window and Out-of-profile window In-profile window represents the number of IN-marked packets per-RTT needed to achieve the reservation rate that is guaranteed by a contract with Bandwidth Broker Out-of-profile window signifies the number of OUT-marked packets per-RTT that is outstanding Both windows have their respective threshold viz issthresh and ossthresh

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some earlier work [2, 8] has focused on similar issues in networks where the marking policies differ from the one studied here. We improve the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate up to the bandwidth the source wants guaranteed.

Xmi = mi / (n × E[mi]) = mi / Σ mi    (7)


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking (Markp(init))? It is determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as follows:

When new data is acknowledged:
    iwnd = Markp × cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri):
        if (iwnd < issthresh):
            iwnd += Markp
        else:
            iwnd += 1/(2 × cwnd)
        if (ownd < ossthresh):
            ownd += (1 - Markp)
        else:
            ownd += 1/(2 × cwnd)
    else:
        if (0 < iwnd):
            if (iwnd < issthresh):
                iwnd -= 1 - Markp
            else:
                iwnd -= Markp
        else:
            ownd = cwnd
            if (ownd < ossthresh):
                ownd += 1
            else:
                ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


where Markp(init) represents the initial marking rate with which a user intends to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I marks packets proportionally, depending on whether the throughput (Thr) measured at the TCP source reaches its target rate. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the required marking rate; otherwise it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly grows each window to probe for bandwidth becoming available on the path. Note that while probing for resources, the increase rates of the two window sizes sum to 1/cwnd.

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic; it also addresses a second kind of TCP dynamics (the so-called boomerang effect), caused by excessive dropping of IN-marked packets at the core of a network when the network connection is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd is non-zero, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to other flows and keep IN-marked packets from being immoderately dropped at the core of a network. In fact, the TCP-source marking module (TMS_I) maintains the marking rate more properly than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and the source, mistaking this for an acute congestion

Markp(init) = 1 - Thr / Ri    (8)
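As a numeric illustration of (8) as reconstructed above (hypothetical values; the clamp to [0, 1] is an added assumption, since a throughput above the reservation would otherwise give a negative probability):

```python
def initial_marking(Thr, R_i):
    """Markp(init) = 1 - Thr/Ri: the farther the measured throughput
    Thr falls below the reservation rate Ri, the larger the initial
    IN-marked share of the congestion window."""
    return min(1.0, max(0.0, 1.0 - Thr / R_i))

# Measured 1 Mbps against a 4 Mbps reservation:
p = initial_marking(Thr=1.0, R_i=4.0)    # 0.75
cwnd = 20.0
iwnd = p * cwnd          # 15 packets of the window marked IN per RTT
ownd = cwnd - iwnd       # 5 packets marked OUT
```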


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size is kept low, and in the end no source achieves its target rate. On the other hand, because TMS_I recognises a packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window is not decreased as often as with the established edge marker.

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp × cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss):
        ossthresh = ownd/2
        ownd = ossthresh
    else:
        cwnd = cwnd/2
        issthresh = Markp × cwnd
        ossthresh = cwnd - issthresh

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the i-th flow
Ti: target rate of the i-th flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs, which indicates temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion, and the whole congestion window is then halved. The modified TCP congestion algorithm therefore compensates the loss rates caused by the conventional congestion algorithm, which has no distinct mechanism for reducing the congestion window.

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Markp) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall the numerical expression (7). We deal with the usage of the datum point in the edge-embedded marking module in subsection 3.4. Consequently, a strategy providing a datum point that includes the user's information is able to alleviate the problem of unfairness introduced within aggregated flows under per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources, in order to share the bottleneck bandwidth fairly. To accomplish this fairness strategy, called the marking rate-based fairness strategy, it must closely cooperate with the TCP-source marking module. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which are used in the fairness component as stated in subsection 3.2. Notice that fairness is defined by how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants guaranteed from the sources.

We assume that an edge router receives information on the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of a network. We also consider two network conditions: an over-provisioned and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall the numerical expression (4). As stated in subsection 3.2, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking from the sources if the network is over-provisioned; otherwise it is equal to the source's reservation rate. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, that is the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate; it is therefore marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and the congestion situation into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other to ensure the coherence of the Adequate-marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with that at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case and shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After the congestion decision, it measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor, computed as follows:


α = 1 - Thr_i / Ti

Otherwise it decreases the marking rate by α, to provide other flows with the chance of sharing the excess bandwidth.

Fig 13 Program code 3 TMS_II algorithm

if (ownd(N) != ownd(N-1)/2):
    for i = 1 to n:
        Ti = Ri + (Xmi × E)
        if (Thr_i < Ti):
            Mi += α × Mi
        else:
            Mi -= α × Mi
else if (iwnd(N) != iwnd(N-1)/2):
    exit
else:    /* iwnd(N) == iwnd(N-1)/2 */
    for i = 1 to n:
        Ti = Ri
        if (Thr_i < Ti):
            Mi += β × Mi
        else:
            exit

Mi: marking rate of the i-th flow
Ti: target rate of the i-th flow
Ri: reservation rate of the i-th flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else-if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (the marking information), so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput exceeds the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to reach its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate the loss rate, and increases the marking rate by β, called a complement factor, if the throughput does not realise the user's reservation rate. In that instance the complement factor β is

β = 1 - Thr_i / Ri

Otherwise, the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. We can therefore say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
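The two factors can be sketched together as one per-flow re-marking step (an illustrative Python fragment; the branch structure condenses program code 3, and taking the factor magnitudes as absolute values is our assumption, which the thesis leaves implicit):

```python
def remark(Mi, Thr_i, Ti, Ri, acute_congestion):
    """One TMS_II re-marking step for flow i.

    Without acute congestion, the fairness factor a = |1 - Thr_i/Ti|
    moves the marking rate Mi toward the target rate Ti.  Under acute
    congestion, the complement factor b = 1 - Thr_i/Ri raises Mi only
    while the reservation rate Ri is not met."""
    if not acute_congestion:
        a = abs(1.0 - Thr_i / Ti)
        return Mi + a * Mi if Thr_i < Ti else Mi - a * Mi
    if Thr_i < Ri:
        b = 1.0 - Thr_i / Ri
        return Mi + b * Mi
    return Mi    # reservation met: no re-marking

# A flow at 2 Mbps against a 4 Mbps target: a = 0.5, so Mi grows by half.
new_Mi = remark(Mi=0.5, Thr_i=2.0, Ti=4.0, Ri=3.0, acute_congestion=False)
```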

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets into OUT. The re-marked packets are then treated by the RIO queue at the core of the network differently from what their sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size; TMS_I then fails to alleviate the TCP dynamics over the assured service. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve only ownd, which retards the relief of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is marked OUT at the edge of the network but IN at the source; an OUT-IN packet is the counterpart. We assume that the drop precedence the source decides must not change, and that the strategy TMS_II elaborates has to respect it. The order of dropping packets is (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasions for re-marking divide into three situations. First, if the congestion window has not halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves its ownd size. Next, TMS_II turns packets that were IN at the sources into OUT if the congestion window has not halved and the flow has already accomplished its target rate, so the drop precedences are IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of OUT by one step, allowing other flows to share the excess bandwidth; RIO at the core drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, and OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. This tries to compensate the loss rate occurring during congestion by lowering the drop precedence of OUT one step.
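The mapping from the two marking decisions to the three drop precedences can be summarised as follows (an illustrative sketch; the function and names are ours, not code from the thesis):

```python
def drop_precedence(src_mark, edge_mark):
    """Combine the source's mark and the edge's (possibly re-marked)
    mark into one of the three RIO drop precedences.  A packet stays
    green (DP0) only if both agree on IN, red (DP2) only if both agree
    on OUT; any re-marked packet (IN-OUT or OUT-IN) becomes yellow (DP1)."""
    if src_mark == "IN" and edge_mark == "IN":
        return "DP0"    # green: dropped last
    if src_mark == "OUT" and edge_mark == "OUT":
        return "DP2"    # red: dropped first
    return "DP1"        # yellow: IN-OUT or OUT-IN

# RIO then discards packets in the order DP2, DP1, DP0.
```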

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows will not, and will seize the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, making it impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile, to be put at a disadvantage, while TCP flows adapt to network dynamics through the marking rate adjusted by the scaling factors α and β. From the view of the core of the network, the RIO queue always drops UDP flows marked OUT (red, DP2) preferentially, but lowers the drop precedence of TCP packets by as much as the scaling factor. It therefore tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows would otherwise dominate. Consequently, TMS compensates the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous chapter. The simulations were done with the network simulator ns-2 (v2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or held below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters of the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th    max_th    Pmax
DP0     100       400       0.02
DP1     100       400       0.05
DP2     50        400       0.1
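The parameters in Table 4 drive a standard linear RED drop curve per precedence level; the following sketch shows how they would be applied (the standard RED formula, not code from the thesis):

```python
# (min_th, max_th, Pmax) per drop precedence, from Table 4.
RED_PARAMS = {
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50, 400, 0.10),
}

def drop_probability(avg_queue, dp):
    """Linear RED drop probability for the given drop precedence:
    zero below min_th, rising linearly to Pmax at max_th, and a
    forced drop at or above max_th."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return p_max * (avg_queue - min_th) / (max_th - min_th)

# At the same average queue length, DP2 packets are dropped with the
# highest probability and DP0 packets with the lowest.
```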

First of all, we investigate the simulation results for the Adequate-marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps, and the capacity of the bottleneck link set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s; the last three stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, with five flows competing for the excess bandwidth of 5 Mbps; marking initially takes place, but only temporary marking for sharing the excess bandwidth. Second, at runtime from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps; marking in this interval is carried out practically, so as to realise the users' reservation rates. Next, grave congestion appears from t = 220 s to t = 350 s; all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that fewer marked packets are needed to accomplish the target rate than in Figure 15 (a). In both graphs, the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15: two panels of marking-rate curves (0 to 6 Mbps) for flow1 through flow10 over 0 to 400 s of simulation time]
a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig 15 Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion


Second, we study realising the target rate and sharing resources between two flows with different traffic features, by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate behind the changes in network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of resources by as much as the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) fast in response to the increased (or decreased) traffic load, to achieve its target rate; that is, when more bandwidth is needed to achieve the target rate, packets are marked more. However, packet-marking is quickly turned off when TMS detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 3.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, namely an under-subscribed network. The experiment ran from t = 0 s to 500 s. Figure 17 presents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without the marking rate being adequate to the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking when the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Figure 16: two panels showing the total throughput of S1 and S2 and the marking rate of S1 (0 to 8 Mbps) over 0 to 400 s of simulation time]
a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig 16 Marking packets through the scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time


[Figure 17: two bar charts of the bandwidth (0 to 7 Mbps) achieved by flow1 through flow10 for six aggregate-flow configurations]
a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis indexes the aggregate flows

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of each aggregate flow as RTT increases

Aggregate flow (No.)        1        2        3        4        5        6
RTT (ms)                    80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)     4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)        5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulations we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5 and 6 and are destined to nodes 11 and 12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rates of the sources are fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1 Mbps. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For the experiment, we increase the TCP flows with a target rate of 1 Mbps from sources 1 to 4 while each UDP flow remains constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = Σ Ti / C    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link, respectively. The interval from ρ = 100 % to 200 % shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = ExBW / T    (10)
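Expressions (9) and (10) amount to the following (a trivial Python rendering with hypothetical numbers; taking ExBW as achieved throughput minus target is our reading of the figure's axis):

```python
def link_utilisation_pct(targets, C):
    """rho (%) = sum(Ti) / C, expressed as a percentage (expression (9))."""
    return 100.0 * sum(targets) / C

def excess_bw_pct(throughput, target):
    """Excess bandwidth as a percentage of the target rate (expression
    (10)); ExBW is taken here as achieved throughput minus target, so
    0 % means the flow sits exactly at its target rate."""
    return 100.0 * (throughput - target) / target

# Four 1 Mbps TCP targets plus two 1 Mbps UDP streams on an 8 Mbps link:
rho = link_utilisation_pct([1.0] * 6, 8.0)    # 75 %
surplus = excess_bw_pct(1.25, 1.0)            # +25 %
```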

Therefore, an excess bandwidth of 0 % denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows, and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.


[Figure 19: two panels of the average excess bandwidth (%) of TCP and UDP flows against link utilisation from 30 % to 190 %]
a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig 19 Sharing excess bandwidth according to the link utilization the vertical axis represents the excess bandwidth as percentage of target rate and the horizontal axis signifies link utilisation as percentage

However, since UDP sources send traffic into the network without any penalty, they achieve their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume the following: (i) UDP flows in TMS are only marked OUT (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is, when congestion occurs, to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contract, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisation ρ = 60 % and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results concerning the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps, respectively. The setting of the RTT parameters follows subsection 4.1, and again the average throughputs of all the flows in both schemes are similar to each other. First, we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. In Figure 20 (b), on the other hand, the TCP flows fairly share the excess bandwidth of about 0.3 Mbps, thanks to the different marking strategy elaborated against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their probability of marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because UDP flows are transmitted constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of TCP


flows. However, Figure 20 (b) shows that the bandwidth in the under-provisioned network is distributed more fairly (about 0.5 Mbps each) than in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the sensitive TCP flows, as shown in Figure 20 (b).

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig 20. Throughput according to network conditions: the vertical axis represents the target rate and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown that the Two Markers System can be applied under variable SLA scopes. In this context two important issues have arisen, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the over-consumption of bandwidth by UDP flows.

The strongest point of the Two Markers System proposed in this paper is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow with respect to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the probability of marking according to the changes of the network, so it marked packets as much as the reservation rates the sources desire to guarantee. It also notified TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of sources and allocated the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy, if resources in the diffserv network are over-provisioned. Consequently it mitigated the problem of unfairness in per-aggregation marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly


share the excess bandwidth that UDP had almost dominated, and achieve their target rates. Next, in an under-provisioned network, although all the flows couldn't achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows according to the variable parameters of RIO.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June (1999).

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. (1998).

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. (1998).

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999).

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May (1998).

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998).

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999).

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May (2001) 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999).

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. (1998).

[19] M. Vanhala, "Differentiated Services-architecture," S-38.128 Teletekniikan erikoistyö (1999).

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998).

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990).

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. (1997).

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. (1996).

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999).

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proceedings of ACM SIGCOMM, Oct. (1994) 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. (1994) 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," ACM/IEEE Transactions on Networking, vol. 1, no. 4, Aug. (1993) 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48.



classify the service in the core of a network, but also an indicator, also called the TCP-source marking module, that notifies TMS_II at the edge of a network of the marking rate.

Second, TMS_II, called the edge-embedded marking module of the traffic conditioner as shown in Figure 10, at the edge of the network monitors traffic flows incoming from users against the profile that they have agreed with the service provider. The meter of TMS_II measures the number of the marked packets (or the marking information) from sources, and then the marker re-marks aggregated flows against the profile if the total throughput is less than the capacity of the network. In this case we elaborate a fairness strategy for sharing the excess bandwidth of a bottleneck link. TMS_I increases (or decreases) the In-profile window (or Out-of-profile window) size in order to realise the reservation rate of the source, where the increase or decrease of the marking rate represents the level of demand for resources. Therefore the key point of the fairness strategy is the marking rate (i.e. the information about a demand for bandwidth) of the traffic flows from users. TMS_II re-marks (or marks) aggregated flows for the total target rate of all the flows based on the probability of marking from sources. That is, TMS_II adjusts the probability of marking so as to allocate resources to each flow as much as it requires.

Fig 10. The design of the Two Markers System architecture

3.2 The crux of the fairness strategy

Note that in this system the sources' target rates are different from the edge's target rate. Let Ri denote the reservation rate of a flow i, i = 1, 2, …, n, and Cj represent the capacity of a bottleneck link j, j = 1, 2, …, m, in the network. The target rate of each source is a


reservation rate (Ri) of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, that is, the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement Scope, SLA Scope) from the scalability view of diffserv, as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from an ingress point to a set of egress points
(iii) all traffic between ingress point S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of a network in the first three cases of service scope above is C = Σ_{j=1}^{m} C_j, where C_j denotes a bottleneck link of the network. Therefore the excess bandwidth at this bottleneck link is then given by

    E = C − R = Σ_{j=1}^{m} C_j − Σ_{i=1}^{n} R_i    (1)

In the edge-to-edge SLA, namely the third case of service scope, the capacity of the network corresponds to m = 1, and so the excess bandwidth is

    E = C − R = C_1 − Σ_{i=1}^{n} R_i    (2)

The last case of service scope, tied to the receiver, however requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the information of reservation rate received at an ingress point becomes R = Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik, where k, k = 1, 2, …, l, represents the kth ingress point. The excess bandwidth is given by

    E = C − R = C_1 − Σ_{k=1}^{l} Σ_{i=1}^{n} R_ik    (3)


Here the lth ingress point has to be aware of the reservation rates of as many as l−1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore the Two Markers System can be implemented under variable SLA scopes, but for the reason above we assume that the scope of a service in this paper includes the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

    T_i = R_i + X_mi · E = R_i + X_mi (C − Σ_i R_i)    (4)

where X_mi is proportional to the probability of marking from a source; that is, it is determined by the fraction between the marking rate of each flow and the average marking rate. The development of X_mi is as follows. First, the probability of marking for each flow i is the number of marked packets (N_mi) divided by the throughput (Thr_i), or the In-profile window divided by the congestion window:

    m_i = N_mi / Thr_i = iwnd / cwnd    (5)

where iwnd and cwnd represent the window of IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] of all the flows at the edge of a network is

    E[m_i] = (1/n) Σ_{i=1}^{n} m_i = (1/n) Σ_{i=1}^{n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1}^{n} Mark_p,i    (6)

where Mark_p also represents the probability of marking in the TMS_I algorithm of section 3.3. Finally, X_mi is developed by using the fraction between (5) and (6); that is, X_mi


in (7) represents a relative value against the datum point E[m_i]. The fraction X_mi varies from 0 to 1 depending upon m_i and E[m_i].

Consequently this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.
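To make equations (4) through (7) concrete, the fairness component and the resulting edge target rates can be sketched in a few lines of Python (an illustrative rendering of ours, not thesis code; the function and variable names are assumptions):

```python
def edge_target_rates(reservations, marking_rates, capacity):
    """T_i = R_i + X_mi * E, with X_mi = m_i / sum(m_i)  (eqs. 4 and 7)
    and E = C - sum(R_i) the excess bandwidth (eq. 1)."""
    excess = capacity - sum(reservations)      # E; assumed >= 0 (over-provisioned)
    total_marking = sum(marking_rates)         # equals n * E[m_i] from eq. (6)
    return [r + (m / total_marking) * excess   # eq. (4)
            for r, m in zip(reservations, marking_rates)]

# Three flows reserving 1 Mbps each on a 5 Mbps bottleneck (E = 2 Mbps);
# the middle flow marks twice as aggressively, so it earns a larger share.
targets = edge_target_rates([1.0, 1.0, 1.0], [0.25, 0.5, 0.25], 5.0)
# targets == [1.5, 2.0, 1.5]
```

That the targets sum exactly to the bottleneck capacity is the proportional-fairness property the text claims for the marking rate-based strategy.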

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) consists of two windows in this algorithm [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of OUT-marked packets per RTT that are outstanding. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources. That is to say, the TCP-source marking module keeps up a marking rate equal to the bandwidth the source wants to guarantee.

    X_mi = m_i / (n · E[m_i]) = m_i / Σ_{i=1}^{n} m_i    (7)


Fig 11. Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking (Mark_p(init))? It is generally determined by the fraction between the measured throughput (Thr) and the target throughput (Ti = Ri), as follows:

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)  iwnd += Markp
        else                   iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)  ownd += (1 - Markp)
        else                   ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)  iwnd -= 1 - Markp
            else                   iwnd -= Markp
        } else
            ownd = cwnd
        if (ownd < ossthresh)  ownd += 1
        else                   ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


where Mark_p(init) represents the initial rate that the user wants to guarantee in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput doesn't reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed. Otherwise it increases the ownd size so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the single window of the traditional congestion algorithm: the summation of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a decrement of ownd to the same degree. When either window grows above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to grope for more bandwidth becoming available on the path. Note that the summation of the increasing rates of the two window sizes while digging out resources is 1/cwnd.
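As a cross-check of these rules, Program code 1 can be transcribed almost line for line into Python (a sketch of ours; window sizes are treated as real numbers of packets, and all names mirror the pseudocode):

```python
def on_new_ack(cwnd, markp, thr, ri, issthresh, ossthresh):
    """One TMS_I window update when new data is ACKed (cf. Program code 1).
    Returns the new (cwnd, markp)."""
    iwnd = markp * cwnd                 # In-profile share of the window
    ownd = cwnd - iwnd                  # Out-of-profile share
    if thr < ri:                        # below the reservation rate: mark more
        iwnd += markp if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                               # reservation met: yield the IN share
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd            # new cwnd and marking probability

# A source at 2 Mbps against a 3 Mbps reservation grows both windows:
cwnd, markp = on_new_ack(cwnd=10.0, markp=0.5, thr=2.0, ri=3.0,
                         issthresh=8.0, ossthresh=8.0)
# cwnd == 11.0, markp == 0.5
```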

The outer else clause of Program code 1 describes the fairness strategy for sharing the excess bandwidth among the other sources' TCP traffic and best-effort traffic; it also considers the second TCP dynamic (the so-called boomerang effect), which is caused by excessively dropping IN-marked packets at the core of a network when the network is congested. If the throughput measured at a source, for instance, exceeds its reservation rate and iwnd exists, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and to keep IN-marked packets from being immoderately dropped at the core of the network. In fact, the TCP-source marking module (TMS_I) keeps up the marking rate more properly than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking it for an acute congestion

    Mark_p(init) = 1 − Thr / R_i    (8)


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source doesn't achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.

Fig 12. Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd/2
        ownd = ossthresh
    } else {
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the ith flow
Ti: target rate of the ith flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half of its size whenever an OUT packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of IN-marked packets indicates acute congestion in the network, and then the congestion window is reduced to half of its size. Therefore the modified TCP congestion algorithm compensates for the loss rates that are caused by the conventional congestion algorithm, which has no such distinctive mechanism for reducing the congestion window.
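A minimal Python rendering of this loss handling (our sketch of Program code 2; we assume the mark of the lost packet is known, and in the OUT branch we recompute cwnd from the two windows):

```python
def on_triple_dupack(cwnd, markp, out_marked_loss):
    """TMS_I reaction to a third duplicate ACK (cf. Program code 2).
    Returns the new congestion window size."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if out_marked_loss:
        # OUT loss signals temporary congestion: halve only ownd
        ossthresh = ownd / 2
        ownd = ossthresh
        return iwnd + ownd
    # IN loss signals acute congestion: halve the whole window
    cwnd = cwnd / 2
    issthresh = markp * cwnd
    ossthresh = cwnd - issthresh
    return cwnd

# With cwnd = 20 and markp = 0.5, an OUT loss gives 15, an IN loss gives 10.
```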

The goals of TMS_I are to provide modified congestion control that alleviates the TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) that is needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall the numerical expression (7). We deal with the usage of the datum point in the edge-embedded marking module in subsection 3.4. Consequently, the strategy of providing a datum point that includes the information of the user is able to alleviate the problem of introducing unfairness within aggregated flows in per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to fairly share the bottleneck bandwidth. In this case it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II should measure the probability of marking and compute the expected value of the marking rates that is used in the fairness component, as stated in subsection 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rates the flows want to guarantee, from the sources.

We assume that an edge router receives the information of the reservation rate of which each user wants to be assured, and contracts with the service provider (or bandwidth


broker) for the bottleneck link bandwidth of a network. We also consider two network conditions, as follows: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall the numerical expression (4). As stated in subsection 3.2, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking from the sources if the condition of the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy that is elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source, for example, exceeds the threshold value, that is, the average marking rate E[m_i], it means that the flow wants more bandwidth than others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and the congestion situation into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a situation of congestion, the other is to ensure the coherence of the Adequate-marking strategy. In fact, a situation of congestion is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N−1. We consider three situations according to the condition of the network, as follows: (i) no congestion, (ii) congestion but not acute, and (iii) serious congestion.

First, if the ownd size at time N isn't equal to half of the ownd size at time N−1, this

represents that congestion does not occur in the network. The first if clause of Program code 3 describes the case stated above, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After the decision about congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput doesn't reach the target rate, the marker increases the marking rate by as much as α, called a fairness factor. In this case the fairness factor α is computed as follows:


    α = 1 − Thr_i / T_i

Otherwise it decreases the marking rate by as much as α, to provide other flows with the chance of sharing the excess bandwidth.

Fig 13. Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2) {
    for (i = 1 to n) {
        Ti = Ri + Xmi * E
        if (Thr_i < Ti)  Mi += α * Mi
        else             Mi -= α * Mi
    }
} else if (iwnd(N) != iwnd(N-1)/2) {
    exit
} else {    /* iwnd(N) == iwnd(N-1)/2 */
    for (i = 1 to n) {
        Ti = Ri
        if (Thr_i < Ti)  Mi += β * Mi
        else             exit
    }
}

Mi: marking rate of the ith flow
Ti: target rate of the ith flow
Ri: reservation rate of the ith flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N−1, and the iwnd size at time N isn't equal to half of the iwnd size at time N−1, congestion in the network

happens but is not serious. The else if clause represents the instance in which the edge's marking rate is equal to the marking probability of the sources (or the marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N−1, this means IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the final else clause. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In the under-provisioned network each flow often fails to consummate its target rate owing to the repeated reduction of sending rates through the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by as much as β, a so-called complement factor, if the throughput doesn't realise the reservation rate of the user. In this instance the complement factor β is

given by

    β = 1 − Thr_i / R_i

Otherwise the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate; then it fairly allocates bandwidth to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates the problems of the inefficient utilization of the reserved bandwidth in per-flow marking and of the unfairness within aggregated flows in per-aggregation marking.
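The whole TMS_II pass can be sketched as follows (a simplified Python reading of Program code 3, ours rather than the thesis's; the window-halving tests are abstracted into two boolean flags, and we take α symmetrically as |1 − Thr_i/T_i|):

```python
def tms2_remark(thr, res, marking, capacity, ownd_halved, iwnd_halved):
    """One TMS_II re-marking pass over n flows (cf. Program code 3).
    thr, res, marking are per-flow lists; returns updated marking rates."""
    marking = list(marking)
    if not ownd_halved:                       # (i) no congestion: chase T_i
        excess = capacity - sum(res)          # E
        total_m = sum(marking)
        for i in range(len(thr)):
            ti = res[i] + (marking[i] / total_m) * excess
            alpha = abs(1 - thr[i] / ti)      # fairness factor
            marking[i] += alpha * marking[i] if thr[i] < ti else -alpha * marking[i]
    elif not iwnd_halved:                     # (ii) congestion, not acute: no re-marking
        pass
    else:                                     # (iii) acute congestion: chase R_i only
        for i in range(len(thr)):
            if thr[i] < res[i]:
                beta = 1 - thr[i] / res[i]    # complement factor
                marking[i] += beta * marking[i]
    return marking

# Under-provisioned example: a flow at 0.5 Mbps against a 1 Mbps reservation
# has β = 0.5, so its marking rate grows from 0.5 to 0.75.
```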

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-born packets are treated by the RIO queue at the core of a network unlike what the sources anticipate. If the re-born OUT-marked packets (IN-OUT), which were IN at the sources, are for example dropped at the core in a time of temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the


iwnd size. Therefore TMS_I cannot alleviate the TCP dynamics over the assured service well. On the other hand, if re-born IN-marked packets (OUT-IN), sent as OUT by the sources, are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will only halve ownd. This retards getting rid of the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but IN at the source, and OUT-IN is the counterpart. We assume that the drop precedence the source decides must not change, and that the strategy TMS_II elaborates has to be deliberated. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN. In terms of colors these correspond to red, yellow and green respectively. The occasion of re-marking can be divided into three situations. First, if the congestion window is not reduced to half of its size and the throughput measured at the edge of the network doesn't complete the target rate, TMS_II re-marks packets OUT-marked at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN. When an OUT-IN packet is got rid of, the TCP source halves its ownd size. Next, TMS_II turns packets IN-marked at the sources into OUT if the congestion window is not reduced to half of its size and a flow already accomplishes its target rate; the drop precedence then consists of IN, IN-OUT and OUT. TMS_II lowers the drop precedence of OUT by a step, and so it allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN; TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window is equal to half of its previous size and a flow doesn't achieve its target rate yet, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. It tries to compensate for the loss rate occurring in the time of congestion by lowering the drop precedence of OUT by a step.
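The mark-combination logic above amounts to a two-input lookup; a tiny sketch (ours, for illustration only):

```python
def drop_precedence(source_mark, edge_mark):
    """Map the (source, edge) marks onto the three drop precedences of
    section 3.5: DP0 (green, dropped last) .. DP2 (red, dropped first)."""
    if source_mark == edge_mark:
        return 0 if source_mark == "IN" else 2   # IN/IN -> DP0, OUT/OUT -> DP2
    return 1                                     # IN-OUT or OUT-IN -> DP1

# RIO then drops in decreasing DP order: first DP2 (OUT), then DP1
# (OUT-IN / IN-OUT), and DP0 (IN) only under the gravest congestion.
```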

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows over the


same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows won't change and will get the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than the non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it is impossible for all target rates to be completed.

UDP flows in the TMS are dealt with as out-of-profile, to be taken at a disadvantage, while TCP flows adapt to network dynamics through the marking rate being adjusted by the scaling factors α and β. From the view of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2), but lowers the drop precedence of TCP packets by as much as the scaling factors compared with them. Therefore TMS tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows have almost dominated. Consequently TMS will compensate for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms we have described in the previous section. The simulation was done with the network simulator 2 (ns-2.1b8a). For the sake of the simulation we used a network with the configuration shown in Figure 14.

Fig 14. Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed; the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM the routers use RED with the values of 100 packets, 400 packets and 0.02 for min_in, max_in and Pmax_in,


and 50 packets, 400 packets and 0.1 for min_out, max_out and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
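Table 4's parameters feed the usual linear RED drop profile, applied per drop precedence. A small illustrative sketch (ours; it omits RIO details such as keeping separate queue averages for IN-profile and total occupancy):

```python
RED_PARAMS = {          # dp: (min_th, max_th, Pmax), taken from Table 4
    0: (100, 400, 0.02),
    1: (100, 400, 0.05),
    2: (50,  400, 0.10),
}

def drop_prob(dp, avg_queue):
    """Linear RED dropping probability for drop precedence dp at a given
    average queue length (in packets)."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return p_max * (avg_queue - min_th) / (max_th - min_th)

# At an average queue of 250 packets, DP0 packets are dropped with
# probability 0.01, DP1 with 0.025, and DP2 with about 0.057.
```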

First of all, we investigate the simulation results with the Adequate-marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, from S1-D11 to S10-D20, each with a target rate of 3Mbps; the capacity of the bottleneck link is set to 20Mbps. Five flows from S1 through S5 start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s), which sink at t = 350 (s). First, the interval of t = 10 (s) through t = 110 (s) represents an over-provisioned network, and five flows compete for the excess bandwidth of 5Mbps; the marking operation initially performs only temporary marking, for sharing the excess bandwidth. Second, in the interval from t = 120 (s) to 210 (s), with soft congestion, seven flows try to achieve the bandwidth of 3Mbps ideally; marking in this interval is practically carried out so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220 (s) to t = 350 (s): all the flows do their best to complete their target rates but fail to, and the marking rate in this period is more excessive than in the others. Last, in the interval of t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to congestion after flow8 through flow10 sink, but it soon again properly marks them. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a); it shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is fewer than in the case of Figure 15 (a). In both graphs the flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Plots: bandwidth (Mbps) versus simulation time (s) for flow1 through flow10]

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig 15 Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The network condition changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic characteristics by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate behind the changes in network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased traffic load. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to an increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more. However, packet-marking is quickly turned off when it is detected that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results with the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment was performed over a run time from t = 0 (s) to 500 (s). Figure 17 represents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without adequate marking rates for the changed network load. That is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. However, in Figure 17 (b), each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to the probability of its marking if the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Plots: bandwidth (Mbps) versus simulation time (s) for S1 (total throughput), S2 (total throughput) and S1 (marking rate)]

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig 16 Packet marking through scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis signifies the simulation time.


[Plots: bandwidth (Mbps) versus aggregate flow (1-6) for flow1 through flow10]

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis signifies the aggregate flows.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of aggregate flows as RTT increases

Aggregate flow (No.)       1       2       3       4       5       6
RTT (ms)                  80     100     130     170     220     280
E[Thr] of TSW2CM (Mbps)  4.8989  4.9703  4.9183  4.9391  4.9247  4.9098
E[Thr] of TMS (Mbps)     5.2908  5.2207  5.2373  5.1944  5.2763  5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to destinations 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to destinations 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we make use of the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy by increasing the TCP flows with a target rate of 1Mbps from sources 1 to 4 while each UDP rate is constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take account of network dynamics. We use the link utilisation (ρ) as follows [16]:

ρ = (Σ_{i} T_i / C) × 100 (%)    (9)

where T_i and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 (%) through 200 (%), as shown in Figure 19, represents the change of the capacity of the bottleneck link, viz. an under-provisioned state; in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.
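Equations (9) and (10) reduce to two one-line helpers; a small sketch (the function names are ours) reproducing this scenario's numbers:

```python
def link_utilisation(target_rates, capacity):
    """Equation (9): rho = (sum of target rates / capacity) x 100 (%)."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_pct(excess_bw, target_rate):
    """Equation (10): excess bandwidth as a percentage of the target rate."""
    return 100.0 * excess_bw / target_rate

# Six 1Mbps targets (four TCP, two UDP): rho = 75% on the 8Mbps link,
# and 200% after the bottleneck drops to 3Mbps (under-provisioned state).
```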


[Plots: excess bandwidth (%) versus link utilisation (%) for TCP (average) and UDP (average)]

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis signifies the link utilisation as a percentage.

However, since the UDP sources send traffic into the network without any penalties, they complete their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only dealt with by OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore the flow is marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results considering the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows to what extent the average throughputs of all individual flows of the aggregated traffic realise their target rates in over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. Also, the average throughput of all the flows in both schemes is similar to each other. First, we take into consideration the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth, about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, as a different marking strategy is elaborated against the UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to the probability of their marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, regardless of congestion. That is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that, in the under-provisioned network, bandwidth is distributed more fairly (about 0.5Mbps each) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently the UDP flows are penalised more than the sensitive TCP flows, as shown in Figure 20 (b).

[Plots: bandwidth (Mbps) versus aggregate flow (1-4) for TCP1-TCP4 and UDP1-UDP2, in over-provisioned and under-provisioned cases]

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved bandwidth and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this area, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows due to the greedy consumption of the UDP flows.

The strongest point of the Two Markers System proposed in this paper is that the two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to the changes in the network, so it marks packets in proportion to the reservation rate the source desires to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy, if resources in the diffserv network are over-provisioned. Consequently it mitigates the problem of unfairness in per-aggregation marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduces TCP impacts over the assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP had almost dominated and achieve its target rates. Next, in an under-provisioned network, although all the flows couldn't achieve their target rates, they fairly experienced degradation in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W Feng, D Kandlur, D Saha and K Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc ICNP'98, Austin, TX, Oct (1998) 108-117

[2] W Feng, D Kandlur, D Saha and K Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc NOSSDAV'97, St Louis, MO, May (1997) 279-290

[3] S Floyd and V Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol 3, no 4, Aug (1995) 365-386

[4] J Heinanen, F Baker, W Weiss and J Wroclawski, Assured Forwarding PHB Group, RFC2597, June (1999)

[5] K Nichols, S Blake, F Baker and D Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC2474, Dec (1998)

[6] S Blake, D Black, M Carlson, E Davies, Z Wang and W Weiss, An Architecture for Differentiated Services, RFC2475, Dec (1998)

[7] V Jacobson and M Karels, Congestion Avoidance and Control, in Proc SIGCOMM'88, Stanford, CA, Aug (1988) 314-329

[8] I Yeom and A L N Reddy, Realizing Throughput Guarantees in a Differentiated Service Network, in Proc ICMCS'99, Florence, Italy, June (1999) 372-376

[9] N Seddigh, B Nandy, P Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc GLOBECOM'99, Rio De Janeiro, Dec (1999)

[10] C Gbaguidi, H J Einsiedler, P Hurley, W Almesberger and J Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech Report SSC/1998/020, http://sscwww.epfl.ch, May (1998)

[11] J Ibanez, K Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug (1998)

[12] S H Lee, S J Seok, S J Lee, S K Youm and C H Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc QofIS'01, LNCS 2156, Coimbra, Sept (2001) 207-221


[13] I B H de A Alves, J F de Rezende, L F M de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc GLOBECOM'00, vol 1, San Francisco, Nov (2000) 445-449

[14] N Seddigh, B Nandy, P Pieda, Study of TCP and UDP Interaction for AF PHB, IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug (1999)

[15] S H Lee, S J Seok, S J Lee and C H Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc MMNS'01, LNCS 2216, Chicago, Oct (2001) 198-203

[16] A Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc HPSR'01, Dallas, May (2001) 190-196

[17] Y Bernet, J Binder, S Blake, M Carlson, B E Carpenter, S Keshav, E Davies, B Ohlman, Z Wang, W Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb (1999)

[18] S Blake, D Black, M Carlson, E Davies, Z Wang and W Weiss, An Architecture for Differentiated Services, RFC 2475, Aug (1998)

[19] M Vanhala, Differentiated Services-architecture, S-38.128 Teletekniikan erikoistyo, (1999)

[20] V Jacobson, K Nichols and K Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov (1998)

[21] V Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990)

[22] V Paxson, End-to-End Internet Packet Dynamics, in Proc of ACM SIGCOMM, Sept (1997)

[23] M Mathis, J Mahdavi, S Floyd and A Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct (1996)

[24] S Floyd and T Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb (1999)

[25] L S Brakmo, S W O'Malley and L L Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proceedings of ACM SIGCOMM, Oct (1994) 24-35

[26] S Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, Vol 24, No 5, Oct (1994) 10-23

[27] S Floyd and V Jacobson, Random Early Detection Gateways for Congestion Avoidance, ACM/IEEE Transactions on Networking, Vol 1, No 4, Aug (1993) 397-413

[28] A Feroz, A Rao and S Kalyanaraman, A TCP-friendly Traffic Marker for IP Differentiated Services, in Proc IWQoS'2000, Pittsburgh, PA, June (2000) 35-48



reservation rate (Ri), of which the user informs the edge in order to be assured of bandwidth for delivering data. On the other hand, a flow's target rate (Ti) at the edge of a network is the reservation rate (Ri) plus a variable value, that is, the fairness component (Xmi) applied to the excess bandwidth (E). We should consider the scope of a service (Service Level Agreement scope, SLA scope) from the scalability view of diffserv as follows [17]:

(i) all traffic from ingress point S to any egress point
(ii) all traffic from an ingress point to a set of egress points
(iii) all traffic between ingress S and egress point D
(iv) all traffic between any ingress point and egress point D

The capacity of a network in the first three cases of the service scope above is C = Σ_{j=1..m} C_j, where j denotes a bottleneck link of the network. Therefore the excess bandwidth at this bottleneck link is given by

E = C − R = Σ_{j=1..m} C_j − Σ_{i=1..n} R_i    (1)

In the edge-to-edge SLA, namely the third case of the service scope, the capacity of a network corresponds to m = 1, and so the excess bandwidth is

E = C − R = C_1 − Σ_{i=1..n} R_i    (2)

The last case of the service scope, tied to the receiver, however requires a modification of the reservation-rate term (R): an ingress point has to know the other ingress points' reservation rates so as to determine the excess bandwidth of the network. Thus the information of the reservation rate received at an ingress point becomes R = Σ_{k=1..l} Σ_{i=1..n} R_ik, where k, k = 1, 2, ..., l, represents the kth ingress point. The excess bandwidth is given by

E = C − R = C − Σ_{k=1..l} Σ_{i=1..n} R_ik    (3)


where the lth ingress point has to be aware of the reservation rates of as many as l−1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore the Two Markers System can be implemented under variable SLA scopes, but we assume, for the reason above, that the scope of a service in this paper includes the first three cases and excludes the last. A flow's target rate at the edge of the network is given by

T_i = R_i + (X_mi × E) = R_i + X_mi (C − Σ_{i=1..n} R_i)    (4)

where X_mi is proportional to the probability of marking from a source; that is, it is determined by the ratio between the marking rate of each flow and the average marking rate. The development of X_mi is as follows. First, the probability of marking for each flow i is the number of marked packets (N_mi) divided by the throughput (Thr_i), or the In-profile window divided by the congestion window:

m_i = N_mi / Thr_i = iwnd / cwnd    (5)

where iwnd and cwnd represent the window of the IN-marked packets and the congestion window, respectively. Next, the average marking rate E[m_i] of all the flows at the edge of a network is

E[m_i] = (1/n) Σ_{i=1..n} m_i = (1/n) Σ_{i=1..n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1..n} Mark_p,i    (6)

where Mark_p also represents the probability of marking in the TMS_I algorithm of section 3.3. Finally, X_mi is developed by using the ratio between (5) and (6):

X_mi = m_i / (n E[m_i]) = m_i / Σ_{i=1..n} m_i    (7)

X_mi represents a relative value against the datum point E[m_i] and varies from 0 to 1 depending upon m_i and E[m_i]. Consequently this value stands for the excess bandwidth that each flow requires to complete its target rate.

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources. That is to say, the TCP-source marking module keeps the marking rate up to the bandwidth the source wants to guarantee.
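Taken together, the fairness component of equations (4)-(7) can be sketched in a few lines (a toy illustration with hypothetical reservations, marking rates and capacity; not the thesis' simulation code):

```python
def fairness_components(marking_rates):
    """Equation (7): X_mi = m_i / sum(m_i), a relative value in [0, 1]."""
    total = sum(marking_rates)
    return [m / total for m in marking_rates]

def target_rates(reservations, marking_rates, capacity):
    """Equation (4): T_i = R_i + X_mi * E, where E = C - sum(R_i)."""
    excess = capacity - sum(reservations)          # equation (1)
    xs = fairness_components(marking_rates)
    return [r + x * excess for r, x in zip(reservations, xs)]

# A flow that marks more packets receives a proportionally larger share
# of the excess bandwidth; the per-flow target rates sum to the capacity.
```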


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking (Markp(init))? It is generally determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as given in equation (8).

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd

    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        } else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    }

    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


Mark_p(init) = 1 − R_i / Thr    (8)

where Mark_p(init) represents the initial marking rate that the user requires in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed. Otherwise it increases the ownd size so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the one window of the traditional congestion algorithm: the summation of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a decrement of ownd to the same degree. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the summation of the increasing rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers the second TCP dynamics (the so-called boomerang effect), which is caused by excessively dropping IN-marked packets at the core of a network when the connection of the network is congested. If the throughput measured at a source, for instance, exceeds its reservation rate and iwnd exists, TMS_I properly decreases the iwnd size according to the threshold in order to allocate the excess bandwidth to the other flows and keep IN-marked packets from being immoderately dropped at the core of the network. In fact, the TCP-source marking module (TMS_I) maintains the marking rate more properly than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking it for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be maintained low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognises packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.
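For reference, the window update of Program code 1 can be transcribed into runnable form as follows (our sketch, under the assumption that one ACK acknowledges one segment; variable names follow the figure):

```python
def on_ack(cwnd, markp, thr, r_i, issthresh, ossthresh):
    """One ACK's worth of the TMS_I window update (Program code 1 sketch)."""
    iwnd = markp * cwnd               # In-profile window
    ownd = cwnd - iwnd                # Out-of-profile window
    if thr < r_i:                     # below reservation: grow both windows
        iwnd += markp if iwnd < issthresh else 1.0 / (2 * cwnd)
        ownd += (1 - markp) if ownd < ossthresh else 1.0 / (2 * cwnd)
    else:                             # reservation met: release the IN share
        if iwnd > 0:
            iwnd -= (1 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd
        ownd += 1.0 if ownd < ossthresh else 1.0 / cwnd
    cwnd = iwnd + ownd                # the two windows still sum to cwnd
    markp = iwnd / cwnd               # updated marking probability
    return cwnd, markp
```

Below the reservation rate both windows grow (slow start adds Markp + (1 − Markp) = 1 per ACK); above it, iwnd is released and Markp falls.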

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd

    if (OUT-marked packet loss) {
        ossthresh = ownd / 2
        ownd = ossthresh
    } else {
        cwnd = cwnd / 2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

    iwnd: In-profile window
    ownd: Out-of-profile window
    cwnd: congestion window
    Markp: the probability of marking
    Thr: throughput of a flow
    Ri: reservation rate of the ith flow
    Ti: target rate of the ith flow
    issthresh: threshold of iwnd
    ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half of its size whenever an OUT packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of an IN-marked packet indicates acute congestion in the network, and then the congestion window is reduced to half of its size. Therefore the modified TCP congestion algorithm compensates for the loss rates that are caused by the conventional congestion algorithm, which has no distinctive mechanism for reducing the congestion window.
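A runnable paraphrase of this loss response (our sketch; we assume cwnd is re-derived as iwnd + ownd after an OUT-marked loss, which the figure leaves implicit):

```python
def on_triple_dupack(cwnd, markp, out_marked_loss):
    """TMS_I loss response (Program code 2 sketch): an OUT-marked loss
    halves only ownd; an IN-marked loss halves the whole window."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if out_marked_loss:               # temporary congestion
        ossthresh = ownd / 2
        ownd = ossthresh
        cwnd = iwnd + ownd            # assumed recombination of the windows
    else:                             # IN-marked loss: acute congestion
        cwnd = cwnd / 2
    return cwnd
```

An OUT-marked loss therefore costs the flow less window than an IN-marked loss, which is exactly the asymmetry the text describes.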

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services and to inform TMS_II of the probability of marking (Markp) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall numerical expression (7). We deal with the usage of the datum point in the edge-embedded marking module in subsection 2.4. Consequently, the strategy providing the datum point, which includes the information of the user, is able to alleviate the problem of introducing unfairness within aggregated flows under per-aggregation marking.

3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of sources in order to fairly share the bottleneck bandwidth. In this case it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II should measure the probability of marking and compute the expected value of the marking rates that are used in the fairness component, as stated in subsection 2.2. Notice that fairness is defined as how well the throughputs of all individual flows of aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to guarantee, from the sources.

We assume that an edge router receives the information of the reservation rate of which each user wants to be assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of a network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall numerical expression (4). As stated in subsection 2.2, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking from the sources if the condition of the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy that is elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source, for example, exceeds the threshold value, that is, the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and the congestion situation into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to detect a congestion situation, and the other is to ensure the coherence of the Adequate-marking strategy. In fact, congestion is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of Program code 3 describes this case and shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After the congestion decision, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor, computed as

α = 1 − Thr_i / Ti

Otherwise, it decreases the marking rate by α to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13 Program code 3: the TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)
    for i = 1 to n
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)
    exit
else    /* iwnd(N) == iwnd(N-1)/2 */
    for i = 1 to n
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi : marking rate of the i-th flow
Ti : target rate of the i-th flow
Ri : reservation rate of the i-th flow
E : excess bandwidth
ownd(N) : newly measured OUT window
ownd(N-1) : OUT window measured one interval before
iwnd(N) : newly measured IN window
iwnd(N-1) : IN window measured one interval before


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion happens in the network but is not serious. The else if clause in the eighth and ninth lines represents the instance in which the edge's marking rate is equal to the marking probability of the sources (or marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, this means IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In the under-provisioned network each flow often fails to consummate its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore, TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput does not realise the reservation rate of the user. In such an instance the complement factor β is given by

β = 1 − Thr_i / Ri

Otherwise, the re-marking operation is not performed.
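The update rules above can be sketched in a few lines of Python. This is our own illustrative rendering of Program code 3, not the thesis implementation: the function name and argument layout are assumptions, and the marking rates Mi are simply scaled by the fairness factor α or the complement factor β as described.

```python
# Illustrative sketch (not the thesis code) of the TMS_II re-marking update.
# ownd/iwnd are the measured OUT/IN windows at times N and N-1; R, Xm, Thr
# and M are per-flow reservation rates, excess shares, throughputs and
# marking rates; E is the excess bandwidth of the bottleneck link.

def update_marking_rates(ownd, ownd_prev, iwnd, iwnd_prev,
                         R, Xm, E, Thr, M):
    """One TMS_II pass over n flows; returns the adjusted marking rates."""
    n = len(M)
    M = list(M)
    if ownd != ownd_prev / 2:                 # (i) no congestion
        for i in range(n):
            Ti = R[i] + Xm[i] * E             # target includes excess share
            alpha = 1 - Thr[i] / Ti           # fairness factor
            if Thr[i] < Ti:
                M[i] += alpha * M[i]          # mark more to reach the target
            else:
                M[i] -= abs(alpha) * M[i]     # yield bandwidth to other flows
    elif iwnd != iwnd_prev / 2:               # (ii) congestion, not acute
        pass                                  # marking matches the sources
    else:                                     # (iii) serious congestion
        for i in range(n):
            if Thr[i] < R[i]:                 # target falls back to Ri
                beta = 1 - Thr[i] / R[i]      # complement factor
                M[i] += beta * M[i]
    return M
```

A real marker would also clamp each Mi into [0, 1] and re-derive Xmi every measurement interval; here we keep only the control logic.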

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then fairly allocates bandwidth to flows based on the marking rate of each source. Therefore, we can say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-marked packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size; therefore TMS_I cannot alleviate the TCP dynamics over the assured service well. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will only halve the ownd. This retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but as IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides must not be overridden, and so the strategy that TMS_II elaborates has to be deliberate. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is dropped, the TCP source halves the ownd size. Next, TMS_II turns packets that were IN-marked at the sources into OUT if the congestion window has not halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT and OUT. TMS_II lowers the drop precedence of OUT by one step, and so it allows other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. It tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT by one step.
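As a minimal sketch of the precedence mapping just described (our own naming, not the thesis code), the pair of source and edge markings selects one of the three RIO drop precedences:

```python
# Three drop precedences used by TMS: DP0 (green, dropped last) through
# DP2 (red, dropped first). The source marks IN or OUT; the edge may
# re-mark, and a disagreement yields the middle precedence DP1.

DP0, DP1, DP2 = 0, 1, 2

def drop_precedence(source_mark, edge_mark):
    """Map a (source marking, edge marking) pair to a RIO drop precedence."""
    if source_mark == "IN" and edge_mark == "IN":
        return DP0                 # IN: green
    if source_mark != edge_mark:
        return DP1                 # IN-OUT or OUT-IN: yellow
    return DP2                     # OUT: red, dropped first
```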

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP flows are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not, and will capture the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. The TCP flows will also be penalised more than the non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be met.

UDP flows in the TMS are dealt with as out-of-profile, to put them at a disadvantage, while TCP flows adapt to network dynamics by having their marking rate adjusted through the scaling factors α and β. From the point of view of the core of the network, the RIO queue always preferentially drops UDP packets marked as OUT (red, DP2) but lowers the drop precedence of TCP packets by as much as the scaling factor. Therefore, it tries to guarantee the target rates of the TCP flows by allocating to them the excess bandwidth that the UDP flows had almost dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms we have described in the previous section. The simulations were done with the network simulator 2 (ns-2, version 2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets and 0.02 for min_in, max_in and Pmax_in, and 50 packets, 400 packets and 0.1 for min_out, max_out and Pmax_out. The RED parameters for the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th    max_th    Pmax
DP0     100       400       0.02
DP1     100       400       0.05
DP2     50        400       0.1
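For illustration, the Table 4 settings can be plugged into the standard piecewise-linear RED drop-probability curve. This is a hedged sketch of one RIO virtual queue, not the simulator code; a real RIO gateway maintains a separate average queue estimate per drop precedence.

```python
# RED drop probability per drop precedence, using the Table 4 settings.
RED_PARAMS = {                      # DP: (min_th, max_th, Pmax)
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50, 400, 0.1),
}

def drop_prob(avg_q, dp):
    """Piecewise-linear RED drop probability for average queue avg_q (packets)."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_q < min_th:
        return 0.0                  # below min_th: never drop
    if avg_q >= max_th:
        return 1.0                  # above max_th: always drop
    return p_max * (avg_q - min_th) / (max_th - min_th)
```

At the same average queue size, DP2 packets see a larger drop probability than DP0 packets, which is what realises the precedence ordering.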

First of all, we investigate the simulation results of the Adequate-marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, from S1-D11 to S10-D20, each with a target rate of 3 Mbps, with the capacity of the bottleneck link set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows (flow6 and flow7) at t = 120 s, and the last three flows (flow8 through flow10) at t = 220 s, which stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, and five flows compete for the excess bandwidth of 5 Mbps; marking initially operates, but only temporary marking for sharing the excess bandwidth takes place. Second, at run time from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the bandwidth of 3 Mbps each; marking in this interval is practically carried out so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220 s to t = 350 s: all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs, the flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet of a flow in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure: marking rate (Mbps) vs. simulation time (s), flows 1-10]
a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig. 15 Adequate-marking algorithm: the vertical axis represents the marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate behind the changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased traffic load. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet-marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, i.e. an under-subscribed network. The experiment ran from t = 0 s to t = 500 s. Figure 17 presents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without adequate marking rates for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking if the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Figure: bandwidth (Mbps) vs. simulation time (s); S1 and S2 total throughput and S1 marking rate]
a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig. 16 Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis signifies the simulation time.


[Figure: bandwidth (Mbps) vs. aggregate flow (1-6), flows 1-10]
a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig. 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis signifies the aggregate flows.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, an implementation of the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of the aggregate flows as RTT increases

Aggregate flow (No.)         1        2        3        4        5        6
RTT (ms)                     80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)      4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)         5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is also 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1 Mbps. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy with increasing TCP flows with a target rate of 1 Mbps from sources 1 to 4, while UDP is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take account of network dynamics. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σi Ti / C) × 100    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 (%) through 200 (%), as shown in Figure 19, represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)

Therefore, an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
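Equations (9) and (10) can be checked with a few lines of Python (the helper names are ours, and we take ExBW to be the throughput minus the target, so the percentage can be negative). For instance, the second scenario's six 1 Mbps target rates over a 3 Mbps bottleneck give ρ = 200 %:

```python
def link_utilisation(target_rates, capacity):
    """Eq. (9): rho (%) = sum(Ti) / C * 100."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(throughput, target):
    """Eq. (10): (Thr - T) / T * 100; 0 % means the target is exactly met."""
    return 100.0 * (throughput - target) / target
```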

[Figure: excess bandwidth (%) vs. link utilisation (%), TCP (average) and UDP (average)]
a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig. 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis signifies the link utilisation as a percentage.

However, since the UDP sources inject traffic into the network without any penalty, they achieve their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than others in order to achieve its reservation rate; therefore the flow is marked more and has a higher target rate than others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contracts, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results considering the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps, respectively. The setting of the RTT parameters follows the strategy of subsection 4.1, and the average throughputs of all the flows in both schemes are similar to each other. First, we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3 Mbps, thanks to the different marking strategy elaborated against the UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their probability of marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that bandwidth in the under-provisioned network is distributed fairly to each flow (about 0.5 Mbps), unlike in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, the UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure: bandwidth (Mbps) vs. aggregate flow (1-4); TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases]
a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig. 20 Throughput according to network conditions: the vertical axis represents the achieved throughput against the target rate, and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under a variable SLA scope. In this context two important issues have arisen, and much research has been performed to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows due to the overeating of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow for its respective target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to the changes of the network, so it marks packets by as much as the reservation rate the user desires to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy, if resources in the diffserv network are over-provisioned. Consequently, it mitigates the problem of unfairness under per-aggregation marking and of inefficient utilisation under per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced the TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among the TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, the TCP flows could fairly share the excess bandwidth that UDP had almost dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they fairly experienced degradation in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows according to the variable parameters of RIO.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy, and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proceedings of ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," ACM/IEEE Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.



where the l-th ingress point has to be aware of the reservation rates of as many as l-1 other ingress points. While such a service is theoretically possible, it appears to be a mismatch with the more usual specification of Differentiated Services, which governs the quality with which traffic is sent rather than received. Therefore, the Two Markers System can be implemented under a variable SLA scope, but we assume that the scope of the service in this thesis includes the first three cases, excluding the last for the reason above. A flow's target rate at the edge of the network is given by

Where Xmi is proportional to the probability of marking from a source That is it is determined depending upon fraction between the marking rate of each flow and the average marking rate (4) Development for Xmi is as follows First the probability of marking for each flow i is the number of the marked packets (Nmi) by throughput Thri or In-profile window by congestion window

Where iwnd and cwnd represent window of the IN-marked packets and congestion window respectively Next the average marking rate E[mi] of all the flows at the edge of a network is

Where Markp also represents the probability of marking in TMS_I algorithm of the section 33 Finally Xmi is developed by using the fraction between (3) and (4) that is Xmi

T_i = R_i + X_mi × E = R_i + X_mi (C − Σ_i R_i)        (4)

m_i = N_mi / Thr_i = iwnd / cwnd        (5)

E[m_i] = (1/n) Σ_{i=1..n} m_i = (1/n) Σ_{i=1..n} (iwnd_i / cwnd_i) = (1/n) Σ_{i=1..n} Mark_p,i        (6)


represents a relative value against the datum point E[m_i]. The fraction X_mi varies from 0 to 1 depending upon m_i and E[m_i].

Consequently, this value stands for the excess bandwidth that each flow requires to complete its target rate.
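As a concrete illustration, the chain (5)-(7) can be sketched in a few lines of Python; the per-flow (iwnd, cwnd) samples below are hypothetical values for illustration, not taken from the thesis experiments.

```python
# Sketch of the datum-point computation in Eqs. (5)-(7).

def marking_rates(windows):
    """m_i = iwnd_i / cwnd_i for each flow (Eq. 5)."""
    return [iwnd / cwnd for iwnd, cwnd in windows]

def datum_points(windows):
    """X_mi = m_i / (n * E[m_i]) = m_i / sum(m) (Eq. 7)."""
    m = marking_rates(windows)
    total = sum(m)                      # equals n * E[m_i] (Eq. 6)
    return [mi / total for mi in m]

# Hypothetical (iwnd, cwnd) pairs for three flows.
windows = [(8, 16), (4, 16), (12, 16)]
x = datum_points(windows)
```

Because each X_mi is a share relative to the average marking rate, the datum points of all flows sum to 1, which is what lets TMS_II treat them as proportions of the excess bandwidth.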

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, in this algorithm the congestion window (cwnd) consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker. The Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies are different from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate as high as the bandwidth the source wants guaranteed.

X_mi = m_i / (n E[m_i]) = m_i / Σ_{i=1..n} m_i        (7)


Fig. 11. Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking, Mark_p(init)? It is generally determined by the ratio between the measured throughput (Thr) and the target throughput (T_i = R_i), as follows.

When new data is acknowledged:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)
                iwnd -= (1 - Markp)
            else
                iwnd -= Markp
        } else {
            ownd = cwnd
        }
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd


Where Mark_p(init) represents the initial marking rate that the user needs in order to realise its target rate.

The window size increases or decreases according to the measured throughput relative to the reservation rate. That is, TMS_I marks packets proportionally, according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput doesn't reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed. Otherwise, it increases the ownd size so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore, an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.
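The per-ACK update described above (Program code 1) can be sketched as follows. This is an illustrative Python transcription of the pseudocode, with the variable names carried over; it is a sketch under our reading of the figure, not the authors' implementation.

```python
# Sketch of the TMS_I window update on each new ACK (Program code 1, Fig. 11).
# Returns the new cwnd and the new marking probability Markp = iwnd/cwnd.

def tms1_on_ack(cwnd, mark_p, thr, r_i, issthresh, ossthresh):
    iwnd = mark_p * cwnd                 # In-profile share of cwnd
    ownd = cwnd - iwnd                   # Out-of-profile share
    if thr < r_i:                        # below reservation: grow the IN window
        iwnd += mark_p if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - mark_p) if ownd < ossthresh else 1 / (2 * cwnd)
    else:                                # target met: release IN share to others
        if iwnd > 0:
            iwnd -= (1 - mark_p) if iwnd < issthresh else mark_p
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd
```

With, say, cwnd = 10, Markp = 0.5, Thr below the reservation rate, and both windows under their thresholds, each window grows and the marking probability is preserved at 0.5.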

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers the second TCP dynamic (the so-called boomerang effect), which is caused by excessively dropping IN-marked packets at the core of a network when the network is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd exists, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to the other flows and keep IN-marked packets from being immoderately dropped at the core of the network. In fact, the TCP-source marking module (TMS_I) maintains the marking rate more properly than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in maintaining the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking it for an acute congestion

Mark_p(init) = 1 − Thr / R_i        (8)
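A minimal sketch of Eq. (8); clamping the result into the valid probability range [0, 1] is our assumption, since the text does not state what happens when the measured throughput already exceeds the reservation rate.

```python
# Sketch of the initial marking probability of Eq. (8): Markp(init) = 1 - Thr/Ri,
# clamped to [0, 1] (the clamping is an assumption, not stated in the thesis).

def initial_mark_p(thr, r_i):
    return min(1.0, max(0.0, 1.0 - thr / r_i))
```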


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source doesn't achieve its target rate. On the other hand, because TMS_I recognises packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.

Fig. 12. Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:

    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (out-marked packet loss) {
        ossthresh = ownd/2
        ownd = ossthresh
    } else {
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

    iwnd      : In-profile window
    ownd      : Out-of-profile window
    cwnd      : congestion window
    Markp     : the probability of marking
    Thr       : throughput of a flow
    Ri        : reservation rate of the i-th flow
    Ti        : target rate of the i-th flow
    issthresh : threshold of iwnd
    ossthresh : threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is halved whenever an OUT packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of an IN-marked packet indicates acute congestion in the network, and then the congestion window is halved. Therefore, the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which lacks a distinctive mechanism for reducing the congestion window.
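The loss reaction above (Program code 2) can be sketched as follows; recomputing cwnd as iwnd + ownd after the OUT-loss branch is our assumption, since the figure leaves it implicit.

```python
# Sketch of the TMS_I reaction to a triple-duplicate ACK (Program code 2, Fig. 12).
# OUT-marked loss: temporary congestion, halve only ownd.
# IN-marked loss: acute congestion, halve the whole cwnd.

def tms1_on_loss(cwnd, mark_p, out_marked_loss, issthresh, ossthresh):
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if out_marked_loss:
        ossthresh = ownd / 2
        ownd = ossthresh
        cwnd = iwnd + ownd          # assumption: cwnd follows its two halves
    else:
        cwnd = cwnd / 2
        issthresh = mark_p * cwnd
        ossthresh = cwnd - issthresh
    return cwnd, issthresh, ossthresh
```

For cwnd = 10 and Markp = 0.5, an OUT loss shrinks only the OUT share (cwnd 10 to 7.5), while an IN loss halves the whole window to 5, matching the "temporary versus acute congestion" distinction above.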

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (Mark_p) that is needed to determine a new datum point (X_mi) for fairly sharing the bottleneck bandwidth; recall the numerical expression (7). We deal with the usage of the datum point in the edge-embedded marking module in subsection 3.4. Consequently, the strategy providing the datum point, which carries the user's information, is able to alleviate the unfairness within aggregated flows introduced by per-aggregation marking.

3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy, based on the marking rates of the sources, in order to fairly share the bottleneck bandwidth. In this case it should closely cooperate with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II should measure the probability of marking and compute the expected value of the marking rates, which are used in the fairness component as stated in subsection 3.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (T_i). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants guaranteed from the sources.

We assume that an edge router receives information on the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth


broker) for the bottleneck link bandwidth of a network. We also consider the following two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall the numerical expression (4). As stated in subsection 3.2, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking from the sources if the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, that is the average marking rate E[m_i], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N−1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion but not acute, and (iii) serious congestion.

First, if the ownd size at time N isn't equal to half of the ownd size at time N−1, this represents that congestion does not occur in the network. The if clause in the first through seventh lines of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to the TCP flows. After the decision on congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (T_i), viz. the crux of the fairness strategy, or not. If the throughput doesn't reach the target rate, the marker increases the marking rate by α, called the fairness factor. In this case the fairness factor α is computed as follows.


α = 1 − Thr_i / T_i. Otherwise, it decreases the marking rate by α to provide the other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm

    if (ownd(N) != ownd(N-1)/2) {          /* no congestion */
        for i = 1 to n {
            Ti = Ri + (Xmi * E)
            if (Thr_i < Ti)
                Mi += alpha * Mi
            else
                Mi -= alpha * Mi
        }
    } else if (iwnd(N) != iwnd(N-1)/2) {   /* congestion, but not acute */
        exit
    } else {                               /* iwnd(N) == iwnd(N-1)/2: acute congestion */
        for i = 1 to n {
            Ti = Ri
            if (Thr_i < Ti)
                Mi += beta * Mi
            else
                exit
        }
    }

    Mi        : marking rate of the i-th flow
    Ti        : target rate of the i-th flow
    Ri        : reservation rate of the i-th flow
    E         : excess bandwidth
    ownd(N)   : newly measured OUT window
    ownd(N-1) : OUT window measured at time N-1
    iwnd(N)   : newly measured IN window
    iwnd(N-1) : IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N−1, and the iwnd size at time N isn't equal to half of the iwnd size at time N−1, congestion in the network happens but is not serious. The else-if clause in the eighth and ninth lines represents the instance in which the edge's marking rate is equal to the marking probability of the sources (or marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N−1, this means IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to consummate its target rate owing to the repeated reduction of sending rates through the congestion control mechanism. Therefore, TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput doesn't realise the reservation rate of the user. In such an instance the complement factor β is

computed as β = 1 − Thr_i / R_i. Otherwise, the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate; the module then fairly allocates bandwidth to the flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
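The three-way decision of Program code 3 can be sketched as follows. Taking the absolute value of α in the decrease branch is our assumption, since the text only gives α for the increase case; the dictionary keys are our own naming.

```python
# Sketch of the TMS_II re-marking decision (Program code 3, Fig. 13).
# flows: list of dicts with keys "thr" (Thr_i), "resv" (R_i), "xmi" (X_mi),
# and "rate" (marking rate M_i); excess_bw is E = C - sum(R_i).

def tms2_remark(flows, ownd_n, ownd_prev, iwnd_n, iwnd_prev, excess_bw):
    if ownd_n != ownd_prev / 2:                 # no congestion: fairness strategy
        for f in flows:
            t_i = f["resv"] + f["xmi"] * excess_bw   # T_i = R_i + X_mi * E
            alpha = abs(1 - f["thr"] / t_i)          # fairness factor
            if f["thr"] < t_i:
                f["rate"] += alpha * f["rate"]
            else:
                f["rate"] -= alpha * f["rate"]
    elif iwnd_n != iwnd_prev / 2:               # congestion, not acute: keep marks
        pass
    else:                                       # acute congestion: compensate losses
        for f in flows:
            if f["thr"] < f["resv"]:
                beta = 1 - f["thr"] / f["resv"]      # complement factor
                f["rate"] += beta * f["rate"]
    return flows
```

The first branch raises or lowers each flow's marking rate toward its edge target rate; the last branch falls back to the reservation rate as the target, as described above for the under-provisioned case.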

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-born packets are treated by the RIO queue at the core of the network unlike what the sources anticipate. If, for example, the re-born OUT-marked packets (IN→OUT), which were IN at the sources, are dropped at the core in a time of temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the


iwnd size. Therefore TMS_I does not alleviate the TCP dynamics over assured service well. On the other hand, if the re-born IN-marked packets (OUT→IN), sent as OUT by the sources, are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve ownd; this retards the relief of the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN→OUT or OUT→IN (DP1), and OUT (DP2). An IN→OUT packet is one marked as OUT at the edge of the network but IN at the source, and OUT→IN is the counterpart. We assume that the drop precedence the source decides must not change, and so the strategy that TMS_II elaborates has to be deliberate. The order of dropping packets is as follows: (i) OUT, (ii) OUT→IN or IN→OUT, and (iii) IN; in terms of colours these correspond to red, yellow, and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window is not reduced to half of its size and the throughput measured at the edge of the network doesn't complete the target rate, TMS_II re-marks packets that were OUT at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT→IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT→IN, IN; when an OUT→IN packet is removed, the TCP source halves its ownd size. Next, TMS_II turns packets that were IN at the sources into OUT if the congestion window is not reduced to half of its size and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN→OUT, and OUT. TMS_II lowers the drop precedence of OUT by a step, and so it allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN→OUT, IN; TMS_I halves the congestion window when an IN→OUT packet is removed. Finally, if the congestion window is equal to half of its previous size and the flow doesn't yet achieve its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT→IN, and OUT at the core of the network. When an OUT→IN packet is dropped, TMS_I halves the ownd size. It tries to compensate for the loss rate occurring in the time of congestion by lowering the drop precedence of OUT by a step.
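The precedence assignment described above can be summarised by a small lookup; the pair notation (source mark, edge mark) is our own device for illustration, not notation from the thesis.

```python
# Sketch of the three drop precedences of Section 3.5: the source's mark
# combined with the edge's (possibly re-marked) mark selects DP0/DP1/DP2.

DROP_PRECEDENCE = {
    ("IN", "IN"):   "DP0",   # green: dropped last
    ("IN", "OUT"):  "DP1",   # yellow: IN at the source, re-marked OUT at the edge
    ("OUT", "IN"):  "DP1",   # yellow: OUT at the source, re-marked IN at the edge
    ("OUT", "OUT"): "DP2",   # red: dropped first
}

def precedence(source_mark, edge_mark):
    return DROP_PRECEDENCE[(source_mark, edge_mark)]
```

Keeping both re-marked cases at the middle precedence is exactly what lets a source distinguish temporary from acute congestion when one of its re-born packets is dropped.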

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows over the


same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indications through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows won't change and will get the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be met.

UDP flows in the TMS are dealt with as out-of-profile, and thus taken at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the view of the core of the network, the RIO queue always preferentially drops UDP flows marked as OUT (red, DP2), but lowers the drop precedence of TCP packets by as much as the scaling factors allow. Therefore it tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows had almost dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.
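A minimal sketch of the edge treatment described above; the function and parameter names are assumptions for illustration only.

```python
# Sketch of the per-protocol edge marking of Section 3.6: UDP packets are
# always out-of-profile (DP2/red), while TCP packets keep the precedence
# derived from their (possibly re-marked) state.

def edge_mark(protocol, tcp_precedence="DP0"):
    if protocol == "UDP":
        return "DP2"            # non-responsive traffic takes the disadvantage
    return tcp_precedence       # TCP keeps its source/edge-derived precedence
```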


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present simulation results for the two marking algorithms described in the previous chapter. The simulation was done with the network simulator 2 (ns-2.1b8a). For the sake of simulation we used a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or is fixed below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in,


and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

          min_th    max_th    Pmax
    DP0   100       400       0.02
    DP1   100       400       0.05
    DP2   50        400       0.1
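For illustration, the Table 4 settings plug into the standard linear RED drop-probability ramp as follows; queue averaging with the weight 0.002 is omitted for brevity, so this is a sketch of the per-precedence profile, not of the full RIO queue.

```python
# Per-precedence RED drop probability using the Table 4 settings:
# 0 below min_th, a linear ramp up to Pmax between min_th and max_th,
# and a forced drop at or above max_th.

RED_PARAMS = {          # precedence: (min_th, max_th, Pmax), from Table 4
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50,  400, 0.1),
}

def drop_probability(avg_queue, precedence):
    min_th, max_th, p_max = RED_PARAMS[precedence]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return p_max * (avg_queue - min_th) / (max_th - min_th)
```

At the same average queue length, DP2 packets both start dropping earlier (min_th = 50) and ramp up five times faster than DP0, which is what enforces the IN/yellow/OUT drop ordering at the core.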

First of all, we investigate the simulation results for the Adequate-Marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, from S1-D11 to S10-D20, each with a target rate of 3Mbps; the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s), which sink at t = 350 (s). First, the interval from t = 10 (s) through t = 110 (s) represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5Mbps; marking is initially performed, but only temporarily, for sharing the excess bandwidth. Second, in the run time from t = 120 (s) to 210 (s), with soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps; marking in this interval is practically carried out so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220 (s) to t = 350 (s); all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to congestion after flow8 through flow10 sink, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is fewer than in the case of Figure 15 (a). In both graphs, the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15: two panels of per-flow marking-rate curves (flow1 through flow10); vertical axis: bandwidth (Mbps), 0 to 6; horizontal axis: simulation time (s), 0 to 400. a) Time Sliding Window Two Color Marker; b) Two Markers System.]

Fig. 15. Adequate-Marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The network situation changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic features, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM doesn't adequately regulate the marking rate behind the changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 doesn't fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to an increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when TMS detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 3.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment was performed in the run time from t = 0 (s) to 500 (s). Figure 17 represents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. However, in Figure 17 (b), each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking when the total throughput is less than the capacity of the network. As


stated before, since TMS regulates the drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Figure 16: two panels plotting, over simulation time 0 to 400 (s), S1 (total throughput), S2 (total throughput), and S1 (marking rate); vertical axis: bandwidth (Mbps), 0 to 8. a) Time Sliding Window Two Color Marker; b) Two Markers System.]

Fig. 16. Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time.


[Figure 17: two panels of per-flow throughput bars (flow1 through flow10) for aggregate flows 1 through 6; vertical axis: bandwidth (Mbps), 0 to 7. a) Time Sliding Window Two Color Marker; b) Two Markers System.]

Fig. 17. Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flow.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, an implementation of the TCP-source marking


module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5. The average throughput of the aggregate flows as RTT increases

    Aggregate flow (No.)       1        2        3        4        5        6
    RTT (ms)                   80       100      130      170      220      280
    E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
    E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous chapter. For the sake of simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams, generated by infinite file transfers, originate at sources 1 through 4 and are destined to nodes 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we make use of the parameters of Section 4.1.


Fig. 18. Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing the excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy of increasing the TCP flows, each with a target rate of 1Mbps, from sources 1 to 4, while UDP is constant at 1Mbps, and the capacity of the bottleneck link turns from 8Mbps into 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σ_i T_i / C) × 100        (9)

Where T_i and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 (%) through 200 (%), as shown in Figure 19, represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100        (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
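Eqs. (9) and (10) can be computed directly; the sketch below assumes ExBW = Thr − T, as implied by the statement that 0% excess bandwidth corresponds to exactly meeting the target rate.

```python
# Sketch of the two metrics of Eqs. (9)-(10); values are illustrative.

def link_utilisation(targets, capacity):
    """rho(%) = (sum of T_i / C) * 100 (Eq. 9)."""
    return 100.0 * sum(targets) / capacity

def excess_bw_pct(throughput, target):
    """Excess BW(%) = (Thr - T)/T * 100 (Eq. 10); 0% means the target is met exactly."""
    return 100.0 * (throughput - target) / target
```

For example, six 1Mbps targets on an 8Mbps bottleneck give ρ = 75%, and a flow receiving 1.5Mbps against a 1Mbps target sits at +50% excess bandwidth.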


[Figure 19: two panels plotting the average excess bandwidth (%) of TCP and UDP, from −100% to 200%, against link utilisation from 30% to 190%. a) Time Sliding Window Two Color Marker; b) Two Markers System.]

Fig. 19. Sharing the excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis the link utilisation as a percentage.

However, since the UDP sources send traffic into the network without any penalty, they complete their target rates under all conditions and consume all the excess bandwidth in the


under-provisioned state; the only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only dealt with OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[m_i], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; therefore the flow is marked more and has a higher target rate than the others. The distribution of the excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contracts, when congestion occurs, rather than OUT→IN or IN→OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between the TCP and UDP flows at link utilisation ρ = 60 (%) and over: both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next we present the simulation results of the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: the TSW2CM and the marking strategy of TMS. Figure 20 shows how well the average throughputs of all individual flows of the aggregated traffic realise their target rates in over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1, and the average throughput of all the flows is similar in both schemes. First, we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while TCP flows only just accomplish their target rates of 1 Mbps, UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3 Mbps, thanks to the different marking strategy applied to UDP flows. Every flow also satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because UDP flows are transmitted constantly, irrespective of congestion. That is, UDP gains unfairly at the expense of TCP flows. However, Figure 20 (b) shows that the bandwidth in the under-provisioned network is distributed more fairly (about 0.5 Mbps per flow) than in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20 consists of two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the bandwidth (Mbps) achieved by TCP1 through TCP4 and UDP1 through UDP2 in over-provisioned and under-provisioned networks for each aggregate flow.]

Fig. 20. Throughput according to network conditions: the vertical axis represents the target rate and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have come to the fore in this context, and much research has been devoted to them. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the overconsumption of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and allocate the bandwidth of a bottleneck link fairly to each flow according to its target rate. First, TMS_I, using the Adequate-Marking algorithm, properly adjusts the probability of marking according to changes in the network, so it marks packets in proportion to the reservation rate each source desires to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy. Consequently, it mitigates both the unfairness of per-aggregation marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. As expected, the simulation results showed that the Two Markers System reduces TCP dynamics over the assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP had almost dominated and achieve their target rates. Next, in an under-provisioned network, although none of the flows could achieve its target rate, they fairly experienced degradation in proportion to their target bandwidth.

In the near future we will study a more efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, S. Saha, and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy, and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.


represents a relative value against the datum point E[mi]. The fraction Xmi varies from 0 to 1 depending upon mi and E[mi].

Consequently, this value stands for the share of the excess bandwidth that each flow requires to complete its target rate.

3.3 Adequate-Marking strategy: Two Markers System-I algorithm

In this section we focus on the marking algorithm, the so-called Adequate-Marking strategy, that is implemented in the TCP-source marking module (TMS_I). As stated before, the congestion window (cwnd) in this algorithm consists of two windows [8]: the In-profile window and the Out-of-profile window. The In-profile window represents the number of IN-marked packets per RTT needed to achieve the reservation rate that is guaranteed by a contract with the Bandwidth Broker, and the Out-of-profile window signifies the number of outstanding OUT-marked packets per RTT. Both windows have their respective thresholds, viz. issthresh and ossthresh.

Program codes 1 and 2 (Figures 11 and 12) illustrate the modified TCP congestion control algorithm. Some of the earlier work [2, 8] has focused on similar issues in networks where the marking policies differ from the one studied here. We ameliorate the TCP congestion control algorithm based on [2] and propose new approaches to improving the realisation of bandwidth guarantees. The idea behind the Adequate-Marking strategy is to realise the target rates of the sources; that is to say, the TCP-source marking module keeps the marking rate up to the bandwidth the source wants to guarantee.

Xmi = mi / (n E[mi]) = mi / (m1 + m2 + ... + mn)    (7)
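As a concrete check of expression (7), the following Python sketch (with hypothetical marking rates) computes each flow's fraction Xmi and verifies that the fractions sum to one, so they can be used directly as shares of the excess bandwidth.

```python
def datum_fractions(marking_rates):
    """X_mi = m_i / (n * E[m_i]) for each flow i.

    Because E[m_i] is the average marking rate, n * E[m_i] equals the
    sum of all marking rates, so each fraction lies in [0, 1] and all
    fractions together sum to 1.
    """
    n = len(marking_rates)
    mean = sum(marking_rates) / n              # E[m_i]
    return [m / (n * mean) for m in marking_rates]

# Hypothetical marking rates (IN-marked packets per RTT) of four flows.
fractions = datum_fractions([10.0, 20.0, 30.0, 40.0])
print(fractions)        # [0.1, 0.2, 0.3, 0.4]
```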


Fig. 11. Program code 1: TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how can we determine the initial probability of marking, Markp(init)? It is generally determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as given in expression (8).

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri)
        if (iwnd < issthresh)
            iwnd += Markp
        else
            iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)
            ownd += (1 - Markp)
        else
            ownd += 1/(2*cwnd)
    else
        if (0 < iwnd)
            if (iwnd < issthresh)
                iwnd -= 1 - Markp
            else
                iwnd -= Markp
        else
            ownd = cwnd
        if (ownd < ossthresh)
            ownd += 1
        else
            ownd += 1/cwnd
    cwnd = iwnd + ownd
    Markp = iwnd/cwnd

Markp(init) = 1 - Thr / Ri    (8)

where Ri represents the initial rate that the user wants to guarantee in order to realise its target rate.

The window size increases or decreases according to the measured throughput relative to the reservation rate. That is, TMS_I marks packets proportionally, according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed; otherwise it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, iwnd and ownd, they act like the one window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.
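The window-splitting update described above can be transcribed directly into Python. This is an illustrative sketch only, under simplifying assumptions: fractional packet-counted windows, with Thr and Ri supplied by the caller rather than measured.

```python
def on_ack(cwnd, mark_p, thr, r_i, issthresh, ossthresh):
    """One ACK-driven update of the split congestion window.

    Mirrors program code 1: below the target rate both sub-windows
    grow (Markp worth of IN, 1 - Markp worth of OUT per ACK); once the
    target is met, the IN window is released back to the OUT window.
    Returns the new (cwnd, mark_p).
    """
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if thr < r_i:
        iwnd += mark_p if iwnd < issthresh else 1 / (2 * cwnd)
        ownd += (1 - mark_p) if ownd < ossthresh else 1 / (2 * cwnd)
    else:
        if iwnd > 0:
            iwnd -= (1 - mark_p) if iwnd < issthresh else mark_p
        else:
            ownd = cwnd
        ownd += 1 if ownd < ossthresh else 1 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd

# Below the target and under both thresholds, cwnd grows by one full
# packet per ACK while the marking probability is preserved.
print(on_ack(cwnd=10.0, mark_p=0.5, thr=0.8, r_i=1.0,
             issthresh=20.0, ossthresh=20.0))   # (11.0, 0.5)
```

Above the thresholds the same call grows cwnd by only 1/cwnd in total, matching the congestion-avoidance behaviour described in the text.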

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also addresses a second TCP dynamic (the so-called boomerang effect), which is caused by excessively dropped IN-marked packets at the core of a network when the network connection is congested. If the throughput measured at a source, for instance, exceeds its reservation rate and iwnd is non-zero, TMS_I decreases the iwnd size appropriately, according to the threshold, in order to allocate the excess bandwidth to the other flows and keep IN-marked packets from being immoderately dropped at the core of a network. In fact, the TCP-source marking module (TMS_I) maintains the marking rate better than an existing marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end each source does not achieve its target rate. On the other hand, because TMS_I recognises packet losses and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window is not decreased as often as with the established edge marker.

Fig. 12. Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss)
        ossthresh = ownd/2
        ownd = ossthresh
    else
        cwnd = cwnd/2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh

iwnd: In-profile window
ownd: Out-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the i-th flow
Ti: target rate of the i-th flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the Out-of-profile window (ownd) is reduced to half of its size whenever an OUT packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of an IN-marked packet indicates acute congestion in the network, and then the whole congestion window is reduced to half of its size. Therefore the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which lacks a distinctive mechanism for reducing the congestion window.
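The reduction rule of program code 2 can likewise be sketched in Python (an illustrative transcription, not the thesis implementation):

```python
def on_triple_dup_ack(cwnd, mark_p, out_marked_loss):
    """Window reduction on the third duplicate ACK (program code 2).

    An OUT-marked loss signals temporary congestion, so only the OUT
    window is halved; an IN-marked loss signals acute congestion, so
    the whole congestion window is halved.
    Returns the new (cwnd, issthresh, ossthresh).
    """
    iwnd = mark_p * cwnd
    ownd = cwnd - iwnd
    if out_marked_loss:
        ossthresh = ownd / 2
        ownd = ossthresh
        return iwnd + ownd, None, ossthresh   # issthresh unchanged
    cwnd = cwnd / 2
    issthresh = mark_p * cwnd
    return cwnd, issthresh, cwnd - issthresh

print(on_triple_dup_ack(20.0, 0.5, out_marked_loss=True))   # (15.0, None, 5.0)
print(on_triple_dup_ack(20.0, 0.5, out_marked_loss=False))  # (10.0, 5.0, 5.0)
```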

The goals of TMS_I are to provide a modified congestion control that alleviates TCP dynamics over AF services and to inform TMS_II of the probability of marking (Markp) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall the numerical expression (7). We deal with the usage of the datum point in the edge-embedded marking module in subsection 2.4. Consequently, the strategy of providing a datum point that includes the user's information is able to alleviate the unfairness introduced within aggregated flows by per-aggregation marking.

3.4 Marking rate-based Fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to share the bottleneck bandwidth fairly. To accomplish this fairness strategy, called the marking rate-based fairness strategy, it must closely cooperate with the TCP-source marking module. The meter of TMS_II should measure the probability of marking and compute the expected value of the marking rates used in the fairness component, as stated in subsection 2.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants guaranteed from the sources.

We assume that an edge router receives information on the reservation rate that each user wants assured and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of a network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall the numerical expression (4). As stated in subsection 2.2, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the marking probability of the sources if the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source, for example, exceeds the threshold value, that is, the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.
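The edge's target-rate computation can be sketched as follows. The numbers are hypothetical, and the proportional share Xmi * E is only applied when the network is over-provisioned, as expression (4) prescribes.

```python
def edge_target_rates(reservations, marking_rates, capacity):
    """Per-flow target rates at the edge, following expression (4).

    Over-provisioned: T_i = R_i + X_mi * E, where E is the excess
    bandwidth and X_mi the flow's marking fraction.
    Under-provisioned: T_i = R_i.
    """
    excess = capacity - sum(reservations)
    if excess <= 0:
        return list(reservations)
    total_m = sum(marking_rates)
    return [r + (m / total_m) * excess
            for r, m in zip(reservations, marking_rates)]

# Four 1 Mbps reservations on an 8 Mbps bottleneck: 4 Mbps of excess
# is split in proportion to the marking rates.
print(edge_target_rates([1.0, 1.0, 1.0, 1.0], [10.0, 10.0, 20.0, 40.0], 8.0))
# [1.5, 1.5, 2.0, 3.0]
```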

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to detect a congestion situation; the other is to ensure the coherence of the Adequate-Marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case; it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding about congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor. In this case the fairness factor α is computed as follows:

α = 1 - Thr_i / Ti

Otherwise, it decreases the marking rate by α to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)
    for i = 1 to n
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)
    exit
else    /* iwnd(N) == iwnd(N-1)/2 */
    for i = 1 to n
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi: marking rate of the i-th flow
Ti: target rate of the i-th flow
Ri: reservation rate of the i-th flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at the previous time
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at the previous time
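Program code 3 can be condensed into a small Python function. This is a sketch under stated assumptions: the halving tests are passed in as booleans, and taking the absolute value of the fairness factor when the target is exceeded is our reading of the Mi -= α * Mi branch.

```python
def tms2_remark(m_i, thr_i, r_i, x_mi, excess, ownd_halved, iwnd_halved):
    """One re-marking decision of TMS_II for flow i (program code 3).

    ownd_halved / iwnd_halved say whether the OUT (resp. IN) window
    measured at time N equals half its value at time N-1.
    Returns the new marking rate M_i.
    """
    if not ownd_halved:                 # no congestion in the network
        t_i = r_i + x_mi * excess       # target includes excess share
        alpha = abs(1 - thr_i / t_i)    # fairness factor
        return m_i + alpha * m_i if thr_i < t_i else m_i - alpha * m_i
    if not iwnd_halved:                 # congestion, but not acute
        return m_i                      # no re-marking required
    if thr_i < r_i:                     # acute congestion, below reservation
        beta = 1 - thr_i / r_i          # complement factor
        return m_i + beta * m_i
    return m_i

# Below target with no congestion: alpha = 1 - 0.75/1.0 = 0.25, so the
# marking rate rises by 25%.
print(tms2_remark(100.0, 0.75, 1.0, 0.0, 0.0, False, False))   # 125.0
```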


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has happened in the network but is not serious. The else-if clause in the eighth and ninth lines represents the instance in which the edge's marking rate is equal to the marking probability of the sources (or marking information), and so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In the under-provisioned network each flow often fails to consummate its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to complement the loss rate, and increases the marking rate by β, a so-called complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor is β = 1 - Thr_i / Ri. Otherwise, no re-marking operation is performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-marked packets are treated by the RIO queue at the core of a network differently from what the sources anticipate. If re-marked OUT packets (IN-OUT) that were IN at the sources, for example, are dropped at the core in a time of temporary congestion, the TCP sources, mistaking it for acute congestion, will halve their iwnd size; TMS_I then does not alleviate the TCP dynamics over the assured service well. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve their ownd, which retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked OUT at the edge of the network but IN at the source, and OUT-IN is the counterpart. We assume that the drop precedence the source decides must not change, and the strategy TMS_II elaborates has to respect this. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors, these correspond to red, yellow, and green respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT at the sources into IN so as to secure the remaining resources. In this instance the three drop precedences IN, OUT-IN, and OUT flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is dropped, the TCP source halves its ownd size. Next, TMS_II turns packets that were IN at the sources into OUT if the congestion window has not been halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of such packets by one step and so allows other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, and OUT at the core of the network; when an OUT-IN packet is dropped, TMS_I halves its ownd size. It thus tries to compensate for the loss rate occurring in times of congestion by lowering the drop precedence of OUT packets by one step.
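The mark-combination rule of this section can be captured in a few lines (a sketch; the DP names follow the encoding assumed above, DP0 = IN, DP1 = OUT-IN or IN-OUT, DP2 = OUT):

```python
def combined_precedence(source_mark, edge_mark):
    """Drop precedence after edge re-marking.

    The source's decision is preserved: a packet whose edge mark
    disagrees with its source mark lands in the middle class DP1
    (OUT-IN or IN-OUT), dropped after OUT but before IN.
    """
    if source_mark == edge_mark:
        return "DP0" if source_mark == "IN" else "DP2"
    return "DP1"

print(combined_precedence("IN", "IN"))    # DP0 (green)
print(combined_precedence("OUT", "IN"))   # DP1 (OUT-IN, yellow)
print(combined_precedence("IN", "OUT"))   # DP1 (IN-OUT, yellow)
print(combined_precedence("OUT", "OUT"))  # DP2 (red)
```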

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows in the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows will not change and will capture the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it will be impossible for all target rates to be completed.

UDP flows in the TMS are dealt with as out-of-profile, putting them at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always drops UDP packets marked as OUT (red, DP2) preferentially, but lowers the drop precedence of TCP packets relative to them according to the scaling factors. TMS therefore tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows had almost dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results with the two marking algorithms we described in the previous section. The simulation was done with the network simulator 2 (ns-2.1b8a). For the sake of the simulation we used a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM the routers use RED with values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters of the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

        min_th (packets)   max_th (packets)   Pmax
DP0     100                400                0.02
DP1     100                400                0.05
DP2     50                 400                0.1
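A minimal sketch of the RED drop-probability curve these parameters produce (linear between min_th and max_th, as in standard RED; this is illustrative, not the ns-2 code):

```python
RED_PARAMS = {  # per drop precedence: (min_th, max_th, Pmax) from Table 4
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50, 400, 0.1),
}

def drop_probability(avg_queue, dp):
    """Standard RED marking probability for one precedence level."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return p_max * (avg_queue - min_th) / (max_th - min_th)

# At the same average queue length, OUT (DP2) packets face the highest
# drop probability and IN (DP0) packets the lowest.
print([round(drop_probability(250, dp), 4) for dp in ("DP0", "DP1", "DP2")])
```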

First of all, we investigate the simulation results of the Adequate-Marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s; the last three flows, flow8 through flow10, start at t = 220 s but stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps; marking initially takes place, but only temporary marking for sharing the excess bandwidth. Second, at runtime from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps each; marking in this interval is practically carried out so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220 s to t = 350 s; all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to the congestion remaining after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-Marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in the case of Figure 15 (a). In the two graphs below, the flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-Marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15 consists of two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the bandwidth (Mbps) of flow1 through flow10 against the simulation time (s).]

Fig. 15. Adequate-Marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic features through governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate behind the changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased traffic load. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as the increment of S2; S1 also does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when it is detected that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, namely an under-subscribed network. The experiment ran from t = 0 s to t = 500 s. Figure 17 represents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to the probability of its marking whenever the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Fig 16 a) Time Sliding Window Two Color Marker: Bandwidth (Mbps) vs. simulation time (s); series: S1 (total throughput), S2 (total throughput), S1 (marking rate)]

[Fig 16 b) Two Markers System: Bandwidth (Mbps) vs. simulation time (s); series: S1 (total throughput), S2 (total throughput), S1 (marking rate)]

Fig 16 Packet marking through scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.


[Fig 17 a) Time Sliding Window Two Color Marker: Bandwidth (Mbps) vs. aggregate flow (1-6); series: flow1-flow10]

[Fig 17 b) Two Markers System: Bandwidth (Mbps) vs. aggregate flow (1-6); series: flow1-flow10]

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flow index.

Two Markers System showed improved performance for the several parameters we used in simulation. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of aggregate flows as RTT increases

Aggregate flow (No.)        1       2       3       4       5       6
RTT (ms)                   80     100     130     170     220     280
E[Thr] of TSW2CM (Mbps)  4.8989  4.9703  4.9183  4.9391  4.9247  4.9098
E[Thr] of TMS (Mbps)     5.2908  5.2207  5.2373  5.1944  5.2763  5.2858

4.2 Interaction of TCP and UDP flows

In this section we present simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards), each communicating with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5 and 6 and are destined to nodes 11 and 12. We carried out two scenarios, an over-provisioned and an under-provisioned network; that is, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at a rate of 1 Mbps. We assume that the RTT, excluding queuing delay, of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on sharing excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy by increasing the number of TCP flows with a target rate of 1 Mbps from source 1 to 4, while each UDP flow is constant at 1 Mbps, and by changing the capacity of the bottleneck link from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) defined as follows [16]:

    ρ = (Σi Ti / C) × 100 (%)        (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the bottleneck capacity, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

    Excess BW = (ExBW / T) × 100 (%)        (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we examine TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
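The two ratios above are straightforward to compute; a minimal sketch in Python (the helper names are ours, not from the thesis):

```python
def link_utilisation(target_rates, capacity):
    # Eq. (9): sum of target rates T_i over bottleneck capacity C, in percent
    return 100.0 * sum(target_rates) / capacity

def excess_bandwidth_pct(throughput, target):
    # Eq. (10): excess bandwidth a flow receives, as a percentage of its target T
    return 100.0 * (throughput - target) / target
```

For example, four 1 Mbps TCP targets plus two 1 Mbps UDP targets over a 3 Mbps bottleneck give ρ = 200%, the right-hand edge of Figure 19, and a flow that receives exactly its target rate sits at 0%.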


[Fig 19 a) Time Sliding Window Two Color Marker: excess bandwidth (%) vs. link utilisation (%); series: TCP (average), UDP (average)]

[Fig 19 b) Two Markers System: excess bandwidth (%) vs. link utilisation (%); series: TCP (average), UDP (average)]

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation as a percentage.

However, since UDP sources inject traffic into the network without any penalty, they achieve their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only marked OUT (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than others in order to achieve its reservation rate. Therefore the flow is marked more and has a higher target rate than others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows for link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: TSW2CM and the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The RTT parameters follow the strategy of section 4.1. The average throughput of all the flows in both schemes is similar. First we consider the results in the over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3 Mbps, since a different marking strategy is applied against the UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to the probability of their marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that bandwidth in the under-provisioned network is distributed fairly (about 0.5 Mbps per flow) compared with Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Fig 20 a) Time Sliding Window Two Color Marker: Bandwidth (Mbps) vs. aggregate flow (1-4); series: TCP1-TCP4 and UDP1-UDP2, each for the over-provisioned and under-provisioned cases]

[Fig 20 b) Two Markers System: Bandwidth (Mbps) vs. aggregate flow (1-4); series: TCP1-TCP4 and UDP1-UDP2, each for the over-provisioned and under-provisioned cases]

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved throughput (Mbps) and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues arise here, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to changes in the network, so it marks packets in proportion to the reservation rates the users desire to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy, if resources in the diffserv network are over-provisioned. Consequently it mitigates the problem of unfairness in per-aggregation marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduces TCP impacts over the assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP had previously dominated and achieve their target rates. Next, in an under-provisioned network, although none of the flows could achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.
[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.
[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.
[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.
[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.
[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.
[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.
[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.
[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.
[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.
[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.
[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.
[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.
[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.
[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.
[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.
[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.
[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. 1998.
[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö, 1999.
[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.
[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.
[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. 1997.
[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.
[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.
[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proceedings of ACM SIGCOMM, Oct. 1994, pp. 24-35.
[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.
[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.
[28] A. Feroz, A. Rao and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.


Fig 11 Program code 1 TMS_I algorithm (cwnd open)

At the starting point of the algorithm shown in Figure 11, how do we determine the initial probability of marking (Markp(init))? It is generally determined by the ratio between the measured throughput (Thr) and the target throughput (Ti = Ri), as follows.

When new data is acknowledged:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (Thr < Ri) {
        if (iwnd < issthresh)  iwnd += Markp
        else                   iwnd += 1/(2*cwnd)
        if (ownd < ossthresh)  ownd += (1 - Markp)
        else                   ownd += 1/(2*cwnd)
    } else {
        if (0 < iwnd) {
            if (iwnd < issthresh)  iwnd -= (1 - Markp)
            else                   iwnd -= Markp
        } else {
            ownd = cwnd
        }
        if (ownd < ossthresh)  ownd += 1
        else                   ownd += 1/cwnd
    }
    cwnd = iwnd + ownd
    Markp = iwnd / cwnd


    Markp(init) = 1 - Thr / Ri        (8)

where Ri represents the initial rate that the user wants guaranteed in order to realise its target rate.

The window size increases or decreases according to the measured throughput against the reservation rate. That is, TMS_I proportionally marks packets according to whether the measured throughput (Thr) at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed; otherwise it increases the ownd size so as to allow the other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, i.e. iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a decrement of ownd to the same degree. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to probe for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while probing for resources is 1/cwnd.

The else clause in the twelfth line of Program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also addresses the second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network is congested. If the throughput measured at a source, for instance, exceeds its reservation rate and iwnd exists, TMS_I properly decreases the iwnd size according to the threshold, in order to allocate the excess bandwidth to other flows and keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate better than existing marking modules such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking this for an acute congestion situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and in the end no source achieves its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window is not decreased as often as with the established edge marker.
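The per-ACK update of Program code 1 can be sketched as runnable Python; the variable names follow the pseudocode, and the 1/(2·cwnd) congestion-avoidance increments reflect our reading of the listing:

```python
def on_ack(cwnd, markp, thr, Ri, issthresh, ossthresh):
    """One TMS_I update when new data is acknowledged (sketch of Program code 1)."""
    iwnd = markp * cwnd          # IN-profile share of the congestion window
    ownd = cwnd - iwnd           # OUT-of-profile share
    if thr < Ri:                 # below reservation: grow both, biased by markp
        iwnd += markp if iwnd < issthresh else 1.0 / (2 * cwnd)
        ownd += (1.0 - markp) if ownd < ossthresh else 1.0 / (2 * cwnd)
    else:                        # reservation met: release part of the IN share
        if iwnd > 0:
            iwnd -= (1.0 - markp) if iwnd < issthresh else markp
        else:
            ownd = cwnd          # pure best-effort window
        ownd += 1.0 if ownd < ossthresh else 1.0 / cwnd
    cwnd = iwnd + ownd
    return cwnd, iwnd / cwnd     # new marking probability = iwnd / cwnd
```

For instance, with cwnd = 10, Markp = 0.5 and a throughput below the reservation, both half-windows grow and the marking probability stays at 0.5.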

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd / 2
        ownd = ossthresh
    } else {
        cwnd = cwnd / 2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

iwnd: IN-profile window
ownd: OUT-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the i-th flow
Ti: target rate of the i-th flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After a third duplicate ACK in a row is received, the out-of-profile window (ownd) is halved whenever an OUT-marked packet loss occurs; this indicates temporary congestion in the network. On the other hand, the loss of an IN-marked packet indicates acute congestion in the network, and then the congestion window is halved. Therefore the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which has no distinct mechanism for reducing the congestion window.
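A sketch of this loss reaction in Python, under the same naming assumptions (we read the listing's `ewnd2` as ownd/2):

```python
def on_third_dup_ack(cwnd, markp, issthresh, out_marked_loss):
    """TMS_I reaction to a third duplicate ACK (sketch of Program code 2)."""
    iwnd = markp * cwnd
    ownd = cwnd - iwnd
    if out_marked_loss:               # OUT loss: temporary congestion,
        ossthresh = ownd / 2.0        # halve only the OUT-of-profile window
        ownd = ossthresh
        cwnd = iwnd + ownd
    else:                             # IN loss: acute congestion,
        cwnd = cwnd / 2.0             # halve the whole congestion window
        issthresh = markp * cwnd      # and re-derive both thresholds
        ossthresh = cwnd - issthresh
    return cwnd, issthresh, ossthresh
```

An OUT-marked loss therefore costs the flow only its out-of-profile share, while an IN-marked loss triggers the conventional multiplicative decrease.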

The goals of TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services, and to inform TMS_II of the probability of marking (MarkP) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall numerical expression (7). We deal with the usage of the datum point in the edge-embedded marking module in subsection 2.4. Consequently, the strategy of providing a datum point that carries the user's information is able to alleviate the problem of unfairness within aggregated flows in per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of sources in order to share bottleneck bandwidth fairly. In this case it closely cooperates with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates that are used in the fairness component, as stated in subsection 2.2. Notice that fairness is defined as how well the throughputs of all individual flows of aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represent the target rates each flow wants guaranteed, from the sources.

We assume that an edge router receives information about the reservation rate that each user wants assured, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of a network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall numerical expression (4). As stated in subsection 2.2, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking from the sources if the network is over-provisioned; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents an increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the criterion of the fairness strategy elaborated for sharing the excess bandwidth. If the number of IN-marked packets coming from a source, for example, exceeds the threshold value, namely the average marking rate E[mi], it means that the flow wants more bandwidth than others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.
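This target-rate rule can be sketched as follows; the normalisation of the datum points X_mi so that they sum to one is our assumption, since expression (4) is not reproduced in this excerpt:

```python
def edge_targets(reservations, marking_shares, capacity):
    """Per-flow edge target rates (sketch of the rule above).

    marking_shares are the normalised datum points X_mi (assumed to sum to 1).
    """
    total_r = sum(reservations)
    if total_r >= capacity:                 # under-provisioned network:
        return list(reservations)           # targets fall back to reservations
    excess = capacity - total_r             # E = C - sum(R_i)
    return [r + x * excess for r, x in zip(reservations, marking_shares)]
```

A flow with a larger marking share thus receives a proportionally larger slice of the excess, and the targets always sum to the bottleneck capacity in the over-provisioned case.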

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to detect a congestion situation, the other is to ensure the coherence of the Adequate-marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with that at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of Program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding about congestion, it again measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor. In this case the fairness factor α is computed as follows:

    α = 1 - Thr_i / Ti

Otherwise it decreases the marking rate by α, to provide other flows with the chance of sharing the excess bandwidth.

Fig 13 Program code 3 TMS_II algorithm

if (ownd(N) != ownd(N-1)/2) {
    for (i = 1 to n) {
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
    }
} else if (iwnd(N) != iwnd(N-1)/2) {
    exit
} else {    // iwnd(N) == iwnd(N-1)/2
    for (i = 1 to n) {
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit
    }
}

Mi: marking rate of the i-th flow
Ti: target rate of the i-th flow
Ri: reservation rate of the i-th flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at the previous time
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at the previous time


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else-if clause in the eighth and ninth lines represents this instance: the edge's marking rate is equal to the marking probability of the sources (the marking information), and so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In the under-provisioned network each flow often fails to reach its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, called a complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor β is

    β = 1 - Thr_i / Ri

Otherwise no re-marking operation is performed.

In short, sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilization of the reserved bandwidth in per-flow marking and the unfairness within aggregated flows in per-aggregation marking.
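The three branches of Program code 3 can be collected into one runnable sketch; the flow representation and the absolute value used when decreasing the marking rate are our assumptions:

```python
def tms2_adjust(flows, ownd_halved, iwnd_halved, excess):
    """One TMS_II pass over the aggregate (sketch of Program code 3).

    flows: list of dicts with keys M (marking rate), R (reservation),
    Xm (normalised marking share) and Thr (measured throughput).
    """
    if not ownd_halved:                        # no congestion: fairness pass
        for f in flows:
            Ti = f["R"] + f["Xm"] * excess     # target includes an excess share
            alpha = abs(1.0 - f["Thr"] / Ti)   # fairness factor
            if f["Thr"] < Ti:
                f["M"] += alpha * f["M"]       # mark more to reach the target
            else:
                f["M"] -= alpha * f["M"]       # release marking for other flows
    elif not iwnd_halved:                      # congestion, but not acute:
        pass                                   # marking already matches the sources
    else:                                      # serious congestion: Ti = Ri
        for f in flows:
            if f["Thr"] < f["R"]:
                beta = 1.0 - f["Thr"] / f["R"] # complement factor
                f["M"] += beta * f["M"]
    return flows
```

A flow at half its target in the no-congestion case thus gets its marking rate boosted by the fairness factor, while in serious congestion only flows below their reservations are boosted.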

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-marked packets are treated by the RIO queue at the core of a network unlike what the sources anticipate. If re-marked OUT packets (IN-OUT) that were IN at the sources, for example, are dropped at the core at a time of temporary congestion, the TCP sources, mistaking this for acute congestion, will halve the iwnd size; in that case TMS_I does not alleviate the TCP dynamics over the assured service well. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed owing to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve only ownd, which retards the removal of congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is marked OUT at the edge of the network but IN at the source, and OUT-IN is the counterpart. We assume that the drop precedence the source decides must not change, and the strategy that TMS_II elaborates has to respect this. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasion of re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets marked OUT at the sources into IN so as to claim the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is removed, the TCP source halves the ownd size. Next, TMS_II turns packets marked IN at the sources into OUT if the congestion window has not been halved and the flow already accomplishes its target rate, so the drop precedences consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of such packets by one step, allowing other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network; when an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS tries to compensate for the loss rate occurring at times of congestion by lowering the drop precedence of OUT packets one step.
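The resulting mapping from source and edge marks to drop precedence is small enough to sketch directly (names are ours):

```python
# Drop order at the core RIO queue: OUT (DP2, red) is dropped first, then the
# re-marked DP1 packets (yellow), then IN (DP0, green) last.
DROP_ORDER = ("OUT", "DP1", "IN")

def drop_precedence(source_mark, edge_mark):
    """Map a packet's (mark at the source, mark after edge re-marking) to a DP."""
    if source_mark == edge_mark:
        return "IN" if source_mark == "IN" else "OUT"   # DP0 or DP2
    return "DP1"                                        # IN-OUT or OUT-IN
```

Any packet whose edge mark disagrees with its source mark lands in the middle precedence, so the source's original intent is never demoted (or promoted) by more than one step.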

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows will not change and will take the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile, and so are placed at a disadvantage, while TCP flows adapt to network dynamics by having their marking rate adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always drops UDP flows marked as OUT (red, DP2) preferentially, but lowers the drop precedence of TCP packets by as much as the scaling factor allows. It thereby tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows used to dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous section. The simulations were done with the network simulator ns-2 (v2.1b8a). For the simulations we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that each communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or kept below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in,

41

and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
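The per-precedence thresholds in Table 4 drive the standard RED linear drop ramp. The following sketch (standard RED arithmetic, not code from the thesis) shows how, at the same average queue length, DP2 packets see the highest drop probability:

```python
# Sketch of the linear RED drop-probability ramp using the Table 4
# settings (min_th, max_th, Pmax per drop precedence); queue lengths
# are in packets. Standard RED formula, not code from the thesis.

RED_PARAMS = {            # DP: (min_th, max_th, Pmax)
    0: (100, 400, 0.02),  # IN (green)
    1: (100, 400, 0.05),  # OUT-IN / IN-OUT (yellow)
    2: (50,  400, 0.10),  # OUT (red)
}

def drop_probability(dp, avg_queue):
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    # Linear interpolation between the two thresholds.
    return p_max * (avg_queue - min_th) / (max_th - min_th)

# At the same average queue length, OUT (DP2) packets are dropped first.
for dp in (0, 1, 2):
    print(dp, round(drop_probability(dp, 250), 4))
```

Because DP2 has both a lower min_th and a higher Pmax, its ramp starts earlier and rises faster, which realises the drop ordering OUT before OUT-IN/IN-OUT before IN.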

First of all, we investigate the simulation results with the Adequate-marking algorithm playing the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, from S1-D11 to S10-D20, each with a target rate of 3Mbps; the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s), which terminate at t = 350 (s). First, the interval from t = 10 (s) to t = 110 (s) represents an over-provisioned network in which five flows compete for the excess bandwidth of 5Mbps; marking is initially performed, but only temporarily, for sharing the excess bandwidth. Second, from t = 120 (s) to t = 210 (s), under soft congestion, seven flows try to achieve the ideal bandwidth of 3Mbps; marking in this interval is practically carried out so as to realise the reservation rate of each user. Next, grave congestion appears from t = 220 (s) to t = 350 (s): all the flows strive to meet their target rates but fail, and the marking rate in this period is more excessive than in the other periods. Last, in the interval from t = 330 (s) to t = 400 (s), TMS momentarily over-marks packets owing to congestion after flow8 through flow10 terminate, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs the flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15 omitted: two panels plotting per-flow bandwidth (Mbps) against simulation time 0-400 (s) for flow1 through flow10. a) Time Sliding Window Two Color Marker; b) Two Markers System]

Fig 15 Adequate-marking algorithm: the vertical axis represents the marking rates and the horizontal axis the simulation time. The state of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic characteristics by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind changes in the network load: slowly rising in response to an increased traffic load and slowly falling in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 rapidly responds to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet-marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment ran from t = 0 (s) to t = 500 (s). Figure 17 presents the results, showing how well the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without the marking rate being adequate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As


stated before, since TMS regulates drop precedence in proportion to the marking probability from the sources, all the flows are able to adapt to network dynamics.

[Figure 16 omitted: two panels plotting bandwidth (Mbps) against simulation time 0-400 (s), showing the total throughput of S1 and S2 and the marking rate of S1. a) Time Sliding Window Two Color Marker; b) Two Markers System]

Fig 16 Marking packets through the scaling factor: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.


[Figure 17 omitted: two panels plotting bandwidth (Mbps) of flow1 through flow10 for aggregate flows 1-6. a) Time Sliding Window Two Color Marker; b) Two Markers System]

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flow number.

The Two Markers System showed improved performance for the several parameters we used in simulation. However, an implementation of TCP-source marking modules at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of each aggregate flow as RTT increases

Aggregate flow (No.)         1        2        3        4        5        6
RTT (ms)                     80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)      4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)         5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulations we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that each communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined for nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results for the issue of sharing excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy by increasing the number of TCP flows with a target rate of 1Mbps from sources 1 to 4, while each UDP flow is constant at 1Mbps, and by changing the capacity of the bottleneck link from 8Mbps to 3Mbps in order to take account of network dynamics. We use the link utilisation (ρ) as follows [16]:

ρ = (Σi Ti / C) × 100 (%)    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link, respectively. The interval from ρ = 100 (%) to 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW = (ExBW / T) × 100 (%)    (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load shown in the figure.
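Expressions (9) and (10) are simple ratios; a numeric sketch with illustrative rates (Mbps) follows:

```python
# Numeric illustration of expressions (9) and (10): link utilisation
# rho = (sum of target rates) / C, and excess bandwidth as a percentage
# of the target rate. The rates below mirror the experiment's setup;
# the throughput figure is illustrative.

def utilisation(target_rates, capacity):
    """Expression (9): aggregate target rate over link capacity, in %."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(throughput, target):
    """Expression (10): bandwidth above (or below) target, in % of target."""
    return 100.0 * (throughput - target) / target

targets = [1.0] * 6                    # four TCP + two UDP sources, 1Mbps each
print(utilisation(targets, 8.0))       # over-provisioned: 75.0 %
print(utilisation(targets, 3.0))       # under-provisioned: 200.0 %
print(round(excess_bw_percent(1.3, 1.0), 1))   # 30.0 -> 30 % above target
```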


[Figure 19 omitted: two panels plotting excess bandwidth (%) against link utilisation (%) from 30 to 190 for the TCP and UDP averages. a) Time Sliding Window Two Color Marker; b) Two Markers System]

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation as a percentage.

However, since UDP sources send traffic into the network without any penalties, they meet their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume the following: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than others in order to achieve its reservation rate; the flow is therefore marked more and has a higher target rate than others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows for link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
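The E[m_i] threshold test described above can be sketched as follows; the function name and the use of a plain arithmetic mean for E[m_i] are illustrative assumptions:

```python
# Hypothetical sketch of the E[m_i] threshold rule: a flow whose
# marking rate exceeds the average marking rate is judged to want
# more bandwidth to reach its reservation rate. Names are illustrative.

def wants_more_bandwidth(marking_rates, i):
    avg = sum(marking_rates) / len(marking_rates)   # E[m_i]
    return marking_rates[i] > avg

rates = [0.9, 0.4, 0.5]                 # per-flow IN-marking rates
print(wants_more_bandwidth(rates, 0))   # True: flow 0 marks above average
print(wants_more_bandwidth(rates, 1))   # False
```

A flow that passes this test is marked IN more often at the edge and so receives a higher effective target rate than its peers.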

Next we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows how well the average throughputs of all individual flows of the aggregated traffic realise their target rates in over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps respectively. The RTT parameters follow the strategy of subsection 4.1, and the average throughput of all the flows in both schemes is similar to each other. First we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while TCP flows only just accomplish their target rates of 1Mbps, UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, thanks to the different marking strategy elaborated against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probability in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because UDP flows are transmitted constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of TCP flows. However, Figure 20 (b) shows that the bandwidth in the under-provisioned network is distributed more fairly (about 0.5Mbps each) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20 omitted: two panels plotting bandwidth (Mbps) for aggregate flows 1-4, showing TCP1-TCP4 and UDP1-UDP2 under over-provisioned and under-provisioned conditions. a) Time Sliding Window Two Color Marker; b) Two Markers System]

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this area, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the overeating of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to the changes in the network, so it marks packets in proportion to the reservation rates the users desire to guarantee. It also notifies TMS_II, the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy, when resources in the diffserv network are over-provisioned. Consequently it mitigates the problems of unfairness in per-aggregation marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP had previously dominated and achieve their target rates. Next, in an under-provisioned network, although none of the flows could achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidth.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W Feng, D Kandlur, D Saha, and K Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc ICNP'98, Austin, TX, Oct (1998) 108-117

[2] W Feng, D Kandlur, D Saha, and K Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc NOSSDAV'97, St Louis, MO, May (1997) 279-290

[3] S Floyd and V Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol 3, no 4, Aug (1995) 365-386

[4] J Heinanen, F Baker, W Weiss, and J Wroclawski, Assured Forwarding PHB Group, RFC 2597, June (1999)

[5] K Nichols, S Blake, F Baker, and D Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec (1998)

[6] S Blake, D Black, M Carlson, E Davies, Z Wang, and W Weiss, An Architecture for Differentiated Services, RFC 2475, Dec (1998)

[7] V Jacobson and M Karels, Congestion Avoidance and Control, in Proc SIGCOMM'88, Stanford, CA, Aug (1988) 314-329

[8] I Yeom and A L N Reddy, Realizing Throughput Guarantees in a Differentiated Services Network, in Proc ICMCS'99, Florence, Italy, June (1999) 372-376

[9] N Seddigh, B Nandy, P Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc GLOBECOM'99, Rio De Janeiro, Dec (1999)

[10] C Gbaguidi, H J Einsiedler, P Hurley, W Almesberger, and J Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech Report SSC/1998/020, http://sscwww.epfl.ch, May (1998)

[11] J Ibanez, K Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug (1998)

[12] S H Lee, S J Seok, S J Lee, S K Youm, and C H Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc QofIS'01, LNCS 2156, Coimbra, Sept (2001) 207-221

[13] I B H de A Alves, J F de Rezende, L F M de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc GLOBECOM'00, vol 1, San Francisco, Nov (2000) 445-449

[14] N Seddigh, B Nandy, P Pieda, Study of TCP and UDP Interaction for AF PHB, IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.txt, Aug (1999)

[15] S H Lee, S J Seok, S J Lee, and C H Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc MMNS'01, LNCS 2216, Chicago, Oct (2001) 198-203

[16] A Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc HPSR'01, Dallas, May (2001) 190-196

[17] Y Bernet, J Binder, S Blake, M Carlson, B E Carpenter, S Keshav, E Davies, B Ohlman, Z Wang, W Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb (1999)

[18] S Blake, D Black, M Carlson, E Davies, Z Wang, and W Weiss, An Architecture for Differentiated Services, RFC 2475, Dec (1998)

[19] M Vanhala, Differentiated Services Architecture, S-38.128 Teletekniikan erikoistyö (1999)

[20] V Jacobson, K Nichols, and K Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov (1998)

[21] V Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30txt), April (1990)

[22] V Paxson, End-to-End Internet Packet Dynamics, in Proc of ACM SIGCOMM, Sept (1997)

[23] M Mathis, J Mahdavi, S Floyd, and A Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct (1996)

[24] S Floyd and T Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb (1999)

[25] L S Brakmo, S W O'Malley, and L L Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proceedings of ACM SIGCOMM, Oct (1994) 24-35

[26] S Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, Vol 24, No 5, Oct (1994) 10-23

[27] S Floyd and V Jacobson, Random Early Detection Gateways for Congestion Avoidance, IEEE/ACM Transactions on Networking, Vol 1, No 4, Aug (1993) 397-413

[28] A Feroz, A Rao, and S Kalyanaraman, TCP-friendly Traffic Marker for IP Differentiated Services, in Proc IWQoS'2000, Pittsburgh, PA, June (2000) 35-48


Here R_init represents the initial rate that the user wants to guarantee in order to realise its target rate.

The window size increases or decreases according to the measured throughput relative to the reservation rate. That is, TMS_I marks packets proportionally according to whether the throughput (Thr) measured at the TCP source reaches its target rate or not. If the measured throughput does not reach the source's target rate, TMS_I increases the iwnd size by as much as the marking rate needed; otherwise it increases the ownd size so as to allow other sources' traffic or best-effort traffic to share the excess bandwidth. Although the proposed congestion algorithm has two windows, iwnd and ownd, they act like the single window of the traditional congestion algorithm: the sum of iwnd and ownd is cwnd. Therefore an increment of iwnd leads to a corresponding decrement of ownd. When either window rises above its threshold size, issthresh or ossthresh, it increases linearly; in this case congestion avoidance takes over and slowly increases each window size to grope for more bandwidth becoming available on the path. Note that the sum of the increase rates of the two window sizes while digging out resources is 1/cwnd.
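The split congestion-avoidance growth described above, where the two sub-windows' per-ACK increases sum to 1/cwnd, can be sketched as follows; apportioning the increment by the marking probability is an assumption based on the text, not the thesis implementation:

```python
# Sketch of the split congestion-avoidance growth: past its threshold
# each sub-window grows linearly, and the two growth rates sum to the
# classic 1/cwnd per ACK. Splitting the increment by Markp is an
# assumption, not the thesis implementation.

def grow_on_ack(iwnd, ownd, mark_p):
    cwnd = iwnd + ownd
    # Apportion the 1/cwnd congestion-avoidance increment between the
    # IN-profile and OUT-of-profile windows by the marking probability.
    iwnd += mark_p / cwnd
    ownd += (1.0 - mark_p) / cwnd
    return iwnd, ownd

iwnd, ownd = 6.0, 4.0
new_iwnd, new_ownd = grow_on_ack(iwnd, ownd, mark_p=0.6)
# Total growth equals 1/cwnd = 1/10 regardless of the split.
print(round((new_iwnd + new_ownd) - (iwnd + ownd), 6))
```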

The else clause in the twelfth line of program code 1 describes the fairness strategy for sharing the excess bandwidth among other sources' TCP traffic and best-effort traffic, and also considers the second TCP dynamic (the so-called boomerang effect), which is caused by excessive dropping of IN-marked packets at the core of a network when the network connection is congested. If, for instance, the throughput measured at a source exceeds its reservation rate and iwnd exists, TMS_I appropriately decreases the iwnd size according to the threshold in order to allocate the excess bandwidth to the other flows and keep IN-marked packets from being immoderately dropped at the core of the network. In fact, a TCP-source marking module (TMS_I) maintains the marking rate more properly than an established marking module such as the Time Sliding Window Two Color Marker (TSW2CM). While the established edge marker, for example, is fairly effective in keeping the observed throughput close to the reservation rate, it often marks more packets than required. The over-marked packets are often dropped at the core of a network when congestion occurs, and so the source, mistaking this for an acute congestion

Mark_p(i) = 1 − (Thr / R_init)    (8)


situation, frequently adjusts its sending rate. That is, if the loss of IN-marked packets occurs frequently, the congestion window size will be kept low, and each source will in the end fail to achieve its target rate. On the other hand, because TMS_I recognizes packet loss and determines whether the lost packet was sent as a marked or an unmarked packet, it can properly regulate the marking rate according to the condition of the network. Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced, and the congestion window will not be decreased as often as with the established edge marker.
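One plausible reading of expression (8) is a marking probability that grows as the measured throughput falls below the initially reserved rate; the clipping to [0, 1] and the zero-throughput guard are added assumptions:

```python
# Guarded sketch of expression (8) as read above: mark more
# aggressively the further measured throughput falls below the
# initially reserved rate R_init. Clipping to [0, 1] is an assumption.

def marking_probability(r_init, throughput):
    if throughput <= 0:
        return 1.0      # no throughput feedback yet: mark everything
    return max(0.0, min(1.0, 1.0 - throughput / r_init))

print(marking_probability(3.0, 1.5))   # halfway to the reservation -> 0.5
print(marking_probability(3.0, 3.0))   # reservation met -> 0.0
```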

Fig 12 Program code 2: TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is received:
    iwnd = Markp * cwnd
    ownd = cwnd - iwnd
    if (OUT-marked packet loss) {
        ossthresh = ownd / 2
        ownd = ossthresh
    } else {
        cwnd = cwnd / 2
        issthresh = Markp * cwnd
        ossthresh = cwnd - issthresh
    }

iwnd: IN-profile window
ownd: OUT-of-profile window
cwnd: congestion window
Markp: the probability of marking
Thr: throughput of a flow
Ri: reservation rate of the ith flow
Ti: target rate of the ith flow
issthresh: threshold of iwnd
ossthresh: threshold of ownd


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the out-of-profile window (ownd) is halved whenever an OUT packet loss occurs, which indicates temporary congestion in the network. On the other hand, the loss of an IN-marked packet indicates acute congestion in the network, and then the congestion window is halved. Therefore the modified TCP congestion algorithm compensates for the loss rates caused by the conventional congestion algorithm, which has no distinct mechanism for reducing the congestion window.

The goals of the TMS_I are to provide modified congestion control that alleviates TCP dynamics over AF services and to inform the TMS_II of the probability of marking (Markp) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall numerical expression (7). We deal with the usage of the datum point in the edge-embedded marking module in subsection 2.4. Consequently, a strategy providing a datum point that includes the user's information is able to alleviate the problem of unfairness introduced within aggregated flows by per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of sources in order to share the bottleneck bandwidth fairly. In this case it must cooperate closely with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates that are used in the fairness component, as stated in subsection 2.2. Notice that fairness is defined as how well the throughputs of all individual flows of aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represent the target rates each flow wants to have guaranteed from the sources.

We assume that an edge router receives information on the reservation rate that each user wants assured and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall numerical expression (4). As stated in subsection 2.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the condition of the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If the number of IN-marked packets incoming from a source, for example, exceeds the threshold value, that is, the average marking rate E[mi], it means that the flow wants more bandwidth than others in order to achieve its reservation rate; it is therefore marked IN more often and has a higher target rate than others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other is to ensure the coherence of the Adequate-marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with that at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of program code 3 describes this case and shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding about congestion, it measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy. If the throughput does not reach the target rate, the marker increases the marking rate by α, called the fairness factor. In this case the fairness factor α is computed as follows:


α = 1 − Thr_i/T_i. Otherwise, it decreases the marking rate by α to provide other flows with the chance of sharing the excess bandwidth.

Fig 13 Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)             // no congestion
    for (i = 1; i <= n; i++) {
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
    }
else if (iwnd(N) != iwnd(N-1)/2)        // congestion, but not acute
    exit
else                                    // iwnd(N) == iwnd(N-1)/2: serious congestion
    for (i = 1; i <= n; i++) {
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit
    }

Mi: marking rate of the ith flow
Ti: target rate of the ith flow
Ri: reservation rate of the ith flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else-if clause in the eighth and ninth lines represents the instance in which the edge's marking rate is equal to the marking probability of the sources (the marking information), so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to reach its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and it increases the marking rate by β, the so-called complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor β is given by β = 1 − Thr_i/R_i. Otherwise, the re-marking operation is not performed.
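Putting the branches together, the per-flow adjustment might look as follows in Python. This is a minimal sketch, not the exact implementation: the dictionary field names are illustrative, `x_m` stands for the flow's share of the excess bandwidth (the Xmi term in Program code 3), and taking the magnitude of α in the decrease branch is an assumption about how the text's "decreases by α" is meant.

```python
def adjust_marking(flows, excess_bw, congestion):
    """Sketch of the TMS_II re-marking adjustment (after Program code 3).

    flows: list of dicts with keys
      'R'   reservation rate,
      'Thr' throughput measured at the edge,
      'M'   current marking rate,
      'x_m' flow's share of the excess bandwidth (illustrative name).
    congestion: 'no-congestion', 'mild-congestion', or 'serious-congestion'.
    """
    if congestion == "mild-congestion":
        return flows                    # edge marking already matches the sources
    for f in flows:
        if congestion == "no-congestion":
            target = f["R"] + f["x_m"] * excess_bw
            alpha = abs(1.0 - f["Thr"] / target)   # fairness factor (magnitude)
            if f["Thr"] < target:
                f["M"] += alpha * f["M"]           # claim more bandwidth
            else:
                f["M"] -= alpha * f["M"]           # yield excess to other flows
        else:  # serious congestion: fall back to the reservation rate
            if f["Thr"] < f["R"]:
                beta = 1.0 - f["Thr"] / f["R"]     # complement factor
                f["M"] += beta * f["M"]
    return flows
```

For example, a flow with R = 3, Thr = 2, M = 0.5 and an excess share of 1 (target 4) gets α = 0.5 and its marking rate rises to 0.75 in the no-congestion branch.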

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then fairly allocates bandwidth to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates the problems of inefficient utilization of the reserved bandwidth under per-flow marking and of unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets turn into OUT. These re-marked packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT), which were IN at the sources, are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size. Therefore TMS_I cannot fully alleviate the TCP dynamics over assured service. On the other hand, if re-marked IN packets (OUT-IN), sent as OUT by the sources, are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve only the ownd, which retards the relief of congestion.

For this reason TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked OUT at the edge of the network but IN at the source, and OUT-IN is the counterpart. We assume that the drop precedence the source decides must not change, and the strategy that TMS_II elaborates has to respect this. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT at the sources into IN so as to claim the remaining resources. In this instance the three drop precedences IN, OUT-IN, and OUT flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves its ownd size. Next, TMS_II turns packets that were IN at the sources into OUT if the congestion window has not been halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of these packets by one step, allowing other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT packets by one step.
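The three-level precedence assignment can be summarized in a small lookup table. Only the (mark at source, mark at edge) to DP0/DP1/DP2 mapping and the drop order come from the text; the structure and function names are illustrative.

```python
# Drop precedence after re-marking: the source's own mark is preserved in
# the middle level (DP1) so that core drops can be attributed correctly.
#   (mark at source, mark at edge) -> drop precedence
PRECEDENCE = {
    ("IN",  "IN"):  "DP0",   # green: dropped last
    ("IN",  "OUT"): "DP1",   # yellow: IN-OUT, demoted at the edge
    ("OUT", "IN"):  "DP1",   # yellow: OUT-IN, promoted at the edge
    ("OUT", "OUT"): "DP2",   # red: dropped first
}

def drop_order(packets):
    """Sort (source mark, edge mark) pairs so the first is dropped first."""
    rank = {"DP2": 0, "DP1": 1, "DP0": 2}
    return sorted(packets, key=lambda p: rank[PRECEDENCE[p]])
```

Sorting a mixed batch this way reproduces the OUT, then OUT-IN/IN-OUT, then IN drop order described above.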

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not, and the UDP flows will capture the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. The TCP flows will also be penalised more than the non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile and thus put at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. At the core of the network, the RIO queue always preferentially drops UDP packets marked as OUT (red, DP2), but lowers the drop precedence of TCP packets by as much as the scaling factors allow. Therefore it tries to guarantee the target rates of the TCP flows by allocating to them the excess bandwidth that the UDP flows had almost dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous section. The simulation was done with the network simulator 2 (ns-2.1b8a). For the simulation we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3Mbps, or kept below the capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters of the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
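The per-precedence behaviour implied by Table 4 can be sketched with the standard RED linear drop ramp on the average queue size. This is an assumption about the queue model (a plain RED ramp with forced drop above max_th), not ns-2 code; the names are illustrative.

```python
# RED parameters per drop precedence, taken from Table 4
# (thresholds in packets, Pmax a probability).
RED = {
    "DP0": dict(min_th=100, max_th=400, p_max=0.02),
    "DP1": dict(min_th=100, max_th=400, p_max=0.05),
    "DP2": dict(min_th=50,  max_th=400, p_max=0.10),
}

def drop_probability(dp, avg_q):
    """Standard RED linear ramp for precedence `dp` at average queue `avg_q`:
    0 below min_th, Pmax at max_th, forced drop above max_th."""
    p = RED[dp]
    if avg_q < p["min_th"]:
        return 0.0
    if avg_q >= p["max_th"]:
        return 1.0
    return p["p_max"] * (avg_q - p["min_th"]) / (p["max_th"] - p["min_th"])
```

With these settings DP2 packets start being dropped at an average queue of only 50 packets and with a five times larger Pmax than DP0, which is what makes OUT-marked (red) traffic the first casualty of congestion.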

First of all, we investigate the simulation results of the Adequate-marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 to S10-D20, each with a target rate of 3Mbps, and the capacity of the bottleneck link set to 20Mbps. Five flows, S1 through S5, start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s); the last three stop at t = 350 (s). First, the interval from t = 10 (s) to t = 110 (s) represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5Mbps; marking is performed from the beginning, but only as temporary marking for sharing the excess bandwidth. Second, in the run time from t = 120 (s) to 210 (s), with soft congestion, seven flows try to achieve the bandwidth of 3Mbps; marking in this interval is practically carried out so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220 (s) to t = 350 (s); all the flows do their best to meet their target rates but fail, and the marking rate in this period is more excessive than in the other intervals. Last, in the interval from t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure: marking rate (Mbps) of flow1-flow10 vs. simulation time (s)]

a) Time Sliding Window Two Color Marker

[Figure: marking rate (Mbps) of flow1-flow10 vs. simulation time (s)]

b) Two Markers System

Fig 15 Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic features through governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind the changes in the network load: slowly rising in response to an increased traffic load and slowly falling in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to an increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 3.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, i.e. an under-subscribed network. The experiment was performed in the run time from t = 0 (s) to 500 (s). Figure 17 presents how well the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without adequate marking rates for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Figure: total throughput of S1 and S2 and marking rate of S1 (Mbps) vs. simulation time (s)]

a) Time Sliding Window Two Color Marker

[Figure: total throughput of S1 and S2 and marking rate of S1 (Mbps) vs. simulation time (s)]

b) Two Markers System

Fig 16 Packet marking through scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis signifies the simulation time.


[Figure: achieved throughput (Mbps) of flow1-flow10 for aggregate flows 1-6]

a) Time Sliding Window Two Color Marker

[Figure: achieved throughput (Mbps) of flow1-flow10 for aggregate flows 1-6]

b) Two Markers System

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis signifies the aggregate flow number.

The Two Markers System showed improved performance for the several parameters we used in the simulation. However, the implementation of a TCP-source marking module at the source raises the issue of modifying TCP sources in a differentiated services network.

Table 5 The average throughput of aggregate flows as RTT increases

Aggregate flow (No)        1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT of each flow, without queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy by increasing the TCP flows from sources 1 to 4, each with a target rate of 1Mbps, while each UDP flow is constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (ΣTi / C) × 100 (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link, respectively. The interval of ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100 (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
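Expressions (9) and (10) amount to two one-line computations; the sketch below uses illustrative numbers, not values from the simulation.

```python
def link_utilization(target_rates, capacity):
    """Equation (9): sum of target rates over bottleneck capacity, in %."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(achieved, target):
    """Equation (10): bandwidth beyond the target rate, as % of the target.
    Negative values mean the flow fell short of its target."""
    return 100.0 * (achieved - target) / target
```

For instance, six 1Mbps targets over a 3Mbps bottleneck give ρ = 200%, the right-hand end of the under-provisioned interval in Figure 19.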


[Figure: excess bandwidth (%) of TCP (average) and UDP (average) vs. link utilization (%)]

a) Time Sliding Window Two Color Marker

[Figure: excess bandwidth (%) of TCP (average) and UDP (average) vs. link utilization (%)]

b) Two Markers system

Fig 19 Sharing excess bandwidth according to the link utilization: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis signifies the link utilisation as a percentage.

However, since UDP sources push traffic into the network without any penalty, they meet their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; therefore the flow is marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside their contracts, when congestion occurs, rather than the OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisations of ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results concerning the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows how well the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1, and the average throughputs of all the flows in both schemes are similar to each other. First, we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, thanks to the different marking strategy applied against the UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that the bandwidth in the under-provisioned network is distributed more fairly (about 0.5Mbps per flow) than in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, the UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure: throughput (Mbps) of TCP1-TCP4 and UDP1-UDP2 in over-provisioned and under-provisioned networks]

a) Time Sliding Window Two Color Marker

[Figure: throughput (Mbps) of TCP1-TCP4 and UDP1-UDP2 in over-provisioned and under-provisioned networks]

b) Two Markers System

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved throughput and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under variable SLAs. Two important issues have risen to prominence in this setting, and much research has been performed on them to date: one is TCP dynamics over the AF PHB in a diffserv network, the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows due to the overeating of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow with respect to the target rates. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to the changes in the network, so it marks packets in proportion to the reservation rates the users desire to guarantee. It also notifies TMS_II, the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy, if resources in the diffserv network are over-provisioned. Consequently it mitigates the problems of unfairness under per-aggregation marking and of inefficient utilisation under per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduces TCP impacts over assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP had almost dominated and achieve its target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, 35-48.


situation frequently adjusts its sending rate That is if the loss of IN-marked packets occurs frequently the congestion window size will be maintained low Each source in the end doesnrsquot achieve its target rate On the other hand because the TMS_I recognize packets loss and determine whether the lost packet was sent as a marked or an unmarked packet it can properly regulate marking rate according to the condition of a network Thus the probability that IN-marked packets are excessively dropped at the core of the network is reduced and congestion window will not be often decreased unlike the established edge marker

Fig 12 Program code 2 TMS_I algorithm (cwnd close)

When the third duplicate ACK in a row is

iwnd = Markp cwnd

ownd = cwnd - iwnd

if(out-marked packet loss)

ossthresh = ewnd2

ownd = ossthresh

else

cwnd = cwnd2

issthresh = Markp cwnd

ossthresh = cwnd - issthresh

iwnd IN-profile window

ownd Out-of-profile window

cwnd congestion window

Markp the probability of marking

Thr throughput of a flow

Ri Reservation rate of ith flow

Ti target rate of ith flow

issthresh threshold of iwnd

ossthresh threshold of ownd

34

The code in Figure 12 illustrates the algorithm that reduces cwnd size when occurring the congestion After the third duplicate ACK in a row is received Out-of-profile window (ownd) reduces the half of its window size whenever OUT packets loss occurs It indicates the temporary congestion of a network On the other hand the loss of IN-marked packets indicates acute congestion in the network and then congestion window reduces the half of its window size Therefore the modified TCP congestion algorithm compensates the loss rates that are caused by the conventional congestion algorithm without the distinctive mechanism for reducing congestion window

The goals of the TMS_I provide modified congestion control that alleviates TCP-dynamics over AF services and inform the TMS_II of the probability of marking (MarkP) that is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth recall the numerical expression (7) We deal with usage of the datum point of the edge-embedded marking module in subsection 24 Consequently the strategy providing the datum point included the information of user is able to alleviate the problem of introducing unfairness within aggregated flows on the per-aggregation marking

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy, based on the marking rates of the sources, in order to share the bottleneck bandwidth fairly. In this case it closely cooperates with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates, which are used in the fairness component as stated in subsection 2.2. Notice that fairness is defined by how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to have guaranteed from the sources.

We assume that an edge router receives information on the reservation rate that each user wants to be assured of, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets a target rate different from the source's; recall the numerical expression (4). As stated in subsection 2.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking at the sources if the network is over-provisioned; it is equal to the reservation rate of the source otherwise. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, i.e. the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and the congestion situation into consideration.
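As a rough illustration of how an edge marker could derive such target rates, the sketch below shares the excess bandwidth E = C - sum(Ri) in proportion to each source's marking rate. Treating Xmi as a flow's share of the total marking is an assumption made here for concreteness, since expression (7) is defined earlier in the thesis.

```python
def target_rates(mark_rates, reserves, capacity):
    """Share the excess bandwidth E = C - sum(Ri) in proportion to
    each source's marking rate (illustrative, not the thesis code)."""
    excess = capacity - sum(reserves)
    total = sum(mark_rates)
    return [r + excess * m / total for m, r in zip(mark_rates, reserves)]

# A flow marking three times as much as another claims three times
# as large a share of the excess bandwidth.
```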

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to detect a congestion situation, the other is to ensure coherence with the Adequate-marking strategy. In fact, congestion is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of Program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding that there is no congestion, it measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy. If the throughput does not reach the target rate, the marker increases the marking rate by α, called the fairness factor, computed as

    α = 1 - Thr_i / Ti

Otherwise it decreases the marking rate by α, to give other flows the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)
    for i = 1 to n
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)
    exit
else    /* iwnd(N) == iwnd(N-1)/2 */
    for i = 1 to n
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi         marking rate of the ith flow
Ti         target rate of the ith flow
Ri         reservation rate of the ith flow
E          excess bandwidth
ownd(N)    newly measured OUT window
ownd(N-1)  OUT window measured at time N-1
iwnd(N)    newly measured IN window
iwnd(N-1)  IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else-if clause in the eighth and ninth lines represents the case in which the edge's marking rate is equal to the marking probability of the sources (the marking information), so no re-marking process is required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.

The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to attain its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput does not realise the reservation rate of the user.

In that instance the complement factor is computed as

    β = 1 - Thr_i / Ri

Otherwise, no re-marking operation is performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
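The three branches of Program code 3 can be condensed into one small Python function. The exact forms of the factors are taken here as α = 1 - Thr_i/Ti and β = 1 - Thr_i/Ri, which is a reconstruction from the surrounding text; the function is an illustrative sketch, not the thesis implementation.

```python
def tms2_remark(ownd_n, ownd_prev, iwnd_n, iwnd_prev,
                thr, reserve, excess_share):
    """Condensed sketch of the Program code 3 decision.

    Returns (target rate, marking-rate multiplier).  ownd/iwnd are the
    OUT/IN windows measured now (N) and one interval earlier (N-1);
    excess_share stands in for Xmi*E of the thesis.
    """
    if ownd_n != ownd_prev / 2.0:
        # No congestion: the target includes a share of the excess.
        target = reserve + excess_share
        alpha = abs(1.0 - thr / target)     # fairness factor (assumed form)
        return target, (1.0 + alpha) if thr < target else (1.0 - alpha)
    if iwnd_n != iwnd_prev / 2.0:
        # Congestion, but not acute: edge marking already matches the
        # sources, so no re-marking is performed.
        return reserve, 1.0
    # Acute congestion: fall back to the reservation rate.
    if thr < reserve:
        beta = 1.0 - thr / reserve          # complement factor (assumed form)
        return reserve, 1.0 + beta
    return reserve, 1.0
```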

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. These re-marked packets are treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size; thus TMS_I fails to alleviate the TCP dynamics over the assured service. On the other hand, if re-marked IN packets (OUT-IN) sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will only halve ownd, which retards the removal of the congestion.

For this reason TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked OUT at the edge of the network but IN at the source, and an OUT-IN packet is its counterpart. We assume that the drop precedence the source decides must not change, and the strategy that TMS_II elaborates has to respect this. The order of dropping packets is: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not attain the target rate, TMS_II re-marks packets that were OUT-marked at the sources as IN, so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves the ownd size. Next, it turns packets that were IN-marked at the sources into OUT if the congestion window has not been halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT, and OUT. TMS_II thereby lowers the drop precedence of OUT by one step, allowing other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets as IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way it tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT one step.
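The mapping from the two marking decisions onto the three drop precedences can be written down directly; a minimal sketch (the marks are strings here purely for clarity):

```python
def drop_precedence(source_mark, edge_mark):
    """Map the source and edge marking decisions onto the three drop
    precedences used by TMS (illustrative sketch)."""
    if source_mark == "IN" and edge_mark == "IN":
        return "DP0"            # green: dropped last
    if source_mark != edge_mark:
        return "DP1"            # yellow: IN-OUT or OUT-IN
    return "DP2"                # red: OUT at both, dropped first
```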

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class in an over-provisioned network, the TCP flows will reduce their rates on packet drops while the UDP flows will not change and will capture the entire excess bandwidth, as shown in [8], [9], [11], [14]. The TCP flows will also be penalised more than the non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be attained.

UDP flows in the TMS are treated as out-of-profile, putting them at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always drops UDP packets, marked OUT (red, DP2), preferentially, but lowers the drop precedence of TCP packets relative to them by as much as the scaling factors allow. It therefore tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows previously dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present simulation results for the two marking algorithms described in the previous chapter. The simulations were done with the network simulator ns-2 (version 2.1b8a), using a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed and the aggregate reservation rate is adjusted by changing the number of flows, each with a target rate of 3 Mbps or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters for the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
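These parameters could drive a standard linear RED drop curve per precedence. The sketch below is a simplified model (it ignores RIO's separate queue averaging for IN and total traffic), with the Table 4 values as data:

```python
def red_drop_prob(avg_q, min_th, max_th, p_max):
    """Linear RED drop probability for one drop precedence (sketch)."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return p_max * (avg_q - min_th) / (max_th - min_th)

# Parameters from Table 4: (min_th, max_th, Pmax) per precedence.
RIO_PARAMS = {
    "DP0": (100, 400, 0.02),   # IN
    "DP1": (100, 400, 0.05),   # IN-OUT / OUT-IN
    "DP2": (50,  400, 0.10),   # OUT
}

# At the same average queue length, DP2 is dropped more aggressively
# than DP1, which in turn is dropped more aggressively than DP0.
```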

First of all, we investigate the simulation results of the Adequate-marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps, with the capacity of the bottleneck link set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s; the last three stop at t = 350 s. First, the interval from t = 10 s through t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps; marking in this phase is only temporary, for sharing the excess bandwidth. Second, from t = 120 s to 210 s, under soft congestion, seven flows try to achieve the bandwidth of 3 Mbps; marking in this interval is carried out chiefly to realise the reservation rate of each user. Next, grave congestion appears from t = 220 s to t = 350 s; all the flows do their best to attain their target rates but fail, and the marking rate in this period is higher than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to the congestion remaining after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark their packets in accordance with the changing condition of the network, those in Figure 15 (a) do not.

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig. 15. Adequate-marking algorithm: the vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate and lags behind changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of resources by the amount of S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; but packet marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, namely an under-subscribed network. The experiment ran from t = 0 s to 500 s. Figure 17 represents the results: whether the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking whenever the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the probability of marking at the sources, all the flows are able to adapt to network dynamics.

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig. 16. Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.


a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig. 17. Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flow number.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, the implementation of a TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5. The average throughput of aggregate flows as RTT increases

Aggregate flow (No.)      1        2        3        4        5        6
RTT (ms)                  80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)   4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)      5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous chapter. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1 Mbps. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig. 18. Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increase the number of TCP flows, each with a target rate of 1 Mbps, from source 1 to 4, while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take account of network dynamics. We use the link utilisation (ρ) as follows [16]:

    ρ (%) = (Σi Ti / C) × 100                                   (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 % through 200 %, as shown in Figure 19, represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

    Excess BW (%) = (ExBW / T) × 100                            (10)

Therefore an excess bandwidth of 0 % denotes exactly the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.
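Expressions (9) and (10) are straightforward to compute; a small sketch (taking ExBW = throughput - target, which is how the figures read but is stated here as an assumption):

```python
def link_utilisation(targets, capacity):
    """Expression (9): aggregate target rate as a percentage of C."""
    return 100.0 * sum(targets) / capacity

def excess_bw_percent(throughput, target):
    """Expression (10): achieved excess bandwidth as a percentage of
    the flow's target rate; 0 % means the flow gets exactly its target."""
    return 100.0 * (throughput - target) / target

# Six 1 Mbps targets on an 8 Mbps bottleneck give rho = 75 %,
# i.e. the over-provisioned scenario of this section.
```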


a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig. 19. Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation as a percentage.

However, since UDP sources inject traffic into the network without any penalty, they attain their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation UDP flows experience is a diminution of the excess bandwidth they receive. In the Two Markers System we assume the following: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contracts, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows at link utilisations of ρ = 60 % and over: both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results of the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows whether the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The setting of the RTT parameters follows subsection 4.1, and the average throughputs of all the flows in both schemes are similar to each other. First we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. In Figure 20 (b), on the other hand, the TCP flows fairly share the excess bandwidth of about 0.3 Mbps, since a different marking strategy is elaborated against the UDP flows; all the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their probability of marking. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that bandwidth in the under-provisioned network is distributed more fairly (about 0.5 Mbps per flow) than in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

a) Time Sliding Window Two Color Marker

b) Two Markers System

Fig. 20. Throughput according to network conditions: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under variable SLAs. Two important issues have arisen in this area, and much research has been devoted to them. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows within the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows due to the overeating of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and to allocate the bandwidth of a bottleneck link fairly to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the probability of marking according to the changes in the network, marking packets in proportion to the reservation rates the sources desire to have guaranteed. It also notified TMS_II, the edge-embedded marking module, of the marking rates to be used for marking the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and, when resources in the diffserv network were over-provisioned, allocated the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy. Consequently it mitigated the problem of unfairness under per-aggregation marking and of inefficient utilisation under per-flow marking. We simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP previously dominated and achieve its target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under various RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117.

[2] W. Feng, D. Kandlur, S. Saha, and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386.

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June (1999).

[5] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. (1998).

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. (1998).

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376.

[9] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999).

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May (1998).

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998).

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449.

[14] N. Seddigh, B. Nandy, and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999).

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May (2001) 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999).

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. (1998).

[19] M. Vanhala, "Differentiated Services architecture," S-38.128 Teletekniikan erikoistyö (1999).

[20] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998).

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990).

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. (1997).

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. (1996).

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999).

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. (1994) 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. (1994) 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," ACM/IEEE Transactions on Networking, vol. 1, no. 4, Aug. (1993) 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48.


The code in Figure 12 illustrates the algorithm that reduces the cwnd size when congestion occurs. After the third duplicate ACK in a row is received, the out-of-profile window (ownd) is halved whenever an OUT packet loss occurs, since such a loss indicates only temporary congestion in the network. The loss of an IN-marked packet, on the other hand, indicates acute congestion in the network, and then the whole congestion window is halved. The modified TCP congestion algorithm therefore compensates for the loss rates caused by the conventional congestion algorithm, which lacks a distinctive mechanism for reducing the congestion window.
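The window-update rule above can be sketched as follows. This is a minimal sketch, not the thesis's NS-2 code; it assumes the congestion window is maintained as the sum of an in-profile window (iwnd) and an out-of-profile window (ownd), with the names chosen for illustration.

```python
def on_triple_dup_ack(state, lost_in_profile):
    """Modified fast-recovery step (sketch): cwnd = iwnd + ownd.

    An OUT (out-of-profile) loss signals only temporary congestion,
    so just ownd is halved; an IN (in-profile) loss signals acute
    congestion, so the whole congestion window is halved.
    """
    if lost_in_profile:
        # acute congestion: halve both components, i.e. halve cwnd
        state["iwnd"] /= 2.0
        state["ownd"] /= 2.0
    else:
        # temporary congestion: halve only the out-of-profile window
        state["ownd"] /= 2.0
    state["cwnd"] = state["iwnd"] + state["ownd"]
    return state
```

An OUT loss thus costs the sender only its opportunistic share, while an IN loss triggers the conventional multiplicative decrease.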

The goals of TMS_I are to provide a modified congestion control that alleviates TCP dynamics over AF services and to inform TMS_II of the probability of marking (MarkP), which is needed to determine a new datum point (Xmi) for fairly sharing the bottleneck bandwidth; recall the numerical expression (7). We deal with the usage of the datum point in the edge-embedded marking module in subsection 2.4. Consequently, the strategy of providing a datum point that carries per-user information is able to alleviate the unfairness introduced within aggregated flows by per-aggregation marking.

3.4 Marking rate-based fairness strategy: Two Markers System-II algorithm

The edge-embedded marking module (TMS_II) elaborates a new marking strategy based on the marking rates of the sources in order to share the bottleneck bandwidth fairly. To do so it must cooperate closely with the TCP-source marking module to accomplish the fairness strategy, called the marking rate-based fairness strategy. The meter of TMS_II measures the probability of marking and computes the expected value of the marking rates used in the fairness component, as stated in subsection 2.2. Notice that fairness is defined as how well the throughputs of all individual flows of the aggregated traffic realise their target rates (Ti). That is, TMS_II develops proportional fairness based on the number of IN-marked packets, which represents the target rate each flow wants to guarantee from the sources.

We assume that an edge router receives information on the reservation rate that each user wants to be assured of, and contracts with the service provider (or bandwidth broker) for the bottleneck link bandwidth of the network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source; recall the numerical expression (4). As stated in subsection 2.2, the target rate of each flow at the edge of the network includes excess bandwidth in proportion to the probability of marking from the sources if the network is over-provisioned; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, that is, the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. It is therefore marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.
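The edge's target-rate rule can be illustrated with a small helper. The function name and the proportional share m_i / Σm used for the datum point Xmi are assumptions made for this sketch; the exact expression (4) is not reproduced in this chunk.

```python
def target_rates(R, m, C):
    """Edge target rates (sketch).

    R: per-flow reservation rates, m: marking probabilities reported
    by the sources, C: bottleneck capacity.  In the over-provisioned
    case each flow's target is its reservation plus a share of the
    excess bandwidth E = C - sum(R), proportional to its marking
    probability; otherwise the target falls back to the reservation.
    """
    E = C - sum(R)
    if E <= 0:                       # under-provisioned: T_i = R_i
        return list(R)
    total_m = sum(m)
    return [R_i + (m_i / total_m) * E for R_i, m_i in zip(R, m)]
```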

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other is to ensure the coherence of the Adequate-marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the iwnd size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion, but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, congestion has not occurred in the network. The if clause in the first through seventh lines of Program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding about congestion, TMS_II measures whether the throughput of a flow i (Thr_i) realises the target rate (Ti), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor, computed as α = 1 - Thr_i/T_i. Otherwise it decreases the marking rate by α to provide other flows with the chance of sharing the excess bandwidth.
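A hypothetical one-step update using the fairness factor might look like the following sketch; the function name and the use of the absolute value to cover both the undershoot and overshoot cases are illustrative choices, not the thesis's code.

```python
def fairness_step(M_i, Thr_i, T_i):
    """One marking-rate update using the fairness factor (sketch)."""
    alpha = abs(1.0 - Thr_i / T_i)   # fairness factor
    if Thr_i < T_i:
        return M_i + alpha * M_i     # undershooting: mark more packets IN
    return M_i - alpha * M_i         # overshooting: release excess bandwidth
```

The further a flow is from its target, the larger the correction applied to its marking rate.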

Fig 13 Program code 3: the TMS_II algorithm

    if (ownd(N) != ownd(N-1)/2)
        for i = 1 to n
            Ti = Ri + (Xmi * E)
            if (Thr_i < Ti)
                Mi += α * Mi
            else
                Mi -= α * Mi
    else if (iwnd(N) != iwnd(N-1)/2)
        exit
    else    // iwnd(N) == iwnd(N-1)/2
        for i = 1 to n
            Ti = Ri
            if (Thr_i < Ti)
                Mi += β * Mi
            else
                exit

Mi: marking rate of the ith flow
Ti: target rate of the ith flow
Ri: reservation rate of the ith flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at epoch N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at epoch N-1
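For readers who want to experiment, Program code 3 can be transliterated into runnable Python roughly as follows. The per-flow dictionary layout (keys M, R, Xm, Thr) is an assumption made for this sketch.

```python
def tms2_remark(ownd, ownd_prev, iwnd, iwnd_prev, flows, E):
    """Runnable sketch of Program code 3 (TMS_II algorithm).

    Each flow is a dict with keys M (marking rate), R (reservation
    rate), Xm (datum point / share) and Thr (measured throughput).
    E is the excess bandwidth of the bottleneck link.
    """
    if ownd != ownd_prev / 2:            # no congestion observed
        for f in flows:
            T = f["R"] + f["Xm"] * E
            alpha = abs(1.0 - f["Thr"] / T)
            f["M"] += alpha * f["M"] if f["Thr"] < T else -alpha * f["M"]
    elif iwnd != iwnd_prev / 2:          # mild congestion: marking already adequate
        pass
    else:                                # acute congestion: fall back to T_i = R_i
        for f in flows:
            if f["Thr"] < f["R"]:
                beta = 1.0 - f["Thr"] / f["R"]   # complement factor
                f["M"] += beta * f["M"]
    return flows
```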


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion happens in the network but is not serious. The else-if clause in the eighth and ninth lines represents the instance in which the edge's marking rate is equal to the marking probability of the sources (or marking information), and so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute. The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to consummate its target rate owing to the repeated reduction of sending rates through the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to complement the loss rate, and increases the marking rate by β, the so-called complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor is β = 1 - Thr_i/R_i. Otherwise the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then fairly allocates bandwidth to flows based on the marking rate of each source. We can therefore say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. The re-marked packets are treated by the RIO queue at the core of the network unlike what the sources anticipate. If, for example, the re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core at a time of temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size; TMS_I then cannot alleviate the TCP dynamics over the assured service well. On the other hand, if the re-marked IN packets (OUT-IN) that were sent as OUT by the sources are removed due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve only the ownd, which retards the removal of the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1) and OUT (DP2). An IN-OUT packet is one marked as OUT at the edge of the network but as IN at the source, and an OUT-IN packet is its counterpart. We assume that the drop precedence the source decides must not change, and that the strategy TMS_II elaborates has to be deliberated. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT and (iii) IN; in terms of colors these correspond to red, yellow and green respectively. The occasion of re-marking can be divided into three situations. First, if the congestion window is not halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets marked OUT at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source halves its ownd size. Next, TMS_II turns packets marked IN at the sources into OUT if the congestion window is not halved and the flow already accomplishes its target rate, so the drop precedences consist of IN, IN-OUT and OUT. TMS_II lowers the drop precedence of OUT by one step, and so it allows other flows to share the excess bandwidth; RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window equals half of its previous size and a flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN and OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. TMS_II thereby tries to compensate for the loss rate occurring at the time of congestion by lowering the drop precedence of OUT by one step.
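The three-precedence mapping described above can be written as a small lookup. This is a sketch; the string labels are illustrative, and the rule simply gives the intermediate precedence to any packet whose edge re-mark disagrees with its source mark.

```python
def drop_precedence(source_mark, edge_mark):
    """Map (source mark, edge re-mark) to a drop precedence (sketch).

    DP0 (green) is dropped last, DP2 (red) first; IN-OUT and OUT-IN
    packets get the intermediate precedence DP1 (yellow).
    """
    if source_mark == "IN" and edge_mark == "IN":
        return "DP0"
    if source_mark == "OUT" and edge_mark == "OUT":
        return "DP2"
    return "DP1"
```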

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows over the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, TCP flows will reduce their rates on packet drops while UDP flows will not change and will capture the entire excess bandwidth, as shown in [8], [9], [11] and [14], in an over-provisioned network. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it is then impossible for all target rates to be met.

UDP flows in the TMS are dealt with as out-of-profile so as to be placed at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP packets marked as OUT (red, DP2) but lowers the drop precedence of TCP packets by as much as the scaling factor allows. It thereby tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows had almost dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms we have described in the previous section. The simulation was done with the network simulator 2 (ns-2.1b8a). For the sake of simulation we used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. The sources are all TCP Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM the routers use RED with the values of 100 packets, 400 packets and 0.02 for min_in, max_in and Pmax_in, and 50 packets, 400 packets and 0.1 for min_out, max_out and Pmax_out. The RED parameters of the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4 RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
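Under the usual linear RED drop curve, the Table 4 settings imply per-precedence drop probabilities along the following lines. This is a sketch: the thesis's RIO variant may maintain separate queue averages per marking class, which is not modelled here.

```python
# Table 4 settings: (min_th, max_th, Pmax) per drop precedence
RED_PARAMS = {
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50, 400, 0.10),
}

def drop_probability(avg_q, dp):
    """Linear RED drop curve for one precedence level (sketch)."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_q < min_th:
        return 0.0          # below min_th: never drop
    if avg_q >= max_th:
        return 1.0          # above max_th: always drop
    # linear ramp from 0 to Pmax between the two thresholds
    return p_max * (avg_q - min_th) / (max_th - min_th)
```

Because DP2 starts dropping at an average queue of 50 packets with the steepest Pmax, OUT packets are always the first casualties of congestion.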

First of all, we investigate the simulation results of the Adequate-marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s, which stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps; marking is initially performed, but only as temporary marking for sharing the excess bandwidth. Second, in the run time from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps, and marking in this interval is practically carried out so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220 s to t = 350 s; all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which exhibits somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.

While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Plot: marking rate (Mbps), 0 to 6, versus simulation time (s), 0 to 400; curves flow1-flow10]

a) Time Sliding Window Two Color Marker

[Plot: marking rate (Mbps), 0 to 6, versus simulation time (s), 0 to 400; curves flow1-flow10]

b) Two Markers System

Fig 15 Adequate-marking algorithm: the vertical axis represents the marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic features through governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate behind the changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to an increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; packet-marking is, however, quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results of the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, namely an under-subscribed network. The experiment was performed in run time from t = 0 s to 500 s. Figure 17 represents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking when the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the probability of marking from the sources, all the flows are able to adapt to network dynamics.

[Plot: bandwidth (Mbps), 0 to 8, versus simulation time (s), 0 to 400; curves: S1 (total throughput), S2 (total throughput), S1 (marking rate)]

a) Time Sliding Window Two Color Marker

[Plot: bandwidth (Mbps), 0 to 8, versus simulation time (s), 0 to 400; curves: S1 (total throughput), S2 (total throughput), S1 (marking rate)]

b) Two Markers System

Fig 16 Marking packets through the scaling factor: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.


[Plot: bandwidth (Mbps), 0 to 7, per aggregate flow 1-6; bars for flow1-flow10]

a) Time Sliding Window Two Color Marker

[Plot: bandwidth (Mbps), 0 to 7, per aggregate flow 1-6; bars for flow1-flow10]

b) Two Markers System

Fig 17 Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flow.

The Two Markers System showed improved performance for the several parameters we used in the simulation. However, an implementation of the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5 The average throughput of the aggregate flows as RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the sake of simulation we used a network with the configuration shown in Figure 18. In this experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for nodes 7 through 10; constant-rate UDP packet streams originate at sources 5 and 6 and are destined for nodes 11 and 12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1 Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP Reno sources (unless specified otherwise). For the RIO implementation we make use of the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For the sake of this experiment we increase the number of TCP flows with a target rate of 1 Mbps from source 1 to 4 while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σ Ti / C) × 100    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 % to 200 %, as shown in Figure 19, represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100    (10)

Therefore an excess bandwidth of 0 % denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.
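Expressions (9) and (10) can be evaluated with two small helpers; taking ExBW = Thr − T is an assumption made for this sketch.

```python
def link_utilisation(targets, C):
    """Expression (9): rho (%) = 100 * sum(T_i) / C."""
    return 100.0 * sum(targets) / C

def excess_bw_percent(Thr, T):
    """Expression (10): excess bandwidth as a percentage of the target
    rate, with ExBW taken as Thr - T (assumption).  A value of 0 %
    means the flow exactly meets its target rate."""
    return 100.0 * (Thr - T) / T
```

For instance, six 1 Mbps targets on an 8 Mbps bottleneck give ρ = 75 %, the over-provisioned case of the first scenario.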

[Plot: excess bandwidth (%), -100 to 200, versus link utilisation (%), 30 to 190; curves: TCP (average), UDP (average)]

a) Time Sliding Window Two Color Marker

[Plot: excess bandwidth (%), -100 to 200, versus link utilisation (%), 30 to 190; curves: TCP (average), UDP (average)]

b) Two Markers System

Fig 19 Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis the link utilisation as a percentage.

However, since the UDP sources push traffic into the network without any penalty, they complete their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume the following: (i) UDP flows in TMS are only dealt with OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is percentage-wise almost equally divided between TCP and UDP flows at link utilisation ρ = 60 % and over. Both traffic types achieve their target rates in the over-provisioned network and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results of the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic against their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1, and the average throughputs of all the flows in both schemes are similar to each other. First, we take into consideration the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all of the excess bandwidth, about 0.5 Mbps. In Figure 20 (b), on the other hand, the TCP flows fairly share the excess bandwidth of about 0.3 Mbps, as TMS elaborates a different marking strategy against the UDP flows; all the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their probability of marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that the bandwidth in the under-provisioned network is distributed more fairly (about 0.5 Mbps each) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Plot: bandwidth (Mbps), 0 to 2, per aggregate flow 1-4; bars: TCP1-TCP4 and UDP1-UDP2, over- and under-provisioned]

a) Time Sliding Window Two Color Marker

[Plot: bandwidth (Mbps), 0 to 2, per aggregate flow 1-4; bars: TCP1-TCP4 and UDP1-UDP2, over- and under-provisioned]

b) Two Markers System

Fig 20 Throughput according to network conditions: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown what the Two Markers System can provide under a variable SLA scope. Two important issues have arisen in this area, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise a user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows due to the overeating of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the probability of marking according to the changes of the network, so it marked packets by as much as the reservation rate each user desires to guarantee. It also notified TMS_II, the edge-embedded marking module, of the marking rates that are used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and allocated the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy, when resources in the diffserv network are over-provisioned. Consequently, it mitigated both the unfairness under per-aggregation marking and the inefficient utilisation under per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, the TCP flows could fairly share the excess bandwidth that UDP had almost dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of its marking information, and the effect of the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.


[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proceedings of ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.



broker) for the bottleneck link bandwidth of a network. We also consider two network conditions: an over-provisioned network and an under-provisioned network. In this mechanism the edge router sets the target rate differently from the source. Recall numerical expression (4). As stated in subsection 2.2, if the condition of the network is over-provisioned, the target rate of each flow at the edge of a network includes excess bandwidth in proportion to the probability of marking at the sources; otherwise it is equal to the reservation rate of the source. Since the magnitude of the marking rate at the sources represents an increase or decrease in the demand for bandwidth, the number of IN-marked packets is used as the standard of the fairness strategy elaborated for sharing the excess bandwidth. If, for example, the number of IN-marked packets incoming from a source exceeds the threshold value, that is, the average marking rate E[mi], it means that the flow wants more bandwidth than the others in order to achieve its reservation rate. Therefore it is marked more as IN and has a higher target rate than the others. Program code 3 (Figure 13) illustrates the TMS_II algorithm, which takes the marking rate-based fairness policy and congestion situations into consideration.
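The edge target-rate computation described above can be sketched as follows. This is a simplified reading of expression (4), not the thesis implementation; the function and variable names are ours, and the proportional split of the excess bandwidth E by marking-rate share is the assumption being illustrated.

```python
def target_rates(R, m, C):
    """Edge target rates under the marking rate-based fairness strategy
    (illustrative sketch; names are ours, not the thesis code)."""
    excess = C - sum(R)                  # E: excess bandwidth of the bottleneck
    if excess <= 0:                      # under-provisioned: nothing to share
        return list(R)                   # target rate falls back to reservation
    total_m = sum(m)
    # each flow's share of E is proportional to its reported marking rate
    return [r + (mi / total_m) * excess for r, mi in zip(R, m)]

# three flows reserving 3 Mbps each on a 12 Mbps link: the 3 Mbps of excess
# is split 6:3:1 according to the marking rates the sources reported
print(target_rates([3.0, 3.0, 3.0], [0.6, 0.3, 0.1], 12.0))
```

A flow that marks more aggressively thus receives a proportionally larger target at the edge, which is the proportional-fairness behaviour the text describes.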

First of all, TMS_II observes the total throughput of the aggregate flows and compares it with the bottleneck link bandwidth (C) of the network. This comparison has two objectives: one is to consider a congestion situation, the other is to ensure the coherence of the Adequate-marking strategy. In fact, a congestion situation is judged by comparing the iwnd size (or ownd size) at time N with the size at time N-1. We consider three situations according to the condition of the network: (i) no congestion, (ii) congestion but not acute, and (iii) serious congestion.

First, if the ownd size at time N is not equal to half of the ownd size at time N-1, this represents that congestion does not occur in the network. The if clause in the first through seventh lines of program code 3 describes this case, and it shows the mechanism by which TMS_II fairly allocates the excess bandwidth of the bottleneck link to TCP flows. After deciding on congestion, it again measures whether the throughput of flow i (Thr_i) realises the target rate (T_i), viz. the crux of the fairness strategy, or not. If the throughput does not reach the target rate, the marker increases the marking rate by α, called a fairness factor. In this case the fairness factor is computed as α = 1 − Thr_i / T_i. Otherwise, it decreases the marking rate by α to provide the other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: the TMS_II algorithm

if (ownd(N) != ownd(N-1)/2)          /* no congestion */
    for i = 1 to n
        Ti = Ri + (Xmi * E)
        if (Thr_i < Ti)
            Mi += α * Mi
        else
            Mi -= α * Mi
else if (iwnd(N) != iwnd(N-1)/2)     /* congestion, but not acute */
    exit
else (iwnd(N) == iwnd(N-1)/2)        /* serious congestion */
    for i = 1 to n
        Ti = Ri
        if (Thr_i < Ti)
            Mi += β * Mi
        else
            exit

Mi: marking rate of the i-th flow
Ti: target rate of the i-th flow
Ri: reservation rate of the i-th flow
E: excess bandwidth
ownd(N): newly measured OUT window
ownd(N-1): OUT window measured at time N-1
iwnd(N): newly measured IN window
iwnd(N-1): IN window measured at time N-1


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 and the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion happens in the network but is not serious. The else if clause in the eighth and ninth lines represents the instance in which the edge's marking rate is equal to the marking probability of the sources (or marking information), so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, this means that IN-marked packets are being dropped in the core of the network and congestion is acute. The marking mechanism for this congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to achieve its target rate owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and increases the marking rate by β, so called a complement factor, if the throughput does not realise the reservation rate of the user. In this instance the complement factor is given by β = 1 − Thr_i / R_i. Otherwise, the re-marking operation is not performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and it then allocates bandwidth fairly to flows based on the marking rate of each source. Therefore we can say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
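The update of Program code 3, together with the fairness factor α and the complement factor β, can be condensed into a short runnable sketch. All names are illustrative, and the signed form of α (negative when the throughput exceeds the target) folds the increase and decrease branches of the pseudocode into one statement; this is our simplification, not the thesis code.

```python
def tms2_update(M, R, Thr, X, E, ownd, ownd_prev, iwnd, iwnd_prev):
    """One pass of the TMS_II re-marking decision (illustrative sketch).

    M:   current marking rates       R: reservation rates
    Thr: measured throughputs        X: shares of the excess bandwidth E
    ownd/iwnd: newly measured OUT/IN windows; *_prev: previous values
    """
    M = list(M)
    if ownd != ownd_prev / 2:              # no congestion
        for i in range(len(M)):
            T = R[i] + X[i] * E            # target rate includes an excess share
            alpha = 1 - Thr[i] / T         # fairness factor (signed)
            M[i] += alpha * M[i]           # Thr < T raises M, Thr > T lowers it
    elif iwnd != iwnd_prev / 2:            # congestion, but not acute
        pass                               # edge rate already matches the sources
    else:                                  # iwnd halved: serious congestion
        for i in range(len(M)):
            if Thr[i] < R[i]:              # target falls back to the reservation
                beta = 1 - Thr[i] / R[i]   # complement factor
                M[i] += beta * M[i]
    return M
```

For example, a flow reaching only 80% of its target sees its marking rate raised by 20% in the no-congestion branch, exactly the α = 1 − Thr_i/T_i adjustment above.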

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. These re-marked packets are treated by the RIO queue at the core of the network unlike what the sources anticipate. If, for example, the re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core at a time of temporary congestion, the TCP sources, mistaking it for acute congestion, will reduce the iwnd size by half. Therefore TMS_I does not alleviate the TCP dynamics over the assured service well. On the other hand, if the re-marked IN packets (OUT-IN) sent as OUT by the sources are removed owing to grave congestion, the TCP sources, mistaking it for temporary congestion, will decrease only the ownd by half, which retards getting rid of the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). IN-OUT is a packet marked as OUT at the edge of the network but as IN at the source, and OUT-IN is its counterpart. We assume that the drop precedence the source decides must not change, and the strategy TMS_II elaborates has to be deliberated. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow and green respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window does not reduce to half of its size and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT-marked at the sources into IN so as to secure the remaining resources. In this instance three drop precedences, IN, OUT-IN and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is discarded, the TCP source decreases its ownd size by half. Next, TMS_II turns packets that were IN-marked at the sources into OUT if the congestion window does not reduce to half of its size and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT and OUT. TMS_II lowers the drop precedence of such packets by one step, allowing other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I reduces the congestion window by half when an IN-OUT packet is removed. Finally, if the congestion window is equal to half of its size and a flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. It tries to compensate for the loss rate occurring at the time of congestion by lowering the drop precedence of OUT by one step.
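The three re-marking situations above can be summarised as a small decision function. The argument names and the string labels are ours; this is a sketch of the rules in the text, not an implementation from the thesis.

```python
# Drop order used by the RIO core: packets in the first class are discarded
# first when the queue builds up (colour names follow the text).
DROP_ORDER = ["OUT",            # DP2, red:    dropped first
              "IN-OUT/OUT-IN",  # DP1, yellow: re-marked packets
              "IN"]             # DP0, green:  dropped last

def edge_remark(source_mark, window_halved, target_achieved):
    """Re-marking decision of TMS_II for one TCP packet (illustrative)."""
    if not window_halved and not target_achieved:
        # flow is short of its target: promote OUT packets to IN (OUT-IN)
        return "OUT-IN" if source_mark == "OUT" else "IN"
    if not window_halved and target_achieved:
        # flow already met its target: demote IN packets to share the excess
        return "IN-OUT" if source_mark == "IN" else "OUT"
    # acute congestion and target still missed: promote OUT packets again
    return "OUT-IN" if source_mark == "OUT" else "IN"
```

The point of the intermediate DP1 class is visible here: a promoted or demoted packet never jumps straight between green and red, so a drop of a re-marked packet triggers only the milder window reaction.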

3.6 The issue of interaction between TCP and UDP flows

In this section we take into consideration the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indication through loss. In particular, we are concerned that if TCP and UDP are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not change and will seize the entire excess bandwidth, as shown in [8], [9], [11], [14], in an over-provisioned network. The TCP flows will also be penalised more than the non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be achieved.

UDP flows in the TMS are dealt with as out-of-profile, to be put at a disadvantage, while TCP flows adapt to network dynamics as their marking rates are adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP packets marked as OUT (red, DP2) but lowers the drop precedence of TCP packets by the scaling factors. Therefore TMS tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that the UDP flows had almost dominated. Consequently, TMS will compensate for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous section. The simulation was done with the network simulator-2 (ns-2.1b8a). For the simulation we used a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT, without queuing delay, of each flow is randomly picked from 80 to 280 ms. The sources are all TCP Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values of 100 packets, 400 packets and 0.02 for min_in, max_in and Pmax_in, and 50 packets, 400 packets and 0.1 for min_out, max_out and Pmax_out. The RED parameters for the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

        min_th   max_th   Pmax
DP0     100      400      0.02
DP1     100      400      0.05
DP2     50       400      0.1
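Table 4 can be read as the following configuration sketch. The linear ramp is the standard RED drop-probability rule, shown here only to illustrate how the three virtual queues differ; it is not code from the thesis, and the names are ours.

```python
# RED parameters per drop precedence, as in Table 4 (thresholds in packets).
RIO_PARAMS = {
    "DP0": {"min_th": 100, "max_th": 400, "p_max": 0.02},  # IN (green)
    "DP1": {"min_th": 100, "max_th": 400, "p_max": 0.05},  # re-marked (yellow)
    "DP2": {"min_th": 50,  "max_th": 400, "p_max": 0.1},   # OUT (red)
}

def drop_probability(avg_q, dp):
    """Standard RED drop probability for an average queue length avg_q."""
    p = RIO_PARAMS[dp]
    if avg_q < p["min_th"]:
        return 0.0                                  # below min_th: never drop
    if avg_q >= p["max_th"]:
        return 1.0                                  # above max_th: always drop
    frac = (avg_q - p["min_th"]) / (p["max_th"] - p["min_th"])
    return frac * p["p_max"]                        # linear ramp up to p_max

# at the same average queue length, OUT packets are dropped far more often
print(drop_probability(250, "DP0"), drop_probability(250, "DP2"))
```

Because DP2 has both a lower min_th and a higher Pmax, OUT-marked traffic starts being discarded earlier and more aggressively than IN-marked traffic, which is exactly the RIO behaviour the scheme relies on.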

First of all, we investigate the simulation results with the Adequate-marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, from S1-D11 to S10-D20, each with a target rate of 3 Mbps, and the capacity of the bottleneck link set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s, which stop at t = 350 s. First, the interval from t = 10 s through t = 110 s represents an over-provisioned network, in which the five flows compete for the excess bandwidth of 5 Mbps; marking is initially performed, but only temporarily, for sharing the excess bandwidth. Second, in the run time from t = 120 s to 210 s, with soft congestion, seven flows try to achieve the bandwidth of 3 Mbps ideally, and marking in this interval is practically carried out so as to realise the reservation rate of the user. Next, grave congestion appears from t = 220 s to t = 350 s: all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to congestion after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which shows somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of marked packets needed to accomplish the target rate is fewer than in the case of Figure 15 (a). In both graphs the lower flow curves of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting flow1 through flow10 over 0-400 s.]

Fig. 15. The Adequate-marking algorithm. The vertical axis represents marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic characteristics by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate behind the changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by the amount of the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when it detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, namely an under-subscribed network. The experiment was performed in the run time from t = 0 s to 500 s. Figure 17 represents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, the average throughputs of the two schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge about the individual flows. However, in Figure 17 (b) each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its probability of marking if the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the probability of marking at the sources, all the flows are able to adapt to network dynamics.

[Figure: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting S1 (total throughput), S2 (total throughput) and S1 (marking rate) over 0-400 s.]

Fig. 16. Marking packets through the scaling factors. The vertical axis represents the throughput and marking rate of each source, and the horizontal axis signifies the simulation time.


[Figure: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting flow1 through flow10 for aggregate flows 1 through 6.]

Fig. 17. Throughput for the marking rate-based fairness strategy. The vertical axis represents the achieved throughput, and the horizontal axis signifies the aggregate flow number.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, the implementation of a TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5. The average throughput of the aggregate flows as the RTT increases

Aggregate flow (No.)        1        2        3        4        5        6
RTT (ms)                    80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)     4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)        5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1 Mbps. We assume that the RTT, without queuing delay, of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP Reno sources (unless specified otherwise). For the RIO implementation we make use of the parameters of section 4.1.


Fig. 18. Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For the experiment we elaborated the strategy by increasing the TCP flows with a target rate of 1 Mbps from sources 1 to 4, while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ (%) = (Σi Ti / C) × 100        (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 (%) through 200 (%), as shown in Figure 19, represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives beyond its target rate is expressed as a percentage of that target rate (T):

Excess BW (%) = (ExBW / T) × 100        (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM behind the change of traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
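Expressions (9) and (10) amount to two one-line computations, sketched below; the function names are ours, and ExBW is taken to be the bandwidth a flow receives beyond its target, so the result can be negative when a flow falls short.

```python
def link_utilisation(targets, capacity):
    """Equation (9): aggregate target rate as a percentage of capacity."""
    return 100.0 * sum(targets) / capacity

def excess_bw_percent(throughput, target):
    """Equation (10): bandwidth beyond the target, as a percentage of it."""
    return 100.0 * (throughput - target) / target

# six 1 Mbps flows on an 8 Mbps link give 75% utilisation; a flow receiving
# 1.25 Mbps against a 1 Mbps target enjoys +25% excess bandwidth
print(link_utilisation([1.0] * 6, 8.0), excess_bw_percent(1.25, 1.0))
```

This also explains the axes of Figure 19: shrinking the bottleneck from 8 Mbps to 3 Mbps moves ρ from 75% to 200%, and any curve below 0% on the vertical axis marks a flow that misses its target rate.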


[Figure: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the TCP (average) and UDP (average) curves over link utilisations of 30-190%.]

Fig. 19. Sharing excess bandwidth according to the link utilisation. The vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis signifies the link utilisation as a percentage.

However, since the UDP sources inject traffic into the network without any penalty, they complete their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only dealt with OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; therefore the flow is marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contracts, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is percentage-wise almost equally divided between the TCP and UDP flows at link utilisations of ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps, respectively. The setting of the RTT parameters follows the strategy of subsection 4.1, and the average throughputs of all the flows in both schemes are similar to each other. First, we take into consideration the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3 Mbps, thanks to the different marking strategy applied against the UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their probabilities of marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that the bandwidth in the under-provisioned network is distributed fairly to each flow (about 0.5 Mbps), unlike in Figure 20 (a), by dint of dealing with the UDP packets as out-of-profile. Consequently, the UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases.]

Fig. 20. Throughput according to network conditions. The vertical axis represents the achieved bandwidth, and the horizontal axis signifies the aggregate flows.

51

CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and shown what Two Markers System can apply in the variable SLA Scope In this case two important issues were on the rise and many researches have been performed until now One is TCP dynamics over AF PHB in a diffserv network the other is the interaction between TCP and UDP flows in the AF PHB The former leads to failing to realize userrsquos target rate owing to weaknesses in the TCP congestion control algorithm and the latter gives rise to the problem of fairness for allocating the excess bandwidth between the two flows due to overeating of UDP flows

The strongest point of the Two Markers System proposed in this paper is that two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow for the target rates respectively First TMS_I using the Adequate-marking algorithm properly adjusted the probability of marking according to the changes of a network so it marked packets as much as reservation rates they desire to guarantee It also notified TMS_II called the edge-embedded marking module of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network Next TMS_II at the edge of the network marked packets based on the marking rates of sources and allocated the bandwidth of bottleneck link to flows in view of proportional fairness called Marking rate-based fairness strategy if resources in the diffserv network are over-provisioned Consequently it mitigated the problem of unfairness on the per-aggregation marking and inefficient utilisation on per-flow marking We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network The simulation results as expected showed that Two Markers System reduced TCP impacts over assured service and fairly allocated the bottleneck bandwidth among TCP flows

We have also analysed the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, because TMS regulates the drop precedence, the TCP flows could fairly share the excess bandwidth that UDP had previously dominated, and could achieve their target rates. Next, in an under-provisioned network, although none of the flows could achieve its target rate, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, S. Saha, and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy, and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Aug. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 special assignment (Teletekniikan erikoistyö), 1999.

[20] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.


the scaling factor is α = 1 - Thr_i / T_i. Otherwise it decreases the marking rate by α, to provide other flows with the chance of sharing the excess bandwidth.

Fig. 13. Program code 3: TMS_II algorithm

    if (ownd(N) == ownd(N-1)/2)
        for i = 1 to n
            Ti = Ri + (Xmi * E)
            if (Thr_i < Ti)
                Mi += α * Mi
            else
                Mi -= α * Mi
    else if (iwnd(N) != iwnd(N-1)/2)
        exit
    else    /* iwnd(N) == iwnd(N-1)/2 */
        for i = 1 to n
            Ti = Ri
            if (Thr_i < Ti)
                Mi += β * Mi
            else
                exit

    Mi:        marking rate of the i-th flow
    Ti:        target rate of the i-th flow
    Ri:        reservation rate of the i-th flow
    Thr_i:     measured throughput of the i-th flow
    Xmi:       share of the i-th flow in the excess bandwidth
    E:         excess bandwidth
    ownd(N):   newly measured OUT window
    ownd(N-1): OUT window measured N-1 times before
    iwnd(N):   newly measured IN window
    iwnd(N-1): IN window measured N-1 times before


Next, if the ownd size at time N is equal to half of the ownd size at time N-1 while the iwnd size at time N is not equal to half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else-if clause in the eighth and ninth lines represents the case in which the edge's marking rate equals the marking probability of the sources (the marking information), so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and the congestion is acute. The marking mechanism for this congestion situation appears in the else clause, in the tenth through sixteenth lines. Note that if the total throughput is more than the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network each flow often fails to attain its target rate, owing to the repeated reduction of sending rates by the congestion control mechanism. Therefore TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and it increases the marking rate by β, the so-called complement factor, if the throughput does not reach the user's reservation rate. In that case the complement factor is β = 1 - Thr_i / R_i. Otherwise, no re-marking operation is performed.

In short, the sources inform the edge-embedded marking module of an adequate marking rate, and the module then allocates bandwidth fairly to flows based on the marking rate of each source. We can therefore say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
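To make the control flow of the TMS_II algorithm concrete, the following Python rendering sketches one re-marking round. The flow records and names (`R`, `M`, `thr`, `x`) are our own illustrative structures rather than the thesis implementation, and the scaling factors are taken as the relative shortfall of the measured throughput, which is one plausible reading of the formulas.

```python
def adjust_marking_rates(ownd, ownd_prev, iwnd, iwnd_prev, flows, excess_bw):
    """One TMS_II re-marking round (illustrative sketch of Fig. 13).

    flows: list of dicts with keys
      'R'   reservation rate,  'M'  current marking rate,
      'thr' measured throughput, 'x' share of the excess bandwidth.
    """
    if ownd == ownd_prev / 2 and iwnd != iwnd_prev / 2:
        # Soft congestion: only OUT-marked packets are being dropped.
        for f in flows:
            target = f['R'] + f['x'] * excess_bw
            alpha = abs(1 - f['thr'] / target)   # scaling factor
            if f['thr'] < target:
                f['M'] += alpha * f['M']         # claim more IN marking
            else:
                f['M'] -= alpha * f['M']         # release the excess share
    elif iwnd != iwnd_prev / 2:
        return flows                             # no congestion: no re-marking
    else:
        # Acute congestion: IN packets are dropped in the core, so each
        # target falls back to the bare reservation rate.
        for f in flows:
            if f['thr'] < f['R']:
                beta = 1 - f['thr'] / f['R']     # complement factor
                f['M'] += beta * f['M']
    return flows
```

Under soft congestion a flow achieving only half of its target would thus raise its marking rate by 50%, while under acute congestion the complement factor tops the marking rate up against the reservation rate alone.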

3.5 The issue of re-marking

TMS_II implements the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases a marking rate, OUT-marked packets turn into IN and IN-marked packets turn into OUT. These re-marked packets are then treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets that were IN at the sources (IN-OUT) are dropped at the core during temporary congestion, the TCP sources, mistaking this for acute congestion, will halve the iwnd size; the re-marking thus fails to alleviate the TCP dynamics over the assured service. On the other hand, if re-marked IN packets that were sent as OUT by the sources (OUT-IN) are removed owing to grave congestion, the TCP sources, mistaking this for temporary congestion, will halve only the ownd, which retards the removal of the congestion.

For this reason TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked OUT at the edge of the network but IN at the source, and an OUT-IN packet is the counterpart. We assume that the drop precedence the source decides must not be overridden, and the strategy TMS_II elaborates has to respect this. Packets are dropped in the order (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. Re-marking arises in three situations. First, if the congestion window has not halved and the throughput measured at the edge of the network does not meet the target rate, TMS_II re-marks packets that were OUT at the sources as IN, so as to claim the remaining resources. In this case the three drop precedences IN, OUT-IN, and OUT flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is dropped, the TCP source (TMS_I) halves its ownd size. Second, TMS_II turns packets that were IN at the sources into OUT if the congestion window has not halved and the flow has already accomplished its target rate; the drop precedences are then IN, IN-OUT, and OUT. TMS_II lowers the drop precedence of such packets by one step, allowing other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is removed. Finally, if the congestion window has halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets as IN, and RIO deals with the precedences IN, OUT-IN, OUT at the core of the network; when an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS_II tries to compensate for the loss rate occurring during congestion by lowering the drop precedence of OUT packets one step.
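The three-level precedence scheme above can be captured in a few lines. The function below is a hypothetical sketch, not the thesis implementation, of how the drop precedence could be derived from the mark the source chose and the mark the edge assigned:

```python
def drop_precedence(source_mark: str, edge_mark: str) -> int:
    """Map (source mark, edge re-mark) to a drop precedence.

    DP0 (IN at both points, green) is dropped last; DP1 (IN-OUT or
    OUT-IN, yellow) sits in between; DP2 (OUT at both points, red)
    is dropped first under congestion.
    """
    if source_mark == 'IN' and edge_mark == 'IN':
        return 0                 # DP0: green
    if source_mark != edge_mark:
        return 1                 # DP1: IN-OUT or OUT-IN, yellow
    return 2                     # DP2: red
```

Because a re-marked packet never lands in DP0 or DP2 unless both marking points agree, the source's own decision is preserved one level away, which is exactly what lets TMS_I interpret DP1 drops correctly.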

3.6 The issue of interaction between TCP and UDP flows

In this section we consider the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP flows are given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not, so that UDP captures the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than the non-responsive UDP flows with respect to service degradation in an under-provisioned network, where it is impossible for all the target rates to be met.

UDP flows in the TMS are therefore treated as out-of-profile, putting them at a disadvantage, while TCP flows adapt to network dynamics through marking rates adjusted by the scaling factors α and β. From the viewpoint of the network core, the RIO queue always drops UDP packets, marked OUT (red, DP2), first, while the drop precedence of TCP packets is lowered by the scaling factors. TMS thereby tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows formerly dominated. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.
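Why a non-responsive source captures the excess bandwidth can be illustrated with a toy model (ours, not the ns-2 simulation): on every congestion event TCP halves its rate, while UDP keeps sending at its constant rate.

```python
def share_after_drops(capacity, tcp_rate, udp_rate, rounds=50, incr=0.1):
    """Toy AIMD-vs-constant-rate interaction on one bottleneck link.

    Each round, if the offered load exceeds capacity only the TCP
    source backs off (multiplicative decrease); otherwise it probes
    upward (additive increase). UDP never reacts.
    """
    for _ in range(rounds):
        if tcp_rate + udp_rate > capacity:
            tcp_rate /= 2            # only TCP responds to congestion
        else:
            tcp_rate += incr
    return tcp_rate, udp_rate

tcp, udp = share_after_drops(capacity=3.0, tcp_rate=1.5, udp_rate=1.5)
# UDP keeps its full rate; TCP oscillates around the leftover share.
```

This is the asymmetry that the out-of-profile treatment of UDP in TMS is designed to counteract.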


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results for the two marking algorithms described in the previous section. The simulations were done with the network simulator ns-2 (version 2.1b8a). We used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: a non-oversubscribed and an oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed and the aggregate reservation rate, adjusted by changing the number of flows with a target rate of 3 Mbps, is kept below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP-Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM the routers use RED with the values 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters of the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

          min_th    max_th    Pmax
  DP0     100       400       0.02
  DP1     100       400       0.05
  DP2     50        400       0.10
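For reference, the drop-probability ramp these parameters induce can be sketched as follows. RIO maintains a separate average queue per precedence; the sketch below shows only the standard linear RED ramp between min_th and max_th for each DP, using the Table 4 values.

```python
RED_PARAMS = {          # DP: (min_th, max_th, Pmax), thresholds in packets
    0: (100, 400, 0.02),
    1: (100, 400, 0.05),
    2: (50,  400, 0.10),
}

def drop_prob(dp: int, avg_queue: float) -> float:
    """Early-drop probability for one drop precedence (plain RED ramp)."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0                   # below min_th: never drop
    if avg_queue >= max_th:
        return 1.0                   # at or above max_th: force drop
    # linear ramp from 0 to Pmax between the thresholds
    return p_max * (avg_queue - min_th) / (max_th - min_th)
```

At any average queue length the ramps are ordered DP2 > DP1 > DP0, which realises the intended drop ordering OUT, then OUT-IN/IN-OUT, then IN.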

First of all, we investigate the simulation results of the Adequate-marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps and a bottleneck link capacity of 20 Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s; the last three flows, flow8 through flow10, start at t = 220 s and stop at t = 350 s. The interval from t = 10 s to t = 110 s represents an over-provisioned network, in which the five flows compete for the excess bandwidth of 5 Mbps; marking is performed from the start, but only temporarily, in order to share the excess bandwidth. Second, from t = 120 s to 210 s, under soft congestion, seven flows try to achieve the bandwidth of 3 Mbps each; marking in this interval is carried out in earnest, so as to realise the users' reservation rates. Next, grave congestion appears from t = 220 s to t = 350 s: all the flows strive to meet their target rates but fail, and the marking in this period is more excessive than in the others. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to the congestion remaining after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that fewer marked packets are needed to accomplish the target rate than in Figure 15 (a). The lower flows of graphs (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing conditions of the network, those in Figure 15 (a) do not.

Fig. 15. Adequate-marking algorithm: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the marking rates of flow1 through flow10 (Mbps) and the horizontal axis the simulation time (s). The network state changes dynamically as the number of flows increases: over-provisioned, then temporary congestion, then serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not regulate the marking rate adequately and lags behind changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of resources by the amount of the increment, and S1 does not fairly share the bottleneck bandwidth with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; packet-marking is, however, quickly turned off when the available bandwidth is detected to be sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bottleneck bandwidth more fairly than in Figure 16 (a).
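The sluggish reaction of TSW2CM stems from its time-sliding-window rate estimator, which averages arrivals over a configurable window. A minimal sketch of that estimator (after the TSW scheme of RFC 2859, with hypothetical names) shows why a long `win_length` smooths out load changes:

```python
class TSWEstimator:
    """Time Sliding Window rate estimator (sketch, after RFC 2859)."""

    def __init__(self, initial_rate: float, win_length: float):
        self.avg_rate = initial_rate     # estimated rate, bytes/s
        self.win_length = win_length     # averaging window, seconds
        self.t_front = 0.0               # time of the last update

    def update(self, pkt_size: float, now: float) -> float:
        # Decayed byte count currently "in" the window, plus the new packet.
        bytes_in_win = self.avg_rate * self.win_length
        new_bytes = bytes_in_win + pkt_size
        # The longer win_length is, the less one packet moves the estimate.
        self.avg_rate = new_bytes / (now - self.t_front + self.win_length)
        self.t_front = now
        return self.avg_rate
```

A marker then compares `avg_rate` against the contracted rate to choose IN or OUT; with a large window the estimate, and hence the marking, trails the actual load, which is the behaviour observed for S1 above.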

Finally, we present the simulation results for the marking rate-based fairness algorithm described in Section 2.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, i.e., an under-subscribed network. The experiment ran from t = 0 s to 500 s. Figure 17 presents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in both schemes, shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without a marking rate adequate to the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of the individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability whenever the total throughput is less than the network capacity. As stated before, since TMS regulates the drop precedence in proportion to the marking probability of the sources, all the flows are able to adapt to network dynamics.
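The marking rate-based allocation can be sketched as follows, assuming (as one plausible reading of the strategy) that each flow's share of the excess bandwidth is proportional to its marking rate; the function name and argument layout are ours, for illustration only.

```python
def allocate_targets(reservations, marking_rates, capacity):
    """Marking rate-based fairness (sketch).

    In an over-provisioned network each flow's target is its reservation
    plus a share of the excess bandwidth E proportional to its marking
    rate; when there is no excess, targets fall back to the reservations.
    """
    excess = capacity - sum(reservations)
    if excess <= 0:
        return list(reservations)        # under-provisioned: Ti = Ri
    total_m = sum(marking_rates)
    return [r + (m / total_m) * excess   # Ti = Ri + Xmi * E
            for r, m in zip(reservations, marking_rates)]
```

A flow that marks twice as aggressively as its peers thus receives twice their share of the excess, while no flow's target ever falls below its reservation.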

Fig. 16. Marking packets through the scaling factors: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the throughput and marking rate of each source (Mbps), plotting S1 (total throughput), S2 (total throughput), and S1 (marking rate); the horizontal axis is the simulation time (s).


Fig. 17. Throughput for the marking rate-based fairness strategy: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the achieved throughput of flow1 through flow10 (Mbps) and the horizontal axis the aggregate flow (1 through 6).

The Two Markers System showed improved performance for the several parameters we simulated. However, implementing the TCP-source marking module at the source raises the issue of modifying TCP sources in a differentiated services network.

Table 5. The average throughput of the aggregate flows as RTT increases

  Aggregate flow (No.)       1        2        3        4        5        6
  RTT (ms)                   80       100      130      170      220      280
  E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
  E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. We used a network with the configuration shown in Figure 18. In this experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10; constant-rate UDP packet streams originate at sources 5 and 6 and are destined to nodes 11 and 12. We carried out two scenarios: an over-provisioned and an under-provisioned network. That is to say, the capacity of the bottleneck link changes while the target rates of the sources stay fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at a rate of 1 Mbps. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of Section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increase the number of TCP flows, each with a target rate of 1 Mbps, from source 1 to source 4, while each UDP flow is constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take account of network dynamics. We use the link utilisation (ρ), as in [16]:

    ρ (%) = (Σ Ti) / C × 100        (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 % through 200 % shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

    Excess BW (%) = ExBW / T × 100        (10)
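Both metrics are straightforward to compute; the following small sketch, with hypothetical function names, restates equations (9) and (10):

```python
def link_utilisation(target_rates, capacity):
    """Equation (9): rho (%) = sum of target rates over the link capacity."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(throughput, target):
    """Equation (10): excess bandwidth a flow receives, as % of its target.

    ExBW is the achieved throughput minus the target rate, so 0 % means
    the flow attains exactly its target and negative values mean it
    falls short.
    """
    return 100.0 * (throughput - target) / target
```

For example, six 1 Mbps targets on an 8 Mbps bottleneck give ρ = 75 %, and a flow achieving 1.5 Mbps against a 1 Mbps target sits at +50 % excess bandwidth.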

Therefore an excess bandwidth of 0 % denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of the network load, as shown in the figure.

Fig. 19. Sharing excess bandwidth according to the link utilisation: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the excess bandwidth of the TCP and UDP averages, expressed as a percentage of the target rate, and the horizontal axis the link utilisation as a percentage.

However, since the UDP sources send traffic into the network without any penalty, they meet their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume the following: (i) UDP flows in TMS are marked only as OUT (DP2, red); (ii) the magnitude of the marking rate represents the increase or decrease in demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and obtains a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside their contracts, when congestion occurs, rather than the OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between the TCP and UDP flows at link utilisations of ρ = 60 % and over: both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.

Next we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: TSW2CM and the marking strategy of TMS. Figure 20 shows how far the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps, respectively. The RTT parameters follow the setting of Section 4.1, and the average throughputs of all the flows in both schemes are similar to each other. First, consider the results in the over-provisioned network. In Figure 20 (a), while the TCP flows barely accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. In Figure 20 (b), on the other hand, the TCP flows fairly share the excess bandwidth of about 0.3 Mbps, thanks to the different marking strategy applied against the UDP flows; all the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network, because the UDP flows are transmitted constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that in the under-provisioned network the bandwidth is distributed more fairly (about 0.5 Mbps per flow) than in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, the UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

Fig. 20. Throughput according to network conditions: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents the achieved bandwidth (Mbps) of the TCP and UDP flows in the over-provisioned and under-provisioned cases, and the horizontal axis the aggregate flow (1 through 4).


[22] V Paxson End-to-End Interent Packet Dynamics In Proc Of ACM SIGCOMM Sept (1997)

[23] M Mathis J Mahdavi S Flyod and A Romanow TCP Selective Acknowledgment Options RFC 2018 Oct (1996)

[24] S Flyod and T Henderson The NewReno Modification to TCPrsquos Fast Recovery Algorithm IETF Internet Draft draft-ietf-tcpimpl-newreno-02txt Feb (1999)

[25] L S Brakmo S W OrsquoMalley and L L Peterson TCP Vegas New Techniques for Congestion Detection and Avoidance In Proceedings of ACM SIGCOMM Oct (1994) 24-35

[26] S Floyd TCP and Explicit Congestion Notification Computer Communication

55

Review Vol 24 No 5 Oct (1994) 10-23 [27] S Floyd and V Jacobson Random Early Detection Gateways for Congestion

Avoidance ACMIEEE Transactions on Networking Vol 1 No 4 Aug (1993) 397-413

[28] A Feroz A Rao and S Kalyanaraman TCP-friendly Traffic Marker for IP Differentiated Services in Proc IWQoSrsquo2000 Pittsburgh PA June (2000) 35-48

Page 51: Two-differentiated Marking Strategies for TCP and UDP ...alumni.media.mit.edu/~starsu/Publications/J&C-10-Thesis'02-SHL.pdf · IPv4 Internet Protocol version 4, the current widely

37

Next, if the ownd size at time N is equal to half of the ownd size at time N-1 while the iwnd size at time N is not half of the iwnd size at time N-1, congestion has occurred in the network but is not serious. The else-if clause in the eighth and ninth lines represents the case in which the edge's marking rate is equal to the marking probability of the sources (the marking information), so the re-marking process is not required.

Finally, if the iwnd size at time N is equal to half of the iwnd size at time N-1, IN-marked packets are being dropped in the core of the network and congestion is acute.
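The three cases above can be sketched as a classification over successive window samples. This is our own helper (the function name and the integer-halving equality test are assumptions, not the thesis pseudocode):

```python
def classify_congestion(ownd_prev, ownd_now, iwnd_prev, iwnd_now):
    """Classify congestion severity from the OUT-window (ownd) and
    IN-window (iwnd) sizes at times N-1 and N."""
    if iwnd_now == iwnd_prev // 2:
        # IN-marked packets are being dropped in the core: acute congestion.
        return "acute"
    if ownd_now == ownd_prev // 2:
        # Only OUT-marked packets are being dropped: mild congestion.
        return "soft"
    # Neither window was halved: no re-marking is required.
    return "none"
```

The tests reflect the three branches of the algorithm described in the text.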

The marking mechanism for the congestion situation appears in the else clause, from the tenth to the sixteenth lines. Note that if the total throughput exceeds the capacity C, the target rate of each flow becomes its reservation rate. In an under-provisioned network, each flow often fails to attain its target rate owing to the repeated reduction of its sending rate by the congestion control mechanism. Therefore, TMS_II compares the measured throughput with the reservation rate in order to compensate for the loss rate, and it increases the marking rate by β, the so-called complement factor, if the throughput does not reach the user's reservation rate. In that instance the complement factor β is

    β = 1 − Thr_i / R_i

where Thr_i is the measured throughput and R_i is the reservation rate of flow i. Otherwise, the re-marking operation is not performed.

In short, sources inform the edge-embedded marking module of an adequate marking rate, and the module then fairly allocates bandwidth to flows based on the marking rate of each source. We can therefore say that the marking rate-based fairness strategy alleviates both the inefficient utilisation of the reserved bandwidth under per-flow marking and the unfairness within aggregated flows under per-aggregation marking.
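Under the stated rule, the complement factor and the resulting boosted marking rate might be sketched as follows (helper names are ours; marking rates are treated as probabilities in [0, 1]):

```python
def complement_factor(throughput, reservation):
    """beta = 1 - Thr_i / R_i when the measured throughput falls short of
    the reservation rate; zero (no re-marking boost) otherwise."""
    if throughput >= reservation:
        return 0.0
    return 1.0 - throughput / reservation

def boosted_marking_rate(marking_rate, throughput, reservation):
    # Increase the edge marking rate by beta, capped at 1.0.
    return min(1.0, marking_rate + complement_factor(throughput, reservation))
```

For example, a flow measured at 4 Mbps against a 5 Mbps reservation gets β = 0.2 added to its marking rate.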

3.5 The issue of re-marking

TMS_II elaborates the fairness strategy for TCP flows by means of a re-marking process. When it increases or decreases the marking rate, OUT-marked packets turn into IN and IN-marked packets change into OUT. These re-marked packets are then treated by the RIO queue at the core of the network differently from what the sources anticipate. If, for example, re-marked OUT packets (IN-OUT) that were IN at the sources are dropped at the core during temporary congestion, the TCP sources, mistaking it for acute congestion, will halve the iwnd size. TMS_I therefore cannot properly alleviate the TCP dynamics over the assured service. On the other hand, if re-marked IN packets (OUT-IN) that were sent as OUT by the sources are dropped due to grave congestion, the TCP sources, mistaking it for temporary congestion, will halve only the ownd, which delays the resolution of the congestion.

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked OUT at the edge of the network but IN at the source, and an OUT-IN packet is the opposite. We assume that the drop precedence the source decides must not be overridden outright, so the strategy that TMS_II elaborates has to be designed with care. The order of dropping packets is as follows: (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. The occasions for re-marking can be divided into three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network does not reach the target rate, TMS_II re-marks packets that were OUT at the sources into IN so as to claim the remaining resources. In this instance three drop precedences, IN, OUT-IN, and OUT, flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN. When an OUT-IN packet is dropped, the TCP source halves its ownd size. Next, TMS_II turns packets that were IN at the sources into OUT if the congestion window has not been halved and the flow has already accomplished its target rate, so the drop precedences consist of IN, IN-OUT, and OUT; because the source mark is preserved, the packet moves down only one step in precedence, which allows other flows to share the excess bandwidth. RIO at the core of the network drops packets in the order OUT, IN-OUT, IN, and TMS_I halves the congestion window when an IN-OUT packet is dropped. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT-marked packets into IN, and RIO deals with the three drop precedences IN, OUT-IN, and OUT at the core of the network. When an OUT-IN packet is dropped, TMS_I halves the ownd size. In this way TMS tries to compensate for the loss rate incurred during congestion by raising the protection of OUT packets one step in drop precedence.
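The pairing of source mark and edge mark determines the drop precedence; it can be captured in a small lookup table. This is a sketch with our own helper names, not the thesis implementation:

```python
# (source mark, edge mark) -> drop precedence.
# IN-OUT and OUT-IN both land in the middle precedence DP1 (yellow),
# so a re-marked packet moves only one step away from its source mark.
DROP_PRECEDENCE = {
    ("IN", "IN"): 0,    # DP0, green, dropped last
    ("IN", "OUT"): 1,   # IN-OUT, yellow
    ("OUT", "IN"): 1,   # OUT-IN, yellow
    ("OUT", "OUT"): 2,  # DP2, red, dropped first
}

def drop_order(packets):
    """Sort (source_mark, edge_mark) pairs so that the packets RIO would
    consider dropping first (highest precedence) come first."""
    return sorted(packets, key=lambda p: DROP_PRECEDENCE[p], reverse=True)
```

The ordering matches the text: OUT first, then OUT-IN/IN-OUT, then IN.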

3.6 The issue of interaction between TCP and UDP flows

In this section we consider the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP. They run over UDP as the transport protocol and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP flows are both given the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not, capturing the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, making it impossible for all target rates to be met.

UDP flows in the TMS are treated as out-of-profile, and thus placed at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the core of the network, the RIO queue always preferentially drops UDP packets, which are marked OUT (red, DP2), while the drop precedence of TCP packets is lowered by as much as the scaling factors allow. TMS therefore tries to guarantee the target rates of TCP flows by allocating to them the excess bandwidth that UDP flows would otherwise dominate. Consequently, TMS compensates for the grave performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present simulation results for the two marking algorithms described in the previous section. The simulations were done with the network simulator ns-2 (version 2.1b8a). We used a network with the configuration shown in Figure 14.

Fig. 14. Simulation topology I using the Two Markers System.

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed and the aggregate reservation rate is adjusted, by changing the number of flows with a target rate of 3 Mbps or fixed below it, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. The sources are all TCP Reno sources (unless specified otherwise). We compare two marking schemes in all the experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with the values 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters for the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS.

        min_th    max_th    Pmax
DP0     100       400       0.02
DP1     100       400       0.05
DP2     50        400       0.1
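The drop behaviour these parameters induce can be sketched as a per-precedence RED curve. This is a simplified sketch (the real RIO maintains separate average queue lengths for IN and OUT packets and smooths them with the 0.002 queue weight, which is omitted here):

```python
# (min_th, max_th, Pmax) per drop precedence, from Table 4.
RED_PARAMS = {
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50, 400, 0.1),
}

def red_drop_prob(avg_queue, dp):
    """RED drop probability: zero below min_th, a linear ramp up to Pmax
    between min_th and max_th, and forced drop at or above max_th."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return p_max * (avg_queue - min_th) / (max_th - min_th)
```

Because DP2 has a lower min_th and a higher Pmax, OUT packets start being dropped earlier and more aggressively than DP0 and DP1 packets at the same average queue length.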

First of all, we investigate the simulation results for the Adequate-marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows, S1 through S5, start at time t = 10 (s), followed by two flows, flow6 and flow7, at t = 120 (s), and the last three flows, flow8 through flow10, at t = 220 (s), which terminate at t = 350 (s). First, the interval from t = 10 (s) through t = 110 (s) represents an over-provisioned network, in which five flows compete for the excess bandwidth of 5 Mbps; marking is initially performed, but only as temporary marking for sharing the excess bandwidth. Second, in the run time from t = 120 (s) to 210 (s), with soft congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps, and marking in this interval is carried out in practice so as to realise the users' reservation rates. Next, grave congestion appears from t = 220 (s) to t = 350 (s); all the flows strive to achieve their target rates but fail, and the marking rate in this period is more excessive than in the other periods. Last, in the interval from t = 330 (s) to 400 (s), TMS momentarily over-marks packets owing to the congestion, but after flow8 through flow10 terminate it again marks them properly. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the conventional edge marker continuously over-marks its packets, as shown in Figure 15 (a), which exhibits somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in the case of Figure 15 (a). In both graphs, the lower flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing conditions of the network, those in Figure 15 (a) do not.

[Plot omitted: bandwidth (Mbps), 0 to 6, versus simulation time (s), 0 to 400, for flow1 through flow10]

a) Time Sliding Window Two Color Marker

[Plot omitted: same axes and series as above]

b) Two Markers System

Fig. 15. Adequate-marking algorithm: the vertical axis represents the marking rates and the horizontal axis the simulation time. The network state changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study the realisation of the target rate and the sharing of resources between two flows with different traffic characteristics, achieved by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind changes in the network load: slowly rising in response to an increased traffic load and slowly falling in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of resources by the amount of the increase, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when TMS detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).
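The sluggish response of TSW2CM described above comes from its sliding-window rate estimator. A minimal Python sketch of a TSW2CM-style marker follows (class name and defaults are ours; the rate estimator follows the Time Sliding Window scheme of Clark and Fang):

```python
import random

class TSW2CM:
    """Sketch of a Time Sliding Window Two Color Marker: estimate the
    sending rate over a sliding window and mark packets OUT with a
    probability proportional to the excess over the target rate."""

    def __init__(self, target_rate, win_len=1.0):
        self.target = target_rate   # committed rate (bytes/s)
        self.win = win_len          # window length (s)
        self.avg_rate = 0.0
        self.t_front = 0.0          # arrival time of previous packet

    def mark(self, pkt_size, now):
        # TSW rate estimator: decay the old average over the window.
        bytes_in_win = self.avg_rate * self.win + pkt_size
        self.avg_rate = bytes_in_win / (now - self.t_front + self.win)
        self.t_front = now
        if self.avg_rate <= self.target:
            return "IN"
        # Mark OUT with the probability of the rate excess.
        p_out = (self.avg_rate - self.target) / self.avg_rate
        return "OUT" if random.random() < p_out else "IN"
```

Because the estimate is a window-weighted average, it rises and falls gradually, which illustrates why TSW2CM trails sudden load changes while TMS reacts through its scaling factors.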

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, i.e., an under-subscribed network. The experiment was run from t = 0 (s) to 500 (s). Figure 17 shows that the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs under both schemes, shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of the individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the marking probability from the sources, all the flows are able to adapt to network dynamics.

[Plot omitted: bandwidth (Mbps), 0 to 8, versus simulation time (s), 0 to 400, for S1 (total throughput), S2 (total throughput), and S1 (marking rate)]

a) Time Sliding Window Two Color Marker

[Plot omitted: same axes and series as above]

b) Two Markers System

Fig. 16. Marking packets through the scaling factor: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.


[Plot omitted: bandwidth (Mbps), 0 to 7, versus aggregate flow (1 to 6), for flow1 through flow10]

a) Time Sliding Window Two Color Marker

[Plot omitted: same axes and series as above]

b) Two Markers System

Fig. 17. Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis signifies the number of the simulation performed.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying TCP sources in a differentiated services network.

Table 5. The average throughput of each aggregate flow as RTT increases.

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. We used a network with the configuration shown in Figure 18. In this experiment we have 6 sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams, generated by infinite file transfers, originate at sources 1 through 4 and are destined for nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined for nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows send traffic at a rate of 1 Mbps. We assume that the RTT of each flow, excluding queuing delay, is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of Section 4.1.


Fig. 18. Simulation topology II using the Two Markers System.

First of all, we investigate the simulation results on sharing the excess bandwidth according to the link utilisation. For this experiment we increase the number of TCP flows with a target rate of 1 Mbps from sources 1 to 4 while the UDP rate is held constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We define the link utilisation (ρ) as follows [16]:

    ρ (%) = (Σ_i T_i / C) × 100    (9)

where T_i and C represent the target rate of a flow and the capacity of the bottleneck link, respectively. The interval of ρ = 100 (%) through 200 (%) shown in Figure 19 corresponds to the reduced capacity of the bottleneck link, i.e., an under-provisioned state. In the figure, the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

    Excess BW (%) = (ExBW / T) × 100    (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall below their target rates at the saturation point (80 %) of network load, as shown in the figure.
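Equations (9) and (10) can be computed directly; a small sketch (the function names are ours):

```python
def link_utilisation(target_rates, capacity):
    """Equation (9): rho (%) = (sum_i T_i / C) * 100."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(throughput, target):
    """Equation (10): the bandwidth beyond the target rate, as a
    percentage of that target rate; 0 (%) means exactly the target."""
    return 100.0 * (throughput - target) / target
```

For instance, six 1 Mbps reservations over an 8 Mbps bottleneck give ρ = 75 (%), the over-provisioned case of the first scenario.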


[Plot omitted: excess bandwidth (%), -100 to 200, versus link utilisation (%), 30 to 190, for TCP (average) and UDP (average)]

a) Time Sliding Window Two Color Marker

[Plot omitted: same axes and series as above]

b) Two Markers System

Fig. 19. Sharing of the excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis signifies the link utilisation as a percentage.

However, since UDP sources inject traffic into the network without any penalty, they achieve their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume the following: (i) UDP flows in TMS are dealt with only by OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If, for example, the number of marked packets exceeds the threshold value, namely the average marking rate E[m_i], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; the flow is therefore marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is, when congestion occurs, to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contract, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between TCP and UDP flows for link utilisations of ρ = 60 (%) and above. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
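Assumption (ii) above can be sketched as a simple threshold test against the average marking rate across flows (a hypothetical helper; the thesis gives no code for this step):

```python
def wants_more_bandwidth(marking_rates, i):
    """Edge-embedded module heuristic: flow i is assumed to demand extra
    bandwidth when its marking rate exceeds the average E[m] over all
    flows, and is then marked more aggressively."""
    avg = sum(marking_rates) / len(marking_rates)
    return marking_rates[i] > avg
```

A flow marking at 0.8 against an average of 0.5 would thus be treated as demanding more than its share.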

Next, we present the simulation results concerning the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows that the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps respectively. The setting of the RTT parameters follows subsection 4.1, and the average throughputs of all the flows under the two schemes are similar to each other. First, we consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. In Figure 20 (b), on the other hand, the TCP flows fairly share the excess bandwidth of about 0.3 Mbps, owing to the different marking strategy applied against the UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly, regardless of congestion; that is, UDP unfairly gains at the expense of the TCP flows. However, Figure 20 (b) shows that in the under-provisioned network the bandwidth is distributed fairly among the flows (about 0.5 Mbps each) compared with Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Plot omitted: bandwidth (Mbps), 0 to 2, versus aggregate flow (1 to 4), for TCP1 through TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases]

a) Time Sliding Window Two Color Marker

[Plot omitted: same axes and series as above]

b) Two Markers System

Fig. 20. Throughput according to network conditions: the vertical axis represents the target rate and the horizontal axis signifies the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied within a variable SLA scope. Two important issues have arisen in this area, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to a fairness problem in allocating the excess bandwidth between the two kinds of flows due to the over-consumption of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and to allocate the bandwidth of a bottleneck link fairly to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the marking probability according to changes in the network, so it marks packets according to the reservation rates the sources desire to guarantee. It also notifies TMS_II, the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, the so-called marking rate-based fairness strategy. Consequently, it mitigates both the unfairness of per-aggregation marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP had previously dominated and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidth.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, S. Saha, and K. Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, Assured Forwarding PHB Group, RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker, and D. Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, Congestion Avoidance and Control, in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, Realizing Throughput Guarantees in a Differentiated Services Network, in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy, and P. Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy, and P. Pieda, Study of TCP and UDP Interaction for AF PHB, IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Aug. 1998.

[19] M. Vanhala, Differentiated Services Architecture, S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols, and K. Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, End-to-End Internet Packet Dynamics, in Proc. of ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proc. of ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, Random Early Detection Gateways for Congestion Avoidance, IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, A TCP-friendly Traffic Marker for IP Differentiated Services, in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.

Page 52: Two-differentiated Marking Strategies for TCP and UDP ...alumni.media.mit.edu/~starsu/Publications/J&C-10-Thesis'02-SHL.pdf · IPv4 Internet Protocol version 4, the current widely

38

cwnd size. Therefore, TMS_I alone does not alleviate the TCP dynamics over the assured service. On the other hand, if re-marked IN packets (OUT-IN), sent as OUT by the sources, are dropped because of severe congestion, the TCP sources, mistaking this for temporary congestion, will halve their cwnd, which delays recovery from the congestion.
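The halving reaction described above is standard TCP Reno loss recovery; a minimal illustrative sketch (not the thesis's simulation code):

```python
def reno_on_loss(cwnd: float) -> float:
    """Reno fast-recovery reaction to a detected loss: halve the
    congestion window, never going below the conventional floor of
    two segments.  A dropped OUT-IN packet triggers exactly this
    halving even when the loss stems from severe congestion."""
    return max(cwnd / 2.0, 2.0)

# A source with a 32-segment window halves to 16 segments on loss.
assert reno_on_loss(32.0) == 16.0
# Small windows stop at the two-segment floor.
assert reno_on_loss(3.0) == 2.0
```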

In this case TMS uses three drop precedences: IN (DP0), IN-OUT or OUT-IN (DP1), and OUT (DP2). An IN-OUT packet is one marked IN at the source but re-marked OUT at the edge of the network; an OUT-IN packet is the converse. We assume that the drop precedence chosen by the source must not be changed blindly, so the re-marking strategy of TMS_II has to be chosen with care. Packets are dropped in the order (i) OUT, (ii) OUT-IN or IN-OUT, and (iii) IN; in terms of colors these correspond to red, yellow, and green, respectively. Re-marking arises in three situations. First, if the congestion window has not been halved and the throughput measured at the edge of the network has not reached the target rate, TMS_II re-marks packets marked OUT at the source into IN (yielding OUT-IN, DP1) so as to secure the remaining resources. In this case the three drop precedences IN, OUT-IN, and OUT flow into the core of the network, and packets are dropped in the order OUT, OUT-IN, IN; when an OUT-IN packet is dropped, the TCP source halves its cwnd. Second, TMS_II turns packets marked IN at the source into OUT (yielding IN-OUT) if the congestion window has not been halved and the flow has already achieved its target rate, so the drop precedences become IN, IN-OUT, and OUT. Moving such packets one step toward OUT allows other flows to share the excess bandwidth; RIO at the core drops packets in the order OUT, IN-OUT, IN, and the TCP source (TMS_I) halves its congestion window when an IN-OUT packet is dropped. Finally, if the congestion window has been halved and the flow has not yet achieved its target rate, TMS_II re-marks OUT packets into IN (OUT-IN), and RIO handles the three drop precedences IN, OUT-IN, and OUT at the core; when an OUT-IN packet is dropped, the TCP source halves its cwnd. By lowering the drop precedence of OUT packets one step, TMS_II tries to compensate for the loss rate incurred during congestion.
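The three re-marking cases above can be condensed into a single decision rule. The following is a sketch under the assumptions of this section; the function name and argument layout are ours, not the thesis's:

```python
def remark(source_mark: str, cwnd_halved: bool,
           achieved: float, target: float) -> str:
    """Return the codepoint TMS_II forwards into the core (sketch).

    Cases described in the text:
      1. window not halved, below target -> promote OUT to OUT-IN (DP1)
      2. window not halved, target met   -> demote IN to IN-OUT (DP1)
      3. window halved, still below target -> promote OUT to OUT-IN
    Otherwise the source's own marking stands.
    """
    if not cwnd_halved and achieved < target and source_mark == "OUT":
        return "OUT-IN"   # claim remaining resources
    if not cwnd_halved and achieved >= target and source_mark == "IN":
        return "IN-OUT"   # release excess bandwidth to other flows
    if cwnd_halved and achieved < target and source_mark == "OUT":
        return "OUT-IN"   # compensate losses suffered during congestion
    return source_mark

# Drop order at the core RIO queue: OUT (red), then DP1 (yellow), then IN (green).
assert remark("OUT", cwnd_halved=False, achieved=2.0, target=3.0) == "OUT-IN"
assert remark("IN", cwnd_halved=False, achieved=3.2, target=3.0) == "IN-OUT"
```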

3.6 The issue of interaction between TCP and UDP flows

In this section we consider the interaction between TCP and UDP flows within the same AF class. Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP; they run over UDP and are non-responsive to congestion indicated through loss. In particular, we are concerned that if TCP and UDP flows are both assigned the same AF class, the TCP flows will reduce their rates on packet drops while the UDP flows will not, and the UDP flows will capture the entire excess bandwidth in an over-provisioned network, as shown in [8], [9], [11], [14]. TCP flows will also be penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network, and it becomes impossible for all target rates to be achieved.

UDP flows in the TMS are treated as out-of-profile, putting them at a disadvantage, while TCP flows adapt to network dynamics as their marking rate is adjusted through the scaling factors α and β. From the viewpoint of the network core, the RIO queue always drops UDP packets marked OUT (red, DP2) preferentially, but lowers the drop precedence of TCP packets by as much as the scaling factors allow. TMS thereby tries to guarantee the target rates of TCP flows by reallocating to them the excess bandwidth that UDP flows would otherwise dominate. Consequently, TMS compensates for the severe performance degradation that network dynamics bring about.


CHAPTER 4 SIMULATION AND ANALYSIS

4.1 TCP dynamics over Assured Services in a diffserv network

In this section we present simulation results for the two marking algorithms described in the previous chapter. The simulations were done with the network simulator 2 (ns-2.1b8a). We used a network with the configuration shown in Figure 14.

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10, counting downwards) that communicate with one of ten different destinations, and we conducted two scenarios: an oversubscribed and a non-oversubscribed network. In the first scenario the capacity of the bottleneck link is fixed, and the aggregate reservation rate is adjusted by changing the number of flows with a target rate of 3 Mbps, or kept below the bottleneck capacity, so that the bottleneck is not oversubscribed. In the second scenario the aggregate reservation rate increases above the bottleneck capacity, so that the bottleneck link experiences congestion. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280 ms. The sources are all TCP Reno sources (unless specified otherwise). We compare two marking schemes in all experiments: the Time Sliding Window Two Color Marker (TSW2CM) and the Two Markers System (TMS). For the RIO implementation of TSW2CM, the routers use RED with values of 100 packets, 400 packets, and 0.02 for min_in, max_in, and Pmax_in, and 50 packets, 400 packets, and 0.1 for min_out, max_out, and Pmax_out. The RED parameters of the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

        min_th (packets)   max_th (packets)   Pmax
  DP0   100                400                0.02
  DP1   100                400                0.05
  DP2   50                 400                0.1
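Each precedence level in the table defines its own linear RED drop curve; a small illustrative sketch of how these parameters translate into drop probabilities (standard RED, our own helper, not the ns-2 code):

```python
def red_drop_prob(avg_q: float, min_th: float,
                  max_th: float, p_max: float) -> float:
    """Linear RED drop probability for one precedence level:
    0 below min_th, rising linearly to p_max at max_th, 1 beyond."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return p_max * (avg_q - min_th) / (max_th - min_th)

# Table 4 parameters: (min_th, max_th, Pmax) per drop precedence.
params = {"DP0": (100, 400, 0.02), "DP1": (100, 400, 0.05), "DP2": (50, 400, 0.1)}

# At an average queue of 250 packets, OUT (DP2) packets are dropped
# most aggressively and IN (DP0) packets least.
probs = {dp: red_drop_prob(250, *p) for dp, p in params.items()}
assert probs["DP0"] < probs["DP1"] < probs["DP2"]
```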

First of all, we investigate the simulation results with the Adequate-marking algorithm in the role of the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3 Mbps; the capacity of the bottleneck link is set to 20 Mbps. Five flows from S1 through S5 start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s, which terminate at t = 350 s. First, the interval from t = 10 s through t = 110 s represents an over-provisioned network in which five flows compete for the excess bandwidth of 5 Mbps; marking is performed initially, but only temporarily, to share the excess bandwidth. Second, during the interval from t = 120 s to 210 s, with mild congestion, seven flows try to achieve the ideal bandwidth of 3 Mbps each; marking in this interval is carried out chiefly to realise the reservation rate of each user. Next, severe congestion appears from t = 220 s to t = 350 s: all the flows strive to meet their target rates but fail, and the marking rate in this period is more excessive than in the other periods. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to the congestion remaining after flow8 through flow10 terminate, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the conventional edge marker continuously over-marks its packets, as shown in Figure 15 (a), producing somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in Figure 15 (a). The two graphs, (a) and (b), demonstrate a striking contrast between excess marking and Adequate-marking.


While all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15 plots omitted: each panel plots bandwidth (Mbps, 0-6) against simulation time (0-400 s) for flow1 through flow10.]

a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig 15. Adequate-marking algorithm: the vertical axis represents the marking rates and the horizontal axis the simulation time. The situation of the network changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic characteristics by governing the marking rate. S1 is a TCP flow transmitting continuously with a target rate of 3 Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3 Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5 Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate and lags behind changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of resources by as much as the increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking rate of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet marking is quickly turned off when TMS detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).
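The exact update rule using α and β is defined earlier in the thesis; purely as an illustration of the behaviour described here (marking rises when throughput is below target and falls when above), a hypothetical proportional controller with made-up constants might look like this:

```python
def adjust_marking_rate(m: float, achieved: float, target: float,
                        alpha: float = 0.5, beta: float = 0.5) -> float:
    """Illustrative proportional update (alpha and beta are hypothetical
    constants, not the thesis's scaling factors): raise the marking rate
    when throughput is below target, lower it when above, clamped to [0, 1]."""
    if achieved < target:
        m = m + alpha * (target - achieved) / target
    else:
        m = m - beta * (achieved - target) / target
    return min(1.0, max(0.0, m))

# Below target: marking rises quickly toward 1.
assert adjust_marking_rate(0.5, achieved=1.5, target=3.0) == 0.75
# Above target: marking is turned down again.
assert adjust_marking_rate(0.5, achieved=4.5, target=3.0) == 0.25
```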

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5 Mbps and the capacity of the bottleneck link to 53 Mbps, i.e., an under-subscribed network. The experiment ran from t = 0 s to 500 s. Figure 17 presents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs in the two schemes, shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without an adequate marking rate for the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability whenever the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the marking probability of the sources, all the flows are able to adapt to network dynamics.

[Figure 16 plots omitted: each panel plots bandwidth (Mbps, 0-8) against simulation time (0-400 s), showing S1 (total throughput), S2 (total throughput), and S1 (marking rate).]

a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig 16. Marking packets through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time.


[Figure 17 plots omitted: each panel plots the bandwidth (Mbps, 0-7) achieved by flow1 through flow10 for aggregate flows 1 through 6.]

a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig 17. Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput, and the horizontal axis the aggregate flow number.

The Two Markers System showed improved performance for the several parameters we simulated. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP sources in a differentiated services network.

Table 5. The average throughput of the aggregate flows as RTT increases

  Aggregate flow (No.)       1        2        3        4        5        6
  RTT (ms)                   80       100      130      170      220      280
  E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
  E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. We used a network with the configuration shown in Figure 18. In the experiment we have six sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams, generated by infinite file transfers, originate at sources 1 through 4 and are destined to nodes 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios: an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rates of the sources are fixed. In the first scenario the aggregate reservation rate is 6 Mbps and the bottleneck capacity is set to 8 Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6 Mbps but the bottleneck capacity is set to 3 Mbps, so that the bottleneck link experiences congestion. The UDP flows send traffic at a rate of 1 Mbps. We assume that the RTT (without queuing delay) of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increase the number of TCP flows with a target rate of 1 Mbps from source 1 to 4 while each UDP flow remains constant at 1 Mbps, and the capacity of the bottleneck link changes from 8 Mbps to 3 Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) defined as follows [16]:

ρ = (Σi Ti) / C        (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link, respectively. The interval from ρ = 100% through 200% shown in Figure 19 reflects the reduced capacity of the bottleneck link, i.e., an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100        (10)

Therefore, an excess bandwidth of 0% denotes the target rate of each flow. First, as shown in Figure 19 (a), we examine TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
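Equations (9) and (10) are simple enough to check numerically. The sketch below assumes ExBW means the achieved rate minus the target rate, which is consistent with the negative values plotted in Figure 19:

```python
def link_utilisation(targets_mbps, capacity_mbps):
    """Equation (9): rho = (sum of the flows' target rates) / bottleneck capacity."""
    return sum(targets_mbps) / capacity_mbps

def excess_bw_percent(achieved_mbps, target_mbps):
    """Equation (10): excess bandwidth (achieved - target) as a
    percentage of the target rate; negative when below target."""
    return 100.0 * (achieved_mbps - target_mbps) / target_mbps

# Scenario from the text: six 1 Mbps flows (four TCP, two UDP).
assert link_utilisation([1, 1, 1, 1, 1, 1], capacity_mbps=8) == 0.75  # over-provisioned
assert link_utilisation([1, 1, 1, 1, 1, 1], capacity_mbps=3) == 2.0   # under-provisioned
assert excess_bw_percent(1.5, 1.0) == 50.0    # 50% above target
assert excess_bw_percent(0.5, 1.0) == -50.0   # 50% below target
```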


[Figure 19 plots omitted: each panel plots the average excess bandwidth (%, -100 to 200) of the TCP and UDP flows against link utilization (30-190%).]

a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig 19. Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis the link utilisation as a percentage.

However, since UDP sources send traffic into the network without any penalty, they achieve their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation the UDP flows experience is a diminished share of the excess bandwidth. In the Two Markers System we assume the following: (i) UDP flows in TMS are only marked OUT (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If the number of marked packets of a flow exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate, and the flow is therefore marked more, helping it reach its target rate. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside their contracts, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between the TCP and UDP flows at link utilisations of ρ = 60% and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
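The threshold test on E[mi] described above can be sketched as follows, interpreting E[mi] as the mean marking rate over all flows at the edge (our reading of the text; the function name is ours):

```python
def wants_more_bandwidth(marking_rates, i):
    """Sketch of the edge-embedded module's threshold test: flow i is
    considered to be demanding more bandwidth when its marking rate
    exceeds the average marking rate E[m] over all flows."""
    avg = sum(marking_rates) / len(marking_rates)
    return marking_rates[i] > avg

rates = [0.2, 0.4, 0.9, 0.3]   # hypothetical per-flow marking rates
assert wants_more_bandwidth(rates, 2)        # 0.9 > E[m] = 0.45
assert not wants_more_bandwidth(rates, 0)    # 0.2 < E[m]
```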

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1 Mbps and compare two marking schemes: TSW2CM and the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all the individual flows of the aggregated traffic pursue their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8 Mbps and 3 Mbps, respectively. The RTT parameters follow the setting of subsection 4.1, and the average throughputs of all the flows are again similar in the two schemes. First, we consider the results in the over-provisioned network. In Figure 20 (a), while the TCP flows merely manage to reach their target rates of 1 Mbps, the UDP flows capture all the excess bandwidth of about 0.5 Mbps. In Figure 20 (b), on the other hand, the TCP flows fairly share the excess bandwidth of about 0.3 Mbps owing to the marking strategy that differentiates them from the UDP flows; all the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probability in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly regardless of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that the bandwidth in the under-provisioned network is distributed more fairly (about 0.5 Mbps per flow) than in Figure 20 (a) by dint of treating UDP packets as out-of-profile. Consequently, the UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20 plots omitted: each panel plots the bandwidth (Mbps, 0-2) of TCP1-TCP4 and UDP1-UDP2 for the over-provisioned and under-provisioned cases.]

a) Time Sliding Window Two Color Marker
b) Two Markers System

Fig 20. Throughput according to network conditions: the vertical axis represents the achieved bandwidth, and the horizontal axis the aggregate flow number.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under variable SLA scopes. Two important issues arise in this setting, and much research has been devoted to them. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairly allocating the excess bandwidth between the two kinds of flows, owing to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and to allocate the bandwidth of the bottleneck link fairly to each flow with respect to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the marking probability according to changes in the network, marking packets in proportion to the reservation rate to be guaranteed. It also notifies TMS_II, the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, when resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to the flows in view of proportional fairness, the so-called marking rate-based fairness strategy. Consequently, it mitigates the unfairness of per-aggregation marking and the inefficient resource utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput of TCP flows in a RIO-based diffserv network. As expected, the simulation results showed that the Two Markers System reduces TCP impacts over the assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also analysed the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, the TCP flows could fairly share the excess bandwidth that UDP had previously dominated and achieve their target rates. Next, in an under-provisioned network, although none of the flows could achieve its target rate, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCE

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy, and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), Apr. 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS 2000, Pittsburgh, PA, June 2000, pp. 35-48.

Page 53: Two-differentiated Marking Strategies for TCP and UDP ...alumni.media.mit.edu/~starsu/Publications/J&C-10-Thesis'02-SHL.pdf · IPv4 Internet Protocol version 4, the current widely

39

same AF class Many applications that are candidates for quality of service are real-time applications that do not require a reliable transport protocol such as TCP They run over UDP as transport protocol and are non-responsive to congestion indication through loss In particular we are concerned that if TCP and UDP are both given the same AF class TCP flows will reduce their rates on packet drops while UDP flows wonrsquot change and get the entire excess bandwidth shown in [8] [9] [11] [14] in an over-provisioned network TCP flows will be also penalised more than non-responsive UDP flows with respect to service degradation in an under-provisioned network and it is impossible for all target rates to be completed

UDP flows in the TMS are dealt with out-of-profile to be taken at a disadvantage while TCP flows adapt to network dynamics as being adjusted marking rate through scaling factors α and β In the view of the core of a network RIO queue always drop preferentially UDP flows marked as OUT (red-DP2) but lower the drop precedence of TCP packets as many as scaling factor more than them Therefore it tries to guarantee the target rates of TCP flows as allocating excess bandwidth to them that UDP flows almost dominated Consequently TMS will compensate the grave performance degradation that network dynamics bring about

40

CHAPTER 4 SIMULATION AND ANALYSIS

41 TCP dynamics over Assured Services in a diffserv network

In this section we present the simulation results with two marking algorithms we have described in the previous section The simulation was done with the network simulator-2 (NS-v21b8a) For the sake of simulation we used a network with the configuration shown in Figure 14

Fig 14 Simulation topology I using the Two Markers System

In the simulation we have 10 sources (1 through 10 counting downwards) that communicate with one of ten different destinations and conducted two scenarios oversubscribed and non-oversubscribed network In the first scenario the capacity of the bottleneck link is fixed the aggregate reservation rate is adjusted as changing the number of flows with the target rate of 3Mbps or fixed below it so that the bottleneck is not oversubscribed In the second scenario the aggregate reservation rate increases above the bottleneck capacity so that the bottleneck link experiences congestion We assume that the RTT without queuing delay of each flow is randomly pocked from 80 to 280ms The sources are all TCP-Reno sources (unless specified otherwise) We compare two marking scheme in all the experiment Time Sliding Window Two Color Marker (TSW2CM) and Two Markers System (TMS) For the RIO implementation of TSW2CM the routers use RED with the values of 100 packets 400packets and 002 for min_in max_in and Pmax_in

41

and 50 packets 400 packets and 01 for min_out max_out and Pmax_out RED parameters in the TMS show in Table 1 and we set a queue weight of both schemes 0002

Table 4 RED parameter settings in the TMS

min_th max_th Pmax

DP0 100 400 002

DP1 100 400 005

DP2 50 400 01

First of all we investigate the simulation results with Adequate-marking algorithm as the role of the TCP-source marking module Figure 15 illustrates the results of an experiment with ten connections from S1-D11 to S10-D20 with target rate of 3Mbps respectively and the capacity of bottleneck link is set to 20Mbps Five flows from S1 through S5 start at time t = 10 (s) followed by two flows flow6 and flow7 at t = 120 (s) and the last three flows flow8 through flow10 at t = 220 (s) but sink at t = 350 First an interval of t = 10 (s) through t = 110 (s) represents an over-provisioned network and five flows compete for the excess bandwidth of 5Mbps Marking operation initially performs but temporary marking for sharing the excess bandwidth does Second in runtime form t = 120 (s) to 210 (s) with soft congestion seven flows try to achieve the bandwidth of 3Mbps ideally Marking in the interval is practically carried out so as to realise the reservation rate of user Next grave congestion appears from t = 220 (s) to t = 350 all the flows do their best for completing target rates but fail to Marking rate in the period performs more excessive than others Last in an interval of t = 330 (s) to 400 (s) TMS momentarily over-marks packets owing to congestion after flow8 through flow10 sink but it again properly marks them Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth the established edge marker continuously over-marks its packets as shown in Figure 15 (a) It shows somewhat irregular marking curves However since TMS_II adjusts marking rate based on the Adequate-marking algorithm of sources Figure 15 (b) shows that the number of marked packets to accomplish the target rate is fewer than in the case of Figure 15 (a) In both graphs below flows of (a) and (b) demonstrate a striking contrast between excess marking and Adequate-marking


While all the flows in Figure 15 (b) mark each packet in accordance with the changing condition of the network, those in Figure 15 (a) do not.
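The text above describes the effect of Adequate-marking rather than its exact rule. One way to sketch the intent, namely that a source marks only as many packets as its reservation rate requires, is the following (illustrative only, not the thesis's exact algorithm; the function name is ours):

```python
def adequate_marking_prob(measured_rate, target_rate):
    """Fraction of packets to mark in-profile: just enough to cover the
    reservation.  If the flow already exceeds its target, only the
    in-profile share is marked; if it falls short, every packet is marked
    to push the throughput toward the reservation rate."""
    if measured_rate <= target_rate:
        return 1.0
    return target_rate / measured_rate
```

A flow sending at 6Mbps against a 3Mbps reservation would thus mark only half of its packets, instead of over-marking as the fixed edge marker does.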

[plot: marking rate (Mbps, 0–6) vs. simulation time (0–400 s) for flow1–flow10]

a) Time Sliding Window Two Color Marker

[plot: marking rate (Mbps, 0–6) vs. simulation time (0–400 s) for flow1–flow10]

b) Two Markers System

Fig 15. Adequate-marking algorithm: the vertical axis represents the marking rate and the horizontal axis the simulation time. The network condition changes dynamically as the number of flows increases: over-provisioned, temporary congestion, then serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic features by governing the marking rate. S1 is a TCP flow sent continuously with a target rate of 3Mbps, and S2 is a TCP flow that turns on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network: TSW2CM does not adequately regulate the marking rate and lags behind changes in the network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of resources in proportion to S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 quickly rises (or falls) in response to the increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, more packets are marked; conversely, packet-marking is quickly turned off when the source detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bottleneck bandwidth more fairly than in Figure 16 (a).
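The paragraph above attributes TMS's fast reaction to the scaling factors α and β. A hedged sketch of such a multiplicative update of the marking probability (the values 1.25 and 0.8 are our illustrative choices, not the thesis's):

```python
def update_marking_rate(p, achieved, target, alpha=1.25, beta=0.8):
    """Scale the marking probability up by alpha while throughput is below
    the target rate, down by beta once the target is met; clamp to [0, 1]."""
    p = p * alpha if achieved < target else p * beta
    return min(max(p, 0.0), 1.0)
```

Because the step is multiplicative, a flow that suddenly loses bandwidth (e.g. when S2 turns on) ramps its marking rate back up within a few RTTs rather than drifting slowly as TSW2CM does.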

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, i.e., an under-subscribed network. The experiment ran from t = 0 (s) to t = 500 (s). Figure 17 presents the extent to which the average throughputs of the individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs of the two schemes, shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without a marking rate adequate to the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability whenever the total throughput is less than the capacity of the network. As stated before, since TMS regulates the drop precedence in proportion to the marking probability reported by the sources, all the flows are able to adapt to network dynamics.

[plot: bandwidth (Mbps, 0–8) vs. simulation time (0–400 s); S1 (total throughput), S2 (total throughput), S1 (marking rate)]

a) Time Sliding Window Two Color Marker

[plot: bandwidth (Mbps, 0–8) vs. simulation time (0–400 s); S1 (total throughput), S2 (total throughput), S1 (marking rate)]

b) Two Markers System

Fig 16. Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time.


[plot: bandwidth (Mbps, 0–7) vs. aggregate flow (1–6) for flow1–flow10]

a) Time Sliding Window Two Color Marker

[plot: bandwidth (Mbps, 0–7) vs. aggregate flow (1–6) for flow1–flow10]

b) Two Markers System

Fig 17. Throughput under the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate-flow index.

The Two Markers System showed improved performance for the several parameters we simulated. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5. The average throughput of the aggregate flows as RTT increases

Aggregate flow (No.)       1        2        3        4        5        6
RTT (ms)                   80       100      130      170      220      280
E[Thr] of TSW2CM (Mbps)    4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)       5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that each communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for nodes 7 through 10; constant-rate UDP packet streams originate at sources 5-6 and are destined for nodes 11-12. We carried out two scenarios, an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows send traffic at the rate of 1Mbps. We assume that the RTT, without queuing delay, of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increase the number of TCP flows with a target rate of 1Mbps from source 1 to 4, while each UDP flow is held constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take account of network dynamics. We define the link utilisation (ρ) as follows [16]:

ρ = (Σ Ti) / C    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link, respectively. The interval from ρ = 100 (%) to 200 (%) shown in Figure 19 reflects the change of the bottleneck capacity, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = 100 × ExBW / T    (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we examine TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80 %) of network load, as shown in the figure.
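Equations (9) and (10) reduce to two one-line helpers; for instance, the six 1Mbps reservations of the first scenario over an 8Mbps bottleneck give ρ = 75 (%):

```python
def link_utilisation(target_rates, capacity):
    """Equation (9): rho = sum(Ti) / C, returned in percent."""
    return 100.0 * sum(target_rates) / capacity

def excess_bw_percent(ex_bw, target_rate):
    """Equation (10): excess bandwidth as a percentage of the target rate T."""
    return 100.0 * ex_bw / target_rate

# Six 1 Mbps reservations over an 8 Mbps bottleneck: over-provisioned.
rho_over = link_utilisation([1.0] * 6, 8.0)   # 75.0 (%)
# The same reservations over a 3 Mbps bottleneck: under-provisioned.
rho_under = link_utilisation([1.0] * 6, 3.0)  # 200.0 (%)
```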


[plot: excess bandwidth (%, -100 to 200) vs. link utilisation (%, 30–190); TCP (average), UDP (average)]

a) Time Sliding Window Two Color Marker

[plot: excess bandwidth (%, -100 to 200) vs. link utilisation (%, 30–190); TCP (average), UDP (average)]

b) Two Markers System

Fig 19. Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis the link utilisation in percent.

However, since UDP sources inject traffic into the network without any penalty, they meet their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation the UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only given the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If the number of marked packets exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; such a flow is therefore marked more and effectively granted a higher rate than the others. The distribution of excess bandwidth in the over-provisioned network, shown in Figure 19 (b), is fairer than that in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside their contracts, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between the TCP and UDP flows for link utilisation ρ = 60 (%) and over: both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
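The E[mi] comparison described above can be sketched as follows (mapping above-average flows to DP0 and the rest to DP1 is our reading of the scheme, and the function names are hypothetical):

```python
# Sketch of the edge-embedded module's precedence decision: UDP is always
# out-of-profile; a TCP flow whose marking rate exceeds the aggregate
# average E[m_i] is treated as demanding more bandwidth.
def drop_precedence(marking_rates, i, is_udp=False):
    if is_udp:
        return "DP2"  # assumption (i): UDP only ever gets OUT marking
    avg = sum(marking_rates) / len(marking_rates)  # threshold E[m_i]
    return "DP0" if marking_rates[i] > avg else "DP1"
```

With marking rates [0.9, 0.5, 0.4], for example, only the first flow would receive the protected precedence, since it alone exceeds the average of 0.6.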

Next, we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare the two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows how far the average throughputs of the individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck capacities of 8Mbps and 3Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1, and the average throughput of all the flows is again similar in the two schemes. First, we consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows barely accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. In Figure 20 (b), on the other hand, the TCP flows fairly share the excess bandwidth of about 0.3Mbps, owing to the different marking strategy applied against the UDP flows; all the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly, regardless of congestion; that is, UDP gains unfairly at the expense of the TCP flows. However, Figure 20 (b) shows that in the under-provisioned network the bandwidth is distributed fairly (about 0.5Mbps each), unlike in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, the UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[plot: bandwidth (Mbps, 0–2) vs. aggregate flow (1–4); TCP1–TCP4 and UDP1–UDP2, over- and under-provisioned]

a) Time Sliding Window Two Color Marker

[plot: bandwidth (Mbps, 0–2) vs. aggregate flow (1–4); TCP1–TCP4 and UDP1–UDP2, over- and under-provisioned]

b) Two Markers System

Fig 20. Throughput according to network conditions: the vertical axis represents the achieved bandwidth and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and has shown how the Two Markers System can be applied under variable SLA scopes. Two important issues have arisen in this context and much research has been performed on them to date: one is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to a fairness problem in allocating the excess bandwidth between the two kinds of flows, due to the greedy consumption of the UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and to allocate the bandwidth of a bottleneck link fairly to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusted the marking probability according to the changes in the network, so it marked packets in proportion to the reservation rates the users desire to guarantee. It also notified TMS_II, the edge-embedded marking module, of the marking rates to be used for marking the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and, when resources in the diffserv network are over-provisioned, allocated the bandwidth of the bottleneck link to the flows in view of proportional fairness, the marking rate-based fairness strategy. Consequently, it mitigated both the unfairness of per-aggregation marking and the inefficient resource utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. As expected, the simulation results showed that the Two Markers System reduces the TCP impacts on the assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also analysed the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly share the excess bandwidth that UDP had almost dominated and achieve its target rates. Next, in an under-provisioned network, although none of the flows could achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.



50

flows However Figure 20 (b) shows that each flow in under-provisioned network is fairly distributed (about 05Mbps) rather than in the Figure 20 (a) by dint of dealing with UDP packets as out-of-profile Consequently UDP flows are penalised more than sensitive TCP flows as shown Figure 20 (b)

0

025

05

075

1

125

15

175

2

1 2 3 4

Aggregate flow

Bandw

idth

(M

bps)

TCP1 (over-provisioned)

TCP2 TCP3

TCP4

TCP1 (under-provisioned)TCP2 TCP3

TCP4 UDP1 (over-provisioned)UDP2

UDP1 (under-provisioned)

UDP2

a) Time Sliding Window Two Color Marker

0

025

05

075

1

125

15

175

2

1 2 3 4

Aggregate flow

Bandw

idth

(M

bps)

TCP1 (over-provisioned)TCP2

TCP3

TCP4 TCP1 (under-provisioned)TCP2

TCP3 TCP4

UDP1 (over-provisioned)

UDP2

UDP1 (under-provisioned)

UDP2

b) Two Markers System

Fig 20 Throughput according to network conditions the vertical axis represents the target rate and the horizontal axis signifies aggregate flows

51

CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and shown what Two Markers System can apply in the variable SLA Scope In this case two important issues were on the rise and many researches have been performed until now One is TCP dynamics over AF PHB in a diffserv network the other is the interaction between TCP and UDP flows in the AF PHB The former leads to failing to realize userrsquos target rate owing to weaknesses in the TCP congestion control algorithm and the latter gives rise to the problem of fairness for allocating the excess bandwidth between the two flows due to overeating of UDP flows

The strongest point of the Two Markers System proposed in this paper is that two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow for the target rates respectively First TMS_I using the Adequate-marking algorithm properly adjusted the probability of marking according to the changes of a network so it marked packets as much as reservation rates they desire to guarantee It also notified TMS_II called the edge-embedded marking module of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network Next TMS_II at the edge of the network marked packets based on the marking rates of sources and allocated the bandwidth of bottleneck link to flows in view of proportional fairness called Marking rate-based fairness strategy if resources in the diffserv network are over-provisioned Consequently it mitigated the problem of unfairness on the per-aggregation marking and inefficient utilisation on per-flow marking We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network The simulation results as expected showed that Two Markers System reduced TCP impacts over assured service and fairly allocated the bottleneck bandwidth among TCP flows

We have also described the analysis for the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study reciprocal effects between TCP and UDP flows in a RIO-based diffserv network First in an over-provisioned network as TMS regulates the drop precedence TCP could fairly

52

share excess bandwidth that UDP almost dominated and achieve their target rates Next in an under-provisioned network although all the flows couldnrsquot achieve their target rates they fairly experienced degradation in proportion to their target bandwidth

In the near future we will study efficient scheme that TCP-source marking module informs edge-embedded marking module of the information of marking and the effect of the interaction of TCP and UDP flows according to variable parameters of RIO

53

REFERENCE

[1] W Feng D Kandlur D Saha andt K Shin Adaptive Packet Marking for Providing Differentiated Services in the Internet In Proc ICNPrsquo98 Austin TX Oct (1998) 108-117

[2] W Feng D Kandlur S Saha and K Shin Understanding TCP Dynamics in an Integrated Services Internet in Proc NOSSDAVrsquo97 St Louis MO May (1997) 279-290

[3] S Floyd and V Jacobson Like-sharing and Resource Management Models for Packet Networks IEEEACM Transactions on Networking vol 3 no 4 Aug (1995) 365-386

[4] J Heinanen F Baker W Weiss and J Wroclawski Assured Forwarding PHB Group RFC2597 June (1999)

[5] K Nichols S Blake F Baker and D Black Definition of the Differentiated Service Field (DS Field) in the Ipv4 and Ipv6 Headers RFC2474 Dec (1998)

[6] S Blake D Black M Carlson E Davies Z Wang and W Weiss An Architecture for Differentiated Services RFC2475 Dec (1998)

[7] V Jacobson and M Karels Congestion Avoidance and Control in Proc SIGCOMMrsquo88 Stanford CA Aug (1998) 314-329

[8] I Yeom and A L N Reddy Realizing Throughput Guarantees in a Differentiated Service Network in Proc ICMCSrsquo99 Florence Italy June (1999) 372-376

[9] N Seddigh B Nandy P Pieda Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network in Proc GLOBECOMrsquo99 Rio De Janeiro Dec (1999)

[10] C Gbaguidi H J Einsiedler P Hurley W Almesberger and J Hubaux A Survey of Differentiated Services Proposals for the Internet SSC Tech Report SSC1998020 httpsscwwwepflch May (1998)

[11] J Ibanez K Nichols Preliminary Simulation Evaluation of an Assured Service IETF Internet Draft draft-ibanez-diffserv-assured-eval-00txt Aug (1998)

[12] S H Lee S J Seok S J Lee S K Youm and C H Kang Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network In Proc QofISrsquo01 LNCS 2156 Coimbra Sept (2001) 207-221

54

[13] IBHdeA Alves JF de Rezende LFM de Moraes Evaluating Fairness in Aggregated Traffic Marking in Proc GLOBECOMrsquo00 vol1 San Francisco Nov (2000) 445-449

[14] N Seddigh B Nandy P Pieda Study of TCP and UDP Interaction for AF PHB IETF draft draft-nsbnpp-diffserv-udptcpaf-01pdf Aug (1999)

[15] S H Lee S J Seok S J Lee and C H Kang Study of TCP and UDP flows in a Differentiated Services Network In Proc MMNSrsquo01 LNCS 2216 Chicago Oct (2001) 198-203

[16] A Kolarov Study of the TCPUDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks in Proc HPSRrsquo01 Dallas May (2001) 190-196

[17] Y Bernet J Binder S Blake M Carlson B E Carpenter S Keshav E Davies B Ohlman Z Wang W Weiss A Framework for Differentiated Services IETF Internet Draft draft-ietf-diffserv-framework-02txt Feb (1999)

[18] S Blake D Black M Carlson E Davies Z Wang and W Weiss An Architecture for Aifferentiated Services RFC 2475 Aug (1998)

[19] M Vanhala Differentiated Services-architecture S-38128 Teletekniikan erikoistyo (1999)

[20] V Jacobson K Nichols and K Poduri An Expedited Forwarding PHB IETF Internet Draft draft-ietf-diffserv-phb-ef-01txt Nov (1998)

[21] V Jacobson Modified TCP Congestion Avoidance Algorithm end2end-interest mailing list (ftpftpeelblgovemailvanj90apr30txt) April (1990)

[22] V Paxson End-to-End Interent Packet Dynamics In Proc Of ACM SIGCOMM Sept (1997)

[23] M Mathis J Mahdavi S Flyod and A Romanow TCP Selective Acknowledgment Options RFC 2018 Oct (1996)

[24] S Flyod and T Henderson The NewReno Modification to TCPrsquos Fast Recovery Algorithm IETF Internet Draft draft-ietf-tcpimpl-newreno-02txt Feb (1999)

[25] L S Brakmo S W OrsquoMalley and L L Peterson TCP Vegas New Techniques for Congestion Detection and Avoidance In Proceedings of ACM SIGCOMM Oct (1994) 24-35

[26] S Floyd TCP and Explicit Congestion Notification Computer Communication

55

Review Vol 24 No 5 Oct (1994) 10-23 [27] S Floyd and V Jacobson Random Early Detection Gateways for Congestion

Avoidance ACMIEEE Transactions on Networking Vol 1 No 4 Aug (1993) 397-413

[28] A Feroz A Rao and S Kalyanaraman TCP-friendly Traffic Marker for IP Differentiated Services in Proc IWQoSrsquo2000 Pittsburgh PA June (2000) 35-48

Page 55: Two-differentiated Marking Strategies for TCP and UDP ...alumni.media.mit.edu/~starsu/Publications/J&C-10-Thesis'02-SHL.pdf · IPv4 Internet Protocol version 4, the current widely

41

and 50 packets, 400 packets and 0.1 for min_out, max_out and Pmax_out. The RED parameter settings in the TMS are shown in Table 4, and we set the queue weight of both schemes to 0.002.

Table 4. RED parameter settings in the TMS

        min_th   max_th   Pmax
  DP0     100      400    0.02
  DP1     100      400    0.05
  DP2      50      400    0.10
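The RED drop rule these parameters feed is linear between min_th and max_th. The following is a minimal sketch of the standard RED drop-probability computation applied per drop precedence with the Table 4 settings, not the exact RIO implementation used in the simulations:

```python
# Minimal sketch of the RED drop rule applied per drop precedence,
# using the Table 4 thresholds (illustrative, not the RIO/ns-2 code).

RED_PARAMS = {          # drop precedence -> (min_th, max_th, Pmax)
    "DP0": (100, 400, 0.02),
    "DP1": (100, 400, 0.05),
    "DP2": (50,  400, 0.10),
}

def drop_probability(dp: str, avg_queue: float) -> float:
    """Linear RED drop probability for average queue size `avg_queue`."""
    min_th, max_th, p_max = RED_PARAMS[dp]
    if avg_queue < min_th:
        return 0.0                      # below min_th: never drop
    if avg_queue >= max_th:
        return 1.0                      # at or above max_th: always drop
    return p_max * (avg_queue - min_th) / (max_th - min_th)

# With the same average queue size, out-of-profile (DP2) packets are
# dropped far more aggressively than in-profile (DP0) packets:
for dp in ("DP0", "DP1", "DP2"):
    print(dp, drop_probability(dp, 250))
```

This ordering of the three curves is what gives OUT-marked packets the highest drop precedence under congestion.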

First of all, we investigate the simulation results of the Adequate-marking algorithm in its role as the TCP-source marking module. Figure 15 illustrates the results of an experiment with ten connections, S1-D11 through S10-D20, each with a target rate of 3Mbps, where the capacity of the bottleneck link is set to 20Mbps. Five flows, S1 through S5, start at time t = 10 s, followed by two flows, flow6 and flow7, at t = 120 s, and the last three flows, flow8 through flow10, at t = 220 s; the last three stop at t = 350 s. First, the interval from t = 10 s to t = 110 s represents an over-provisioned network, in which the five flows compete for the excess bandwidth of 5Mbps; marking is performed from the start, but only temporarily, for sharing the excess bandwidth. Second, from t = 120 s to 210 s, under soft congestion, the seven flows try to achieve the ideal bandwidth of 3Mbps each, and marking in this interval is carried out chiefly to realise the reservation rate of each user. Next, grave congestion appears from t = 220 s to t = 350 s: all the flows do their best to complete their target rates but fail, and the marking rate in this period is more excessive than in the other periods. Last, in the interval from t = 330 s to 400 s, TMS momentarily over-marks packets owing to the congestion left after flow8 through flow10 stop, but it soon marks them properly again. Since TCP windowing restricts the aggregated flows from competing for the excess bandwidth, the established edge marker continuously over-marks its packets, as shown in Figure 15 (a), which exhibits somewhat irregular marking curves. However, since TMS_II adjusts the marking rate based on the Adequate-marking algorithm of the sources, Figure 15 (b) shows that the number of packets marked to accomplish the target rate is smaller than in Figure 15 (a). The two graphs demonstrate a striking contrast between excess marking and Adequate-marking: while all the flows in Figure 15 (b) mark each packet in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the bandwidth (Mbps) of flow1 through flow10 over the simulation time 0-400 s.]

Fig. 15. Adequate-marking algorithm: the vertical axis represents the marking rate and the horizontal axis the simulation time. The network state changes dynamically as the number of flows increases: over-provisioned, temporary congestion, and then serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic characteristics by governing the marking rate. S1 is a TCP flow sending infinitely with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate behind the changes in network load, rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of its resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; however, packet-marking is quickly turned off when the system detects that the available bandwidth is sufficient to satisfy the target rate. As expected, the S1 and S2 flows achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).
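The exact update rule for the scaling factors is not reproduced in this chapter, so the following is only a hypothetical sketch of the behaviour just described: the marking probability rises quickly (scaled by α) when throughput is below the target and falls quickly (scaled by β) once the target is exceeded. The function name and the linear form are our assumptions, not the thesis's exact Adequate-marking algorithm:

```python
# Hypothetical sketch of rate-adaptive marking with scaling factors
# alpha and beta (illustrative only; not the exact TMS update rule).

def update_marking_prob(p, throughput, target, alpha=0.25, beta=0.25):
    """Move the marking probability toward achieving the target rate."""
    if throughput < target:
        # Below target: mark more, scaled by alpha and the relative deficit.
        p += alpha * (target - throughput) / target
    else:
        # Target met: back off quickly, scaled by beta and the surplus.
        p -= beta * (throughput - target) / target
    return min(1.0, max(0.0, p))    # keep it a valid probability

p = 0.5
p = update_marking_prob(p, throughput=2.0, target=3.0)   # deficit: p rises
```

Larger α and β make the marker react faster to S2 turning on or off, which is the effect Figure 16 (b) illustrates.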

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment was run from t = 0 s to 500 s. Figure 17 shows that the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set differently, the average throughputs of the two schemes, shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without a marking rate adequate to the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the marking probability reported by the sources, all the flows are able to adapt to network dynamics.
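Read this way, the marking rate-based fairness strategy gives each flow its reservation plus a share of the excess bandwidth proportional to its marking probability. A small sketch of that allocation follows; it is an idealised fluid model with our own function name, not the packet-level mechanism:

```python
# Idealised sketch of marking rate-based fairness: reservations are met
# first, and the leftover bottleneck bandwidth is split in proportion
# to the flows' marking probabilities (fluid model, illustrative only).

def allocate_bandwidth(capacity, reservations, marking_probs):
    """Per-flow share = reservation + marking-proportional excess."""
    excess = capacity - sum(reservations)
    total = sum(marking_probs)
    return [r + excess * p / total
            for r, p in zip(reservations, marking_probs)]

# The section 4.1 setting: ten 5 Mbps reservations on a 53 Mbps bottleneck.
shares = allocate_bandwidth(53.0, [5.0] * 10, [0.1] * 10)
# equal marking probabilities -> each flow gets 5.0 + 3.0/10 = 5.3 Mbps
```

With equal marking probabilities this degenerates to an equal split of the excess, matching the roughly 5.2-5.3 Mbps averages reported for TMS in Table 5.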

[Figure 16: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the total throughput of S1 and S2 and the marking rate of S1 (Mbps) over the simulation time 0-400 s.]

Fig. 16. Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source and the horizontal axis the simulation time.

[Figure 17: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the bandwidth (Mbps) achieved by flow1 through flow10 for aggregate flows 1-6.]

Fig. 17. Throughput for the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flows.

The Two Markers System showed improved performance for the several parameters we used in the simulations. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5. The average throughput of the aggregate flows as RTT increases

Aggregate flow (No.)        1       2       3       4       5       6
RTT (ms)                   80     100     130     170     220     280
E[Thr] of TSW2CM (Mbps)  4.8989  4.9703  4.9183  4.9391  4.9247  4.9098
E[Thr] of TMS (Mbps)     5.2908  5.2207  5.2373  5.1944  5.2763  5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulations we used a network with the configuration shown in Figure 18. In the experiment we have 6 sources (1 through 6, counting downwards) that each communicate with one of six different destinations. Long-lived packet streams, generated by infinite file transfers, originate at sources 1 through 4 and are destined to nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios, an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT, without queuing delay, of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig. 18. Simulation topology II using the Two Markers System.

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we apply the strategy while increasing the TCP flows with a target rate of 1Mbps from sources 1 to 4, with each UDP flow held constant at 1Mbps, and the capacity of the bottleneck link changing from 8Mbps to 3Mbps in order to take account of network dynamics. We use the link utilisation (ρ) as follows [16]:

ρ = (Σi Ti) / C (%)    (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval from ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW = ExBW / T (%)    (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
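Both evaluation metrics are straightforward to compute. A minimal sketch of Eqs. (9) and (10) follows; the function names are ours and rates are in Mbps:

```python
# Sketch of the evaluation metrics: link utilisation rho (Eq. 9) and
# excess bandwidth as a percentage of the target rate (Eq. 10).

def link_utilisation(target_rates, capacity):
    """rho (%) = sum of the flows' target rates over the bottleneck capacity."""
    return 100.0 * sum(target_rates) / capacity

def excess_bandwidth_pct(throughput, target):
    """Excess bandwidth (%): 0 means the flow got exactly its target rate."""
    return 100.0 * (throughput - target) / target

# First scenario of section 4.2: six 1 Mbps reservations, 8 Mbps bottleneck.
rho = link_utilisation([1.0] * 6, 8.0)      # 75% -> over-provisioned
```

Shrinking the bottleneck to 3 Mbps in the second scenario pushes ρ to 200%, the right-hand end of the x-axis in Figure 19.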


[Figure 19: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the average excess bandwidth (%) of the TCP and UDP flows against link utilisation from 30 to 190 (%).]

Fig. 19. Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate and the horizontal axis signifies the link utilisation as a percentage.

However, since UDP sources push traffic into the network without any penalty, they complete their target rates in all conditions and consume all the excess bandwidth in the under-provisioned state. The only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only dealt the OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; therefore the flow is marked more and obtains a higher rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT packets. The excess bandwidth in Figure 19 (b) is, percentage-wise, almost equally divided between the TCP and UDP flows at link utilisations of ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
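Under the two assumptions above, the edge-embedded module's marking decision can be sketched as follows. This is a simplification into three drop-precedence labels, and the function and variable names are ours:

```python
# Sketch of the edge-embedded marking decision under assumptions (i)
# and (ii): UDP is always marked OUT (DP2), while a TCP flow whose
# marking rate m_i exceeds the average E[m_i] is treated as demanding
# more bandwidth and is marked in-profile more often (illustrative).

def edge_drop_precedence(protocol, m_i, avg_m):
    if protocol == "UDP":
        return "DP2"                  # assumption (i): always out-of-profile
    # Assumption (ii): a marking rate above the average signals demand.
    return "DP0" if m_i > avg_m else "DP1"

tcp_rates = [0.8, 0.4, 0.3]           # hypothetical per-flow marking rates
avg = sum(tcp_rates) / len(tcp_rates) # E[m_i] = 0.5
labels = [edge_drop_precedence("TCP", m, avg) for m in tcp_rates]
```

Because UDP packets never leave DP2, the RIO queue penalises them first whenever congestion appears, which is what equalises the curves in Figure 19 (b).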

Next we present the simulation results concerning the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows that the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1, and the average throughput of all the flows in the two schemes is similar. First we consider the results of the experiment in the over-provisioned network. In Figure 20 (a), while the TCP flows barely accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth, about 0.5Mbps each. In Figure 20 (b), on the other hand, the TCP flows fairly share the excess bandwidth, about 0.3Mbps each, owing to the different marking strategy applied to UDP flows; every flow also satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows transmit constantly, regardless of congestion; that is, UDP gains unfairly at the expense of the TCP flows. Figure 20 (b), however, shows that bandwidth in the under-provisioned network is distributed more fairly (about 0.5Mbps per flow) than in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the bandwidth (Mbps) of TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases.]

Fig. 20. Throughput according to network conditions: the vertical axis represents the achieved bandwidth (Mbps) and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied within a variable SLA scope. Two important issues have arisen in this area, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise the user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the over-consumption of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the marking probability according to the changes in the network, so that it marks packets in proportion to the reservation rates the users desire to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to the flows in view of proportional fairness, called the marking rate-based fairness strategy. Consequently, it mitigates the problems of unfairness in per-aggregate marking and of inefficient utilisation in per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, the TCP flows could fairly share the excess bandwidth that UDP had previously dominated, and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, as well as the effect of the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, D. Saha and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyo, 1999.

[20] V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. of ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. of ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao and S. Kalyanaraman, "TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.

Page 56: Two-differentiated Marking Strategies for TCP and UDP ...alumni.media.mit.edu/~starsu/Publications/J&C-10-Thesis'02-SHL.pdf · IPv4 Internet Protocol version 4, the current widely

42

While all the flows in Figure 15 (b) mark each packet of a flow in compliance with the changing condition of the network, those in Figure 15 (a) do not.

[Figure 15 plots omitted: panels a) Time Sliding Window Two Color Marker and b) Two Markers System, each showing the bandwidth (Mbps) of flows 1-10 over simulation time 0-400 s.]

Fig. 15. Adequate-marking algorithm: the vertical axis represents the marking rates and the horizontal axis the simulation time. The network condition changes dynamically as the number of flows increases: over-provisioned, temporary congestion, then serious congestion.


Second, we study realising the target rate and sharing resources between two flows with different traffic characteristics by governing the marking rate. S1 is a TCP flow sending continuously with a target rate of 3Mbps, and S2 is a TCP flow turning on and off every 50 seconds with a 3Mbps target bandwidth. In this experiment the bottleneck link capacity is 5.5Mbps. As shown in Figure 16 (a), S1 reacts slowly to changes in the network, and TSW2CM does not adequately regulate the marking rate, lagging behind changes in the network load: rising slowly in response to an increased traffic load and falling slowly in response to a decreased one. Therefore, whenever S2 turns on, S1 is deprived of resources by as much as S2's increment, and S1 does not fairly share the bandwidth of the bottleneck link with S2. The simulation was repeated to investigate the effect of TMS, as shown in Figure 16 (b). Since TMS can control the marking rate through the scaling factors α and β so as to adapt to network dynamics, the marking probability of S1 responds rapidly to the changed network load. Whenever S2 turns on, for instance, the marking rate of S1 rises (or falls) quickly in response to the increased (or decreased) traffic load so as to achieve its target rate. That is, when more bandwidth is needed to achieve the target rate, packets are marked more; packet-marking is turned off quickly, however, when the marker detects that the available bandwidth is sufficient to satisfy the target rate. As expected, S1 and S2 achieve their target rates during periods of increased traffic load and share the bandwidth of the bottleneck link more fairly than in Figure 16 (a).
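The α/β-governed adaptation described above can be sketched as follows. This is an illustrative simplification only: the update form, parameter values and function names are assumptions, not the thesis's exact TMS_I algorithm.

```python
def update_marking_prob(p, achieved_rate, target_rate, alpha=0.5, beta=0.5):
    """Illustrative TMS_I-style adaptation: raise the marking probability
    when the flow falls short of its target rate, lower it when the target
    is already satisfied. alpha and beta scale how fast marking reacts."""
    if achieved_rate < target_rate:
        # more bandwidth is needed: mark packets more
        p += alpha * (target_rate - achieved_rate) / target_rate
    else:
        # available bandwidth suffices: back the marking off quickly
        p -= beta * (achieved_rate - target_rate) / target_rate
    return min(1.0, max(0.0, p))

# S1 at 2 Mbps against a 3 Mbps target: the marking probability rises
p = update_marking_prob(0.5, achieved_rate=2.0, target_rate=3.0)
```

Larger α and β make the marker react faster to load changes, which is the behaviour the text attributes to TMS in contrast to TSW2CM's lag.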

Finally, we present the simulation results for the marking rate-based fairness algorithm described in subsection 2.4. We set the reservation rate of each flow to 5Mbps and the capacity of the bottleneck link to 53Mbps, namely an under-subscribed network. The experiment ran from t = 0 s to 500 s. Figure 17 presents the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates with the parameters above. Although the RTTs of the aggregate flows are set up differently, their average throughputs in both schemes, as shown in Table 5, are similar to each other. Each flow in Figure 17 (a) often fails to achieve its target rate because it receives bandwidth without a marking rate adequate to the changed network load; that is to say, TSW2CM marks packets based only on the traffic profile, without any knowledge of individual flows. In Figure 17 (b), however, each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to its marking probability when the total throughput is less than the capacity of the network. As stated before, since TMS regulates drop precedence in proportion to the marking probabilities reported by the sources, all the flows are able to adapt to network dynamics.
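The marking rate-based fairness idea, in which every flow first receives its reservation and the excess capacity of an under-subscribed link is divided in proportion to the flows' marking rates, can be sketched as follows. The function name and numbers are illustrative, not the exact algorithm of subsection 2.4.

```python
def allocate_bandwidth(capacity, reservations, marking_rates):
    """Give every flow its reservation first, then divide the excess
    capacity of an under-subscribed link in proportion to the flows'
    marking rates (the marking rate-based fairness idea)."""
    excess = capacity - sum(reservations)
    assert excess >= 0, "sketch covers the under-subscribed case only"
    total = sum(marking_rates)
    return [r + excess * m / total
            for r, m in zip(reservations, marking_rates)]

# Two flows with 1 Mbps reservations on a 5.5 Mbps link; the flow that
# marks more aggressively receives the larger share of the 3.5 Mbps excess.
shares = allocate_bandwidth(5.5, [1.0, 1.0], [0.6, 0.2])
```

The allocation always sums to the link capacity and never falls below a flow's reservation, which is the property the text claims for TMS in the under-subscribed case.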

[Figure 16 plots omitted: panels a) Time Sliding Window Two Color Marker and b) Two Markers System, each showing S1 total throughput, S2 total throughput and S1 marking rate (Mbps) over simulation time 0-400 s.]

Fig. 16. Packet marking through the scaling factors: the vertical axis represents the throughput and marking rate of each source, and the horizontal axis the simulation time.


[Figure 17 plots omitted: panels a) Time Sliding Window Two Color Marker and b) Two Markers System, each showing the bandwidth (Mbps) of flows 1-10 for aggregate flows 1-6.]

Fig. 17. Throughput under the marking rate-based fairness strategy: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flow index.

The Two Markers System showed improved performance for the several parameters we simulated. However, implementing the TCP-source marking module at the source raises the issue of modifying the TCP source in a differentiated services network.

Table 5. The average throughput of aggregate flows as RTT increases

  Aggregate flow (No.)       1       2       3       4       5       6
  RTT (ms)                  80     100     130     170     220     280
  E[Thr] of TSW2CM (Mbps)  4.8989  4.9703  4.9183  4.9391  4.9247  4.9098
  E[Thr] of TMS (Mbps)     5.2908  5.2207  5.2373  5.1944  5.2763  5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have six sources (1 through 6, counting downwards), each communicating with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined for nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined for nodes 11-12. We carried out two scenarios, an over-provisioned and an under-provisioned network; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig. 18. Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on sharing excess bandwidth according to the link utilisation. For this experiment we elaborated the strategy by increasing the number of TCP flows with a target rate of 1Mbps from source 1 to 4, while each UDP flow is constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take network dynamics into consideration. We use the link utilisation (ρ) as follows [16]:

ρ = (Σi Ti / C) × 100 (%)   (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval of ρ = 100 (%) through 200 (%) shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW = (ExBW / T) × 100 (%)   (10)

Therefore an excess bandwidth of 0 (%) denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under the changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load shown in the figure.
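Equations (9) and (10) are straightforward to compute; a small sketch follows (function names are illustrative):

```python
def link_utilisation(targets, capacity):
    """Equation (9): rho = (sum of target rates / bottleneck capacity) * 100 (%)."""
    return 100.0 * sum(targets) / capacity

def excess_bandwidth(achieved, target):
    """Equation (10): the excess bandwidth a flow receives, as a percentage
    of its target rate; 0 % means the flow got exactly its target."""
    return 100.0 * (achieved - target) / target

# Scenario from the text: four 1 Mbps TCP targets plus two 1 Mbps UDP flows.
rho_over = link_utilisation([1.0] * 6, capacity=8.0)   # 75.0, over-provisioned
rho_under = link_utilisation([1.0] * 6, capacity=3.0)  # 200.0, under-provisioned
```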


[Figure 19 plots omitted: panels a) Time Sliding Window Two Color Marker and b) Two Markers System, each showing the average TCP and UDP excess bandwidth (%) against link utilization of 30-190 %.]

Fig. 19. Sharing of excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis the link utilisation in percent.

However, since UDP sources inject traffic into the network without any penalty, they achieve their target rates under all conditions and consume all the excess bandwidth in the under-provisioned state; the only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only dealt with by OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in the demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than the others in order to achieve its reservation rate; therefore the flow is marked more and has a higher target rate than the others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT. The excess bandwidth in Figure 19 (b) is percentage-wise almost equally divided between TCP and UDP flows at link utilisation ρ = 60 (%) and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
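Assumptions (i) and (ii) above translate into a simple marking decision at the edge. The sketch below illustrates that rule; the names are ours, and the real TMS_II works on measured marking rates rather than a single threshold comparison.

```python
def edge_mark(flow_kind, marking_rate, avg_marking_rate):
    """Illustrative edge-embedded marking rule: UDP packets are always
    marked OUT (drop precedence DP2, red); a TCP flow whose marking rate
    exceeds the average E[m_i] is judged to be demanding more bandwidth
    for its reservation and receives an in-profile (IN) mark."""
    if flow_kind == "udp":
        return "OUT"
    return "IN" if marking_rate > avg_marking_rate else "OUT"

marks = [edge_mark(kind, m, 0.5)
         for kind, m in [("udp", 0.9), ("tcp", 0.9), ("tcp", 0.3)]]
# UDP stays OUT regardless of its marking rate; only the demanding TCP flow is IN
```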

Next we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other the marking strategy of TMS. Figure 20 shows the results: the average throughputs of all individual flows of the aggregated traffic realise their target rates in the over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. Also, the average throughput of all the flows in both schemes is similar to each other. First we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows only just accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, owing to the different marking strategy applied to UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to their marking probabilities in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, irrespective of congestion; that is, UDP gains unfairly at the expense of TCP flows. However, Figure 20 (b) shows that bandwidth in the under-provisioned network is distributed to each flow more fairly (about 0.5Mbps) than in Figure 20 (a), by dint of dealing with UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).

[Figure 20 plots omitted: panels a) Time Sliding Window Two Color Marker and b) Two Markers System, each showing the bandwidth (Mbps) of TCP1-4 and UDP1-2 in the over-provisioned and under-provisioned cases for aggregate flows 1-4.]

Fig. 20. Throughput according to network conditions: the vertical axis represents the achieved throughput and the horizontal axis the aggregate flows.


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this area and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise a user's target rate owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the overconsumption of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow according to its target rate. First, TMS_I, using the adequate-marking algorithm, properly adjusted the marking probability according to the changes in the network, so it marked packets in proportion to the reservation rate each flow desires to guarantee. It also notified TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marked packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocated the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy. Consequently it mitigated the unfairness of per-aggregate marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced TCP impacts over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.

We have also described the analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP had almost dominated and achieve their target rates. Next, in an under-provisioned network, although none of the flows could achieve their target rates, they experienced degradation fairly, in proportion to their target bandwidths.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290

[3] S. Floyd and V. Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, Assured Forwarding PHB Group, RFC 2597, June (1999)

[5] K. Nichols, S. Blake, F. Baker and D. Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec. (1998)

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Dec. (1998)

[7] V. Jacobson and M. Karels, Congestion Avoidance and Control, in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329

[8] I. Yeom and A. L. N. Reddy, Realizing Throughput Guarantees in a Differentiated Service Network, in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376

[9] N. Seddigh, B. Nandy and P. Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999)

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May (1998)

[11] J. Ibanez and K. Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998)

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221


[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449

[14] N. Seddigh, B. Nandy and P. Pieda, Study of TCP and UDP Interaction for AF PHB, IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999)

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203

[16] A. Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc. HPSR'01, Dallas, May (2001) 190-196

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999)

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Dec. (1998)

[19] M. Vanhala, Differentiated Services architecture, S-38.128 Teletekniikan erikoistyö (1999)

[20] V. Jacobson, K. Nichols and K. Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998)

[21] V. Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30txt), April (1990)

[22] V. Paxson, End-to-End Internet Packet Dynamics, in Proc. of ACM SIGCOMM, Sept. (1997)

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct. (1996)

[24] S. Floyd and T. Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999)

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proceedings of ACM SIGCOMM, Oct. (1994) 24-35

[26] S. Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, vol. 24, no. 5, Oct. (1994) 10-23

[27] S. Floyd and V. Jacobson, Random Early Detection Gateways for Congestion Avoidance, IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. (1993) 397-413

[28] A. Feroz, A. Rao and S. Kalyanaraman, A TCP-friendly Traffic Marker for IP Differentiated Services, in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48

Page 57: Two-differentiated Marking Strategies for TCP and UDP ...alumni.media.mit.edu/~starsu/Publications/J&C-10-Thesis'02-SHL.pdf · IPv4 Internet Protocol version 4, the current widely

43

Second we study for realising the target rate and sharing resources between two flows with a different feature of traffic respectively through governing marking rate S1 is TCP flow being sent infinitely with the target rate of 3Mbps and S2 is TCP flow turning on and off with a 3Mbps target bandwidth per 50 seconds In the experiment bottleneck link capacity is 55Mbps As shown in figure 16 (a) S1 slowly reacts to changes in the network and TSW2CM doesnrsquot adequately regulate marking rate behind the changes in the network load slowly rising in response to an increased traffic load and slowly falling in response to a decreased traffic load Therefore whenever S2 turns on S1 is deprived of its resources as much as the increment of it S1 doesnrsquot also fairly share bandwidth of a bottleneck link with S2 The simulation was repeated to investigate the effect of TMS as shown Figure 16 (b) Since TMS can control marking rate through scaling factors of α and β so as to adapt to network dynamics the probability of marking rate of S1 rapidly responses to the changed network load Whenever S2 for instance turns on marking rate of S1 fast rises (or falls) in response to an increased (or decreased) traffic load to achieve its target rate That is when more bandwidth is needed to achieve the target rate packets are marked more However packet-marking is quickly turned off quickly when it detects that the available bandwidth is sufficient to satisfy the target rate As expected S1 and S2 flows achieve its target rate during periods of increased traffic load and fairly share the bandwidth of bottleneck link rather than in Figure 16 (a)

Finally we present the simulation results with marking rate-based fairness algorithm described in the subsection 24 We set reservation rate of each flow to 5Mbps and the capacity of bottleneck link to 53Mbps namely under-subscribed network The experiment performed in run time from t =0 (s) to 500 (s) Figure 8 represents the results that the average throughputs of all individual flows of aggregated traffic realise their target rates with the parameters above Although the RTT of aggregate flows is differently set up the average throughput of them in both schemes as shown Table 1 is similar to each other Each flow in Figure 17 (a) often fails to achieve their target rates because it receives bandwidth without adequate marking rates about the changed network load That is to say TSW2CM marks packets based on only the traffic profile without any knowledge about individual flows However in Figure 17 (b) each flow satisfies its reservation rate and shares the excess bandwidth of the bottleneck link according to the probability of his marking if the total throughput is less than the capacity of a network As

44

stated before since TMS regulates drop precedence in proportion to the probability of marking from the sources all the flows are able to adapt to network dynamics

0

1

2

3

4

5

6

7

8

0 50 100 150 200 250 300 350 400Simulation time (s)

Ban

dw

idth

(M

bps)

S1 (total throughput)

S2 (total throughput)

S1 (marking rate)

a) Time Sliding Window Two Color Marker

0

1

2

3

4

5

6

7

8

0 50 100 150 200 250 300 350 400

Simulation time (s)

Ban

dw

idth

(M

bps)

S1 (total throughput)

S2 (total throughput)

S1 (marking rate)

-1

b) Two Markers System

Fig 16 Marking Packet through scaling factor the vertical axis represents throughput and marking rate of each source and the horizontal axis signifies the simulation time performed

45

0

1

2

3

4

5

6

7

1 2 3 4 5 6

Aggregate flow

Bandw

idth

(M

bps)

flow1

flow2

flow3flow4flow5

flow6 flow7 flow8

flow9

flow10

a) Time Sliding Window Two Color Marker

0

1

2

3

4

5

6

7

1 2 3 4 5 6

Aggregate flow

Bandw

idth

(M

bps)

flow1 flow2 flow3

flow4flow5flow6flow7

flow8 flow9 flow10

b) Two Markers System

Fig 17 Throughput for marking rate-based fairness strategy the vertical axis represents the achieved throughput and the horizontal axis signifies the number of simulation performed

Two Markers System showed the improvement of performance about several parameters we used to simulate However an implementation of TCP source marking

46

modules at source causes an issue of modifying TCP source in a differentiated services network

Table 5 The average throughput of aggregate flow as increment of RTT

Aggregate flow(No) 1 2 3 4 5 6

RTT (ms) 80 100 130 170 220 280

E[Thr] of TSW2CM (Mbps) 48989 49703 49183 49391 49247 49098

E[Thr] of TMS (Mbps) 52908 52207 52373 51944 52763 52858

42 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffics in Two Markers System we have described in the previous section For the sake of simulation we used a network with the configuration shown in Figure 18 In the experiment we have 6 sources (1 through 6 counting downwards) that communicate with one of six different destinations Long-lived packet streams generated by an infinite file transfers are originated at source 1 through 4 and destined to source 7 through 10 Constant rate UDP packet streams are originated at source 5-6 and destined to source 11-12 We carried out two scenarios over-provisioned and under-provisioned network That is to say the capacity of bottleneck link changes while the target rate of source fixes In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps so that the bottleneck is over-provisioned In the second scenario the aggregate reservation rate is 6Mps and the bottleneck capacity is also set to 3Mbps so that the bottleneck link experiences congestion The UDP flows source traffic at the rate of 1Mbps We assume that the RTT without queuing delay of each flow is randomly pocked from 80 to 280 ms The sources 1 through 4 are all TCP-Reno sources (unless specified otherwise) For the RIO implementation we make use of parameters of section 41

47

Fig 18 Simulation topology II using the Two Markers System

First of all we investigate the simulation results of the issues for sharing excess bandwidth according to the link utilisation For the sake of experiment we elaborated the strategy with increasing TCP flows with the target rate of 1Mbps from source 1 to 4 while UDP is constant to 1Mbps respectively and the capacity of the bottleneck link turn 8Mbps into 3Mbps in order to take consideration of network dynamics We use the link utilisation (ρ) as fallows [16]

CTisum=()ρ (9)

Where Ti and C represent target rate of a flow and the capacity of a bottleneck link in the network respectively An interval of ρ = 100 () through 200 () as shown in Figure 19 represents the change of the capacity of the bottleneck link viz an under-provisioned state and in the figure the excess bandwidth (ExBW) a flow receives is expressed as percentage of its target rate (T)

TExBWExcess BW=() (10)

Therefore excess bandwidth of 0 () denotes the target rate of each flow First as shown Figure 19 (a) we deliberate TSW2CM behind the change of traffic load As the traffic load in a network rises the TCP flows suffer acute decline compared to UDP flows and fall under their target rates at saturation point (80) of network load as shown the Figure

48

-100

-75

-50

-25

0

25

50

75

100

125

150

175

200

30 50 70 90 110 130 150 170 190

Link utilization ()

Exc

ess b

andw

idth

(

) TCP (average)

UDP (average)

a) Time Sliding Window Two Color Marker

-100

-75

-50

-25

0

25

50

75

100

125

150

175

200

30 50 70 90 110 130 150 170 190

Link utilization ()

Exc

ess b

andw

idth

(

) TCP (average)

UDP (average)

b) Two Markers system

Fig 19 Sharing excess bandwidth according to the link utilization the vertical axis represents the excess bandwidth as percentage of target rate and the horizontal axis signifies link utilisation as percentage

However since UDP sources flow traffics into network without any penalties they complete their target rates in the all condition and consume all the excess bandwidth in

49

under-provisioned state The degradation UDP flows experience is only diminishment of excess bandwidth they receive In the Two Markers System we assume as fallows (i) UDP flows in TMS are only dealt with OUT marking (DP2-red) (ii) the magnitude of marking rate represents increase or decrease in demand for bandwidth If the number of marked packets for example exceeds the threshold value namely the average marking rate E[mi] the edge-embedded marking module considers that the flow wants more bandwidth than others in order to achieve its reservation rate Therefore the flow is marked more and has a higher target rate than others The distribution of excess bandwidth in the over-provisioned network as shown Figure 19 (b) is fair rather than in the Figure 19 (a) That is to say the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT which are outside of their contract when congestion occurs rather than OUT-IN or IN-OUT The excess bandwidth in the Figure 19 (b) is percentage-wise almost equally divided between TCP and UDP flows in the link utilisation ρ = 60 () and over Both traffic types achieve their target rates in the over-provisioned network and both suffer the same level of service degradation in the under-provisioned network

Next we present the simulation results considering of marking rate-based fairness strategy with UDP and TCP flows We set that reservation rate of each flow is 1Mbps and compare two marking schemes One is the TSW2CM the other is marking strategy of TMS Figure 11 shows the results that the average throughputs of all individual flows of aggregated traffic realise their target rates in over-provisioned and under-provisioned networks with the capacity of bottleneck link of 8Mbps and 3Mbps respectively Setting of RTT parameters follows the strategy of subsection 41 Also the average throughput of all the flows in both schemes is similar to each other First we take into consideration the results of the experiment in an over-provisioned network In the figure 20 (a) while TCP flows only try to accomplish their target rates of 1Mbps UDP flows captured all excess bandwidth of about 05Mbps On the other hand TCP flows in the figure 20 (b) fairly share the excess bandwidth of about 03Mbps as elaborating the different marking strategy against UDP flows All the flows also satisfy its reservation rate and share the excess bandwidth of the bottleneck link according to the probability of his marking in over-provisioned network Next Each TCP flow in Figure 20 (a) often fails to achieve their target rates in under-provisioned networks because UDP flows are transmitted constantly irrelative to congestion That is UDP gains unfairly at the advantage of TCP

50

flows However Figure 20 (b) shows that each flow in under-provisioned network is fairly distributed (about 05Mbps) rather than in the Figure 20 (a) by dint of dealing with UDP packets as out-of-profile Consequently UDP flows are penalised more than sensitive TCP flows as shown Figure 20 (b)

Fig 20 Throughput according to network conditions: (a) Time Sliding Window Two Color Marker; (b) Two Markers System. The vertical axis represents bandwidth (Mbps) against the target rate of each flow, and the horizontal axis signifies the aggregate flows; the legends distinguish TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned networks. [Plotted values not recoverable from the text extraction.]


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under a variable SLA scope. Two important issues have arisen in this context, and much research has been devoted to them. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realise a user's target rate, owing to weaknesses in the TCP congestion control algorithm; the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, because UDP flows consume more than their share.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to mark packets properly and to allocate the bandwidth of a bottleneck link fairly among the flows according to their target rates. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to changes in the network, so it marks packets in accordance with the reservation rate each source desires to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark the aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to the flows in view of proportional fairness, the so-called marking rate-based fairness strategy. Consequently, it mitigates both the unfairness of per-aggregation marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduced the TCP impairments over the assured service and fairly allocated the bottleneck bandwidth among TCP flows.
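The proportional-fairness allocation summarised above can be illustrated in a few lines. This is a sketch of the idea only, under the assumption that each flow's share of an over-provisioned bottleneck grows with the marking rate its source reported; the helper name is ours, not the thesis implementation.

```python
# Proportional (marking rate-based) sharing of an over-provisioned
# bottleneck: each flow receives bandwidth in proportion to the
# marking rate reported by its source.

def allocate_bottleneck(capacity_mbps, marking_rates):
    total = sum(marking_rates)
    return [capacity_mbps * m / total for m in marking_rates]

# Three flows on an 8 Mbps bottleneck; the third reported twice the
# marking rate of the others and so receives twice the bandwidth.
shares = allocate_bottleneck(8.0, [1.0, 1.0, 2.0])
print(shares)
```

Because the shares always sum to the bottleneck capacity, both the over-provisioned surplus and the under-provisioned shortfall are spread in proportion to demand, which matches the fairness behaviour reported for TMS.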

We have also presented an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, because TMS regulates the drop precedence, the TCP flows could fairly share the excess bandwidth that UDP had previously dominated, and achieve their target rates. Next, in an under-provisioned network, although none of the flows could achieve its target rate, they all experienced degradation fairly, in proportion to their target bandwidth.

In the near future, we will study a more efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of its marking information, as well as the effect of the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy, and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyö, 1999.

[20] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.


We have also described the analysis for the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study reciprocal effects between TCP and UDP flows in a RIO-based diffserv network First in an over-provisioned network as TMS regulates the drop precedence TCP could fairly

52

share excess bandwidth that UDP almost dominated and achieve their target rates Next in an under-provisioned network although all the flows couldnrsquot achieve their target rates they fairly experienced degradation in proportion to their target bandwidth

In the near future we will study efficient scheme that TCP-source marking module informs edge-embedded marking module of the information of marking and the effect of the interaction of TCP and UDP flows according to variable parameters of RIO

53

REFERENCE

[1] W Feng D Kandlur D Saha andt K Shin Adaptive Packet Marking for Providing Differentiated Services in the Internet In Proc ICNPrsquo98 Austin TX Oct (1998) 108-117

[2] W Feng D Kandlur S Saha and K Shin Understanding TCP Dynamics in an Integrated Services Internet in Proc NOSSDAVrsquo97 St Louis MO May (1997) 279-290

[3] S Floyd and V Jacobson Like-sharing and Resource Management Models for Packet Networks IEEEACM Transactions on Networking vol 3 no 4 Aug (1995) 365-386

[4] J Heinanen F Baker W Weiss and J Wroclawski Assured Forwarding PHB Group RFC2597 June (1999)

[5] K Nichols S Blake F Baker and D Black Definition of the Differentiated Service Field (DS Field) in the Ipv4 and Ipv6 Headers RFC2474 Dec (1998)

[6] S Blake D Black M Carlson E Davies Z Wang and W Weiss An Architecture for Differentiated Services RFC2475 Dec (1998)

[7] V Jacobson and M Karels Congestion Avoidance and Control in Proc SIGCOMMrsquo88 Stanford CA Aug (1998) 314-329

[8] I Yeom and A L N Reddy Realizing Throughput Guarantees in a Differentiated Service Network in Proc ICMCSrsquo99 Florence Italy June (1999) 372-376

[9] N Seddigh B Nandy P Pieda Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network in Proc GLOBECOMrsquo99 Rio De Janeiro Dec (1999)

[10] C Gbaguidi H J Einsiedler P Hurley W Almesberger and J Hubaux A Survey of Differentiated Services Proposals for the Internet SSC Tech Report SSC1998020 httpsscwwwepflch May (1998)

[11] J Ibanez K Nichols Preliminary Simulation Evaluation of an Assured Service IETF Internet Draft draft-ibanez-diffserv-assured-eval-00txt Aug (1998)

[12] S H Lee S J Seok S J Lee S K Youm and C H Kang Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network In Proc QofISrsquo01 LNCS 2156 Coimbra Sept (2001) 207-221

54

[13] IBHdeA Alves JF de Rezende LFM de Moraes Evaluating Fairness in Aggregated Traffic Marking in Proc GLOBECOMrsquo00 vol1 San Francisco Nov (2000) 445-449

[14] N Seddigh B Nandy P Pieda Study of TCP and UDP Interaction for AF PHB IETF draft draft-nsbnpp-diffserv-udptcpaf-01pdf Aug (1999)

[15] S H Lee S J Seok S J Lee and C H Kang Study of TCP and UDP flows in a Differentiated Services Network In Proc MMNSrsquo01 LNCS 2216 Chicago Oct (2001) 198-203

[16] A Kolarov Study of the TCPUDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks in Proc HPSRrsquo01 Dallas May (2001) 190-196

[17] Y Bernet J Binder S Blake M Carlson B E Carpenter S Keshav E Davies B Ohlman Z Wang W Weiss A Framework for Differentiated Services IETF Internet Draft draft-ietf-diffserv-framework-02txt Feb (1999)

[18] S Blake D Black M Carlson E Davies Z Wang and W Weiss An Architecture for Aifferentiated Services RFC 2475 Aug (1998)

[19] M Vanhala Differentiated Services-architecture S-38128 Teletekniikan erikoistyo (1999)

[20] V Jacobson K Nichols and K Poduri An Expedited Forwarding PHB IETF Internet Draft draft-ietf-diffserv-phb-ef-01txt Nov (1998)

[21] V Jacobson Modified TCP Congestion Avoidance Algorithm end2end-interest mailing list (ftpftpeelblgovemailvanj90apr30txt) April (1990)

[22] V Paxson End-to-End Interent Packet Dynamics In Proc Of ACM SIGCOMM Sept (1997)

[23] M Mathis J Mahdavi S Flyod and A Romanow TCP Selective Acknowledgment Options RFC 2018 Oct (1996)

[24] S Flyod and T Henderson The NewReno Modification to TCPrsquos Fast Recovery Algorithm IETF Internet Draft draft-ietf-tcpimpl-newreno-02txt Feb (1999)

[25] L S Brakmo S W OrsquoMalley and L L Peterson TCP Vegas New Techniques for Congestion Detection and Avoidance In Proceedings of ACM SIGCOMM Oct (1994) 24-35

[26] S Floyd TCP and Explicit Congestion Notification Computer Communication

55

Review Vol 24 No 5 Oct (1994) 10-23 [27] S Floyd and V Jacobson Random Early Detection Gateways for Congestion

Avoidance ACMIEEE Transactions on Networking Vol 1 No 4 Aug (1993) 397-413

[28] A Feroz A Rao and S Kalyanaraman TCP-friendly Traffic Marker for IP Differentiated Services in Proc IWQoSrsquo2000 Pittsburgh PA June (2000) 35-48

Page 60: Two-differentiated Marking Strategies for TCP and UDP ...alumni.media.mit.edu/~starsu/Publications/J&C-10-Thesis'02-SHL.pdf · IPv4 Internet Protocol version 4, the current widely

46

modules at source causes an issue of modifying TCP source in a differentiated services network

Table 5. Average throughput of each aggregate flow as RTT increases

Aggregate flow (No.)          1        2        3        4        5        6
RTT (ms)                     80      100      130      170      220      280
E[Thr] of TSW2CM (Mbps)  4.8989   4.9703   4.9183   4.9391   4.9247   4.9098
E[Thr] of TMS (Mbps)     5.2908   5.2207   5.2373   5.1944   5.2763   5.2858

4.2 Interaction of TCP and UDP flows

In this section we present the simulation results for TCP and UDP traffic in the Two Markers System described in the previous section. For the simulation we used a network with the configuration shown in Figure 18. In the experiment we have six sources (1 through 6, counting downwards) that communicate with one of six different destinations. Long-lived packet streams generated by infinite file transfers originate at sources 1 through 4 and are destined to nodes 7 through 10. Constant-rate UDP packet streams originate at sources 5-6 and are destined to nodes 11-12. We carried out two scenarios, over-provisioned and under-provisioned; that is to say, the capacity of the bottleneck link changes while the target rate of each source is fixed. In the first scenario the aggregate reservation rate is 6Mbps and the bottleneck capacity is set to 8Mbps, so that the bottleneck is over-provisioned. In the second scenario the aggregate reservation rate is again 6Mbps but the bottleneck capacity is set to 3Mbps, so that the bottleneck link experiences congestion. The UDP flows source traffic at the rate of 1Mbps. We assume that the RTT without queuing delay of each flow is randomly picked from 80 to 280 ms. Sources 1 through 4 are all TCP-Reno sources (unless specified otherwise). For the RIO implementation we use the parameters of section 4.1.


Fig 18 Simulation topology II using the Two Markers System

First of all, we investigate the simulation results on the issue of sharing excess bandwidth according to the link utilisation. For this experiment we increase the number of TCP flows with a target rate of 1Mbps from source 1 to 4, while each UDP flow remains constant at 1Mbps, and the capacity of the bottleneck link changes from 8Mbps to 3Mbps in order to take account of network dynamics. We define the link utilisation (ρ) as follows [16]:

ρ (%) = (Σi Ti / C) × 100        (9)

where Ti and C represent the target rate of a flow and the capacity of the bottleneck link in the network, respectively. The interval of ρ = 100% through 200% shown in Figure 19 represents the change of the capacity of the bottleneck link, viz. an under-provisioned state, and in the figure the excess bandwidth (ExBW) a flow receives is expressed as a percentage of its target rate (T):

Excess BW (%) = (ExBW / T) × 100        (10)

Therefore an excess bandwidth of 0% denotes the target rate of each flow. First, as shown in Figure 19 (a), we consider TSW2CM under changing traffic load. As the traffic load in the network rises, the TCP flows suffer an acute decline compared to the UDP flows and fall under their target rates at the saturation point (80%) of network load, as shown in the figure.
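As a concrete check of equations (9) and (10), the following sketch (function names are ours, not the thesis's) evaluates both metrics for the scenario of this section: six flows with 1Mbps targets over an 8Mbps or 3Mbps bottleneck.

```python
def link_utilisation(target_rates_mbps, capacity_mbps):
    """Equation (9): rho(%) = (sum_i T_i / C) * 100."""
    return sum(target_rates_mbps) / capacity_mbps * 100.0

def excess_bandwidth_pct(throughput_mbps, target_mbps):
    """Equation (10): excess bandwidth (ExBW) as a percentage of the
    target rate T, so 0% means a flow exactly meets its target."""
    return (throughput_mbps - target_mbps) / target_mbps * 100.0

# Scenario of this section: four TCP + two UDP flows, 1 Mbps target each.
targets = [1.0] * 6

print(link_utilisation(targets, 8.0))   # over-provisioned bottleneck: 75.0
print(link_utilisation(targets, 3.0))   # under-provisioned bottleneck: 200.0
print(excess_bandwidth_pct(1.5, 1.0))   # a flow 50% above its target: 50.0
```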


[Figure 19: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the average excess bandwidth (%) of the TCP and UDP flows against link utilisation (30-190%).]

Fig. 19. Sharing excess bandwidth according to the link utilisation: the vertical axis represents the excess bandwidth as a percentage of the target rate, and the horizontal axis signifies link utilisation as a percentage

However, since UDP sources inject traffic into the network without any penalty, they achieve their target rates under all conditions and consume all the excess bandwidth in the


under-provisioned state. The only degradation UDP flows experience is a diminishment of the excess bandwidth they receive. In the Two Markers System we assume as follows: (i) UDP flows in TMS are only dealt with OUT marking (DP2, red); (ii) the magnitude of the marking rate represents an increase or decrease in demand for bandwidth. If the number of marked packets, for example, exceeds the threshold value, namely the average marking rate E[mi], the edge-embedded marking module considers that the flow wants more bandwidth than others in order to achieve its reservation rate. Therefore the flow is marked more and has a higher target rate than others. The distribution of excess bandwidth in the over-provisioned network, as shown in Figure 19 (b), is fairer than in Figure 19 (a). That is to say, the idea behind TMS is to preferentially drop the UDP and TCP packets marked OUT, which are outside of their contract, when congestion occurs, rather than OUT-IN or IN-OUT. The excess bandwidth in Figure 19 (b) is percentage-wise almost equally divided between TCP and UDP flows at link utilisation ρ = 60% and over. Both traffic types achieve their target rates in the over-provisioned network, and both suffer the same level of service degradation in the under-provisioned network.
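Assumptions (i) and (ii) above can be sketched as a per-packet marking rule. The code below is an illustrative stand-in, not the thesis's actual TMS_II algorithm: the probability of IN marking used here (a flow's reported marking rate relative to the average E[mi], linear and capped at 1) is our own assumption.

```python
import random

def mark_packet(flow, avg_marking_rate):
    """Return the drop-precedence colour ('IN' or 'OUT') for one packet.

    flow: {'proto': 'tcp' | 'udp', 'marking_rate': m_i reported by TMS_I}
    avg_marking_rate: the threshold E[m_i] over all flows at the edge.
    """
    # Assumption (i): UDP traffic is only ever marked out-of-profile.
    if flow['proto'] == 'udp':
        return 'OUT'
    # Assumption (ii): a marking rate above E[m_i] signals demand for more
    # bandwidth, so such a flow is marked IN more often.  This linear,
    # capped probability is our illustrative choice, not the thesis's rule.
    p_in = min(1.0, flow['marking_rate'] / (2.0 * avg_marking_rate))
    return 'IN' if random.random() < p_in else 'OUT'

# A TCP flow reporting twice the average marking rate is always marked IN
# under this rule, while a UDP flow is always marked OUT:
mark_packet({'proto': 'tcp', 'marking_rate': 2.0}, 1.0)   # 'IN'
mark_packet({'proto': 'udp', 'marking_rate': 2.0}, 1.0)   # 'OUT'
```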

Next we present the simulation results for the marking rate-based fairness strategy with UDP and TCP flows. We set the reservation rate of each flow to 1Mbps and compare two marking schemes: one is TSW2CM, the other is the marking strategy of TMS. Figure 20 shows the extent to which the average throughputs of all individual flows of the aggregated traffic realise their target rates in over-provisioned and under-provisioned networks, with bottleneck link capacities of 8Mbps and 3Mbps respectively. The setting of the RTT parameters follows the strategy of subsection 4.1. Also, the average throughput of all the flows in both schemes is similar to each other. First we consider the results of the experiment in an over-provisioned network. In Figure 20 (a), while the TCP flows barely accomplish their target rates of 1Mbps, the UDP flows capture all the excess bandwidth of about 0.5Mbps. On the other hand, the TCP flows in Figure 20 (b) fairly share the excess bandwidth of about 0.3Mbps, thanks to the different marking strategy applied against UDP flows. All the flows also satisfy their reservation rates and share the excess bandwidth of the bottleneck link according to the probability of their marking in the over-provisioned network. Next, each TCP flow in Figure 20 (a) often fails to achieve its target rate in the under-provisioned network because the UDP flows are transmitted constantly, regardless of congestion. That is, UDP gains unfairly at the expense of TCP


flows. However, Figure 20 (b) shows that the bandwidth in the under-provisioned network is distributed more fairly (about 0.5Mbps per flow) than in Figure 20 (a), by dint of treating UDP packets as out-of-profile. Consequently, UDP flows are penalised more than the congestion-sensitive TCP flows, as shown in Figure 20 (b).
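The preferential dropping relied on here comes from RIO (RED with In and Out), which runs two RED curves: IN packets are judged against the average queue of IN packets with lenient thresholds, and OUT packets against the total average queue with aggressive ones. A minimal sketch follows; the threshold values (40/70/0.02 and 10/30/0.5) are illustrative assumptions, not the RIO parameters used in these simulations.

```python
def red_drop_prob(avg_q, min_th, max_th, p_max):
    """Standard RED: no drops below min_th, certain drop at/above max_th,
    and a linear ramp up to p_max in between."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return p_max * (avg_q - min_th) / (max_th - min_th)

def rio_drop_prob(colour, avg_in_q, avg_total_q):
    """RIO: IN packets are judged against the average queue of IN packets
    with lenient thresholds; OUT packets against the total average queue
    with aggressive thresholds, so OUT traffic is discarded first."""
    if colour == 'IN':
        return red_drop_prob(avg_in_q, 40, 70, 0.02)   # illustrative values
    return red_drop_prob(avg_total_q, 10, 30, 0.50)    # illustrative values

# An average total queue of 35 packets forces OUT drops for certain,
# while an average IN queue of 35 still drops no IN packets:
rio_drop_prob('OUT', 0, 35)    # 1.0
rio_drop_prob('IN', 35, 35)    # 0.0
```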

[Figure 20: two panels, a) Time Sliding Window Two Color Marker and b) Two Markers System, each plotting the bandwidth (Mbps) of flows TCP1-TCP4 and UDP1-UDP2 in the over-provisioned and under-provisioned cases against the aggregate flow number (1-4).]

Fig. 20. Throughput according to network conditions: the vertical axis represents the bandwidth achieved relative to the target rate, and the horizontal axis signifies the aggregate flows


CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System can be applied under a variable SLA scope. In this context two important issues have arisen, and much research has been performed on them to date. One is TCP dynamics over the AF PHB in a diffserv network; the other is the interaction between TCP and UDP flows in the AF PHB. The former leads to failure to realize the user's target rate, owing to weaknesses in the TCP congestion control algorithm, and the latter gives rise to the problem of fairness in allocating the excess bandwidth between the two kinds of flows, due to the greediness of UDP flows.

The strongest point of the Two Markers System proposed in this thesis is that its two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow with respect to its target rate. First, TMS_I, using the Adequate-marking algorithm, properly adjusts the probability of marking according to changes in the network, so it marks packets in proportion to the reservation rates the sources desire to guarantee. It also notifies TMS_II, called the edge-embedded marking module, of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network. Next, TMS_II at the edge of the network marks packets based on the marking rates of the sources and, if resources in the diffserv network are over-provisioned, allocates the bandwidth of the bottleneck link to flows in view of proportional fairness, called the marking rate-based fairness strategy. Consequently, it mitigates the unfairness of per-aggregation marking and the inefficient utilisation of per-flow marking. We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network. The simulation results, as expected, showed that the Two Markers System reduces TCP impacts over the assured service and fairly allocates the bottleneck bandwidth among TCP flows.

We have also described an analysis of the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, as TMS regulates the drop precedence, TCP could fairly


share the excess bandwidth that UDP had almost dominated, and achieve their target rates. Next, in an under-provisioned network, although all the flows could not achieve their target rates, they fairly experienced degradation in proportion to their target bandwidth.

In the near future we will study an efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of the marking information, and the effect of the interaction of TCP and UDP flows under variable RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha and K. Shin, Adaptive Packet Marking for Providing Differentiated Services in the Internet, in Proc. ICNP'98, Austin, TX, Oct. (1998) 108-117

[2] W. Feng, D. Kandlur, S. Saha and K. Shin, Understanding TCP Dynamics in an Integrated Services Internet, in Proc. NOSSDAV'97, St. Louis, MO, May (1997) 279-290

[3] S. Floyd and V. Jacobson, Link-sharing and Resource Management Models for Packet Networks, IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. (1995) 365-386

[4] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, Assured Forwarding PHB Group, RFC 2597, June (1999)

[5] K. Nichols, S. Blake, F. Baker and D. Black, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, Dec. (1998)

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Dec. (1998)

[7] V. Jacobson and M. Karels, Congestion Avoidance and Control, in Proc. SIGCOMM'88, Stanford, CA, Aug. (1988) 314-329

[8] I. Yeom and A. L. N. Reddy, Realizing Throughput Guarantees in a Differentiated Services Network, in Proc. ICMCS'99, Florence, Italy, June (1999) 372-376

[9] N. Seddigh, B. Nandy and P. Pieda, Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network, in Proc. GLOBECOM'99, Rio de Janeiro, Dec. (1999)

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger and J. Hubaux, A Survey of Differentiated Services Proposals for the Internet, SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May (1998)

[11] J. Ibanez and K. Nichols, Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. (1998)

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm and C. H. Kang, Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network, in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. (2001) 207-221

[13] I. B. H. de A. Alves, J. F. de Rezende and L. F. M. de Moraes, Evaluating Fairness in Aggregated Traffic Marking, in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. (2000) 445-449

[14] N. Seddigh, B. Nandy and P. Pieda, Study of TCP and UDP Interaction for AF PHB, IETF draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. (1999)

[15] S. H. Lee, S. J. Seok, S. J. Lee and C. H. Kang, Study of TCP and UDP flows in a Differentiated Services Network, in Proc. MMNS'01, LNCS 2216, Chicago, Oct. (2001) 198-203

[16] A. Kolarov, Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks, in Proc. HPSR'01, Dallas, May (2001) 190-196

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang and W. Weiss, A Framework for Differentiated Services, IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. (1999)

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang and W. Weiss, An Architecture for Differentiated Services, RFC 2475, Aug. (1998)

[19] M. Vanhala, Differentiated Services Architecture, S-38.128 Teletekniikan erikoistyö (1999)

[20] V. Jacobson, K. Nichols and K. Poduri, An Expedited Forwarding PHB, IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. (1998)

[21] V. Jacobson, Modified TCP Congestion Avoidance Algorithm, end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt), April (1990)

[22] V. Paxson, End-to-End Internet Packet Dynamics, in Proc. ACM SIGCOMM, Sept. (1997)

[23] M. Mathis, J. Mahdavi, S. Floyd and A. Romanow, TCP Selective Acknowledgment Options, RFC 2018, Oct. (1996)

[24] S. Floyd and T. Henderson, The NewReno Modification to TCP's Fast Recovery Algorithm, IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. (1999)

[25] L. S. Brakmo, S. W. O'Malley and L. L. Peterson, TCP Vegas: New Techniques for Congestion Detection and Avoidance, in Proc. ACM SIGCOMM, Oct. (1994) 24-35

[26] S. Floyd, TCP and Explicit Congestion Notification, Computer Communication Review, vol. 24, no. 5, Oct. (1994) 10-23

[27] S. Floyd and V. Jacobson, Random Early Detection Gateways for Congestion Avoidance, IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. (1993) 397-413

[28] A. Feroz, A. Rao and S. Kalyanaraman, TCP-friendly Traffic Marker for IP Differentiated Services, in Proc. IWQoS'2000, Pittsburgh, PA, June (2000) 35-48

Page 61: Two-differentiated Marking Strategies for TCP and UDP ...alumni.media.mit.edu/~starsu/Publications/J&C-10-Thesis'02-SHL.pdf · IPv4 Internet Protocol version 4, the current widely

47

Fig 18 Simulation topology II using the Two Markers System

First of all we investigate the simulation results of the issues for sharing excess bandwidth according to the link utilisation For the sake of experiment we elaborated the strategy with increasing TCP flows with the target rate of 1Mbps from source 1 to 4 while UDP is constant to 1Mbps respectively and the capacity of the bottleneck link turn 8Mbps into 3Mbps in order to take consideration of network dynamics We use the link utilisation (ρ) as fallows [16]

CTisum=()ρ (9)

Where Ti and C represent target rate of a flow and the capacity of a bottleneck link in the network respectively An interval of ρ = 100 () through 200 () as shown in Figure 19 represents the change of the capacity of the bottleneck link viz an under-provisioned state and in the figure the excess bandwidth (ExBW) a flow receives is expressed as percentage of its target rate (T)

TExBWExcess BW=() (10)

Therefore excess bandwidth of 0 () denotes the target rate of each flow First as shown Figure 19 (a) we deliberate TSW2CM behind the change of traffic load As the traffic load in a network rises the TCP flows suffer acute decline compared to UDP flows and fall under their target rates at saturation point (80) of network load as shown the Figure

48

-100

-75

-50

-25

0

25

50

75

100

125

150

175

200

30 50 70 90 110 130 150 170 190

Link utilization ()

Exc

ess b

andw

idth

(

) TCP (average)

UDP (average)

a) Time Sliding Window Two Color Marker

-100

-75

-50

-25

0

25

50

75

100

125

150

175

200

30 50 70 90 110 130 150 170 190

Link utilization ()

Exc

ess b

andw

idth

(

) TCP (average)

UDP (average)

b) Two Markers system

Fig 19 Sharing excess bandwidth according to the link utilization the vertical axis represents the excess bandwidth as percentage of target rate and the horizontal axis signifies link utilisation as percentage

However since UDP sources flow traffics into network without any penalties they complete their target rates in the all condition and consume all the excess bandwidth in

49

under-provisioned state The degradation UDP flows experience is only diminishment of excess bandwidth they receive In the Two Markers System we assume as fallows (i) UDP flows in TMS are only dealt with OUT marking (DP2-red) (ii) the magnitude of marking rate represents increase or decrease in demand for bandwidth If the number of marked packets for example exceeds the threshold value namely the average marking rate E[mi] the edge-embedded marking module considers that the flow wants more bandwidth than others in order to achieve its reservation rate Therefore the flow is marked more and has a higher target rate than others The distribution of excess bandwidth in the over-provisioned network as shown Figure 19 (b) is fair rather than in the Figure 19 (a) That is to say the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT which are outside of their contract when congestion occurs rather than OUT-IN or IN-OUT The excess bandwidth in the Figure 19 (b) is percentage-wise almost equally divided between TCP and UDP flows in the link utilisation ρ = 60 () and over Both traffic types achieve their target rates in the over-provisioned network and both suffer the same level of service degradation in the under-provisioned network

Next we present the simulation results considering of marking rate-based fairness strategy with UDP and TCP flows We set that reservation rate of each flow is 1Mbps and compare two marking schemes One is the TSW2CM the other is marking strategy of TMS Figure 11 shows the results that the average throughputs of all individual flows of aggregated traffic realise their target rates in over-provisioned and under-provisioned networks with the capacity of bottleneck link of 8Mbps and 3Mbps respectively Setting of RTT parameters follows the strategy of subsection 41 Also the average throughput of all the flows in both schemes is similar to each other First we take into consideration the results of the experiment in an over-provisioned network In the figure 20 (a) while TCP flows only try to accomplish their target rates of 1Mbps UDP flows captured all excess bandwidth of about 05Mbps On the other hand TCP flows in the figure 20 (b) fairly share the excess bandwidth of about 03Mbps as elaborating the different marking strategy against UDP flows All the flows also satisfy its reservation rate and share the excess bandwidth of the bottleneck link according to the probability of his marking in over-provisioned network Next Each TCP flow in Figure 20 (a) often fails to achieve their target rates in under-provisioned networks because UDP flows are transmitted constantly irrelative to congestion That is UDP gains unfairly at the advantage of TCP

50

flows However Figure 20 (b) shows that each flow in under-provisioned network is fairly distributed (about 05Mbps) rather than in the Figure 20 (a) by dint of dealing with UDP packets as out-of-profile Consequently UDP flows are penalised more than sensitive TCP flows as shown Figure 20 (b)

0

025

05

075

1

125

15

175

2

1 2 3 4

Aggregate flow

Bandw

idth

(M

bps)

TCP1 (over-provisioned)

TCP2 TCP3

TCP4

TCP1 (under-provisioned)TCP2 TCP3

TCP4 UDP1 (over-provisioned)UDP2

UDP1 (under-provisioned)

UDP2

a) Time Sliding Window Two Color Marker

0

025

05

075

1

125

15

175

2

1 2 3 4

Aggregate flow

Bandw

idth

(M

bps)

TCP1 (over-provisioned)TCP2

TCP3

TCP4 TCP1 (under-provisioned)TCP2

TCP3 TCP4

UDP1 (over-provisioned)

UDP2

UDP1 (under-provisioned)

UDP2

b) Two Markers System

Fig 20 Throughput according to network conditions the vertical axis represents the target rate and the horizontal axis signifies aggregate flows

51

CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and shown what Two Markers System can apply in the variable SLA Scope In this case two important issues were on the rise and many researches have been performed until now One is TCP dynamics over AF PHB in a diffserv network the other is the interaction between TCP and UDP flows in the AF PHB The former leads to failing to realize userrsquos target rate owing to weaknesses in the TCP congestion control algorithm and the latter gives rise to the problem of fairness for allocating the excess bandwidth between the two flows due to overeating of UDP flows

The strongest point of the Two Markers System proposed in this paper is that two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow for the target rates respectively First TMS_I using the Adequate-marking algorithm properly adjusted the probability of marking according to the changes of a network so it marked packets as much as reservation rates they desire to guarantee It also notified TMS_II called the edge-embedded marking module of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network Next TMS_II at the edge of the network marked packets based on the marking rates of sources and allocated the bandwidth of bottleneck link to flows in view of proportional fairness called Marking rate-based fairness strategy if resources in the diffserv network are over-provisioned Consequently it mitigated the problem of unfairness on the per-aggregation marking and inefficient utilisation on per-flow marking We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network The simulation results as expected showed that Two Markers System reduced TCP impacts over assured service and fairly allocated the bottleneck bandwidth among TCP flows

We have also described the analysis for the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study reciprocal effects between TCP and UDP flows in a RIO-based diffserv network First in an over-provisioned network as TMS regulates the drop precedence TCP could fairly

52

share excess bandwidth that UDP almost dominated and achieve their target rates Next in an under-provisioned network although all the flows couldnrsquot achieve their target rates they fairly experienced degradation in proportion to their target bandwidth

In the near future we will study efficient scheme that TCP-source marking module informs edge-embedded marking module of the information of marking and the effect of the interaction of TCP and UDP flows according to variable parameters of RIO

53

REFERENCE

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, "Two-differentiated Marking Strategies for TCP Flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy, and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, "Study of TCP and UDP Flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 special assignment (Teletekniikan erikoistyö), 1999.

[20] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list (ftp://ftp.ee.lbl.gov/email/vanj.90apr30txt), April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.


We have also described the analysis for the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study reciprocal effects between TCP and UDP flows in a RIO-based diffserv network First in an over-provisioned network as TMS regulates the drop precedence TCP could fairly

52

share excess bandwidth that UDP almost dominated and achieve their target rates Next in an under-provisioned network although all the flows couldnrsquot achieve their target rates they fairly experienced degradation in proportion to their target bandwidth

In the near future we will study efficient scheme that TCP-source marking module informs edge-embedded marking module of the information of marking and the effect of the interaction of TCP and UDP flows according to variable parameters of RIO

53

REFERENCE

[1] W Feng D Kandlur D Saha andt K Shin Adaptive Packet Marking for Providing Differentiated Services in the Internet In Proc ICNPrsquo98 Austin TX Oct (1998) 108-117

[2] W Feng D Kandlur S Saha and K Shin Understanding TCP Dynamics in an Integrated Services Internet in Proc NOSSDAVrsquo97 St Louis MO May (1997) 279-290

[3] S Floyd and V Jacobson Like-sharing and Resource Management Models for Packet Networks IEEEACM Transactions on Networking vol 3 no 4 Aug (1995) 365-386

[4] J Heinanen F Baker W Weiss and J Wroclawski Assured Forwarding PHB Group RFC2597 June (1999)

[5] K Nichols S Blake F Baker and D Black Definition of the Differentiated Service Field (DS Field) in the Ipv4 and Ipv6 Headers RFC2474 Dec (1998)

[6] S Blake D Black M Carlson E Davies Z Wang and W Weiss An Architecture for Differentiated Services RFC2475 Dec (1998)

[7] V Jacobson and M Karels Congestion Avoidance and Control in Proc SIGCOMMrsquo88 Stanford CA Aug (1998) 314-329

[8] I Yeom and A L N Reddy Realizing Throughput Guarantees in a Differentiated Service Network in Proc ICMCSrsquo99 Florence Italy June (1999) 372-376

[9] N Seddigh B Nandy P Pieda Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network in Proc GLOBECOMrsquo99 Rio De Janeiro Dec (1999)

[10] C Gbaguidi H J Einsiedler P Hurley W Almesberger and J Hubaux A Survey of Differentiated Services Proposals for the Internet SSC Tech Report SSC1998020 httpsscwwwepflch May (1998)

[11] J Ibanez K Nichols Preliminary Simulation Evaluation of an Assured Service IETF Internet Draft draft-ibanez-diffserv-assured-eval-00txt Aug (1998)

[12] S H Lee S J Seok S J Lee S K Youm and C H Kang Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network In Proc QofISrsquo01 LNCS 2156 Coimbra Sept (2001) 207-221

54

[13] IBHdeA Alves JF de Rezende LFM de Moraes Evaluating Fairness in Aggregated Traffic Marking in Proc GLOBECOMrsquo00 vol1 San Francisco Nov (2000) 445-449

[14] N Seddigh B Nandy P Pieda Study of TCP and UDP Interaction for AF PHB IETF draft draft-nsbnpp-diffserv-udptcpaf-01pdf Aug (1999)

[15] S H Lee S J Seok S J Lee and C H Kang Study of TCP and UDP flows in a Differentiated Services Network In Proc MMNSrsquo01 LNCS 2216 Chicago Oct (2001) 198-203

[16] A Kolarov Study of the TCPUDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks in Proc HPSRrsquo01 Dallas May (2001) 190-196

[17] Y Bernet J Binder S Blake M Carlson B E Carpenter S Keshav E Davies B Ohlman Z Wang W Weiss A Framework for Differentiated Services IETF Internet Draft draft-ietf-diffserv-framework-02txt Feb (1999)

[18] S Blake D Black M Carlson E Davies Z Wang and W Weiss An Architecture for Aifferentiated Services RFC 2475 Aug (1998)

[19] M Vanhala Differentiated Services-architecture S-38128 Teletekniikan erikoistyo (1999)

[20] V Jacobson K Nichols and K Poduri An Expedited Forwarding PHB IETF Internet Draft draft-ietf-diffserv-phb-ef-01txt Nov (1998)

[21] V Jacobson Modified TCP Congestion Avoidance Algorithm end2end-interest mailing list (ftpftpeelblgovemailvanj90apr30txt) April (1990)

[22] V Paxson End-to-End Interent Packet Dynamics In Proc Of ACM SIGCOMM Sept (1997)

[23] M Mathis J Mahdavi S Flyod and A Romanow TCP Selective Acknowledgment Options RFC 2018 Oct (1996)

[24] S Flyod and T Henderson The NewReno Modification to TCPrsquos Fast Recovery Algorithm IETF Internet Draft draft-ietf-tcpimpl-newreno-02txt Feb (1999)

[25] L S Brakmo S W OrsquoMalley and L L Peterson TCP Vegas New Techniques for Congestion Detection and Avoidance In Proceedings of ACM SIGCOMM Oct (1994) 24-35

[26] S Floyd TCP and Explicit Congestion Notification Computer Communication

55

Review Vol 24 No 5 Oct (1994) 10-23 [27] S Floyd and V Jacobson Random Early Detection Gateways for Congestion

Avoidance ACMIEEE Transactions on Networking Vol 1 No 4 Aug (1993) 397-413

[28] A Feroz A Rao and S Kalyanaraman TCP-friendly Traffic Marker for IP Differentiated Services in Proc IWQoSrsquo2000 Pittsburgh PA June (2000) 35-48

Page 63: Two-differentiated Marking Strategies for TCP and UDP ...alumni.media.mit.edu/~starsu/Publications/J&C-10-Thesis'02-SHL.pdf · IPv4 Internet Protocol version 4, the current widely

49

under-provisioned state The degradation UDP flows experience is only diminishment of excess bandwidth they receive In the Two Markers System we assume as fallows (i) UDP flows in TMS are only dealt with OUT marking (DP2-red) (ii) the magnitude of marking rate represents increase or decrease in demand for bandwidth If the number of marked packets for example exceeds the threshold value namely the average marking rate E[mi] the edge-embedded marking module considers that the flow wants more bandwidth than others in order to achieve its reservation rate Therefore the flow is marked more and has a higher target rate than others The distribution of excess bandwidth in the over-provisioned network as shown Figure 19 (b) is fair rather than in the Figure 19 (a) That is to say the idea behind TMS is to preferentially drop UDP and TCP packets marked OUT which are outside of their contract when congestion occurs rather than OUT-IN or IN-OUT The excess bandwidth in the Figure 19 (b) is percentage-wise almost equally divided between TCP and UDP flows in the link utilisation ρ = 60 () and over Both traffic types achieve their target rates in the over-provisioned network and both suffer the same level of service degradation in the under-provisioned network

Next we present the simulation results considering of marking rate-based fairness strategy with UDP and TCP flows We set that reservation rate of each flow is 1Mbps and compare two marking schemes One is the TSW2CM the other is marking strategy of TMS Figure 11 shows the results that the average throughputs of all individual flows of aggregated traffic realise their target rates in over-provisioned and under-provisioned networks with the capacity of bottleneck link of 8Mbps and 3Mbps respectively Setting of RTT parameters follows the strategy of subsection 41 Also the average throughput of all the flows in both schemes is similar to each other First we take into consideration the results of the experiment in an over-provisioned network In the figure 20 (a) while TCP flows only try to accomplish their target rates of 1Mbps UDP flows captured all excess bandwidth of about 05Mbps On the other hand TCP flows in the figure 20 (b) fairly share the excess bandwidth of about 03Mbps as elaborating the different marking strategy against UDP flows All the flows also satisfy its reservation rate and share the excess bandwidth of the bottleneck link according to the probability of his marking in over-provisioned network Next Each TCP flow in Figure 20 (a) often fails to achieve their target rates in under-provisioned networks because UDP flows are transmitted constantly irrelative to congestion That is UDP gains unfairly at the advantage of TCP

50

flows However Figure 20 (b) shows that each flow in under-provisioned network is fairly distributed (about 05Mbps) rather than in the Figure 20 (a) by dint of dealing with UDP packets as out-of-profile Consequently UDP flows are penalised more than sensitive TCP flows as shown Figure 20 (b)

0

025

05

075

1

125

15

175

2

1 2 3 4

Aggregate flow

Bandw

idth

(M

bps)

TCP1 (over-provisioned)

TCP2 TCP3

TCP4

TCP1 (under-provisioned)TCP2 TCP3

TCP4 UDP1 (over-provisioned)UDP2

UDP1 (under-provisioned)

UDP2

a) Time Sliding Window Two Color Marker

0

025

05

075

1

125

15

175

2

1 2 3 4

Aggregate flow

Bandw

idth

(M

bps)

TCP1 (over-provisioned)TCP2

TCP3

TCP4 TCP1 (under-provisioned)TCP2

TCP3 TCP4

UDP1 (over-provisioned)

UDP2

UDP1 (under-provisioned)

UDP2

b) Two Markers System

Fig 20 Throughput according to network conditions the vertical axis represents the target rate and the horizontal axis signifies aggregate flows

51

CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and shown what Two Markers System can apply in the variable SLA Scope In this case two important issues were on the rise and many researches have been performed until now One is TCP dynamics over AF PHB in a diffserv network the other is the interaction between TCP and UDP flows in the AF PHB The former leads to failing to realize userrsquos target rate owing to weaknesses in the TCP congestion control algorithm and the latter gives rise to the problem of fairness for allocating the excess bandwidth between the two flows due to overeating of UDP flows

The strongest point of the Two Markers System proposed in this paper is that two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow for the target rates respectively First TMS_I using the Adequate-marking algorithm properly adjusted the probability of marking according to the changes of a network so it marked packets as much as reservation rates they desire to guarantee It also notified TMS_II called the edge-embedded marking module of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network Next TMS_II at the edge of the network marked packets based on the marking rates of sources and allocated the bandwidth of bottleneck link to flows in view of proportional fairness called Marking rate-based fairness strategy if resources in the diffserv network are over-provisioned Consequently it mitigated the problem of unfairness on the per-aggregation marking and inefficient utilisation on per-flow marking We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network The simulation results as expected showed that Two Markers System reduced TCP impacts over assured service and fairly allocated the bottleneck bandwidth among TCP flows

We have also described the analysis for the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study reciprocal effects between TCP and UDP flows in a RIO-based diffserv network First in an over-provisioned network as TMS regulates the drop precedence TCP could fairly

52

share excess bandwidth that UDP almost dominated and achieve their target rates Next in an under-provisioned network although all the flows couldnrsquot achieve their target rates they fairly experienced degradation in proportion to their target bandwidth

In the near future we will study efficient scheme that TCP-source marking module informs edge-embedded marking module of the information of marking and the effect of the interaction of TCP and UDP flows according to variable parameters of RIO

53

REFERENCE

[1] W Feng D Kandlur D Saha andt K Shin Adaptive Packet Marking for Providing Differentiated Services in the Internet In Proc ICNPrsquo98 Austin TX Oct (1998) 108-117

[2] W Feng D Kandlur S Saha and K Shin Understanding TCP Dynamics in an Integrated Services Internet in Proc NOSSDAVrsquo97 St Louis MO May (1997) 279-290

[3] S Floyd and V Jacobson Like-sharing and Resource Management Models for Packet Networks IEEEACM Transactions on Networking vol 3 no 4 Aug (1995) 365-386

[4] J Heinanen F Baker W Weiss and J Wroclawski Assured Forwarding PHB Group RFC2597 June (1999)

[5] K Nichols S Blake F Baker and D Black Definition of the Differentiated Service Field (DS Field) in the Ipv4 and Ipv6 Headers RFC2474 Dec (1998)

[6] S Blake D Black M Carlson E Davies Z Wang and W Weiss An Architecture for Differentiated Services RFC2475 Dec (1998)

[7] V Jacobson and M Karels Congestion Avoidance and Control in Proc SIGCOMMrsquo88 Stanford CA Aug (1998) 314-329

[8] I Yeom and A L N Reddy Realizing Throughput Guarantees in a Differentiated Service Network in Proc ICMCSrsquo99 Florence Italy June (1999) 372-376

[9] N Seddigh B Nandy P Pieda Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network in Proc GLOBECOMrsquo99 Rio De Janeiro Dec (1999)

[10] C Gbaguidi H J Einsiedler P Hurley W Almesberger and J Hubaux A Survey of Differentiated Services Proposals for the Internet SSC Tech Report SSC1998020 httpsscwwwepflch May (1998)

[11] J Ibanez K Nichols Preliminary Simulation Evaluation of an Assured Service IETF Internet Draft draft-ibanez-diffserv-assured-eval-00txt Aug (1998)

[12] S H Lee S J Seok S J Lee S K Youm and C H Kang Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network In Proc QofISrsquo01 LNCS 2156 Coimbra Sept (2001) 207-221

54

[13] IBHdeA Alves JF de Rezende LFM de Moraes Evaluating Fairness in Aggregated Traffic Marking in Proc GLOBECOMrsquo00 vol1 San Francisco Nov (2000) 445-449

[14] N Seddigh B Nandy P Pieda Study of TCP and UDP Interaction for AF PHB IETF draft draft-nsbnpp-diffserv-udptcpaf-01pdf Aug (1999)

[15] S H Lee S J Seok S J Lee and C H Kang Study of TCP and UDP flows in a Differentiated Services Network In Proc MMNSrsquo01 LNCS 2216 Chicago Oct (2001) 198-203

[16] A Kolarov Study of the TCPUDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks in Proc HPSRrsquo01 Dallas May (2001) 190-196

[17] Y Bernet J Binder S Blake M Carlson B E Carpenter S Keshav E Davies B Ohlman Z Wang W Weiss A Framework for Differentiated Services IETF Internet Draft draft-ietf-diffserv-framework-02txt Feb (1999)

[18] S Blake D Black M Carlson E Davies Z Wang and W Weiss An Architecture for Aifferentiated Services RFC 2475 Aug (1998)

[19] M Vanhala Differentiated Services-architecture S-38128 Teletekniikan erikoistyo (1999)

[20] V Jacobson K Nichols and K Poduri An Expedited Forwarding PHB IETF Internet Draft draft-ietf-diffserv-phb-ef-01txt Nov (1998)

[21] V Jacobson Modified TCP Congestion Avoidance Algorithm end2end-interest mailing list (ftpftpeelblgovemailvanj90apr30txt) April (1990)

[22] V Paxson End-to-End Interent Packet Dynamics In Proc Of ACM SIGCOMM Sept (1997)

[23] M Mathis J Mahdavi S Flyod and A Romanow TCP Selective Acknowledgment Options RFC 2018 Oct (1996)

[24] S Flyod and T Henderson The NewReno Modification to TCPrsquos Fast Recovery Algorithm IETF Internet Draft draft-ietf-tcpimpl-newreno-02txt Feb (1999)

[25] L S Brakmo S W OrsquoMalley and L L Peterson TCP Vegas New Techniques for Congestion Detection and Avoidance In Proceedings of ACM SIGCOMM Oct (1994) 24-35

[26] S Floyd TCP and Explicit Congestion Notification Computer Communication

55

Review Vol 24 No 5 Oct (1994) 10-23 [27] S Floyd and V Jacobson Random Early Detection Gateways for Congestion

Avoidance ACMIEEE Transactions on Networking Vol 1 No 4 Aug (1993) 397-413

[28] A Feroz A Rao and S Kalyanaraman TCP-friendly Traffic Marker for IP Differentiated Services in Proc IWQoSrsquo2000 Pittsburgh PA June (2000) 35-48

Page 64: Two-differentiated Marking Strategies for TCP and UDP ...alumni.media.mit.edu/~starsu/Publications/J&C-10-Thesis'02-SHL.pdf · IPv4 Internet Protocol version 4, the current widely

50

flows However Figure 20 (b) shows that each flow in under-provisioned network is fairly distributed (about 05Mbps) rather than in the Figure 20 (a) by dint of dealing with UDP packets as out-of-profile Consequently UDP flows are penalised more than sensitive TCP flows as shown Figure 20 (b)

0

025

05

075

1

125

15

175

2

1 2 3 4

Aggregate flow

Bandw

idth

(M

bps)

TCP1 (over-provisioned)

TCP2 TCP3

TCP4

TCP1 (under-provisioned)TCP2 TCP3

TCP4 UDP1 (over-provisioned)UDP2

UDP1 (under-provisioned)

UDP2

a) Time Sliding Window Two Color Marker

0

025

05

075

1

125

15

175

2

1 2 3 4

Aggregate flow

Bandw

idth

(M

bps)

TCP1 (over-provisioned)TCP2

TCP3

TCP4 TCP1 (under-provisioned)TCP2

TCP3 TCP4

UDP1 (over-provisioned)

UDP2

UDP1 (under-provisioned)

UDP2

b) Two Markers System

Fig 20 Throughput according to network conditions the vertical axis represents the target rate and the horizontal axis signifies aggregate flows

51

CHAPTER 5 CONCLUSION

This thesis has focused on the issues of TCP and UDP flows over Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture and shown what Two Markers System can apply in the variable SLA Scope In this case two important issues were on the rise and many researches have been performed until now One is TCP dynamics over AF PHB in a diffserv network the other is the interaction between TCP and UDP flows in the AF PHB The former leads to failing to realize userrsquos target rate owing to weaknesses in the TCP congestion control algorithm and the latter gives rise to the problem of fairness for allocating the excess bandwidth between the two flows due to overeating of UDP flows

The strongest point of the Two Markers System proposed in this paper is that two marking modules are able to properly mark packets and fairly allocate the bandwidth of a bottleneck link to each flow for the target rates respectively First TMS_I using the Adequate-marking algorithm properly adjusted the probability of marking according to the changes of a network so it marked packets as much as reservation rates they desire to guarantee It also notified TMS_II called the edge-embedded marking module of the marking rates that will be used to mark aggregate flows at the edge of the diffserv network Next TMS_II at the edge of the network marked packets based on the marking rates of sources and allocated the bandwidth of bottleneck link to flows in view of proportional fairness called Marking rate-based fairness strategy if resources in the diffserv network are over-provisioned Consequently it mitigated the problem of unfairness on the per-aggregation marking and inefficient utilisation on per-flow marking We have simulated a TMS model to study the effects of several factors on the throughput rates of TCP flows in a RIO-based diffserv network The simulation results as expected showed that Two Markers System reduced TCP impacts over assured service and fairly allocated the bottleneck bandwidth among TCP flows

We have also described the analysis for the interaction of TCP and UDP flows in a diffserv network using the Two Markers System and simulated a TMS model to study reciprocal effects between TCP and UDP flows in a RIO-based diffserv network First in an over-provisioned network as TMS regulates the drop precedence TCP could fairly

52

share excess bandwidth that UDP almost dominated and achieve their target rates Next in an under-provisioned network although all the flows couldnrsquot achieve their target rates they fairly experienced degradation in proportion to their target bandwidth

In the near future we will study efficient scheme that TCP-source marking module informs edge-embedded marking module of the information of marking and the effect of the interaction of TCP and UDP flows according to variable parameters of RIO

53

REFERENCE

[1] W Feng D Kandlur D Saha andt K Shin Adaptive Packet Marking for Providing Differentiated Services in the Internet In Proc ICNPrsquo98 Austin TX Oct (1998) 108-117

[2] W Feng D Kandlur S Saha and K Shin Understanding TCP Dynamics in an Integrated Services Internet in Proc NOSSDAVrsquo97 St Louis MO May (1997) 279-290

[3] S Floyd and V Jacobson Like-sharing and Resource Management Models for Packet Networks IEEEACM Transactions on Networking vol 3 no 4 Aug (1995) 365-386

[4] J Heinanen F Baker W Weiss and J Wroclawski Assured Forwarding PHB Group RFC2597 June (1999)

[5] K Nichols S Blake F Baker and D Black Definition of the Differentiated Service Field (DS Field) in the Ipv4 and Ipv6 Headers RFC2474 Dec (1998)


CHAPTER 5 CONCLUSION

This thesis has focused on the behavior of TCP and UDP flows over the Assured Forwarding (AF) Per-Hop Behavior (PHB) in the differentiated services (DS) architecture, and has shown how the Two Markers System (TMS) can be applied under variable SLA scopes. Two important issues have driven much of the research in this area. The first is TCP dynamics over the AF PHB in a diffserv network: weaknesses in the TCP congestion control algorithm cause flows to fail to realize the user's target rate. The second is the interaction between TCP and UDP flows within the AF PHB: because UDP flows do not back off, they consume a disproportionate share of bandwidth, raising a fairness problem in allocating the excess bandwidth between the two kinds of flows.

The main strength of the Two Markers System proposed in this thesis is that its two marking modules can mark packets appropriately and allocate the bandwidth of a bottleneck link fairly with respect to each flow's target rate. First, TMS_I, using the Adequate-marking algorithm, adjusts the marking probability according to changes in network conditions, so that each source marks packets at the rate needed to guarantee its reservation. TMS_I also notifies TMS_II, the edge-embedded marking module, of the marking rates to be used when marking aggregate flows at the edge of the diffserv network. Next, TMS_II marks packets at the edge based on the marking rates reported by the sources and, when resources in the diffserv network are over-provisioned, allocates the bottleneck bandwidth among flows according to proportional fairness (the marking-rate-based fairness strategy). Consequently, TMS mitigates both the unfairness of per-aggregate marking and the inefficient utilization of per-flow marking. We simulated a TMS model to study the effects of several factors on the throughput of TCP flows in a RIO-based diffserv network. As expected, the simulation results showed that the Two Markers System reduces TCP's adverse impact on the assured service and allocates the bottleneck bandwidth fairly among TCP flows.
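The interplay between the two marking modules can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the thesis's implementation: the function names, the additive adjustment step, and the dictionary-based flow bookkeeping are all hypothetical.

```python
def tms1_marking_prob(measured_rate, reservation_rate, prev_prob, step=0.05):
    """TMS_I (source-side marker): nudge the marking probability upward when
    the flow's measured rate falls short of its reservation, and downward
    when it exceeds it, keeping the probability within [0, 1]."""
    if measured_rate < reservation_rate:
        return min(1.0, prev_prob + step)
    return max(0.0, prev_prob - step)


def tms2_allocate(excess_bw, marking_rates):
    """TMS_II (edge-embedded marker): split the excess bandwidth among flows
    in proportion to the marking rates reported by the sources
    (marking-rate-based proportional fairness)."""
    total = sum(marking_rates.values())
    if total == 0:
        n = len(marking_rates)
        return {flow: excess_bw / n for flow in marking_rates}
    return {flow: excess_bw * rate / total for flow, rate in marking_rates.items()}
```

For example, with 10 Mb/s of excess bandwidth and reported marking rates of 2 and 3, `tms2_allocate` hands the two flows 4 and 6 Mb/s respectively, so the surplus tracks each source's marking effort rather than its aggressiveness.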

We have also analyzed the interaction of TCP and UDP flows in a diffserv network using the Two Markers System, and simulated a TMS model to study the reciprocal effects between TCP and UDP flows in a RIO-based diffserv network. First, in an over-provisioned network, because TMS regulates the drop precedence, TCP flows could fairly share the excess bandwidth that UDP had previously dominated and achieve their target rates. Next, in an under-provisioned network, although none of the flows could achieve its target rate, all of them were degraded fairly, in proportion to their target bandwidth.
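The under-provisioned outcome above amounts to scaling every flow's achieved rate by the same factor. A minimal sketch, with a hypothetical function name and toy numbers chosen for illustration:

```python
def fair_degradation(capacity, targets):
    """Under-provisioning: the link capacity is smaller than the sum of the
    target rates, so each flow achieves capacity * target_i / sum(targets),
    i.e. all flows are degraded in proportion to their target bandwidth."""
    total = sum(targets.values())
    return {flow: capacity * t / total for flow, t in targets.items()}
```

With a 6 Mb/s bottleneck and targets of 4 Mb/s (TCP) and 8 Mb/s (UDP), both flows achieve exactly half of their targets (2 and 4 Mb/s), which is the proportional-degradation behavior the simulations exhibited.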

In the near future we will study a more efficient scheme by which the TCP-source marking module informs the edge-embedded marking module of its marking information, as well as the interaction of TCP and UDP flows under varying RIO parameters.


REFERENCES

[1] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Adaptive Packet Marking for Providing Differentiated Services in the Internet," in Proc. ICNP'98, Austin, TX, Oct. 1998, pp. 108-117.

[2] W. Feng, D. Kandlur, D. Saha, and K. Shin, "Understanding TCP Dynamics in an Integrated Services Internet," in Proc. NOSSDAV'97, St. Louis, MO, May 1997, pp. 279-290.

[3] S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, Aug. 1995, pp. 365-386.

[4] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured Forwarding PHB Group," RFC 2597, June 1999.

[5] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, Dec. 1998.

[6] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[7] V. Jacobson and M. Karels, "Congestion Avoidance and Control," in Proc. SIGCOMM'88, Stanford, CA, Aug. 1988, pp. 314-329.

[8] I. Yeom and A. L. N. Reddy, "Realizing Throughput Guarantees in a Differentiated Services Network," in Proc. ICMCS'99, Florence, Italy, June 1999, pp. 372-376.

[9] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. GLOBECOM'99, Rio de Janeiro, Dec. 1999.

[10] C. Gbaguidi, H. J. Einsiedler, P. Hurley, W. Almesberger, and J. Hubaux, "A Survey of Differentiated Services Proposals for the Internet," SSC Tech. Report SSC/1998/020, http://sscwww.epfl.ch, May 1998.

[11] J. Ibanez and K. Nichols, "Preliminary Simulation Evaluation of an Assured Service," IETF Internet Draft, draft-ibanez-diffserv-assured-eval-00.txt, Aug. 1998.

[12] S. H. Lee, S. J. Seok, S. J. Lee, S. K. Youm, and C. H. Kang, "Two-differentiated Marking Strategies for TCP flows in a Differentiated Services Network," in Proc. QofIS'01, LNCS 2156, Coimbra, Sept. 2001, pp. 207-221.

[13] I. B. H. de A. Alves, J. F. de Rezende, and L. F. M. de Moraes, "Evaluating Fairness in Aggregated Traffic Marking," in Proc. GLOBECOM'00, vol. 1, San Francisco, Nov. 2000, pp. 445-449.

[14] N. Seddigh, B. Nandy, and P. Pieda, "Study of TCP and UDP Interaction for AF PHB," IETF Internet Draft, draft-nsbnpp-diffserv-udptcpaf-01.pdf, Aug. 1999.

[15] S. H. Lee, S. J. Seok, S. J. Lee, and C. H. Kang, "Study of TCP and UDP flows in a Differentiated Services Network," in Proc. MMNS'01, LNCS 2216, Chicago, Oct. 2001, pp. 198-203.

[16] A. Kolarov, "Study of the TCP/UDP Fairness Issue for the Assured Forwarding Per Hop Behavior in Differentiated Services Networks," in Proc. HPSR'01, Dallas, May 2001, pp. 190-196.

[17] Y. Bernet, J. Binder, S. Blake, M. Carlson, B. E. Carpenter, S. Keshav, E. Davies, B. Ohlman, Z. Wang, and W. Weiss, "A Framework for Differentiated Services," IETF Internet Draft, draft-ietf-diffserv-framework-02.txt, Feb. 1999.

[18] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.

[19] M. Vanhala, "Differentiated Services Architecture," S-38.128 Teletekniikan erikoistyo, 1999.

[20] V. Jacobson, K. Nichols, and K. Poduri, "An Expedited Forwarding PHB," IETF Internet Draft, draft-ietf-diffserv-phb-ef-01.txt, Nov. 1998.

[21] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list, ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt, April 1990.

[22] V. Paxson, "End-to-End Internet Packet Dynamics," in Proc. ACM SIGCOMM, Sept. 1997.

[23] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, Oct. 1996.

[24] S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," IETF Internet Draft, draft-ietf-tcpimpl-newreno-02.txt, Feb. 1999.

[25] L. S. Brakmo, S. W. O'Malley, and L. L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," in Proc. ACM SIGCOMM, Oct. 1994, pp. 24-35.

[26] S. Floyd, "TCP and Explicit Congestion Notification," Computer Communication Review, vol. 24, no. 5, Oct. 1994, pp. 10-23.

[27] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, Aug. 1993, pp. 397-413.

[28] A. Feroz, A. Rao, and S. Kalyanaraman, "A TCP-friendly Traffic Marker for IP Differentiated Services," in Proc. IWQoS'2000, Pittsburgh, PA, June 2000, pp. 35-48.
