
CHAPTER 1 INTRODUCTION - Shodhganga (shodhganga.inflibnet.ac.in/bitstream/10603/27654/6/06...)



CHAPTER 1

INTRODUCTION

1.1 PREAMBLE

The Internet carries high-quality multimedia traffic along with other applications and services. Distributed multimedia consists of voice, video and data; video conferencing, video on demand (VoD), distance learning and distributed games are some of its applications. Providing quality of service (QoS) for multimedia streaming has been a challenging problem. When networks carry all types of traffic, it is important that QoS constraints are met wherever necessary, particularly when users are prepared to pay for premium services.

As more and more users and organisations use the Internet for their multimedia applications, research interest in QoS has risen significantly. Xiao and Ni (1999) provide an overview of QoS in the Internet. The QoS parameters relevant to applications are throughput, delay, delay variation (jitter), loss and error rates. The most prevalent factor in the degradation of service is packet loss at the routers during congestion (Shyu et al 2003).

A considerable amount of research work has been carried out to maximize throughput and minimize loss rates. One approach is the use of active queue management at the IP routers to minimize loss. Another is the design of IP routers to include a special scheduler to reduce delay or jitter. This thesis proposes an integrated approach combining a novel active queue management (AQM) scheme and a new scheduler at high performance routers to support QoS. The AQM aims at increasing throughput and reducing loss rate; the scheduler is designed to reduce jitter.

The Internet carries packets of various applications, and the basic transport layer protocol of the Internet is the Transmission Control Protocol (TCP). Due to overheads such as retransmission and acknowledgement, TCP is not suitable for real-time multimedia flows; the User Datagram Protocol (UDP) is the preferred protocol for these applications (Chung and Claypool 2000). Hence alternative solutions are needed to support distributed multimedia flows in the Internet. The QoS constraints for these flows normally involve throughput, delay and jitter as important parameters. As routing with the combination of throughput and one of the other parameters (packet loss, cost, delay, delay jitter) is NP-complete (Wang and Crowcroft 1996), and jitter reduction is needed for the inelastic applications considered here, jitter is taken as the second QoS parameter in this work.

This chapter provides a high-level overview of the IP networks and

the functioning of the IP routers. It describes the attempts made at providing

QoS and resource management to support distributed multimedia

applications. The chapter introduces traditional buffer management and

scheduling in these routers. It briefly presents the inspiration for AQM and

the scheduler. The various Internet models are described. The objectives of

this work and the motivation for a novel Internet QoS architecture are

explained. The primary contributions of this work are also presented. The chapter

ends with a discussion on the organization of the rest of the thesis.


1.2 OVERVIEW OF IP ROUTERS

This section discusses the arrangement of today's Internet and the way IP routers function. It introduces the organization of an IP router along with its components and describes its operation.

1.2.1 IP Routers

The Internet is bound by the basic concepts of Internet Protocol

(IP), addressing and routing (Keshav 1997). Routers are network layer

devices used to interconnect different networks (Peterson and Davie 2001).

Their primary role is to switch packets from input links to output links. In

order to do so a router must be able to determine the path that every incoming

packet needs to follow and decide which outgoing link to select (Kurose and

Ross 2003). They must also deal with heterogeneous link technologies, provide scheduling support for differentiated service and participate in complex distributed algorithms to generate globally coherent routing tables (Keshav and Sharma 1998). These demands, along with the voracious bandwidth requirements of applications, challenge their design.

Routers are found at every level in the Internet. Primarily there are

three types of routers:

a. backbone routers

b. enterprise routers

c. access routers

Routers in access networks allow homes and small businesses to connect to an Internet Service Provider (ISP). Routers in enterprise networks link thousands of computers within a campus or enterprise. Routers in the backbone are usually not directly accessible to end-systems; instead, they link together ISPs and enterprise networks with long distance trunks.

1.2.1.1 Components of a router

Figure 1.1a abstracts the architecture of a generic router (Tantawy 1994). A generic router has basic functionalities that include route processing, packet forwarding and traffic prioritization. In a decentralized router architecture, each network interface offers the processing power and the buffer space needed for packet processing tasks related to the packets flowing through it (Figure 1.1b). Functional components process the inbound and outbound traffic and handle time-critical port processing tasks, such as protocol functions that lie in the critical path of data flow, as well as the QoS processing functions. QoS guarantees are provided by classifying packets into predefined service classes (Stallings 2002).

1.2.1.2 The IP packet processing steps

Figure 1.2 helps in understanding the packet processing done inside

a router. The IP packet processing steps are as follows (Keshav and

Sharma 1998) :

1. IP Header Validation: As a packet enters an input port (ingress

port), the forwarding logic verifies all Layer 3 information

(header length, packet length, protocol version, checksum, etc.).

2. Route Lookup and Header Processing: The router then performs

an IP address lookup using the packet’s destination address to

determine the output port (egress port) and performs all IP

forwarding operations (TTL decrement, header checksum, etc.).


Figure 1.1 A generic switch-based distributed router architecture: (a) a functional diagram with the switch fabric, route processor (CPU) and network interfaces; (b) the generic architecture of a network interface, comprising a media-specific interface, MAC and PHY layers, inbound and outbound processing, post-processing (route resolution logic), a queue manager, memory, a local processing subsystem and the switch fabric interface.


Figure 1.2 IP packet processing in shared memory router architecture: packets enter at the ingress ports, pass through the system controller and shared memory, and leave via the egress ports; the numbers 1 to 9 in the figure mark the processing steps listed in this section.


3. Packet Classification: The forwarding engine examines

Layer 4 and higher layer packet attributes relative to any QoS

and access control policies.

4. With these attributes in hand, the router performs one or more

of the following parallel functions: associates the packet with

the suitable priority and the right egress port(s), redirects the

packet to a different (overridden) destination (ICMP redirect),

drops the packet according to a congestion control policy

(e.g., RED), or a security policy and performs the appropriate

accounting functions (statistics collection, etc.).

5. The forwarding engine notifies the system controller about the

packet arrival.

6. The system controller allocates a memory location for the

arriving packet.

7. Once the packet has been passed to the shared memory, the

system controller signals the proper output port(s).

8. The output port(s) gets the packet from the known shared memory location using any of the following scheduling algorithms: Weighted Fair Queueing (WFQ), Weighted Round-Robin (WRR), Strict Priority (SP), etc.

9. When the appropriate destination outbound link(s) has retrieved the packet, it informs the system controller and relinquishes the memory location for new traffic.
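Steps 1 and 2 of this sequence can be illustrated with a minimal sketch. The Packet class, the routing table entries and the port numbers below are hypothetical; the sketch only shows header validation, longest-prefix-match lookup and TTL decrement, not the data path of an actual router.

```python
import ipaddress

# Simplified IPv4 forwarding sketch: header validation, longest-prefix-match
# route lookup, and TTL decrement (steps 1 and 2 of the processing sequence).
# The Packet class and the routing table entries are illustrative only.

class Packet:
    def __init__(self, dst, ttl, version=4):
        self.dst = dst          # destination address, e.g. "10.1.2.3"
        self.ttl = ttl          # time-to-live
        self.version = version  # IP version field

ROUTES = [  # (prefix, egress port): a hypothetical forwarding table
    (ipaddress.ip_network("10.0.0.0/8"), 1),
    (ipaddress.ip_network("10.1.0.0/16"), 2),
    (ipaddress.ip_network("0.0.0.0/0"), 0),   # default route
]

def forward(pkt):
    """Validate the header, look up the egress port, decrement TTL.
    Returns the egress port, or None if the packet must be dropped."""
    if pkt.version != 4 or pkt.ttl <= 0:      # step 1: header validation
        return None
    dst = ipaddress.ip_address(pkt.dst)
    # step 2: longest-prefix-match route lookup
    matches = [(net.prefixlen, port) for net, port in ROUTES if dst in net]
    if not matches:
        return None
    pkt.ttl -= 1                              # step 2: TTL decrement
    return max(matches)[1]                    # most specific prefix wins
```

For example, forward(Packet("10.1.2.3", ttl=64)) matches both 10.0.0.0/8 and 10.1.0.0/16 and selects the more specific prefix, returning port 2.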


1.2.1.3 Input vs. output queued routers

Routers need buffers to store packets until an output line becomes available. These buffers may be placed at the input ports or at the output ports. This section looks at the pros and cons of each choice.

In a purely input-queued router, packets are queued at the input

buffer. An arbiter guarantees access to the output links by scheduling these

packets. Hence there is no need for an output queue. The advantage of the

input-queued approach is that the speedup of the switch fabric can improve

the performance of the router. The disadvantage is that if First-In-First-Out

(FIFO) order is used to serve the queue, the packet at the head of the queue

may block other packets though their output lines are free. This is known as

head-of-line blocking. Many researchers have suggested different algorithms

for overcoming this problem.

A pure output-queued router buffers packets only at the outputs. It uses one of the scheduling policies to send the packets through the output links. The major advantage is that this approach does not suffer from the head-of-line blocking problem. But, if all the incoming packets are destined for the same output link, then there is a need for input buffers to avoid packet loss (Keshav 1997). This gives rise to the hybrid approach of having both input and output buffers.

Thus, a combination of input-buffered and output-buffered switching is required, i.e., CIOB (Combined Input and Output Buffered). The goal of most designs, then, is to find the minimum speedup required for a CIOB switch with Virtual Output Queues (VOQs) to match the performance of an output-buffered switch.
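The benefit of Virtual Output Queues can be illustrated with a small sketch. The class and method names below are hypothetical; the point is only that keeping one queue per output at each input lets a packet destined for a free output be served even when the packet ahead of it is waiting for a busy output.

```python
from collections import deque

# Virtual Output Queues (VOQs): each input port keeps one queue per output,
# so a packet waiting for a busy output never blocks packets behind it that
# are destined for a free output. Illustrative sketch only.

class InputPort:
    def __init__(self, num_outputs):
        self.voq = [deque() for _ in range(num_outputs)]

    def enqueue(self, packet, output):
        self.voq[output].append(packet)

    def head_for(self, output):
        """Packet eligible for `output`, independent of the other VOQs."""
        return self.voq[output][0] if self.voq[output] else None

    def dequeue(self, output):
        return self.voq[output].popleft()
```

With a single FIFO at the input, a packet at the head waiting for output 0 would block a later packet destined for an idle output 1; with VOQs, head_for(1) still exposes that packet to the arbiter.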


An input port provides several functions. It carries out datalink

layer encapsulation and decapsulation. It may also have the intelligence to

look up an incoming packet’s destination address in its forwarding table to

determine its destination port (this is also called route lookup). The algorithm

for route lookup can be implemented using custom hardware, or each line

card may be equipped with a general-purpose processor. In order to provide

QoS guarantees, a port may need to classify packets into predefined service

classes. Output ports store packets before they are transmitted on the output

link. They can implement sophisticated scheduling algorithms to support

priorities and guarantees. Like input ports, output ports also need to support

datalink layer encapsulation and decapsulation and a variety of higher-level

protocols (Keshav and Sharma 1998).

Input-queued and output-queued routers share the route lookup

bottleneck, but each of them has an additional performance bottleneck that the

other does not have. The output-queued switches must run the switch fabric at

a speed greater than the sum of the speeds of the incoming links. This also

requires storing packets rapidly in output buffers. One way to get around this

problem is to place all queueing at the input. Input-queuing is often criticized

because of the head-of-line (HoL) blocking problem: packets blocked at the

head of an input queue prevent schedulable packets deeper within the queue

from accessing the switch fabric (Karol et al 1987).

However, with this approach, an arbiter must resolve contention for

the switching fabric and for the output queue. It is hard to design arbiters that

run at high speeds and can also fairly schedule the switch fabric and the

output line (McKeown et al 1996). Another disadvantage of input queuing is

that packet scheduling algorithms for providing quality of service are usually

specified in terms of output queues. Each input port controller should imitate

the actions of the entire set of output port controllers. With the different link technologies, building a general-purpose input port controller is a challenging task. Another problem with pure input-queued routers is that AQM schemes such as Random Early Detection (Floyd and Jacobson 1993) depend on the length of the output queue; with an input-queued switch, the output queue length is not known. Due to these practical problems, hybrid approaches with both input and output queuing are a necessity for the next generation networks.

1.2.1.4 Prioritization and resource reservation

In this section, challenges in router design and the solutions in the

design of the next generation of IP routers are discussed. Flow identification

is a significant problem in routers. A flow is constituted by the set of packets

traveling through the Internet between a given source and a given destination.

A flow can result from the set of packets within a long-lasting TCP

connection or from the set of UDP packets in an audio or video session.

Optimization of the usage of resources, such as buffers and cache entries, is sought. Therefore, it is necessary to identify flows on-the-fly. Flows that

require real-time QoS guarantees should be identified by matching incoming

packet headers with a set of pre-specified filters. Since classification is to be

done for each incoming packet, fast classification schemes are needed.

Since the Internet was designed for best-effort traffic, it has poor

support for resource reservations, even for simple priority schemes. The

QoS requirements of the applications may demand support for resource

reservation in the routers. Resource reservation goes hand-in-hand with flow

classification, because resources are reserved on behalf of prespecified flows.

This coupling makes resource reservation an open problem. Even if there are

efficient flow classifiers, resource reservation additionally requires either

policing, so that the demand of an individual flow is limited, or some form of


segregation in packet scheduling, so that over-limit flows are automatically

discouraged. Given the complexity of implementing Fair-Queueing type

scheduling algorithms at high speed, there has been much research work done

on efficient policers.

1.2.2 QoS Architecture

The general definition of QoS is “A defined level of performance in

a data communications system”. Network providers need performance metrics

that they can agree with their peers and with service providers buying

resources from them with certain performance guarantees. The following four

system performance metrics are considered the most important for end-to-end

QoS:

Throughput: Throughput is the effective data transfer rate

measured in bits per second. Sharing a network lowers the

throughput that can be realized by any user, due to the

overhead imposed by the extra bits included in every packet

for identification and other purposes. A minimum rate of

throughput may be required by an application.

Packet loss: When a network link is congested, packets queue

up in the buffers of the routers. If the link remains congested

for too long, the buffered queues will overflow and data will

be lost.

Delay: The time taken by data to travel from the source to the

destination is known as delay.

Jitter: The variation of delay is jitter. Jitter results from variations in queue length, variations in the processing time needed to reorder packets that arrived out of order because they traveled over different paths, and variations in the processing time needed to reassemble packets that were segmented by the source before being transmitted.

The important delays along the path are the nodal processing delay, queuing delay, transmission delay and propagation delay. The most complicated of these is the queuing delay. Many research papers and books have been published on queuing delay (Kleinrock 1975; Bertsekas and Gallager 1992). When characterizing queuing delay, the average delay, the variance of delay (jitter) and the probability of the queuing delay exceeding a certain bound are explored (Kurose and Ross 2003).
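A standard way to measure jitter as a running statistic is the interarrival jitter estimator of RTP (RFC 3550), which smooths the change in packet spacing with a gain of 1/16. The following sketch applies that estimator to a list of (send time, receive time) pairs; the function name and timestamps are illustrative.

```python
# Running interarrival jitter estimate in the style of RTP (RFC 3550):
# D is the change in packet spacing at the receiver relative to the sender,
# and the jitter J is exponentially smoothed with gain 1/16.

def interarrival_jitter(timestamps):
    """timestamps: list of (send_time, recv_time) pairs, in arrival order."""
    jitter = 0.0
    for (s_prev, r_prev), (s, r) in zip(timestamps, timestamps[1:]):
        d = (r - r_prev) - (s - s_prev)   # spacing change between packets
        jitter += (abs(d) - jitter) / 16  # exponential smoothing
    return jitter
```

Perfectly periodic delivery yields zero jitter; any variation in the receive spacing raises the estimate.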

1.2.2.1 Elastic and Inelastic traffic

Internet traffic may be divided into two broad categories, namely,

elastic and inelastic. Elastic traffic is one that can adjust to changes in delay

and throughput across the Internet. Inelastic traffic does not adapt to changes

in delay and throughput due to the nature of the applications. Many real-time multimedia applications fall under this category. Inelastic traffic introduces two new requirements into the Internet architecture: preferential treatment for more demanding applications, and support for elastic applications in the presence of inelastic traffic (Stallings 2002).

1.2.2.2 QoS and router design

The QoS metrics can be controlled by the router design. There are

currently three main mechanisms to achieve a network performance that is

‘better than Best Effort’:


Overprovision of capacity

Pre-reservation of resources

Prioritisation of certain services/users

In the access network, however, there is typically not much installed fiber, and therefore capacity is generally limited. Under these circumstances, in order to support higher QoS than Best Effort, it is necessary to be able to treat certain traffic differently from the rest, either by specifically reserving resources (e.g. Integrated Services (IS)) or by prioritising it (e.g. Differentiated Services (DS)). The Internet Engineering Task Force (IETF) has come up with DS, IS and the Resource Reservation Protocol (RSVP) as beyond-Best-Effort activities (Kurose and Ross 2003).

The network providers should achieve the service level agreement

(SLA) guarantees using the most cost-effective mechanisms. The IP router

design concentrates on buffer management and scheduling.

1.2.3 Buffer Management and QoS

The prioritization of mission critical applications and the support of

IP telephony and video conferencing create the requirement for supporting

QoS enforcement at the router. These applications are sensitive to both

absolute delay and delay jitter.

Beyond Best-Effort service, routers are beginning to offer a number

of QoS or priority classes. Priorities are used to indicate the preferential

treatment of one traffic class over another. The output buffered switch will

have multiple buffers at each output port and one buffer for each QoS traffic

class. The buffers may be physically separate or a physical buffer may be

divided logically into separate virtual buffers.


Buffer management here refers to the discarding policy for the

input of packets into the buffers (e.g., Drop Tail, Drop-From-Front, Random

Early Detection (RED), etc.) and the scheduling policy for the output of

packets from the buffers (e.g., strict priority, weighted round-robin (WRR),

weighted fair queueing (WFQ), etc.). Buffer management in the IP router

involves both dimensions of time (packet scheduling) and buffer space

(packet discarding). The IP traffic classes are distinguished in the time and

space dimensions by their packet delay and packet loss priorities. Therefore

buffer management and QoS support is an integral part of the switch fabric

design (Keshav and Sharma 1998).

1.2.4 Active Queue Management

Scheduling and AQM are the two ways to support QoS in IP

routers. In traditional implementations of router queue management, the

packets are dropped when a buffer becomes full, in which case the

mechanism is called Drop-Tail. Internet routers can improve application

goodput and response times by detecting congestion early and improving

fairness among flows. This is implemented in the routers by dropping packets

before a buffer becomes full, so that the senders can respond to the congestion

before the actual buffers overflow. Such a proactive approach is known as

Active Queue Management (AQM). Many AQM schemes have been proposed, the most popular being RED (Floyd and Jacobson 1993). RED has been recommended by the IETF for IP routers (Zheng and Atiquzzaman 2002). Due to the characteristics of RED, TCP flows benefit while UDP flows are punished.
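The early-drop behaviour of RED can be sketched as follows. This is a simplified rendering of the algorithm of Floyd and Jacobson (1993): it keeps an exponentially weighted moving average of the queue length and drops with a probability that grows linearly between two thresholds, omitting refinements such as the count-based spacing of drops. The parameter values are illustrative.

```python
import random

# Sketch of RED (Random Early Detection): an exponentially weighted moving
# average of the queue length drives a probabilistic early drop, so senders
# see congestion signals before the buffer actually overflows.

class Red:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th = min_th, max_th   # queue thresholds
        self.max_p = max_p                          # max early-drop probability
        self.weight = weight                        # EWMA gain
        self.avg = 0.0                              # average queue length

    def drop_probability(self, queue_len):
        self.avg += self.weight * (queue_len - self.avg)  # update EWMA
        if self.avg < self.min_th:
            return 0.0                              # no early drops
        if self.avg >= self.max_th:
            return 1.0                              # force drop
        # linear ramp between the two thresholds
        return self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)

    def should_drop(self, queue_len):
        return random.random() < self.drop_probability(queue_len)
```

With the default small weight, the average reacts slowly to bursts, which is what lets RED absorb transient congestion while still signalling persistent congestion early.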

1.2.5 Scheduler

A scheduler at the IP routers decides which packet to send next

(Keshav 1997). If the packets arriving at all the input ports of a router wish to


leave from the same output port and if the output trunk speed is the same as

the input trunk speed, only one of these packets can be transmitted in the time

it takes for all of them to arrive at the output port. In order to prevent packet

loss, the output port provides buffers to store excess arriving packets and

serves packets from the buffer as and when the output trunk is free. The

obvious way to serve packets from the buffer is in the order they arrived at the

buffer, that is, in first-come-first-served (FCFS), or, FIFO order. FCFS

service is trivial to implement, requiring the router or switch to store only a

single head and tail pointer per output trunk. However, this solution has its problems: it does not allow the router to give some sources a lower delay than others, nor does it restrain a malicious source that sends an unending stream of packets as fast as it can, which may cause other well-behaved streams to lose packets. An alternative service method called Fair Queuing

solves these problems, albeit at a greater implementation cost (Demers et al,

1990). In the Fair Queuing approach, each source sharing a bottleneck link is

allocated an ideal rate of service at that link. Specifically, focusing only on

the sources that are backlogged at the link at a given instant in time, the

available service rate of the trunk is partitioned in accordance with a set of

weights. Fair Queuing and its variants are mechanisms that serve packets

from the output queue to approximately partition the trunk service rate in this

manner.

All versions of Fair Queuing require packets to be served in an

order different from the one in which they arrived. Consequently, Fair

Queuing is more expensive to implement than FCFS, since it must decide the

order in which to serve incoming packets and then manage the queues in

order to carry this out. When the traffic intensity is high, Fair Queuing is expensive to implement, since it requires some form of per-conversation state to be stored at the routers. Fair Queuing has three important and useful properties. First, it provides protection, so that a well-behaved source does not see packet losses due to misbehavior by other sources. Second, by design, it provides fair bandwidth allocation: if the sum of the weights of the sources is bounded, each source is guaranteed a minimum share of link capacity. Finally, it can be shown that if a source is leaky-bucket regulated, it receives a bound on its worst-case end-to-end delay, independent of the behavior of the other sources. For these reasons, almost all current routers support some variant of Fair Queueing.
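The bookkeeping behind Fair Queuing variants can be sketched with per-flow virtual finish times. The sketch below approximates the virtual clock by the largest finish tag served so far, a textbook simplification rather than the exact GPS-tracking clock of a production scheduler; the class and variable names are illustrative.

```python
import heapq

# Weighted Fair Queueing sketch: each packet receives a virtual finish tag
# start + length / weight, and packets are served in increasing tag order,
# which partitions the link rate among backlogged flows by their weights.

class WFQScheduler:
    def __init__(self):
        self.vtime = 0.0        # approximate virtual time
        self.last_finish = {}   # flow id -> finish tag of its last packet
        self.heap = []          # (finish_tag, seq, flow, length)
        self.seq = 0            # tie-breaker for equal tags

    def enqueue(self, flow, length, weight):
        start = max(self.vtime, self.last_finish.get(flow, 0.0))
        finish = start + length / weight      # heavier flows finish sooner
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow, length))
        self.seq += 1

    def dequeue(self):
        finish, _, flow, length = heapq.heappop(self.heap)
        self.vtime = max(self.vtime, finish)  # advance virtual time
        return flow, length
```

For instance, with flow "a" at weight 2 and flow "b" at weight 1, both backlogged with unit-length packets, "a" is served roughly twice as often as "b".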

A related scheduling problem has to do with the partitioning of link

capacity among different classes of users. It has been shown that extensions

of Fair Queueing are compatible with hierarchical link-sharing requirements

(Bennett and Zhang 1996; Goyal and Vin 1997). Fast implementations of

algorithms that provide both hierarchical link sharing and per-connection QoS

guarantees are an area of active research (Bennett and Zhang 1997). All

future routers are expected to provide some form of Fair Queueing at output

queues.

1.2.6 Internet Models

The Internet, as originally conceived, offers only a very simple

QoS, point-to-point best-effort data delivery. Before real-time applications

such as remote video, multimedia conferencing, visualization and virtual

reality can be broadly used, the Internet infrastructure must be modified to

support real-time QoS, which provides some control over end-to-end packet

delays. This extension must be designed from the beginning for multicasting;

simply generalizing from the unicast case does not work. The fundamental

service model of the Internet, as embodied in the best-effort delivery service

of IP, has been unchanged since the beginning of the Internet research project

three decades ago (Cerf and Kahn 1974).


Real-time QoS is not the only issue for a next generation of traffic

management in the Internet. Network operators are requesting the ability to

control the sharing of bandwidth on a particular link among different traffic

classes. They want to be able to divide traffic into a few administrative classes

and assign to each a minimum percentage of the link bandwidth under

conditions of overload, while allowing "unused" bandwidth to be available at

other times. These classes may represent different user groups or different

protocol families, for example. Such a management facility is commonly

called controlled link-sharing. IS (also known as IntServ) is an Internet

service model that includes best-effort service, real-time service and

controlled link sharing. IntServ relies on per-flow admission control, policing and scheduling, and is therefore not scalable.

RSVP is a resource reservation setup protocol designed for an

integrated services Internet. The RSVP protocol is used by a host to request

specific QoS from the network for particular application data streams or

flows. RSVP is also used by routers to deliver QoS requests to all nodes

along the path(s) of the flows and to establish and maintain state to provide

the requested service. RSVP requests result in resources being reserved in

each node along the data path. Some researchers concluded that there is an

inescapable requirement for routers to be able to reserve resources, in order to

provide special QoS for specific user packet streams, or "flows".

The DS (also known as DiffServ) Internet model differed from the above approach. The motivating factors for DiffServ were scalability, aggregation and high resource utilization. But it relies heavily on network-wide Service Level Agreement (SLA) monitoring and on tactical and strategic capacity planning. DiffServ and other class-based schemes (Parris et al 1999; Chung and Claypool 2000) offer differentiated service to incoming traffic.


They require complex mechanisms and need many network components such

as markers, traffic shapers etc.

Alternate Best Effort (ABE) (Hurley et al 2001) provides an AQM where throughput is forfeited for delay-sensitive traffic. Since no degrees of sensitivity are provided (traffic is simply delay- or throughput-sensitive), ABE is not flexible.

1.3 OBJECTIVES

The objective is a new Internet QoS architecture that combines the per-flow model of the IntServ and the per-hop model of the DiffServ architectures. The flow of a favored multimedia application is differentiated at the IP router and given preferential treatment. The DS code point (DSCP) is used to mark packets to select a favored packet; this is essentially a field in the Type of Service (ToS) byte of the IPv4 header, or the DS field of the IPv6 header. The packet classification function is part of the AQM.

The QoS architecture is designed for inelastic flows that need QoS

support for delay jitter and packet loss.
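As an illustration of the classification step, a sketch of how a classifier might extract the DSCP from the IPv4 ToS byte follows. The favored code point shown is a hypothetical choice for illustration, borrowing the standard Expedited Forwarding value; the actual code point used in this architecture is a design decision.

```python
def dscp_of(tos_byte):
    """Extract the 6-bit DSCP from an IPv4 ToS byte.

    The DS field occupies the upper six bits; the lower two
    bits are reserved for ECN.
    """
    return (tos_byte >> 2) & 0x3F

# Hypothetical code point chosen to mark 'favored' multimedia
# packets (the standard Expedited Forwarding value, 101110).
FAVORED_DSCP = 0b101110

def is_favored(tos_byte):
    """Classify a packet as favored by its DSCP."""
    return dscp_of(tos_byte) == FAVORED_DSCP
```

For example, a ToS byte of 0xB8 carries DSCP 46 (binary 101110) and would be classified as favored under this assumption.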

1.4 MOTIVATION

The IP router functions with simple logic. When a packet arrives at the router and buffer space is available, it is simply forwarded toward its destination through an output link, selected by referring to the forwarding table at the IP router. When the buffers are full, the packet may be dropped, owing to the Best Effort nature of the Internet. Hence there is a

need for QoS architecture at the IP routers to support distributed multimedia

applications that require good QoS. Some important considerations for the

router design are throughput, packet loss, packet delays, amount of buffering


and complexity of implementation. For given input traffic, the router designs

aim to maximize throughput and minimize packet delays and losses. In

addition, the total amount of buffering should be minimal (to sustain the

desired throughput without incurring excessive delays) and implementation

should be simple.
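The forward-or-drop behavior just described can be sketched as follows; the names (`forwarding_table`, the dictionary structures) are illustrative, not part of any actual router implementation.

```python
from collections import deque

def forward(packet, forwarding_table, output_queues, buffer_limit):
    """Best Effort forwarding sketch: look up the output link in the
    forwarding table and enqueue the packet, or drop it (tail drop)
    when that link's buffer is full."""
    link = forwarding_table[packet["dest"]]
    queue = output_queues[link]
    if len(queue) >= buffer_limit:
        return False            # buffer full: packet is dropped
    queue.append(packet)
    return True
```

The sketch makes the Best Effort limitation visible: the drop decision depends only on buffer occupancy, with no regard to the packet's QoS needs.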

As the Internet is unfriendly to stream traffic generated by voice and video (Parris et al 1999), IP router designs need to be extended for next generation networks. The IP routers should be designed to encompass the intelligence necessary to recognize, control and prioritize different

types of traffic. The arriving traffic should be differentiated and the individual

QoS requirements should be satisfied. Especially, in distributed systems the

performance depends on the structure of the communication network (Fischer

and Merritt 2003). Research has been carried out to improve performance by reducing bandwidth requirements and jitter. The AQM

approach, while reducing congestion, also improves the performance of the

applications. The research community has proposed various AQM techniques

out of which RED has become the de-facto standard after the IETF

recommended it for the IP routers.

The traditional Tail-Drop algorithm drops packets only if there are

buffer overflows. It is easy to implement (Keshav 1997) but may result in

longer queuing delay for the packets. RED avoids congestion by dropping

packets randomly before the buffer overflows. Though this scheme reduces

the average delay and helps avoid congestion, the low pass filter algorithm,

which is used to calculate the average queue length in RED, results in poor

response time when RED recovers from congestion (Zheng and Atiquzzaman

2002). RED can reduce the queuing delays at the routers and hence the

end-to-end delay, but increases the jitter of non-bursty streams (Bonald et al

2000).
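The RED mechanism discussed above can be sketched as follows: an exponentially weighted moving average (the low-pass filter) smooths the instantaneous queue length, and the drop probability rises linearly between two thresholds. The parameter values shown are illustrative defaults only, not recommended settings.

```python
def red_update(avg, qlen, w=0.002):
    """RED's low-pass filter (EWMA) over the instantaneous queue
    length: avg <- (1 - w) * avg + w * qlen."""
    return (1.0 - w) * avg + w * qlen

def red_drop_prob(avg, min_th, max_th, max_p=0.1):
    """Linear drop probability between the two thresholds."""
    if avg < min_th:
        return 0.0              # no drops below the minimum threshold
    if avg >= max_th:
        return 1.0              # drop everything above the maximum
    return max_p * (avg - min_th) / (max_th - min_th)
```

The small weight w is exactly what makes the average sluggish: after congestion clears, the average queue length decays slowly, which accounts for the poor response time noted by Zheng and Atiquzzaman (2002).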


Suggestions have been put forth by many researchers for AQMs which do not strictly use average queue lengths (Ott et al 1999; Athuraliya et al 2001; Hollot et al 2001b; Kunniyur and Srikant 2004). Many of them require Internet-like configurations and, in particular, recommend Explicit Congestion Notification (ECN) marking (Floyd 1994). Some are

difficult to configure and some are complex to implement. Many studies have

debated whether dropping/marking should be based on queue length or on

input and output rates (or alternatively the queue length slope over time). The

objective is to keep the average queuing delay under a specified target, thus

reducing web response time, without significantly affecting application

throughput and link utilisation.

Other weaknesses of RED have been reported and several approaches to overcome them have been proposed. When a mixture of the various traffic types shares a link, RED allocates bandwidth unfairly. One remedial AQM uses per-flow soft states with instantaneous buffer size monitoring.

RED parameters can be tuned to provide either high utilisation or low queuing delay, but not both (Lapsley and Low 1999; Athuraliya et al 2001). Certain researchers use a class of algorithms, which also includes the PI controller (Hollot et al 2001b), that is mainly designed for ECN marking and has not been adequately studied without ECN.

Hence there arises a need for a simple AQM algorithm that can resolve the QoS issues without compromising the utilization factor. In addition, a scheduler policy is needed at the output queues to guarantee jitter. The novel Gentle Flow-based Proactive Queuing (GFPQ) algorithm drops packets from the buffer proactively, with preferential treatment for prioritized multimedia packets. The Jitter Guaranteed Time-stamp Scheduler (JGTS) guarantees jitter to an upper bound with the help of time stamps.
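The time-stamp idea can be illustrated generically: a scheduler stamps each packet with a target departure time one nominal gap after the previous one and holds packets until that time, so departures are evenly spaced. This is a minimal sketch of the general technique, not the JGTS algorithm itself, which is developed in later chapters.

```python
import heapq

class TimestampScheduler:
    """Generic timestamp-based scheduler sketch: departures are paced
    one nominal gap apart, bounding inter-departure jitter by the
    scheduler's timing accuracy."""

    def __init__(self, gap):
        self.gap = gap          # nominal inter-departure gap (seconds)
        self.next_ts = 0.0      # earliest timestamp for the next packet
        self.heap = []          # (timestamp, packet), ordered by timestamp

    def enqueue(self, arrival_time, packet):
        # Stamp: never earlier than the arrival, never earlier than
        # the slot after the previously stamped packet.
        ts = max(arrival_time, self.next_ts)
        self.next_ts = ts + self.gap
        heapq.heappush(self.heap, (ts, packet))

    def dequeue(self, now):
        # Release the head packet only once its timestamp is due.
        if self.heap and self.heap[0][0] <= now:
            return heapq.heappop(self.heap)[1]
        return None
```

The design choice worth noting is that smoothing is bought with delay: an early packet waits in the scheduler until its stamped slot, trading a small added latency for bounded jitter.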


This also led to the idea of a new, simple service model for the Internet to

offer ‘better than Best Effort’ service. The two common service models IS

and DS have been deployed to a significant extent in the Internet.

New research has been progressing in other directions as well, introducing new service models. Hurley et al (2001) discuss the ABE model. Scavenger networks introduce a model in which some packets are voluntarily marked as low priority. QBone Scavenger Service (QBSS) traffic can

expand to consume unused capacity. Users (or their applications) voluntarily

mark some traffic for scavenger treatment by setting Differentiated Services

Code Point (DSCP) in the IP packet headers to binary 001000. Routers put

this traffic into a special queue with very small allocated capacity using a

queuing discipline such as Weighted Round-Robin (WRR), Modified Deficit

Round-Robin (MDRR), Weighted Fair Queuing (WFQ), or a similar scheme.
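A packet-count approximation of one WRR service cycle is sketched below; real WRR and deficit-based variants also account for packet sizes, which this simplified version ignores.

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """One WRR cycle sketch: serve up to `weight` packets from each
    queue in turn. A scavenger class is given a very small weight,
    so it consumes only capacity left over by the other classes."""
    served = []
    for queue, weight in zip(queues, weights):
        for _ in range(weight):
            if not queue:
                break           # queue exhausted before its quota
            served.append(queue.popleft())
    return served
```

With weights of, say, 3 for best-effort traffic and 1 for the scavenger queue, at most one scavenger packet departs per cycle, mirroring QBSS's "very small allocated capacity".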

This thesis indicates that the integrated design of the novel AQM and the new scheduler can be used for the new service model. Here, the two types of traffic, namely favored and non-favored, are treated differently at the IP routers, while inter-protocol fairness is still ensured among all types of traffic such as TCP, UDP, and TCP-FTP.

1.5 CONTRIBUTIONS

The specific contributions made in this thesis are in two areas,

namely design and analysis of a novel AQM and design of a new scheduler.

From these, a novel Internet Service model with a QoS architecture is

designed. An overview of the contributions made is given below.


1.5.1 AQM Issues

The GFPQ framework is built on the 'queue based' approach. A novel AQM algorithm at the input queues of the IP router is evolved. It is designed in such a way that when a favored multimedia packet arrives, every effort is made to accommodate it in the queue. The QoS for such flows is supported by reducing their blocking probability (drop rate). The other types of traffic are considered 'non-favored', yet throughput is ensured for all types of packets. This leads to inter-protocol fairness and ensures that favoring a single class does not significantly affect the throughput of others. This results in improved throughput not only for favored multimedia UDP flows, but also for other types of traffic. Thus GFPQ with packet classification supports the QoS parameter of loss rate.

GFPQ is an AQM based on instantaneous queue monitoring, as against the average queue length used in RED and in the first version, FPQ. This helps in reducing the average queuing delay at the router.

GFPQ is stateless and easy to implement. The design and

implementation methodologies are presented in this work.

Performance analysis of the queuing model for GFPQ is

presented.

Qualitative analysis of GFPQ is presented. The analytical

modeling of the pushout policies of GFPQ supported by

quantitative analysis through simulation is also presented.

This framework is also generic in the sense that any mix of

traffic can be used with cross-over traffic.
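The pushout behavior outlined above can be illustrated with a simplified sketch: when the buffer is full, an arriving favored packet displaces a randomly chosen non-favored packet instead of being dropped. This is illustrative only; the actual GFPQ replacement and pushout policies are developed in Chapter 4.

```python
import random

class PushoutQueue:
    """Simplified pushout sketch (not the actual GFPQ policy):
    favored arrivals displace a random non-favored packet when
    the buffer is full, lowering the favored class's drop rate."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buf = []           # list of (favored: bool, packet)

    def enqueue(self, favored, packet):
        """Return True if the packet was accepted into the buffer."""
        if len(self.buf) < self.capacity:
            self.buf.append((favored, packet))
            return True
        if favored:
            victims = [i for i, (f, _) in enumerate(self.buf) if not f]
            if victims:
                # Push out one randomly chosen non-favored packet.
                self.buf.pop(random.choice(victims))
                self.buf.append((favored, packet))
                return True
        # Buffer full of favored packets, or arrival not favored.
        return False
```

Note that a favored arrival is refused only when the buffer holds nothing but favored packets, so favored traffic cannot starve itself while non-favored traffic still occupies any unused space.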


However, it is recognized that the present and next-generation

multimedia applications have a jitter problem. As jitter is an important QoS

parameter, it becomes necessary to address this issue. Therefore, the QoS

architecture concentrates on reducing jitter with the help of a novel scheduler

at the output queues of the routers.

1.5.2 Jitter reduction by scheduling

A novel scheduling technique JGTS is designed. The JGTS

incorporates a jitter manager module whose function is to

reduce jitter for the multimedia flows.

The time complexity analysis of JGTS is presented.

Scalability issues are also addressed at the scheduler, with respect to throughput and jitter characteristics. The effectiveness of this technique is analyzed in a standalone mode using simulation, and the results are presented.

The unified scheme with GFPQ and JGTS is tested for

scalability and its performance is found to be better than the

de-facto RED technique.

1.5.3 QoS Architecture

A novel QoS architecture with GFPQ and scheduler JGTS is

proposed to support multimedia flows at the network routers.

Enhanced QoS for loss reduction and jitter reduction at the

input and output queues, respectively, is provided.

The unified framework is found to increase the overall

throughput for the favored multimedia applications.


This QoS architecture leads to a novel Internet service model providing QoS for high-performance applications. Performance enhancement at existing IP routers, without change in the infrastructure, is proposed.

1.6 ORGANIZATION OF THE THESIS

This thesis consists of six chapters. Chapter 1 introduces the

Internet models and describes the IP router organization. It also gives the

conventional queue management and scheduling techniques deployed in the

best effort Internet.

Chapter 2 presents the state of the art in resource management and scheduling techniques in the traditional and derived Internet. It brings out the related research works done in the areas of AQM, scheduling and the Internet service models. The buffer management techniques employed in IP routers, supported by schedulers at the output queues, are discussed.

Chapter 3 discusses the two approaches used in this work for

providing quality of service in IP routers. It starts with the block diagram of

the integrated system showing the buffer management and scheduler. The

GFPQ framework used for the active queue management at the input queues

is presented here. The novel scheduler design (JGTS) and deployment

techniques are discussed.

Chapter 4 explains the AQM and scheduler algorithms and their analysis. It discusses the introduction of the FPQ-based AQM, which helps the framework achieve the first objective, namely loss rate reduction. The GFPQ algorithm, which makes effective use of the available buffer space, is


then introduced. This is the second version of the AQM proposed in this

work. The queuing model and the analysis of the AQM are then discussed.

Design objectives for this proposed AQM are also provided. The random

replacement algorithm in the AQM and the pushout policies are discussed.

The design of the novel scheduler JGTS for deployment at the output queue

and its algorithm are presented.

The results are discussed in Chapter 5. The performance evaluation of GFPQ and JGTS is presented, and the effects of the combined approach are discussed. Simulation results showing the effectiveness of these techniques are provided. The comparison of results with the various traditional techniques is provided and analyzed with respect to throughput, delay and delay jitter. The importance of the integrated approach is emphasized by the results showing jitter and loss rate reduction.

Chapter 6 discusses the most significant contributions of this thesis

and possible future research directions.