
Lecture7b-ATM Networks.ppt


  • Chapter 7: Packet-Switching Networks

    Network Services and Internal Network Operation
    Packet Network Topology
    Datagrams and Virtual Circuits
    Routing in Packet Networks
    Shortest Path Routing
    ATM Networks
    Traffic Management

  • Chapter 7: Packet-Switching Networks

    ATM Networks

  • Asynchronous Transfer Mode (ATM)

    Packet multiplexing and switching
    Fixed-length packets: cells
    Connection-oriented
    Rich Quality of Service support
    Conceived as end-to-end, supporting a wide range of services:
    Real-time voice and video
    Circuit emulation for digital transport
    Data traffic with bandwidth guarantees
    Detailed discussion in Chapter 9

  • ATM Networking

    End-to-end information transport using cells
    53-byte cells provide low delay and fine multiplexing granularity
    Support for many services through the ATM Adaptation Layer
    (Figure: video, packet, and voice streams enter and leave the ATM network through ATM Adaptation Layers at each end)

  • TDM vs. Packet Multiplexing

    In the mid-1980s, packet processing was done mainly in software and hence was slow; by the late 1990s, very high-speed packet processing became possible

  • ATM: Attributes of TDM & Packet Switching

    Packet structure gives flexibility & efficiency
    Synchronous slot transmission gives high speed & density
    (Figure: cells with packet headers carried in synchronous transmission slots)

  • ATM Switching

    Switch carries out table translation and routing
    ATM switches can be implemented using shared memory, shared backplanes, or self-routing multi-stage fabrics

  • ATM Virtual Connections

    Virtual connections set up across the network
    Connections identified by locally-defined tags
    ATM header contains virtual connection information: 8-bit Virtual Path Identifier (VPI) and 16-bit Virtual Channel Identifier (VCI)
    Powerful traffic grooming capabilities: multiple VCs can be bundled within a VP
    Similar to tributaries in SONET, except variable bit rates are possible
    (Figure: virtual channels bundled into virtual paths over a physical link)

  • VPI/VCI Switching & Multiplexing

    Connections a, b, c bundled into a VP at switch 1
    Crossconnect switches the VP without looking at VCIs
    VP unbundled at switch 2; VC switching thereafter
    VPI/VCI structure allows creation of virtual networks (see the sketch below)
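    As a concrete illustration of the table translation above, here is a minimal Python sketch of the two lookup styles: a VC switch rewrites the full (port, VPI, VCI) label, while a VP crossconnect keys only on (port, VPI) and passes the VCI through untouched. All port/VPI/VCI values are hypothetical.

```python
# VC switch: full translation table (illustrative entries)
vc_table = {
    # (in_port, in_vpi, in_vci): (out_port, out_vpi, out_vci)
    (1, 5, 32): (3, 7, 44),
    (1, 5, 33): (2, 7, 45),
}

# VP crossconnect: keyed on (port, VPI) only; VCIs are not examined
vp_table = {
    # (in_port, in_vpi): (out_port, out_vpi)
    (1, 7): (4, 9),
}

def vc_switch(port, vpi, vci):
    """Translate the full connection label, as at a VC switch."""
    return vc_table[(port, vpi, vci)]

def vp_crossconnect(port, vpi, vci):
    """Switch the whole virtual path; the VCI passes through unchanged."""
    out_port, out_vpi = vp_table[(port, vpi)]
    return out_port, out_vpi, vci

print(vc_switch(1, 5, 32))          # (3, 7, 44)
print(vp_crossconnect(1, 7, 44))    # (4, 9, 44)
```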

  • MPLS & ATM

    ATM initially touted as more scalable than packet switching
    ATM envisioned speeds of 150-600 Mbps
    Advances in optical transmission proved ATM to be the less scalable: at 10 Gbps, segmentation & reassembly of messages & streams into 48-byte cell payloads is difficult & inefficient
    Header must be processed every 53 bytes vs. every 500 bytes on average for packets
    Delay due to a 1250-byte packet at 10 Gbps = 1 μsec; delay due to a 53-byte cell at 150 Mbps ≈ 3 μsec (see the arithmetic below)
    MPLS (Chapter 10) uses tags to transfer packets across virtual circuits in the Internet
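    The stated delays follow directly from the transmission time d = L/R; a quick check of the arithmetic:

```latex
d = \frac{L}{R}: \qquad
\frac{1250 \times 8\ \text{bits}}{10 \times 10^{9}\ \text{b/s}} = 1\ \mu\text{s},
\qquad
\frac{53 \times 8\ \text{bits}}{150 \times 10^{6}\ \text{b/s}} \approx 2.8\ \mu\text{s}
```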

  • Chapter 7: Packet-Switching Networks

    Traffic Management
    Packet Level
    Flow Level
    Flow-Aggregate Level

  • Traffic Management

    Vehicular traffic management:
    Traffic lights & signals control the flow of traffic in a city street system
    Objective is to maximize flow with tolerable delays
    Priority services: police sirens, cavalcades for dignitaries, bus & high-usage lanes, trucks allowed only at night

    Packet traffic management:
    Multiplexing & access mechanisms control the flow of packet traffic
    Objective is to make efficient use of network resources & deliver QoS
    Priority: fault-recovery packets, real-time traffic, enterprise (high-revenue) traffic, high-bandwidth traffic

  • Time Scales & Granularities

    Packet level: queueing & scheduling at multiplexing points; determines relative performance offered to packets over a short time scale (microseconds)
    Flow level: management of traffic flows & resource allocation to ensure delivery of QoS (milliseconds to seconds); matching traffic flows to available resources; congestion control
    Flow-aggregate level: routing of aggregate traffic flows across the network for efficient utilization of resources and meeting of service levels; traffic engineering, at the scale of minutes to days

  • End-to-End QoS

    A packet traversing the network encounters delay and possible loss at various multiplexing points
    End-to-end performance is the accumulation of per-hop performances

  • Scheduling & QoS

    End-to-end QoS & resource control: buffer & bandwidth control for performance; admission control to regulate traffic level
    Scheduling concepts: fairness/isolation; priority, aggregation
    Fair queueing & variations: WFQ, PGPS
    Guaranteed service: WFQ, rate control
    Packet dropping: aggregation, drop priorities

  • FIFO Queueing

    All packet flows share the same buffer
    Transmission discipline: first-in, first-out
    Buffering discipline: discard arriving packets if buffer is full (alternatives: random discard; push out the head-of-line, i.e. oldest, packet)

  • FIFO Queueing

    Cannot provide differential QoS to different packet flows; different packet flows interact strongly
    Statistical delay guarantees via load control: restrict the number of flows allowed (connection admission control)
    Difficult to determine performance delivered:
    Finite buffer determines a maximum possible delay
    Buffer size determines loss probability, but this depends on arrival & packet length statistics
    Variation: packet enqueueing based on queue thresholds; some packet flows encounter blocking before others (higher loss, lower delay); see the sketch below

  • FIFO Queueing with Discard Priority
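    A minimal sketch of the threshold-based discard-priority variation just described, assuming a single shared buffer with hypothetical sizes: low-priority (class 2) arrivals are blocked once occupancy crosses a threshold, so they see higher loss but help keep delay low; high-priority (class 1) arrivals are dropped only when the buffer is actually full.

```python
from collections import deque

BUFFER_SIZE = 100   # hypothetical buffer size, in packets
THRESHOLD = 80      # class-2 arrivals blocked beyond this occupancy

queue = deque()

def enqueue(packet, priority):
    """Shared FIFO with a discard threshold. Returns True if accepted."""
    if len(queue) >= BUFFER_SIZE:
        return False                  # buffer full: drop any arrival
    if priority == 2 and len(queue) >= THRESHOLD:
        return False                  # threshold reached: drop class 2
    queue.append(packet)
    return True
```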

  • HOL Priority Queueing

    High-priority queue serviced until empty
    High-priority queue has lower waiting time
    Buffers can be dimensioned for different loss probabilities
    Surge in high-priority queue can cause low-priority queue to saturate

  • HOL Priority Features

    Provides differential QoS
    Pre-emptive priority: lower classes invisible
    Non-preemptive priority: lower classes impact higher classes through residual service times
    High-priority classes can hog all of the bandwidth & starve lower-priority classes
    Need to provide some isolation between classes (a minimal sketch of the discipline follows)
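    A minimal sketch of non-preemptive HOL priority service, assuming two logical queues; "non-preemptive" shows up in that a packet already chosen for transmission is never interrupted, the priority check happens only at the next service decision.

```python
from collections import deque

high, low = deque(), deque()   # one logical queue per priority class

def next_packet():
    """Non-preemptive HOL priority: the high-priority queue is served
    until empty; the low-priority queue is served only when no
    high-priority packet is waiting."""
    if high:
        return high.popleft()
    if low:
        return low.popleft()
    return None
```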


  • Earliest Due Date Scheduling

    Queue in order of due date
    Packets requiring low delay get earlier due dates
    Packets without delay requirements get indefinite or very long due dates

  • Fair Queueing / Generalized Processor Sharing

    Each flow has its own logical queue: prevents hogging; allows differential loss probabilities
    C bits/sec allocated equally among non-empty queues: transmission rate = C / n(t), where n(t) = number of non-empty queues
    Idealized system assumes fluid flow from queues
    Implementation requires approximation: simulate the fluid system; sort packets according to completion time in the ideal system

  • Weighted Fair Queueing Example

    (Figure: buffers 1 and 2 each hold a one-unit packet at t = 0, with weights 1 and 3.
    Fluid-flow system: packet from buffer 1 served at rate 1/4 and packet from buffer 2 served at rate 3/4; once buffer 2 empties, the packet from buffer 1 is served at rate 1.
    Packet-by-packet weighted fair queueing: the packet from buffer 2 is served first at rate 1 while the packet from buffer 1 waits; then the packet from buffer 1 is served at rate 1.)

  • Packetized GPS/WFQ

    Compute packet completion time in the ideal system
    Add tag to packet
    Sort packets in queue according to tag
    Serve according to HOL

  • Bit-by-Bit Fair Queueing

    Assume n flows, n queues
    1 round = 1 cycle serving all n queues
    If each queue gets 1 bit per cycle, then the time to complete 1 round = # active queues
    Round number = number of cycles of service that have been completed
    If a packet arrives to an idle queue: finishing time = round number + packet size in bits
    If a packet arrives to an active queue: finishing time = finishing time of last packet in queue + packet size

  • Differential Service

    If a traffic flow is to receive twice as much bandwidth as a regular flow, then its packet completion time would be half

  • Computing the Finishing Time

    F(i,k,t) = finish time of kth packet that arrives at time t to flow i
    P(i,k,t) = size of kth packet that arrives at time t to flow i
    R(t) = round number at time t

    Fair queueing: F(i,k,t) = max{F(i,k-1,t), R(t)} + P(i,k,t)
    Weighted fair queueing: F(i,k,t) = max{F(i,k-1,t), R(t)} + P(i,k,t)/wi

    Generalize so that R(t) is continuous, not discrete: R(t) grows at a rate inversely proportional to n(t)
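    A minimal Python sketch of this tagging rule. The bookkeeping that advances R(t) (at a rate inversely proportional to the number of active queues, per the slide) is left to the caller; the class name and interface are illustrative only.

```python
import heapq

class WFQScheduler:
    """Sketch of packetized WFQ: tag each arriving packet with its
    finish time in the ideal fluid system, then transmit packets in
    increasing tag order."""

    def __init__(self):
        self.round = 0.0        # R(t); advancing it over time is elided
        self.last_finish = {}   # F(i, k-1) for each flow i
        self.queue = []         # min-heap of (finish_tag, seq, flow, size)
        self.seq = 0            # tie-breaker for equal tags

    def arrive(self, flow, size_bits, weight=1.0):
        # F(i,k) = max{F(i,k-1), R(t)} + P(i,k)/w_i
        start = max(self.last_finish.get(flow, 0.0), self.round)
        finish = start + size_bits / weight
        self.last_finish[flow] = finish
        heapq.heappush(self.queue, (finish, self.seq, flow, size_bits))
        self.seq += 1

    def serve(self):
        # Head-of-line service: the smallest finish tag transmits next
        return heapq.heappop(self.queue)[2:] if self.queue else None
```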

  • WFQ and Packet QoS

    WFQ and its many variations form the basis for providing QoS in packet networks
    Very high-speed implementations available, up to 10 Gbps and possibly higher
    WFQ must be combined with other mechanisms to provide end-to-end QoS (next section)

  • Buffer Management

    Packet drop strategy: which packet to drop when buffers are full
    Fairness: protect well-behaved sources from misbehaving sources
    Aggregation: per-flow buffers protect flows from misbehaving flows; full aggregation provides no protection; aggregation into classes provides intermediate protection
    Drop priorities: drop packets from the buffer according to priorities; maximizes network utilization & application QoS; examples: layered video, policing at the network edge
    Controlling sources at the edge

  • Early or Overloaded Drop

    Random early detection:
    Drop packets if the short-term average of the queue exceeds a threshold
    Packet drop probability increases linearly with queue length
    Marking offending packets improves the performance of cooperating TCP sources
    Increases the loss probability of misbehaving sources

  • Random Early Detection (RED)

    Packets produced by TCP will reduce input rate in response to network congestion
    Early drop: discard packets before buffers are full
    Random drop causes some sources to reduce rate before others, causing gradual reduction in aggregate input rate

    Algorithm (sketched below):
    Maintain a running average of the queue length
    If Qavg < minthreshold, do nothing
    If Qavg > maxthreshold, drop the packet
    If in between, drop the packet according to a probability
    Flows that send more packets are more likely to have packets dropped
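    A minimal sketch of this algorithm in Python. The thresholds, maximum drop probability, and EWMA weight are hypothetical tuning values; full RED also spreads drops using a count since the last drop, which is omitted here for brevity.

```python
import random

MIN_TH, MAX_TH = 5, 15   # hypothetical queue-length thresholds (packets)
MAX_P = 0.1              # hypothetical maximum drop probability
WEIGHT = 0.002           # EWMA weight for the running average

q_avg = 0.0

def red_drop(queue_len):
    """Return True if the arriving packet should be dropped."""
    global q_avg
    # Running average of the queue length
    q_avg = (1 - WEIGHT) * q_avg + WEIGHT * queue_len
    if q_avg < MIN_TH:
        return False                  # below min threshold: accept
    if q_avg >= MAX_TH:
        return True                   # above max threshold: drop
    # In between: drop probability rises linearly with the average
    p = MAX_P * (q_avg - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```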

  • Packet Drop Profile in RED

  • Chapter 7: Packet-Switching Networks

    Traffic Management at the Flow Level

  • Congestion occurs when a surge of traffic overloads network resources

    Approaches to congestion control:
    Preventive approaches: scheduling & reservations
    Reactive approaches: detect & throttle/discard

  • Ideal effect of congestion control: Resources used efficiently up to capacity available

  • Open-Loop Control

    Network performance is guaranteed to all traffic flows that have been admitted into the network
    Initially developed for connection-oriented networks
    Key mechanisms: admission control, policing, traffic shaping, traffic scheduling

  • Admission Control

    Flows negotiate a contract with the network
    Specify requirements: peak, average, & minimum bit rate; maximum burst size; delay & loss requirements
    Network computes the resources needed: the "effective bandwidth", the typical bit rate demanded by a variable-bit-rate information source
    If the flow is accepted, the network allocates resources to ensure QoS is delivered as long as the source conforms to the contract

  • Policing

    Network monitors traffic flows continuously to ensure they meet their traffic contract
    When a packet violates the contract, the network can discard it or tag it with lower priority
    If congestion occurs, tagged packets are discarded first
    The Leaky Bucket Algorithm is the most commonly used policing mechanism:
    Bucket has a specified leak rate for the average contracted rate
    Bucket has a specified depth to accommodate variations in the arrival rate
    An arriving packet is conforming if it does not result in overflow

  • The Leaky Bucket algorithm can be used to police the arrival rate of a packet stream

    Let X = bucket content at the arrival of the last conforming packet
    Let ta = arrival time of the last conforming packet
    The bucket depletes by (t - ta) between that arrival and the next arrival at time t

  • Leaky Bucket Algorithm

    Depletion rate: 1 packet per unit time
    I = increment per arrival (nominal interarrival time)
    L + I = bucket depth

    (Flowchart: on each arrival, deplete the bucket by the elapsed interarrival time, treating a negative result as an empty bucket; if adding the increment I would cause overflow, the packet is non-conforming; otherwise add I and declare the packet conforming. A sketch follows.)
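    A minimal Python sketch of the continuous-state leaky bucket policer described by this flowchart; the helper name is illustrative.

```python
def make_policer(I, L):
    """Leaky bucket policer: the bucket drains at 1 unit per unit time,
    each conforming packet adds I, and the depth is L + I."""
    state = {"X": 0.0, "ta": 0.0}   # content at, and time of, last conforming arrival

    def conforming(t):
        # Drain the bucket for the elapsed time; an empty bucket stays at 0
        X = max(0.0, state["X"] - (t - state["ta"]))
        if X > L:
            return False            # adding I would overflow: non-conforming
        state["X"] = X + I          # accept: add the increment
        state["ta"] = t
        return True

    return conforming

police = make_policer(I=4, L=6)     # parameters from the example slide
```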

  • Leaky Bucket Example

    I = 4, L = 6
    Non-conforming packets are not allowed into the bucket & hence not included in the calculations

  • Policing Parameters

    T = 1 / peak rate
    MBS = maximum burst size
    I = nominal interarrival time = 1 / sustainable rate
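    These parameters are linked: a peak-rate burst adds I to the bucket every T seconds while only T drains per packet, so the bucket grows by I - T per packet. Assuming the usual leaky-bucket relation (a reconstruction; it does not appear on the slide as extracted), the largest conforming back-to-back burst is:

```latex
\mathrm{MBS} = 1 + \left\lfloor \frac{L}{I - T} \right\rfloor
```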

  • Dual Leaky Bucket

    Dual leaky bucket to police PCR (peak cell rate), SCR (sustainable cell rate), and MBS:
    (Figure: two leaky buckets in tandem, the first policing the peak rate, the second policing the sustainable rate and maximum burst size)

  • Traffic Shaping

    Networks police the incoming traffic flow
    Traffic shaping is used to ensure that a packet stream conforms to specific parameters
    Networks can shape their traffic prior to passing it to another network

  • Leaky Bucket Traffic Shaper

    Buffer incoming packets
    Play out periodically to conform to parameters
    Surges in arrivals are buffered & smoothed out
    Possible packet loss due to buffer overflow
    Too restrictive, since conforming traffic does not need to be completely smooth

  • Token Bucket Traffic Shaper

    Token rate regulates the transfer of packets
    If sufficient tokens are available, packets enter the network without delay
    K determines how much burstiness is allowed into the network
    An incoming packet must claim sufficient tokens before admission into the network (a minimal sketch follows)
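    A minimal Python sketch of this shaper, with tokens counted in bits: tokens accumulate at rate r up to depth K, a packet departs immediately if enough tokens are on hand, and otherwise waits for the shortfall to accumulate. The class name and interface are illustrative; packets are assumed to be processed in arrival order.

```python
class TokenBucketShaper:
    """Token bucket shaper: rate r regulates transfer, depth K bounds burstiness."""

    def __init__(self, rate, depth):
        self.rate = rate        # r: token arrival rate (bits/s)
        self.depth = depth      # K: bucket depth (bits)
        self.tokens = depth     # bucket starts full
        self.t_last = 0.0

    def departure_time(self, t_arrival, size_bits):
        # Refill tokens for the elapsed time, capped at the depth K
        self.tokens = min(self.depth,
                          self.tokens + self.rate * (t_arrival - self.t_last))
        self.t_last = t_arrival
        if self.tokens >= size_bits:
            self.tokens -= size_bits
            return t_arrival                        # burst admitted without delay
        wait = (size_bits - self.tokens) / self.rate  # wait for the shortfall
        self.tokens = 0.0
        self.t_last = t_arrival + wait
        return t_arrival + wait
```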

  • Token Bucket Shaping Effect

    The token bucket constrains the traffic from a source to at most b + rt bits in any interval of length t

  • Packet Transfer with Delay Guarantees

    Token bucket shaper followed by a multiplexer serving the flow at bit rate R > r, e.g. using WFQ
    Assume fluid flow for information
    Token bucket allows a burst of b bytes into mux 1 & then r bytes/second
    Since R > r, the buffer content at mux 1 is never greater than b bytes
    Thus the delay at the mux is at most b/R
    The rate into the second mux is r

  • Delay Bounds with WFQ / PGPS

    Assume traffic shaped to parameters b & r
    Schedulers give the flow at least rate R > r
    H-hop path
    m is the maximum packet size for the given flow
    M is the maximum packet size in the network
    Rj is the transmission rate on the jth hop
    The maximum end-to-end delay that can be experienced by a packet from flow i is given below
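    The formula itself did not survive extraction; in terms of the symbols above, the standard WFQ/PGPS (Parekh-Gallager) end-to-end bound, which this slide appears to present, is:

```latex
D \;\le\; \frac{b}{R} \;+\; \frac{(H-1)\,m}{R} \;+\; \sum_{j=1}^{H} \frac{M}{R_j}
```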

  • Scheduling for Guaranteed Service

    Suppose guaranteed bounds on end-to-end delay across the network are to be provided
    A call admission control procedure is required to allocate resources & set schedulers
    Traffic flows from sources must be shaped/regulated so that they do not exceed their allocated resources
    Strict delay bounds can be met

  • Current View of Router Function

  • Closed-Loop Flow Control

    Congestion control: feedback information to regulate flow from sources into the network, based on buffer content, link utilization, etc.
    Examples: TCP at the transport layer; congestion control at the ATM level
    End-to-end vs. hop-by-hop: delay in effecting control
    Implicit vs. explicit feedback: source deduces congestion from observed behavior; routers/switches generate messages alerting to congestion

  • End-to-End vs. Hop-by-Hop Congestion Control

  • Traffic Engineering

    Management exerted at the flow-aggregate level
    Distribution of flows in the network to achieve efficient utilization of resources (bandwidth)
    Shortest-path routing of a given flow is not enough:
    Does not take into account the requirements of a flow, e.g. bandwidth requirements
    Does not take into account the interplay between different flows
    Must take into account the aggregate demand from all flows

  • (Figure: shortest path routing congests link 4-8; a better flow allocation distributes flows more uniformly)