
  • Scheduling Algorithms for Super 3G

    JEAN-CHRISTOPHE LANERI

    Master’s Degree Project
    Stockholm, Sweden 2006

  • Scheduling Algorithms for Super 3G

    JEAN-CHRISTOPHE LANERI

    Master’s Degree Project

    March 2006

    TRITA–S3–RST–XXXXXX
    ISSN 1400–9137

    ISRN KTH/RST/R--XXX/XXX--SE

    Radio Communication Systems Laboratory
    Department of Signals, Sensors and Systems

  • To my parents, for letting me develop my own ideas, for always being there.


  • Abstract

    Work has started in the 3GPP to define a long-term evolution for 3G, sometimes referred to as Super 3G (S3G). In such a scenario, operators would be able to deliver mobile broadband IP-based services at data rates comparable to those of wired services, such as DSL. This project deals with scheduling on the S3G forward link. It is assumed that the access network technology is OFDM and that only shared channels are used. We investigate some scheduling algorithms in a multiuser OFDMA environment. The Quality of Service (QoS) concept proposed for S3G is a realization of DiffServ for 3GPP access networks, where each QoS class is associated with a policy profile. This policy profile determines the division of the available bandwidth between the different QoS classes. This work aims at designing an S3G scheduler within this DiffServ context. Realistic traffic models are investigated: file transfer, web browsing and VoIP. A special emphasis is put on the tradeoff between network capacity and user satisfaction. Complexity requirements are also taken into account. Results show that the associated DiffServ policies can be enforced and hence provide an effective way of dividing bandwidth between the QoS classes. We further show possible benefits of having service-dependent scheduling algorithms. For example, we argue for having a radio-oriented algorithm for background downloads, a fair algorithm for interactive services and a delay-aware algorithm for conversational scenarios.


  • Acknowledgment

    This work has been carried out at the Radio Interface Architecture section within the Wireless Access Networks department at Ericsson Research. I thank my advisor Hannes Ekström for giving me this opportunity and letting me develop my own ideas. Working with you has been remarkably easy, thank you for that!

    The simulator I have been using is being developed by Niclas Wiberg and Henning Wiemann's team. Thank you to both of you, for your time and efforts in making everything as simple as possible. This collaboration with the people from Linköping could not have been possible without Johan Lundsjö, manager of the Radio Interface Architecture section. I express my sincere gratitude to him.

    Because a six-month project also depends on the people we spend time with at work, I thank Gustavo Azzolin and Nicolas Debernardi, master thesis students at Ericsson Research. It would not have been the same without you.

    On a broader perspective, this thesis concludes the International Master Program in Wireless Systems at the Royal Institute of Technology (KTH) in which I have been enrolled since September 2004. During this time, I met fantastic friends with whom I hope I will keep in touch as often as possible. So to you Malek, Bogdan, Markus & Angie, Wissam, Sha and Adrien, thank you!

    This work also concludes my engineering degree at l'École Supérieure d'Informatique Électronique Automatique in Paris. My time there has been a lot of fun, thanks in particular to Nicolas, Franck, Philippe, Antoine, Paul J., Mathieu, Joan, but also Paul V. and Wandrille. In addition, I would like to thank the ESIEA teachers and administrators who helped me in the different projects I have been doing: Pierre Aliphat, Catherine Dorignac, Peter Wilson, and especially Robert Erra, Sophie Maucorps, Stéphane Duval and Laurent Beaudoin.

    This last thought is dedicated to you, my love. Thank you for giving me this unbreakable feeling. Time flies with you.


  • Contents

    Abstract
    Acknowledgment
    Table of Contents
    List of Figures
    List of Abbreviations

    1 Introduction

    2 System Overview
      2.1 Super 3G Architecture
      2.2 Quality of Service
      2.3 Link Layer
      2.4 Physical Layer
        2.4.1 Downlink: OFDM with Frequency Adaptation
        2.4.2 Uplink: Single-Carrier FDMA with Dynamic Bandwidth

    3 Context, Problem Definition and Performance Measures
      3.1 Service Differentiation
        3.1.1 Super 3G Flows and Allocation of Cell Bandwidth
        3.1.2 Flow-Class-Identifiers and associated Policies
        3.1.3 Mapping Services to Flows and FC-IDs
      3.2 Scheduler Design
        3.2.1 Algorithm Inputs
        3.2.2 Design Parameters
        3.2.3 Relation Scheduler-Link Adaptation
      3.3 Investigated Scenarios and Performance Measures
        3.3.1 Scenario 0: Fully Loaded System
        3.3.2 Scenario 1: File Transfer
        3.3.3 Scenario 2: Web Browsing
        3.3.4 Scenario 3: Voice Over IP
      3.4 Problem Definition

    4 Scheduling Algorithms for Super 3G
      4.1 High Level Scheduler
      4.2 Scheduling Architecture
      4.3 Inter-FC-ID Schedulers
        4.3.1 Best Effort
        4.3.2 Guaranteed Bit Rate
      4.4 Intra-FC-ID Schedulers
        4.4.1 Fair Throughput
        4.4.2 Proportional Fair
        4.4.3 Exponential Rule
        4.4.4 VoIP Scheduler

    5 Simulations
      5.1 Model
        5.1.1 Propagation
        5.1.2 Network
        5.1.3 Placement and Mobility
        5.1.4 Radio Link Control & Medium Access Control
        5.1.5 Downlink Physical Channel
        5.1.6 Traffic
      5.2 Scenario 0: Fully Loaded System
        5.2.1 Quality of Service Perspective
        5.2.2 User Perspective
        5.2.3 Network Perspective
        5.2.4 Discussion
      5.3 Scenario 1: File Transfer
        5.3.1 Network Perspective
        5.3.2 User Perspective
        5.3.3 Discussion
      5.4 Scenario 2: Web Browsing
        5.4.1 Network Perspective
        5.4.2 User Perspective
        5.4.3 Discussion
      5.5 Scenario 3: Voice Over IP

    6 Conclusions and Future Works
      6.1 Conclusions
      6.2 Future Works

    A Opposition Report by Sha Yao
      A.1 General Comments
      A.2 Suggestions and Questions

    Bibliography

  • List of Figures

    2.1 A possible evolved 3G architecture
    2.2 Schematic data flow through the RLC and MAC layers for downlink traffic
    2.3 Physical Layer Structure of Super 3G

    3.1 Dividing resources between SIG, GBR, and BE traffic
    3.2 Mapping FC-IDs to QoS policies
    3.3 An example of mapping between services and IP flows to FC-IDs

    4.1 General scheme of the Super 3G scheduler
    4.2 Two-Layers Scheduler for the Best Effort flow

    5.1 Simulation environment overview
    5.2 Best Effort FSB Validation
    5.3 User throughput distribution for scenario 0
    5.4 Inter-arrivals time distribution for scenario 1
    5.5 Cell Throughput for scenario 1
    5.6 Link utilization for scenario 1
    5.7 Fairness for scenario 1
    5.8 Mean User Throughput for scenario 1
    5.9 Reading Time distribution for scenario 2
    5.10 Cell Throughput for scenario 2
    5.11 Link Utilization for scenario 2
    5.12 Fairness for scenario 2
    5.13 Mean User Throughput for scenario 2
    5.14 User Delay Performance for Scenario 3


  • List of Abbreviations

    Table 1: Abbreviations related to the network architecture

      GSN      GPRS Support Node
      GGSN     Gateway GPRS Support Node
      SGSN     Serving GPRS Support Node
      UTRAN    UMTS Terrestrial Radio Access Network
      RNC      Radio Network Controller
      Node B   Base Station
      UE       User Equipment

    Table 2: Abbreviations related to the technologies

      WCDMA    Wideband Code Division Multiple Access
      FDMA     Frequency Division Multiple Access
      OFDMA    Orthogonal Frequency Division Multiple Access
      GPRS     General Packet Radio Service
      HSDPA    High Speed Downlink Packet Access
      S3G      Super 3G

    Table 3: Abbreviations related to protocols

      TCP      Transmission Control Protocol
      UDP      User Datagram Protocol
      SIP      Session Initiation Protocol
      SDP      Session Description Protocol

    Table 4: Abbreviations related to standardization

      3GPP     3rd Generation Partnership Project
      UTRA     Universal Terrestrial Radio Access
      3G       Third generation digital mobile network
      4G       Fourth generation digital mobile network

    Table 5: Abbreviations related to Super 3G data flows

      FC-ID    Flow Class Identifier
      SIG      Signaling
      GBR      Guaranteed Bit Rate
      BE       Best Effort

    Table 6: Abbreviations related to the scheduling algorithms

      MaxSIR   Maximum Signal to Interference Ratio
      PF       Proportional Fair
      ER       Exponential Rule
      ER2      Modified Exponential Rule
      FT       Fair Throughput
      VoIP     Voice over IP scheduler

  • Chapter 1  Introduction

    Third generation (3G) wireless systems, based on the Wideband CDMA (WCDMA) radio access technology, are now being deployed on a wide scale all over the world. The first step in the evolution of the WCDMA radio access technology has also been taken by the 3rd Generation Partnership Project (3GPP) through the introduction of High Speed Downlink Packet Access (HSDPA) (Parkvall et al., 2001) and Enhanced Uplink (Parkvall et al., 2005). These technologies provide 3GPP with a radio access technology that will be highly competitive in a mid-term future.

    However, user and operator requirements and expectations are continuously evolving and new competing radio access technologies are emerging. Thus, it is important for 3GPP to start considering the next steps in the 3G evolution, in order to ensure 3G competitiveness in a ten-year perspective and beyond. As a consequence, 3GPP has launched the Study Item Evolved UTRA and UTRAN, the aim of which is to study means to achieve further substantial leaps in terms of service provisioning and cost reduction. The overall target of this long-term 3G evolution, sometimes also referred to as Super 3G (S3G), is to arrive at an evolved radio access technology that can provide service performance on par with or even exceeding that of current fixed-line accesses, at substantially reduced cost compared to current radio access technologies. As it is generally assumed that there will be a convergence towards the use of Internet Protocol (IP) based protocols, i.e., all services in the future will be carried on top of IP, the focus of this evolution should be on enhancements for packet-based services. 3GPP aims to conclude on the evolved 3G radio access technology in 2007, with subsequent initial deployment in the 2009-2010 time perspective. At this point, it is important to emphasize that this evolved radio access network (RAN) is an evolution of current 3G networks, building on already made investments. Among others, the targets of the long-term 3G evolution are (3GPP, 2004b):

    • The possibility to provide significantly higher data rates, compared to the current steps of the 3G evolution (HSDPA and Enhanced Uplink), with target peak data rates of more than 100 Mbps for the downlink direction and more than 50 Mbps for the uplink direction. In addition to very high peak data rates, the radio access technology should be capable of providing high data rates with wide-area coverage ('high cell-edge bit-rates').

    • The possibility to offer significantly reduced control- and user-plane latency, with a target of less than 10 ms user-plane RAN round trip time (RTT) and less than 100 ms channel-set-up delay.

    • Improved spectrum efficiency, targeting an improvement in the order of a factor three compared to current standards.

    • Reduced cost for operator and end user. Improved coverage and spectrum efficiency is one means to reduce operator cost. However, transport network cost and deployment effort also need to be considered.

    • Spectrum flexibility, enabling deployment in different spectrum allocations. A consequence of this is that the evolved radio access technology should support flexible transmission bandwidth and duplex arrangement. In addition, smooth migration into other frequency bands should be possible, for example into spectrum currently used for 2nd generation (2G) cellular technologies such as GSM and IS-95.

    One additional requirement is the possibility for smooth introduction of new technology. Thus, any new or evolved radio access technology must be able to co-exist with current 3G radio access technologies and radio network architectures, and vice versa. To achieve the above-mentioned targets, 3GPP needs to consider new radio transmission technologies as well as updates and modifications to the existing radio network architecture. Many such technologies have been proposed in the context of new fourth generation (4G) mobile systems research, see e.g. Astely et al. (2006); Mino Diaz et al. (2004); Yu et al. (2005); Abeta et al. (2002). However, in order to protect operator and vendor investments, the performance gain of any proposed update to or evolution of the 3G radio access or RAN must always be traded off against its impact on already made investments.

    S3G is characterized by the fact that only shared channels are used. As a consequence, dividing the resources between the users is of critical importance. The entity performing this task is called the scheduler. In this project, we study the downlink scheduling mechanisms. The use of scheduling in wireless communications combines efficient utilization of the wireless channels with the possibility for fine-grained adaptation of the resource utilization to the required service levels. It is thus a combination of efficiency and flexibility:

    Spectrum Efficiency: The channel quality varies over time and frequency, due to radio interference and the mobility of the users. Fading (e.g. slow fading, shadow fading) will also cause bad channel conditions.

    Fulfilling Service Requirements: Providing stable and predictable data transmission services despite the variability of the wireless channel is another issue for data communication over wireless links.

    A central component in the scheduling approach is the mobile radio channel predictor. Studies have shown that it is feasible to predict the received power variations quite accurately, for several milliseconds into the future, even for fast-moving vehicular users. Having these predictions for all wireless channels, they can be used for planning the transmissions to the different users.

    In this thesis work, the choice has been made to facilitate the task of the scheduler by using the Differentiation of Services paradigm (Blake et al., 1998) adapted to 3GPP wireless technology (see Ludwig et al. (2006)). By marking each packet entering the network, we are able to map each service to a policy profile. The task of the scheduler is then to enforce these policies. As a consequence, from a high-level perspective, this thesis work aims at answering the following questions: Is the Differentiation of Services suitable for wireless links? Can we develop an S3G scheduler which respects the associated policy profiles? If yes, which scheduling algorithms are most appropriate for the most commonly used services?

    This report is organized as follows. Chapter 2 gives an overview of what Super 3G could be, in terms of architecture, quality of service and physical layer. Chapter 3 introduces the Differentiation of Services paradigm and gives a detailed presentation of the scenarios under study and the associated performance measures. Chapter 4 presents the main contribution of this work: the scheduling algorithms. Chapter 5 presents the simulation results. Finally, conclusions are drawn and possible future works are stated.

  • Chapter 2  System Overview

    This chapter gives a Super 3G system overview. It is worth noting that what is presented represents an assumption of what Super 3G will be; in other words, it does not constitute an evaluation of the standard (standardization should be concluded by 2007, see 3GPP (2004b)).

    A top-down approach is followed, beginning with architecture aspects and ending with physical layer issues. Section 2.1 gives an overview of the system architecture; Section 2.2 introduces the quality of service techniques, whereas Sections 2.3 and 2.4 describe the link and physical layers respectively. The content of this chapter is extracted from Dahlman et al. (2006).

    2.1 Super 3G Architecture

    Given the requirements of reduced latency and cost for Super 3G, it is natural to consider system architectures that contain a reduced number of network nodes. This would reduce both the overall protocol-related processing as well as the number of interfaces, which in turn reduces the cost of inter-operability testing. A reduction of the number of nodes would also give the possibility for optimization of radio interface protocols, e.g. the possible merging of control-plane protocols. This could allow for faster session setups due to shorter signaling sequences. Figure 2.1 illustrates the current 3GPP Release 6 (Rel-6) architecture and one possible path for an architecture evolution.

    [Figure: the current 3GPP Rel-6 architecture (GGSN – SGSN – RNC – Node B – UE, left) shown next to one possible evolved 3G architecture (GSN+ – RNC+ – Node B – UE, right), reducing the number of nodes along the user-plane data path from 4 to 3.]

    Figure 2.1: A possible evolved 3G architecture

    In WCDMA Rel-6 the Gateway GPRS Support Node (GGSN) acts as an anchor node in the home network. Typically all traffic is routed back to the home network, so that a concise service environment can be maintained while also allowing the operator to filter traffic and to provide security to the end-user, e.g. by means of firewalls. In an evolved architecture the GGSN still needs to be placed in the home network in order to ensure roaming and consistency in the service environment. The Node B will still handle the wireless access. From that reasoning, the natural way forward is either to investigate a merge of the Serving GPRS Support Node (SGSN) and Radio Network Controller (RNC), or a complete removal of both of these nodes.

    In Rel-6, the RNC handles radio resource management, mobility management (locally), call control and transport network optimization. It further acts as a termination point for the radio protocols. The SGSN acts as an anchor node in the visited network and also handles mobility management and session management. Also in an evolved architecture there is a need for a centralized node in the visited network, in between the Node B and the GGSN, in order to hide the mobility of the User Equipment (UE) from the GGSN. Such a central node would also allow the operator to reuse existing infrastructure investments related to site security, maintenance and transmission. This node can contain functionality from both the current RNC and SGSN and is denoted RNC+ in Figure 2.1. However, some of the SGSN functionality may preferably be located in the GGSN. Hence, the enhanced GGSN is denoted GSN+ in the figure. Although the exact definition of GSN+ and RNC+ functionality is ongoing work, a general principle is that the SGSN functionality required in a visited network (for roaming) needs to be located in the RNC+, whereas all other functionality can be located in the GSN+. Radio interface protocols involving the UE should be located in the visited network in order to minimize delays. Following these principles, mobility management will be simplified with minimum impact on the GSN+.

    2.2 Quality of Service

    For the purposes of this description, a network is said to support QoS if it is able to offer different (and differentiable) levels of service quality over a shared infrastructure. It should be noted that the concept presented below should be seen as one possible evolution of the current 3GPP QoS concept.

    In the concept, service differentiation is enabled by classification and marking of each packet at the network edge (i.e., the GSN+ for downlink traffic). The edge node classifies each incoming packet into different pre-defined service classes, e.g. Internet Access and Voice over IP (VoIP). This classification could for example be done on the basis of information contained in the protocol headers, and/or based on the physical interface on which the packet arrives. Following the classification, the packet is marked, e.g. by using Differentiated Services (DiffServ) code points, to reflect the classification. This marking is then used by each subsequent node to identify the service class to which the packet belongs. The edge node further performs rate policing to ensure that the flow does not exceed a specified maximum bit-rate. For some service classes (e.g. Internet Access), this maximum bit-rate may be specified on a subscription basis, whereas for others (e.g. VoIP) it may be specified on a session basis during the session set-up phase.

    Once all incoming packets have been marked and policed, each node in the data path then uses the markings to carry out appropriate queuing and policy-based scheduling. The queuing in the nodes may be service-class dependent, i.e., the size and dropping strategies of the queue may differ depending on the characteristics of the traffic belonging to the service class. Policy-based scheduling denotes the process of scheduling according to pre-defined policies. Such policies can e.g. govern the distribution of bandwidth between different service classes. It is foreseen that such policies can be modified dynamically depending on the expected usage of particular services. It should be possible for the operator to push new policies to the relevant nodes through the network management system.
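    As an illustration of the classification, marking and rate-policing steps described above, the following Python sketch processes one packet at a hypothetical edge node. The service-class names, DSCP values, port numbers and rate limits are invented for the example and are not part of the Super 3G concept itself.

```python
# Sketch of edge-node QoS handling: classify, mark with a DSCP value, rate-police.
# Class names, DSCP values, ports and rate limits below are hypothetical examples.

SERVICE_CLASSES = {
    "voip":     {"dscp": 46, "max_rate_bps": 64_000},      # session-based limit
    "internet": {"dscp": 0,  "max_rate_bps": 2_000_000},   # subscription-based limit
}

def classify(packet):
    """Classify on header information; UDP traffic to a SIP/RTP-like port -> VoIP."""
    if packet.get("proto") == "UDP" and packet.get("dst_port") in (5060, 4000):
        return "voip"
    return "internet"

def mark_and_police(packet, bits_this_second):
    """Mark the packet with its class DSCP; drop it if the flow exceeds its rate."""
    cls = classify(packet)
    profile = SERVICE_CLASSES[cls]
    packet["dscp"] = profile["dscp"]
    bits_this_second[cls] = bits_this_second.get(cls, 0) + packet["size_bits"]
    if bits_this_second[cls] > profile["max_rate_bps"]:
        return None          # packet discarded by the policer
    return packet

state = {}
pkt = {"proto": "UDP", "dst_port": 5060, "size_bits": 800}
print(mark_and_police(pkt, state))   # marked with dscp 46, within the VoIP rate
```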

    2.3 Link Layer

    In the Super 3G context, a fixed Radio Link Control (RLC) Protocol Data Unit (PDU) size is regarded as too inflexible to operate over a wide range of data rates. Small PDUs lead to too large header overhead, while large PDUs would introduce too much padding overhead for small packets like VoIP frames or TCP acknowledgments. Therefore, another solution, denoted the Packet-Centric link layer, has been introduced by Reiner et al. (2005). The concept foresees two Layer 2 Automatic Repeat Request (ARQ) protocols, like in Rel-6. The RLC protocol operates between the RNC+ and the UE, while the Hybrid ARQ (HARQ) protocol is embedded in the MAC layer and operates between the Node B and the UE. The key characteristic of the Packet-Centric link layer is to map packets, i.e., either IP packets or Radio Resource Control (RRC) messages, one-to-one to RLC PDUs, thereby making the size of these PDUs variable, as depicted in Figure 2.2.

    [Figure: schematic of the Packet-Centric link layer. Each IP packet is mapped one-to-one to an RLC PDU (RLC header plus payload) at the RNC+; at the Node B (MAC/L1) each PDU is encoded into FEC blocks and rate-matched into FEC fragments that fit the available radio resources of successive frames; the UE decodes the fragments and reassembles the original IP packets.]

    Figure 2.2: Schematic data flow through the RLC and MAC layers for downlink traffic

    2.4 Physical Layer

    A promising candidate for the long-term 3G evolution of the downlink physical layer is OFDM1, a transmission scheme well known from the literature (see Ahlin et al. (2004)) and suitable for the large bandwidths envisioned for the evolved radio access. OFDM also allows for a smooth migration from earlier radio access technologies, is known for high performance in frequency-selective channels, enables frequency-domain adaptation, provides benefits in broadcast scenarios and is well suited for multiple input multiple output (MIMO) processing.

    2.4.1 Downlink: OFDM with Frequency Adaptation

    The basic time-frequency structure for the OFDM downlink is illustrated in the left part of Figure 2.3, where the basic radio resource is a chunk.

    [Figure: time-frequency grids for the downlink (left) and uplink (right). The basic resource unit is a chunk of roughly 200 kHz by 0.5 ms; in the downlink grid, chunks are assigned to different users (User A–D) across time and frequency.]

    Figure 2.3: Time-frequency structure for downlink (left) and uplink (right).

    Exploiting channel variations in the time domain through link adaptation and channel-dependent scheduling, as is done in current 3G systems such as HSDPA, has been shown to provide a substantial increase in spectral efficiency (Holma and Toskala, 2004, p. 307). This will be taken one step further by adapting the transmission parameters not only in the time domain, but also in the frequency domain.

    1 Orthogonal Frequency Division Multiplexing


    Frequency-domain adaptation is made possible through the use of OFDM and can achieve large performance gains in cases where the channel varies significantly over the system bandwidth. Thus, frequency-domain adaptation becomes increasingly important with an increasing system bandwidth. As an example, for a so-called 3GPP Typical-Urban channel and a system bandwidth of 20 MHz, combined time- and frequency-domain adaptation may yield a capacity gain of a factor two compared to time-domain adaptation only. Information about the downlink channel quality, obtained through feedback from the terminals, is provided to the scheduler. The scheduler determines which downlink chunks to allocate to which user and dynamically selects an appropriate data rate for each chunk by varying the output power level, the channel-coding rate and/or the modulation scheme. QPSK2, 16QAM3 and 64QAM modulation schemes could be supported in the downlink.
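    To make the chunk-allocation step concrete, the sketch below assigns each downlink chunk to the user reporting the best quality on it and picks a modulation scheme per chunk. The SIR thresholds are invented for illustration, and this greedy per-chunk rule is only an example of frequency-domain adaptation, not one of the schedulers proposed in Chapter 4.

```python
# Sketch of combined time/frequency-domain adaptation on the downlink:
# each chunk goes to the user with the best reported quality on that chunk,
# and a modulation scheme is chosen per chunk. SIR thresholds are hypothetical.

def select_modulation(sir_db):
    if sir_db > 18:
        return "64QAM"
    if sir_db > 10:
        return "16QAM"
    return "QPSK"

def allocate_chunks(channel_quality):
    """channel_quality[user][chunk] = reported SIR in dB on that chunk."""
    users = list(channel_quality)
    n_chunks = len(next(iter(channel_quality.values())))
    allocation = {}
    for chunk in range(n_chunks):
        best = max(users, key=lambda u: channel_quality[u][chunk])
        allocation[chunk] = (best, select_modulation(channel_quality[best][chunk]))
    return allocation

# Example with two users and four chunks:
print(allocate_chunks({"A": [5, 12, 20, 8], "B": [15, 3, 19, 9]}))
# -> {0: ('B', '16QAM'), 1: ('A', '16QAM'), 2: ('A', '64QAM'), 3: ('B', 'QPSK')}
```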

    2.4.2 Uplink: Single-Carrier FDMA with Dynamic Bandwidth

    For uplink transmission, an important requirement is to allow for power-efficient user-terminal transmission to maximize coverage. Single-carrier frequency division multiple access (FDMA) with dynamic bandwidth, illustrated to the right in Figure 2.3, is therefore preferred. For each time interval, the base station scheduler assigns a unique time-frequency interval to a terminal for the transmission of user data, thereby ensuring intra-cell orthogonality. Primarily time-domain scheduling is used to separate users, but for terminals with limitations in either transmission power or the amount of data awaiting transmission, frequency-domain scheduling is also used. Note that a terminal is only assigned chunks contiguous in the frequency domain to maintain the single-carrier properties, thereby ensuring power-efficient transmission. Frequency adaptation may not be used in the uplink due to lack of channel knowledge, as each terminal cannot continuously transmit a pilot signal covering the whole frequency domain. Slow power control, compensating for path loss and shadow fading, is sufficient as no near-far problem is present due to the orthogonal uplink transmissions.

    2 Quadrature Phase Shift Keying
    3 Quadrature Amplitude Modulation

  • Chapter 3  Context, Problem Definition and Performance Measures

    This chapter presents the context in which this thesis work is done, as well as a description of the problem. Performance measures are also introduced. Section 3.1 gives a detailed overview of the Service Differentiation paradigm; in Section 3.2, the scheduler design is presented, whereas Section 3.3 describes the scenarios and associated performance measures which are under investigation in this project; to conclude, a problem definition is stated in Section 3.4.

    3.1 Service Differentiation

    This section presents the mechanisms of service differentiation that could be used in Super 3G. From low to high level, Section 3.1.1 presents the three different types of flow, Section 3.1.2 describes the policy profiles associated with each flow, and finally Section 3.1.3 illustrates the mapping between services, flows and policies.

    3.1.1 Super 3G Flows and Allocation of Cell Bandwidth

    In the context of Super 3G, packet flows can be divided into three categories:

    • SIG Flows: Signaling,

    • GBR Flows: Guaranteed Bit Rate,

    • BE Flows: Best Effort.

    Note that, from a scheduling point of view, priorities between these flow types are strict: P_SIG > P_GBR > P_BE.

    Concerning the GBR flow, to guarantee a certain bit rate means that, independently of the channel conditions, an operator is able to deliver a given service to a user at a certain rate. One way to increase the probability that the guarantee can be fulfilled is to use admission control1, which, depending on the network and channel conditions, decides whether to admit or reject a user request. Note that in this project, no admission control is investigated. It is assumed that the allocated bandwidth C for the GBR flow is always sustainable in the cell. On the other hand, best-effort traffic is carried in the remaining capacity. Here, only relative guarantees are given to the users. In other words, no mechanism provides any guarantee in terms of data rate or delay.

    1 By properly dimensioning the network, it is possible to ensure adequate capacity without admission control. In any case, these guarantees are statistical.


    Figure 3.1 gives an illustration of how the bandwidth is allocated to the different flows. Priorities between these flow types can be viewed from bottom to top. The capacity dedicated to SIG and GBR flows should not be altered by the channel variations. As a consequence, some margin has to be inserted.

    [Figure: aggregate cell throughput over time. A fixed capacity C (kb/s) is allocated to SIG traffic (NAS and AS) and to GBR traffic; the remaining capacity, including any unused SIG and GBR capacity, is available for BE traffic. Dimensioning C is a tradeoff between avoiding call dropping caused by throughput dips and avoiding starvation of BE traffic.]

    Figure 3.1: Dividing resources between SIG, GBR, and BE traffic

    3.1.2 Flow-Class-Identifiers and associated Policies

    In the previous section, we defined the three different types of flows assumed to be used in Super 3G. For each flow, we now define a certain number of QoS classes (or Flow-Class-Identifiers, FC-IDs from now on). Each FC-ID points to a policy profile. In fact, in the DiffServ paradigm, packets are marked when entering the network, and thereby associated with a forwarding policy. We concentrate on the GBR and BE flows here.

    Concerning the FC-IDs for GBR flows, it can be seen from Figure 3.2 that the policy profile is simply a priority. In fact, when dealing with GBR traffic, the data rate is fixed and guaranteed, so we only need to set priorities between different services within the GBR flow.

    The main characteristic of the BE traffic is that it has no absolute guarantee on the resources the network allocates. In fact, resources are allocated to SIG and GBR flows first, and BE has to deal with the remainder. As a consequence, only relative guarantees can be offered. The policy profile attributes used to enforce these relative guarantees are the Committed Rate, the Peak Rate and the Priority (see Figure 3.2). The definition of Committed Rate (CR) and Peak Rate (PR) allows an efficient division of the bandwidth, permits an operator to control the division between service classes and minimizes starvation. For example, if a BE FC-ID has a Committed Rate of 20% and a Peak Rate of 80%, it means that the network will allocate resources so that, on average, users from this FC-ID will receive at least 20%, but no more than 80%, of the total information bits sent for the BE flow2. Note that this percentage can, depending on the channel conditions, correspond to a high or a low data rate.
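    As an illustration of how a Committed Rate / Peak Rate profile can be checked, the sketch below compares each BE FC-ID's share of the BE information bits sent in a measurement window against its profile. The FC-ID names and profile values are hypothetical, not taken from the thesis.

```python
# Sketch of Committed Rate / Peak Rate bookkeeping for BE FC-IDs.
# Shares are fractions of the total BE information bits sent in a window.
# FC-ID names and profile values are hypothetical examples.

BE_PROFILES = {
    "internet_premium":  {"committed": 0.20, "peak": 0.80, "priority": 1},
    "internet_standard": {"committed": 0.10, "peak": 0.50, "priority": 2},
}

def classify_shares(bits_sent):
    """bits_sent[fc_id] = BE information bits delivered to that FC-ID in the window."""
    total = sum(bits_sent.values())
    status = {}
    for fc_id, profile in BE_PROFILES.items():
        share = bits_sent.get(fc_id, 0) / total if total else 0.0
        if share < profile["committed"]:
            status[fc_id] = "below committed rate -> should be favoured"
        elif share > profile["peak"]:
            status[fc_id] = "above peak rate -> should be held back"
        else:
            status[fc_id] = "within policy"
    return status

print(classify_shares({"internet_premium": 6_000_000, "internet_standard": 4_000_000}))
# both FC-IDs fall between their committed and peak shares -> "within policy"
```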

    3.1.3 Mapping Services to Flows and FC-IDs

    Sections 3.1.1 and 3.1.2 presented how the traffic differentiation is handled within the network. Figure 3.3 illustrates how different services can be mapped to flows and FC-IDs. Taking for example the operator case, we see that 'Internet Premium' and 'Internet Standard' are both mapped to BE FC-IDs, but with different priorities. Another example is the weShare service3, which is mapped to different flows: GBR for the voice call, BE for the file exchange. This mapping reflects the operator policy. Figure 3.3 underlines another difference between GBR and BE flows: GBR typically requires a bearer to carry session-level signaling such as SIP and SDP4.

    2 Another way of defining the committed rate could be in terms of resources allocated by the network: this definition would, at least on a short-term basis, maybe increase the network capacity. However, on a long-term basis, when the system reaches a steady state, the two definitions are equivalent.

    3 A service that combines voice calls with instant sharing of pictures and other content.
    4 Session Initiation Protocol and Session Description Protocol.


    [Figure: Flow-Class-IDs 0–15 grouped into FC-IDs for SIG, GBR and BE flows, each pointing to a QoS policy profile per flow class (traffic aggregate). For GBR FC-IDs the profile is interpreted as a (strict) priority, <Priority>; for BE FC-IDs the FC-ID is interpreted as a pointer to a policy profile <Committed Rate, Peak Rate, Priority>.]

    Figure 3.2: Mapping FC-IDs to QoS policies

    [Figure: example of mapping services and IP flows to Flow-Class-IDs ('colors') 0–15. Operator-APN services (Video on Demand with separate video and audio flows, Internet Premium, Internet Standard, weShare) and corporate-APN intranet services are mapped onto FC-IDs for SIG, GBR and BE flows, together with signaling flows such as RRC, SM and SIP; some FC-IDs are left unused.]

    Figure 3.3: An example of mapping between services and IP flows to FC-IDs


    3.2 Scheduler Design

    In the previous section, we introduced the framework in which the scheduling decisions are made: Service Differentiation. We now focus on the scheduler design, stating what information is available for making scheduling decisions (e.g. the radio conditions experienced by the users), but also what choices have been made for the implementation of the proposed schemes. Note that in what follows, a user is defined as an entity waiting for data.

    3.2.1 Algorithm Inputs

    It is assumed that the channel estimates, for all the OFDM subbands, are available on the downlink. Also, the user throughput5 for each QoS class is memorized over a certain window. Finally, the scheduler internally stores when a user was last scheduled.

    3.2.2 Design Parameters

    In Section 3.2.1, we mentioned that the user throughputs are available over a certain time window. The length of this window, or memory of the throughput measurements, is an important design parameter: the shorter it is, the fairer the scheduling algorithm is in the short term. In this work, it is proposed to adapt the length of the window as a function of the service being run. In fact, the length of the window should be less than or equal to the downloading time. Measuring the user throughputs is done so that more fairness is introduced in the scheduling decisions, on an object basis (e.g. a TCP object). As a consequence, a file transfer service could be associated with a window of up to a few seconds, while a short window (a few hundred milliseconds) could characterize a web-browsing service. Note that in a real system, the length of the window could be included in the policy profile.

    Having a short or long window for the throughput measurements does not increase the complexity of the system, because such a window can be implemented with a filter (AR process). On the contrary, the next parameter can be critical: the freshness of the measurements (how often the measurements are updated). Updating the measurements for all users at each tti6 would give perfect freshness, but the number of computations may lead to an impractical scheme. On the other hand, not updating the statistics regularly could cause delays (e.g. a user categorized with a bad SIR, even though experiencing good radio conditions at the present time, would not be scheduled). In this work, we assume updated measurements at every time instant.
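    The windowed throughput measurement mentioned above can be realised with a first-order autoregressive (exponential-forgetting) filter; the sketch below is one such realisation, with the forgetting factor derived from a service-dependent window length expressed in TTIs. The window lengths shown are illustrative assumptions, not values from the thesis.

```python
# Sketch of a windowed user-throughput estimate implemented as a first-order
# AR (exponential-forgetting) filter, updated once per TTI.
# Window lengths per service are illustrative assumptions.

class ThroughputFilter:
    def __init__(self, window_ttis):
        # Effective memory of roughly `window_ttis` TTIs.
        self.alpha = 1.0 / window_ttis
        self.estimate = 0.0   # acknowledged bits per TTI

    def update(self, acked_bits_this_tti):
        self.estimate = (1 - self.alpha) * self.estimate + self.alpha * acked_bits_this_tti
        return self.estimate

# e.g. a long window for file transfer, a short one for web browsing
file_transfer_filter = ThroughputFilter(window_ttis=2000)   # ~1 s at a 0.5 ms TTI
web_browsing_filter = ThroughputFilter(window_ttis=400)     # a few hundred milliseconds
```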

    3.2.3 Relation Scheduler-Link Adaptation

    It is well known that an ideal scheduler should take its decisions on a cross-layer basis, from the transport to the physical layer. However, in this work, we assume fixed power and an independent link adaptation block. It is worth mentioning that the work of Ruberg (2006)7 on the Super 3G downlink link adaptation has been integrated. As a consequence, two main interactions are possible between the scheduler and the link adaptation (LA). In the first one, an LA function decides which OFDM chunks to use, given a number of bits waiting to be transmitted; in the second one, the scheduler distributes the chunks according to its own algorithm and asks the LA how many information bits the assignment corresponds to.

    5 By user throughput, we mean data which has been acknowledged by the receiver, e.g. in the HARQ.
    6 Transmission Time Interval
    7 Master Thesis student at Ericsson Research (Linköping)
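    To make the two interactions concrete, the sketch below models them as two calls on a stub link-adaptation object. The class, its method names and the fixed bits-per-chunk value are purely illustrative assumptions, not the simulator's actual interface.

```python
# Sketch of the two scheduler <-> link-adaptation (LA) interactions described above.
# The LA model is a stub; its names and behaviour are illustrative assumptions.

class StubLinkAdaptation:
    BITS_PER_CHUNK = 400   # hypothetical payload per chunk at the current link quality

    def choose_chunks(self, user, queued_bits, free_chunks):
        """Interaction 1: the LA decides which free chunks to use for `queued_bits`."""
        needed = -(-queued_bits // self.BITS_PER_CHUNK)   # ceiling division
        return free_chunks[:needed]

    def bits_for_chunks(self, user, assigned_chunks):
        """Interaction 2: the scheduler has already chosen the chunks and asks the LA
        how many information bits that assignment can carry."""
        return len(assigned_chunks) * self.BITS_PER_CHUNK

la = StubLinkAdaptation()
print(la.choose_chunks("user1", 1500, list(range(10))))   # -> [0, 1, 2, 3]
print(la.bits_for_chunks("user1", [0, 1, 2, 3]))          # -> 1600
```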


    3.3 Investigated Scenarios and Performance Measures

    The number of services proposed by telecommunication operators is expanding every day, but they can easily be grouped into three categories according to their sensitivity to delay: not sensitive, sensitive and very sensitive. To characterize the scheduling algorithms we propose, we thus evaluate them on three different scenarios corresponding to these three levels of delay sensitivity.

    When comparing scheduling algorithms, it is also relevant to have a scenario in which the traffic model does not influence the outcome. As a consequence, we first investigate the hypothetical case where all the users have their buffers full all the time.

    3.3.1 Scenario 0: Fully Loaded System

    As mentioned above, this case allows us to characterize the scheduling algorithms under study in the theoretical case where every user always has something to receive. The Best Effort traffic schedulers will be evaluated here, and the user data rate distribution will be our performance measure.

    3.3.2 Scenario 1: File Transfer

    In this scenario, each user requests one large file, downloads it, and then leaves the system. This could represent users downloading MP3 songs or short videos. This is typically Best Effort traffic. Here, we evaluate the scheduling algorithms' performance through an analysis of the distribution of the user data rates as a function of the arrival rate. The file size is fixed.

    3.3.3 Scenario 2: Web Browsing

    As the most common service on the Internet, web-browsing traffic is of special importance for next-generation cellular networks. As for the file transfer scenario, this traffic is categorized as Best Effort. Here the number of users in the system is constant and the file size is fixed. The scheduling algorithms are evaluated with an analysis of the distribution of the user data rates as a function of the number of users.

    3.3.4 Scenario 3: Voice Over IP

    Voice over the Internet Protocol (VoIP) is an old technology in itself, but telecommunication operators are expected to use its full potential in the upcoming years. This traffic is delay sensitive, and is thus mapped to a Guaranteed Bit Rate flow. In this scenario, each user receives a call during the whole simulation time. The scheduling algorithms are evaluated with an analysis of the distribution of the user packet delay. The objective here is a capacity estimation: the number of users such that the 90th percentile of the packet delays is below 50 ms.

    It is worth noting that for all scenarios, we investigate the network utilization by means of cell throughput (number of information bits sent; average taken over all the sites in the network) and link utilization (percentage of used chunks; average taken over all the sites in the network). Concerning the user throughput distribution, we mainly focus on two metrics: the fairness and the mean throughput8. For the fairness metric, we use the so-called Jain fairness index. If we denote the measurement of concern by $r_i$ for user $i$, it is given by

    $$\frac{\left(\sum_{i=1}^{N} r_i\right)^2}{N \sum_{i=1}^{N} r_i^2}, \qquad r_i \ge 0 \;\; \forall i$$

    where N is the number of samples. Note that for perfect fairness the index is 1. As the variance of the metric increases, the index approaches 0 (Hosein and Makhijani, 2005).

    8 Except for the fully loaded scenario, where we analyze the cumulative distribution function of the user throughput.
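    For reference, the index above can be computed directly from the per-user measurements; a minimal Python sketch (illustrative only, not part of the simulator):

```python
# Jain fairness index: (sum r_i)^2 / (N * sum r_i^2); equals 1 for perfect fairness.

def jain_index(rates):
    n = len(rates)
    total = sum(rates)
    square_sum = sum(r * r for r in rates)
    return (total * total) / (n * square_sum) if square_sum > 0 else 0.0

print(jain_index([1.0, 1.0, 1.0, 1.0]))   # 1.0  (perfect fairness)
print(jain_index([4.0, 0.0, 0.0, 0.0]))   # 0.25 (one user gets everything)
```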


    3.4 Problem Definition

    This project aims at designing a Super 3G scheduler within a DiffServ context. A special emphasis will be put on the tradeoff between network capacity and user satisfaction maximization. Complexity requirements will also be taken into account.

    More precisely, the two project goals are:

    • Verification of concept: designing a scheduler which respects the QoS policy profiles.

    • Evaluation of different schedulers: determining the best tradeoff between network capacity and user satisfaction.

    Because performance is likely to depend on the service, a hybrid S3G scheduler, using different schedulers for different services and loads, is proposed.

  • Chapter 4  Scheduling Algorithms for Super 3G

    This chapter presents our scheduling solution for Super 3G. We show how the flow priorities and other policies introduced in Section 3.1 are taken into account, and justify our choices with respect to related work. Section 4.1 gives a high-level overview, whereas in Section 4.2 the architecture of the S3G scheduler is introduced. In Sections 4.3 and 4.4 we focus on the technical choices we make to solve the proposed scheduling problem.

    4.1 High Level Scheduler

From the description of the different flows given in Section 3.1, we can directly give a high-level description of the Super 3G scheduler. In fact, as priorities between the three considered flows are strict, we obtain Algorithm 1 presented below.

    Algorithm 1 Super 3G Scheduler

    • Begin

    – Schedule FC-IDs that are associated with the SIG Flow

    – Schedule FC-IDs that are associated with the GBR Flow

    – Schedule FC-IDs that are associated with the BE Flow

    • End

Figure 4.1 illustrates this high-level scheduler, adding what type of policy is used for each flow. A strict priority-based policy is used for the SIG and GBR flows, and a more detailed one (see Section 3.1.2) governs the BE flow. It is worth mentioning that the algorithms presented in this Chapter describe the procedure for the first transmission of packets. Regarding retransmissions at the HARQ level, we assume that they are given absolute priority¹. Also, we do not consider the scheduling of the signaling flow and concentrate on the GBR and BE flows.
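As an illustration only, the strict-priority loop of Algorithm 1 could be sketched as follows in Python; the flow names and the per-flow scheduler interface are assumptions of this sketch, not the simulator's actual API.

```python
# Strict-priority order between the three flow types (Algorithm 1).
FLOW_ORDER = ["SIG", "GBR", "BE"]

def schedule_tti(flow_schedulers, resources):
    """Serve FC-IDs flow by flow, in strict priority order, until resources run out.

    flow_schedulers maps a flow name to a callable taking the remaining resources
    and returning what is left after serving that flow (assumed interface).
    """
    for flow in FLOW_ORDER:
        if resources <= 0:
            break
        resources = flow_schedulers[flow](resources)
    return resources

# Example with dummy per-flow schedulers, each consuming a fixed number of chunks.
dummy = {name: (lambda r, cost=c: max(r - cost, 0))
         for name, c in [("SIG", 1), ("GBR", 40), ("BE", 81)]}
print(schedule_tti(dummy, 81))  # 0 chunks left after serving all three flows
```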

¹ In fact, when resources are allocated to a given user, data from different FC-IDs can be sent; if an error occurs and thus a retransmission is needed, the scheduler only 'sees' the total amount of data which has to be retransmitted and not the different FC-IDs. As a consequence, no differentiation can be performed and the choice has been made to give absolute priority to the retransmitted data.


Figure 4.1: General scheme of the Super 3G scheduler. Traffic from the SIG, GBR and BE flows is handled with strict-priority scheduling (SIG > GBR > BE); within the GBR flow strict-priority scheduling is used, while policy-based scheduling governs the BE flow.

    4.2 Scheduling Architecture

Three common scheduling algorithms known from the literature are Maximum Signal to Interference Ratio (MaxSIR), Proportional Fair (PF) and Fair Throughput (FT) (Ameigeiras et al., 2004; Wang and Lin, 2004; Ericsson, 2004). Here, because of the policy profiles, the MaxSIR method, which maximizes the instantaneous capacity of the network, is not suited. In fact, this method does not give any guarantee in terms of resource allocation on a per-FC-ID basis. Taking into account only the channel states and/or the throughputs, the PF and FT methods are not appropriate either, for the same reason: some guarantees are given between the users but not at the FC-ID level. Abedi (2004) proposes an adaptive scheme for packet scheduling weights (i.e. an adaptive PF). Even though attractive, this method suffers from a rather high complexity, as many control parameters have to be updated at each time instant, which may not be practical in systems with a short tti as envisioned for Super 3G. Another approach consists of defining a network revenue function which, when optimized, allows capacity maximization and QoS fulfillment (Hosein, 2002; Farrokhi et al., 2004). Based on so-called barrier functions, an iterative algorithm is proposed to approximate the optimal solution; looping while the network revenue increases, its complexity is also high.

A characteristic of all these algorithms is that they try to make the scheduling decision in a global manner, taking into account competition between FC-IDs and competition between users at the same time. From the policy profiles described in Section 3.1, it has to be noted that Committed Rate, Peak Rate and/or Priority characterize a given FC-ID, and not a particular user. As a consequence, the two main scheduling decisions (which class, which user) can be made separately. This is therefore the approach followed in this work.

A so-called Divide and Conquer approach is adopted: the BE and GBR schedulers are composed of two layers. The first-level scheduler sorts the different FC-IDs as a function of their policies (e.g. Committed Rate for BE): the Inter-FC-ID Scheduler (Inter-S). The second chooses which users will be scheduled in a given class: the Intra-FC-ID Scheduler (Intra-S). By splitting the complexity between Inter-S and Intra-S, we are able to adopt practical methods (in terms of complexity) such as Proportional Fair or Fair Throughput for the Intra-S.

    4.3 Inter-FC-ID Schedulers

Algorithms presented in this section intend to fulfill the policy profiles associated with BE or GBR FC-IDs. First we introduce our choices for the Best Effort traffic (Section 4.3.1); the GBR Inter-S is then presented in Section 4.3.2.


    4.3.1 Best Effort

Best Effort will be an important part of the traffic in future mobile broadband networks. The capacity C assigned to the GBR flow can be tuned by the operator, an extreme case being an operator providing only BE services to its customers. Moreover, note that some of the Internet services currently being developed require a combination of GBR and BE flows (e.g. weShare), so that while not requiring very short delays as VoIP does², reasonable data rates have to be achieved for the BE part. As a consequence, the BE scheduler needs to be efficient, taking into account as much information as possible (e.g. channel state). It also has to preserve a practical complexity, which, due to the very short Super 3G tti, is an issue in this work. Figure 4.2 presents a high-level overview of the proposed BE scheduler.

Figure 4.2: Two-layer scheduler for the Best Effort flow: the BE Inter-FC-ID Scheduler ranks the classes (e.g. FC-ID 11, FC-ID 12, FC-ID 13), and the BE Intra-FC-ID Scheduler selects the users.

In what follows, $\lambda_i^j$ is the user-perceived throughput (or data rate) of user $i$ for the $j$th FC-ID. We also use the aggregate throughput for the $j$th FC-ID, $\lambda^j = \sum_i \lambda_i^j$.

As previously mentioned, the first level of the BE scheduler sorts the FC-IDs. From Section 3.1, the BE policy per flow class consists of the Committed Rate, the Peak Rate and the Priority. As a result, our algorithm makes its decision as a function of these parameters. To do so, we introduce an autoregressive process measuring the number of information bits allocated to each FC-ID at every tti. The memory of the process is 1500 slots, or 1 s. If $\{\lambda_1, \ldots, \lambda_m\}$ are the aggregate throughputs for each FC-ID at time $t$, then the average number of bits allocated to each class is computed as follows:

$$\bar{\lambda}_i(t+1) = (1-\alpha)\,\bar{\lambda}_i(t) + \alpha\,\lambda_i, \qquad i = 1 \ldots m, \quad \alpha = \frac{1}{1500}$$

and the fraction of scheduled information bits (FSB) is defined as:

$$\mathrm{FSB}_i = \frac{\bar{\lambda}_i}{\sum_{j=1}^{m} \bar{\lambda}_j}, \qquad i = 1 \ldots m$$
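A small Python sketch of this bookkeeping is given below; the class name and the dictionary-based interface are illustrative assumptions, and only the exponential averaging with α = 1/1500 and the FSB normalization come from the text above.

```python
class FsbTracker:
    """Autoregressive average of bits scheduled per FC-ID, and the resulting FSBs."""

    def __init__(self, fcid_list, alpha=1.0 / 1500.0):
        self.alpha = alpha
        self.avg_bits = {fcid: 0.0 for fcid in fcid_list}

    def update(self, scheduled_bits):
        """scheduled_bits: dict FC-ID -> bits allocated during the current tti."""
        for fcid, avg in self.avg_bits.items():
            x = scheduled_bits.get(fcid, 0.0)
            self.avg_bits[fcid] = (1.0 - self.alpha) * avg + self.alpha * x

    def fsb(self):
        """Fraction of scheduled information bits per FC-ID."""
        total = sum(self.avg_bits.values())
        if total == 0.0:
            return {fcid: 0.0 for fcid in self.avg_bits}
        return {fcid: avg / total for fcid, avg in self.avg_bits.items()}
```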

Algorithm 2 describes the procedure (CR and PR stand for Committed Rate and Peak Rate respectively):

² Delay perceived at the application layer, for the transmission of the whole object. At the IP level, high data rates could require even shorter delays than VoIP (e.g. TCP).


Algorithm 2 Best Effort Inter-FC-ID Scheduler

• Begin

– Compute the Fraction of Scheduled Bits for each BE class: {FSB_1, ..., FSB_m}

– L1 = {i / FSB_i < CR_policy_i}: set of FC-IDs not reaching their Committed Rates

– L2 = {i / CR_policy_i < FSB_i < PR_policy_i}: set of FC-IDs reaching their Committed Rates but below their Peak Rates

– Sort L1 and L2 by class priority

– Output {L1, L2}

• End
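Under the assumption that the current FSBs and the per-class policies are available as dictionaries (field names chosen here purely for illustration), the list construction and priority sort of Algorithm 2 could look like this:

```python
def be_inter_scheduler(fsb, policies):
    """Rank BE FC-IDs according to Committed Rate, Peak Rate and Priority (sketch).

    fsb:      dict FC-ID -> current fraction of scheduled bits
    policies: dict FC-ID -> dict with 'cr', 'pr', 'priority' (illustrative fields)
    Returns L1 (below Committed Rate) followed by L2 (between Committed and Peak
    Rate), each sorted by class priority.
    """
    l1 = [f for f in fsb if fsb[f] < policies[f]["cr"]]
    l2 = [f for f in fsb if policies[f]["cr"] < fsb[f] < policies[f]["pr"]]
    key = lambda f: policies[f]["priority"]  # smaller value = higher priority (assumption)
    return sorted(l1, key=key) + sorted(l2, key=key)

# Example: FC-ID 1 (CR 0.7) is below its Committed Rate and is ranked first.
fsb = {1: 0.55, 2: 0.35}
policies = {1: {"cr": 0.7, "pr": 1.0, "priority": 1},
            2: {"cr": 0.3, "pr": 1.0, "priority": 2}}
print(be_inter_scheduler(fsb, policies))  # [1, 2]
```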

The algorithm operates in two steps. First, it selects the classes which do not satisfy the Committed Rate policy or are under the Peak Rate policy (independently). Then it sorts them as a function of the priorities. By doing so, a class which does not satisfy its Committed Rate is prioritized over one that is under its Peak Rate (but above its Committed Rate), even if the latter has a higher priority. Given the Intra-S scheduler, we are able to propose a framework for the global BE scheduler, in Algorithm 3:

    Algorithm 3 Best Effort Scheduling

    • Begin

    – Call BE Inter-FC-ID Scheduler (algorithm 2) and get a sorted list L of service classes,

    – i = 1,

    – While resources available,

∗ Call the Intra-FC-ID Scheduler on service class L(i),

∗ i = i + 1,

    – End While,

    • End
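Schematically, and reusing the hypothetical interfaces of the previous sketches, the outer loop of Algorithm 3 amounts to:

```python
def be_schedule_tti(inter_scheduler, intra_scheduler, resources):
    """Global BE scheduling for one tti (Algorithm 3), as a sketch.

    inter_scheduler() returns the ordered list of FC-IDs (Algorithm 2);
    intra_scheduler(fcid, resources) serves one class and returns the remaining
    resources (assumed interfaces).
    """
    for fcid in inter_scheduler():
        if resources <= 0:
            break
        resources = intra_scheduler(fcid, resources)
    return resources
```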

A description of possible Intra-FC-ID schedulers is given in Section 4.4. This high-level description could be the basis for a hybrid scheduler, which would choose the algorithm to launch for a given FC-ID as a function of parameters such as the load or the fairness users are experiencing. Such an approach is left for future work.

    4.3.2 Guaranteed Bit Rate

The guaranteed bit rate scheduler is trivial in the sense that it makes strict priority decisions when choosing the FC-ID to schedule. In fact, when scheduling the next tti, it always serves the class having the highest priority first, and then goes to the second one, and so on (within the available resources). In Algorithm 4, a description of the GBR scheduler is given:


    Algorithm 4 Guaranteed Bit Rate Scheduler

    • Begin

– Consider the queues {Qij}

4.4.1 Fair Throughput

Algorithm 5 Best Effort Intra-FC-ID Scheduler: Fair Throughput

• Begin

– Get the throughput of each user, Λj = {λj1, ..., λjp} (sorted in ascending order), for the considered class j

– U1 = {i / λji < Λ̄j}: set of users having a throughput below the average

– U2 = {i / λji > Λ̄j}: set of users having a throughput above the average

– i = 1

– While resources are available and the queue for this class is not empty

∗ Allocate a new chunk to user U1(i), so that he gets the best available chunk in terms of Signal to Interference Ratio

∗ i = (i mod #U1) + 1

– End While

– i = 1

– While resources are available and the queue for this class is not empty

∗ Allocate a new chunk to user U2(i), so that he gets the best available chunk in terms of Signal to Interference Ratio

∗ i = (i mod #U2) + 1

– End While

• End

Note that by allocating the user's best chunk, we take the radio conditions into consideration even though the priority here is to be fair. This point should benefit this method in terms of link utilization.
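A compact Python sketch of this procedure is given below. The data structures (per-user SIR per chunk, a single bit counter for the class queue) are simplifying assumptions; only the below-average-first round-robin and the best-chunk selection follow Algorithm 5.

```python
def fair_throughput_intra(users, throughput, sir, free_chunks, queue_bits):
    """Chunk allocation for one BE class following Algorithm 5 (simplified sketch).

    users:       list of user ids belonging to the class
    throughput:  dict user -> averaged throughput
    sir:         dict user -> dict chunk -> SIR estimate
    free_chunks: set of chunks still unallocated in this tti (mutated in place)
    queue_bits:  remaining bits in the class queue, treated as a single counter
    Returns a dict chunk -> user.
    """
    avg = sum(throughput[u] for u in users) / len(users)
    below = sorted((u for u in users if throughput[u] < avg), key=lambda u: throughput[u])
    above = sorted((u for u in users if throughput[u] >= avg), key=lambda u: throughput[u])
    allocation = {}

    def serve(group):
        nonlocal queue_bits
        i = 0
        while group and free_chunks and queue_bits > 0:
            user = group[i]
            # Give this user his best remaining chunk in terms of SIR.
            best = max(free_chunks, key=lambda c: sir[user][c])
            free_chunks.remove(best)
            allocation[best] = user
            queue_bits -= 1  # placeholder for the number of bits one chunk carries
            i = (i + 1) % len(group)

    serve(below)   # users below the average throughput are served first
    serve(above)
    return allocation
```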

    4.4.2 Proportional Fair

The Proportional Fair scheduling algorithm was first described by Holtzman (2000). It intends to serve users when their instantaneous radio channel conditions are very favorable relative to their average ones, thus taking advantage of the temporal variations of the fast-fading channel (Ameigeiras et al., 2004). In other words, a user's priority is given by the ratio between his instantaneous achievable rate and his throughput. It is worth noting that the instantaneous achievable rate is estimated with the Shannon bound. In fact, it is difficult to estimate what the rate associated with a chunk would be, as the link adaptation depends on the whole set of chunks allocated to a particular user. Algorithm 6 describes the procedure.


Algorithm 6 Downlink Case - Best Effort Intra-FC-ID Scheduler: Proportional Fair

• Begin

– Let f : ((C/I)i, λi) → log(1 + (C/I)i) / λi and N = 81 the number of OFDM chunks

– Get the throughput of each user, Λj = {λj1, ..., λjp}, for the considered class j

– Let SIRk = {(C/I)1, ..., (C/I)p}, the user channel estimates of the considered class, for chunk k

– For k = 1 to N

∗ Allocate chunk k to the user maximizing f(SIRk, Λj)

– End For

• End

If all users' achievable data rate distributions are iid, the PF rule will achieve an equal time share across the user population in the long run, while utilizing multi-user diversity. If not, the relative time-sharing amongst users will depend on the mean-variance relationship of the achievable data rate distribution. Holtzman (2001) observed that users with channels subject to larger variation, all other things being equal, enjoy a better service (higher throughput using fewer resources) than users with less varying channels.

Even though not presented in this report, note that the Maximum Signal to Interference Ratio (MaxSIR) algorithm is investigated in this project. For each chunk, this approach selects the user experiencing the best SIR. In this case, the f function introduced in Algorithm 6 would be the identity function, i.e. mapping the SIR to itself.
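The chunk-by-chunk loop of Algorithm 6 can be sketched as follows; swapping the metric function turns the same loop into the MaxSIR rule. The linear C/I input and the dictionary layout are assumptions of this sketch.

```python
import math

def pf_metric(cir, throughput):
    """Proportional Fair: Shannon-bound rate over the averaged throughput."""
    return math.log(1.0 + cir) / max(throughput, 1e-9)

def maxsir_metric(cir, throughput):
    """MaxSIR: the channel quality alone (identity on the SIR)."""
    return cir

def allocate_chunks(users, cir_per_chunk, throughput, n_chunks=81, metric=pf_metric):
    """For each of the n_chunks OFDM chunks, pick the user maximizing the metric.

    cir_per_chunk: dict user -> list of per-chunk C/I estimates (length n_chunks)
    throughput:    dict user -> averaged throughput of the user in this class
    """
    allocation = {}
    for k in range(n_chunks):
        allocation[k] = max(users, key=lambda u: metric(cir_per_chunk[u][k], throughput[u]))
    return allocation
```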

    4.4.3 Exponential Rule

The Exponential Rule (ER) is an adaptation of the Proportional Fair algorithm (Ericsson, 2004); less channel-dependent but more delay-sensitive, it follows the same procedure as Algorithm 6, but with the function f defined as follows:

$$f : \left((C/I)_i, \lambda_i, w_i, \bar{w}\right) \mapsto \frac{\log\!\left(1 + (C/I)_i\right)\, e^{\frac{a w_i - a \bar{w}}{1 + \sqrt{a \bar{w}}}}}{\lambda_i}$$

where $w_i$ is the waiting time³ of user $i$, $\bar{w}$ the average waiting time for users of a given FC-ID, and $a$ a factor allowing to tune the impact of the delay⁴. The exponential part of the formula is called a barrier function and its role is to prioritize users having a waiting time larger than the average. In other words, the ER rule lets the PF discipline control the scheduling algorithm as long as no delays grow too large. If any delay departs from the average⁵, then the exponential factor will increase in weight and increase the priority of the user with the larger delay. The ER rule is throughput-optimal, i.e. no user's queue will grow out of bounds under normal traffic conditions.

In this project, we also investigate a modified version of the ER algorithm, denoted ER2 and defined by the following function f:

$$f : \left((C/I)_i, \lambda_i, w_i, \bar{w}\right) \mapsto \frac{\left(\log\!\left(1 + (C/I)_i\right)\right)^2 e^{\frac{a w_i - a \bar{w}}{1 + \sqrt{a \bar{w}}}}}{\lambda_i}$$

In this case, the channel state is given a higher priority than in the ER rule.

³ Duration since the last time a user has been scheduled.
⁴ For example, it allows choosing the time unit of the operation: milliseconds, hundreds of milliseconds, etc.
⁵ More precisely, if the relative latency offset is greater than that of the others by an order of $\sqrt{\bar{w}}$.
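Written in the same style as the Proportional Fair metric sketched earlier, the ER and ER2 weighting functions could be expressed as below; the tuning factor a, the small guard on the throughput and the waiting-time bookkeeping are assumptions of the sketch.

```python
import math

def er_metric(cir, throughput, wait, avg_wait, a=1.0):
    """Exponential Rule: PF metric scaled by a barrier on the waiting time."""
    barrier = math.exp((a * wait - a * avg_wait) / (1.0 + math.sqrt(a * avg_wait)))
    return math.log(1.0 + cir) * barrier / max(throughput, 1e-9)

def er2_metric(cir, throughput, wait, avg_wait, a=1.0):
    """ER2: same barrier, but the channel term is squared to weight radio conditions more."""
    barrier = math.exp((a * wait - a * avg_wait) / (1.0 + math.sqrt(a * avg_wait)))
    return (math.log(1.0 + cir) ** 2) * barrier / max(throughput, 1e-9)
```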


    4.4.4 VoIP Scheduler

In this Section, we present an adaptation of the algorithm proposed by Hosein (2005) for the scheduling of VoIP traffic. In this case, the function f introduced in Algorithm 6 is defined as:

$$f : \left((C/I)_i, q_{\max}, q_i\right) \mapsto \frac{\log\!\left(1 + (C/I)_i\right)}{q_{\max} - q_i}$$

where $q_i$ is the queue length of user $i$, and $q_{\max}$ the maximum tolerated queue length. Our adaptation of the algorithm differs in two ways. First, the queue length is measured in bits instead of VoIP frames. This allows more granularity, as the schedulers developed in this project do not 'see' the individual packets, but a number of bits waiting to be transferred. Second, if the queue length of user $i$ is greater than or equal to the maximum tolerated one, we propose to use $-\frac{1}{q_{\max} - q_i}$ as the denominator of the f function, i.e. multiplying the instantaneous achievable rate by a factor $q_i - q_{\max}$. This operation allows us to differentiate users experiencing poor performance. The value of $q_{\max}$ is set to 512 bits, i.e. two VoIP frames (corresponding to a delay of at most 40 ms). It is worth noting that waiting to have two packets in the buffer could be interesting from a network optimization point of view. In fact, because of the small size of a VoIP packet, sending two packets independently is expensive in terms of signaling.

Hosein (2005) argues that if we assume a small frame loss rate, then the user's experienced throughput equals the bit arrival rate. Here the average bit rate is the same for all voice calls, thus all users experience the same long-term throughput. That is why the throughput is not taken into account in the function f.
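Keeping the structure of the previous metric sketches, the modified VoIP weighting function might look like the following; the helper name is illustrative, and only the 512-bit limit and the sign flip for overflowing queues come from the text above.

```python
import math

Q_MAX = 512  # bits, i.e. two VoIP frames (at most 40 ms of delay)

def voip_metric(cir, queue_bits, q_max=Q_MAX):
    """VoIP scheduler metric adapted from Hosein (2005), sketched.

    Below q_max, the weight grows as the queue approaches the limit; at or above
    q_max, the achievable rate is multiplied by (queue_bits - q_max), which is the
    effect of the -1/(q_max - q_i) denominator proposed in the text.
    """
    rate = math.log(1.0 + cir)
    if queue_bits < q_max:
        return rate / (q_max - queue_bits)
    return rate * (queue_bits - q_max)
```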

Chapter 5: Simulations

    5.1 Model

The radio network simulation environment used during this project contains all the features described in Chapter 2. This section gives some of the model descriptions and parameter settings we have been using. Note that not all parameters are presented here, but special emphasis has been put on using realistic values.

    5.1.1 Propagation

Describing the attenuation that the transmitted signal experiences through the channel, the propagation model, based on Okumura-Hata, takes into account antenna gain, distance attenuation, shadow fading¹ and multipath fading². Furthermore, a noise spectral power density N0 = −201 dB/Hz is assumed.

    5.1.2 Network

The simulation area is composed of 9 hexagonal cells (3 sites and 3 cells per site) of radius 500 m. The so-called wrap-around technique is used to avoid border effects.

    5.1.3 Placement and Mobility

Users are randomly distributed over the simulation area, according to a uniform distribution. Users move in a straight line (the direction is uniformly distributed as well), at a velocity of 2 m/s.

    5.1.4 Radio Link Control & Medium Access Control

RLC and MAC can both be used in acknowledged and unacknowledged mode. Here we set RLC to work in acknowledged and MAC in unacknowledged mode. The MAC layer builds on top of an HARQ which supports N-channel stop-and-wait³. It keeps a list of HARQ processes and manages their state, e.g. which processes are available for transmission and which are awaiting feedback. For a more detailed description of the system, see Chapter 2.

    5.1.5 Downlink Physical Channel

The OFDM carriers are grouped into 81 subbands, on a 20 MHz bandwidth. The number of symbols per subband is 128. On the downlink, 80 W are available at the base stations (which can be distributed over the OFDM chunks by the link adaptation); the power is assumed to be 0.125 W on the uplink. The frame period (or tti) is set to 6.66 ms. New channel estimates are available at every tti and the beam selection occurs every 10 ttis. Link adaptation specifications can be found in Ruberg (2006): available modulation formats are QPSK, 16 QAM and 64 QAM; subbands assigned to a given user have the same modulation format⁴ and one code rate is chosen for the whole block of bits sent to the user.

¹ With variance 8 dB and correlation distance of 100 m.
² See the Typical Urban Channel description in 3GPP (2004a).
³ With N = 6 processes.

    5.1.6 Traffic

In this work, we investigate three types of traffic. First, the system is analyzed when the buffers are always full at the MAC layer. This approach does not require any high-level traffic generation. Then we investigate services based on the TCP protocol (web browsing, file transfer). The simulation environment possesses a detailed implementation of TCP and HTTP. Standard parameters are used. Finally, real-time traffic (VoIP) is studied, based on the UDP protocol. Again, standard settings are used. Figure 5.1 gives a synthesis of all the blocks implemented in the simulator.


    Figure 5.1: Simulation environment overview

⁴ The gain introduced when assigning a different modulation format to each subband independently has been proved to be marginal; see Ruberg (2006).


    5.2 Scenario 0: Fully Loaded System

Even though a fully loaded system is a theoretical scenario, it allows us to validate the scheduling algorithms we proposed. In fact, buffers are always full at the MAC layer⁵ at each time slot, so that the behavior of our scheduling algorithms can be isolated. The number of users in the system is set to 100, and the simulation time is 400 s. Concerning the representation of the results, the bit rates achieved by each user during the last second are taken as samples. As a consequence, we have 40000 samples per FC-ID. The AR process used in algorithms FT, PF, ER and ER2 to store the user throughputs is set to have a 300 ms buffer; the one used by the BE Inter-FC-ID scheduler has a 1 s memory. The unit used in the exponential part of the ER and ER2 rules is the millisecond.

The study of this scenario is divided into two parts. First, in Section 5.2.1, we validate the BE Inter-FC-ID scheduler, which allocates the resources dedicated to each FC-ID. For this purpose, users receive data on two different FC-IDs.

In a second step (Sections 5.2.2, 5.2.3 and 5.2.4), we analyze different BE Intra-FC-ID schedulers with only one FC-ID. We analyze the users' and the network's perspectives respectively, and also discuss which algorithm offers the best tradeoff.

    5.2.1 Quality of Service Perspective

As previously mentioned, this Section validates the BE Inter-FC-ID scheduler. Each user has two Best Effort data flows, corresponding to Committed Rates of 70 % for FC-ID 1 and 30 % for FC-ID 2 (the Peak Rate is set to 100 % in both cases for simplicity). In order to test the mechanism described in Section 4.3.2, we emulate a time-varying GBR traffic, thereby giving a varying percentage of the resources to the BE flow⁶. Note that, although not relevant for the following analysis, the BE Intra-FC-ID scheduler used for this study is the Proportional Fair.

Figure 5.2 shows the fraction of scheduled information bits (FSB) of these 3 FC-IDs over a 90 s snapshot. To keep it simple, we first verify the FSBs of the BE traffic when the GBR takes no resources: this is the case at time t = 375 s. We can verify that FC-ID 1 gets exactly 70 % for its FSB. At time t = 426 s, the GBR flow is taking 48.15 % of the resources, whereas the BE FC-ID 1 and 2 get 36.3 % and 15.55 % of the resources respectively, which equal 70.01 % and 29.99 % of the resources left for the BE traffic. The average FSBs of these two BE classes over the total simulation time are exactly 70 % and 30 % respectively.
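The quoted shares can be checked directly: the BE fractions are measured relative to the whole bandwidth, so dividing them by the share left over by the GBR flow recovers the 70/30 split. A quick check:

```python
gbr = 0.4815               # fraction taken by the emulated GBR flow at t = 426 s
be1, be2 = 0.363, 0.1555   # fractions obtained by BE FC-ID 1 and FC-ID 2
print(round(be1 / (1 - gbr), 4))  # 0.7001
print(round(be2 / (1 - gbr), 4))  # 0.2999
```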

The mechanism proposed to fulfill the QoS policy profiles is thus robust. Furthermore, its simplicity makes it attractive for practical implementations.

In Sections 5.2.2, 5.2.3 and 5.2.4, we simulate traffic belonging to the same FC-ID and analyze the impact of the different BE Intra-FC-ID schedulers on the user and network throughputs.

    5.2.2 User Perspective

Figure 5.3 shows the cumulative distribution function (cdf) of the user throughput for the proposed scheduling methods, whereas Table 5.1 summarizes the most important percentiles of the cdf.

The results show that the fairness determines the slope of the cdf of the user throughput: while fair-throughput strategies have a very steep distribution, fair-resources strategies have a less steep cdf, because users with better average radio conditions make better use of the same amount of resources.

From Figure 5.3, we see that our Fair Throughput algorithm outperforms all other algorithms for the first 40 % of users. Its 10th and 90th percentiles are 0.85 Mbps and 1.70 Mbps respectively (see Table 5.1), i.e. 80 % of the users fall into this interval. A drawback, though, is that by offering equal data rates to all users, the algorithm restricts users having good radio conditions to the same throughput as users experiencing a bad channel.

⁵ The number of bits waiting to be transferred is set to infinity.
⁶ The emulated GBR traffic resource is limited to 40/81.


Figure 5.2: FSBs for 2 BE FC-IDs and an emulated GBR class. The resources taken by the GBR traffic are generated randomly but limited to 40/81. The 2 BE FC-IDs have Committed Rates of 70 % and 30 % respectively.

Table 5.1: Achievable data rates for different schedulers and percentiles. Note that the xth percentile is defined as the data rate achieved by 100 − x % of the users.

Scheduler   10th    50th    90th    99th   (Mbps)
Max SIR     0       0.02    15.78   64.01
FT          0.85    1.12    1.70    2.81
PF          0.41    1.35    5.76    12.69
ER          0.43    1.24    5.60    12.78
ER2         0.17    0.93    8.01    25.39


The ER and PF rules differ only at very low percentiles and converge at the 10th. In other words, the exponential part of the ER rule has an impact on very few users. A direct consequence is that the PF rule performs slightly better from percentile 20 to 90 (by 100 kbps on average), but when considering data rates in the Mbps range, this price could be affordable.

The ER2 rule gives better throughputs than the PF and ER rules for only 30 % of the users. This yields a 90th percentile of 8 Mbps, which is 2.25 Mbps better than the ER rule.

Furthermore, the Max SIR cdf has a much lower slope due to the unfairness in the resource distribution, and the users at the high percentiles (e.g. 99th) of the cdf achieve very large throughputs (e.g. 64 Mbps) at the expense of the throughput obtained by the users in the tail of the cdf (the 10th percentile is 0 Mbps). Note, to conclude, that 15 % of the samples are zero, i.e. the probability of not getting any data during one second is high.

    5.2.3 Network Perspective

Table 5.2 presents the cell throughput statistics for the evaluated schedulers⁷. As expected, the Max SIR algorithm outperforms all other methods, because priority is given to the users having the best channel conditions. The Fair Throughput algorithm is the worst approach from this perspective: with an average cell throughput of 13.5 Mbps, it reaches only 27 % of the Max SIR capacity. By reaching 52 % of the Max SIR capacity, the Proportional Fair performs better. Concerning the ER rule, we see that there is no major drawback in terms of network throughput compared to the PF rule (the ER rule is 0.8 Mbps worse on average). To conclude, the ER2 rule constitutes the best 'feasible'⁸ scheduler, offering 66 % of the Max SIR throughput.

    5.2.4 Discussion

As mentioned earlier, the Max SIR algorithm is presented here for benchmarking purposes only. It does not represent a satisfactory approach in terms of user perception. The ER rule performs slightly better than the PF rule for the 10 % worst samples and no major drawback is observed in terms of network capacity. However, its complexity is higher, as the method requires the computation of an average waiting time and an exponentiation. The Fair Throughput scheduler makes it possible to give guarantees in terms of fairness between users, at the price of a very low network throughput. Its complexity is low, as it only requires sorting the users as a function of their throughputs. Giving better results than the PF and ER rules for 30 % of the samples, but suffering from a higher complexity, the ER2 scheduler yields a better network capacity.

Depending on the mobility of the users, different alternatives are possible. In fact, if the environment is static⁹, it may be desirable to obtain guaranteed fairness. For example, two users paying the same amount of money to get Internet access through a residential gateway, even though experiencing different radio conditions, may expect to have the same type of performance. In such a scenario, a fair algorithm is needed. In the case of a highly mobile environment, where users have a high probability of getting a good channel quickly, more radio-oriented algorithms could be considered. To conclude, in order to handle a wide range of mobility scenarios, methods prioritizing radio conditions and user experience at the same level are viewed as the best tradeoff. Because the ER rule offers performance similar to the PF algorithm, it is not analyzed in the next two scenarios.

⁷ Note that in this project, the cell throughput is defined as the number of information bits which are effectively decoded, i.e. the proportion of retransmission bits is subtracted.
⁸ By feasible, we mean a practical scheduler. Note for example that Max SIR is characterized by a 15 % chance of not getting any data during 1 s.
⁹ Portable Hot Spots (or residential gateways), which connect to an operator network and provide wireless Internet to users within a certain radius, are being introduced. See for example Pogue (2006).


Figure 5.3: CDF of the user throughput distribution for the proposed scheduling methods. Each sample represents 1 second of one of the 100 users in the system. Simulation duration is 400 s.

Table 5.2: Cell throughput for different schedulers. The cell throughput is effective, i.e. corresponds to the number of information bits which have been effectively decoded. The average is taken over time (simulation time is 400 s).

Scheduler   Cell Throughput (Mbps)
Max SIR     49.7
FT          13.5
PF          26
ER          25.2
ER2         33.1


    5.3 Scenario 1: File Transfer

In this scenario, users entering the system request and download a 10 MB file and then exit. The simulation time is 20 min. Here, traffic belonging to a single FC-ID is simulated. We study the reaction of the system for different inter-arrival times. Figure 5.4 depicts the distribution of the inter-arrival times for different values of the intensity, according to the following formula:

$$-\frac{\log(a)}{\text{intensity}}$$

where $a$ is uniformly distributed between 0 and 1. This is the function used to model the arrival of the users.
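This is the standard inverse-transform way of drawing exponentially distributed inter-arrival times, whose mean is 1/intensity; a minimal sketch (illustrative, not the simulator's actual traffic generator):

```python
import math
import random

def interarrival_time(intensity):
    """Draw one inter-arrival time as -log(a)/intensity, with a uniform on (0, 1)."""
    a = random.random()
    while a == 0.0:          # log(0) is undefined; redraw in that (rare) case
        a = random.random()
    return -math.log(a) / intensity

# Example: the empirical mean is close to 1/intensity.
samples = [interarrival_time(0.5) for _ in range(10000)]
print(sum(samples) / len(samples))  # roughly 2.0
```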

Figure 5.4: CDF of the inter-arrival time (x) for different values of the intensity.

Concerning the representation of the results, the measurements are done at the application layer. In other words, we take into account the data rates experienced by the users to download the whole file. The AR-process memory used in algorithms FT, PF and ER2 to store the user throughputs is set to 1000 ms, and the unit used in the exponential part of the ER2 rule is the millisecond.

In what follows, we investigate the impact of different Best Effort Intra-FC-ID schedulers on some network metrics (cell throughput, link utilization - see Section 5.3.1) and on the distribution of the user data rates (fairness, mean throughput - see Section 5.3.2). To conclude, Section 5.3.3 discusses strengths and weaknesses of the evaluated algorithms for this service.


    5.3.1 Network Perspective

Figure 5.5 and Figure 5.6 present the cell throughput and the link utilization respectively, as a function of the arrival intensity.

Because the FT rule tries to be fair to all the users in the system at a given point in time, it tends to become highly loaded quickly. In fact, the number of users experiencing a bad SIR increases with the arrival rate, but the algorithm still tries to satisfy everybody. As a consequence, adding more users to the system does not influence the network throughput. Furthermore, even though the arrival rate is increasing, the TCP protocol adapts the rate and thus adding users to the system could have virtually no effect on the offered load. The effect is that each user gets a lower throughput with an increasing number of users.

On the other hand, the MaxSIR, PF and ER2 algorithms see their cell throughputs increase with the arrival rate. Logically, the MaxSIR is the algorithm giving the highest cell throughput, especially at high load. In fact, by selecting only the users experiencing a good SIR, this approach is efficient in terms of link utilization and powerful in terms of throughput. Note that the ER2 and PF rules perform almost as well as the Ma