

PERFORMANCE EVALUATION OF DYNAMIC

PRIORITY CALL ADMISSION CONTROL ALGORITHM FOR WCDMA BASED 3G NETWORKS

BY

ANANA, IKPONGAKARASE JAMES PG/MENGR/14/68105

DEPARTMENT OF ELECTRONIC ENGINEERING FACULTY OF ENGINEERING

UNIVERSITY OF NIGERIA, NSUKKA

AUGUST, 2016


APPROVAL PAGE

PERFORMANCE EVALUATION OF DYNAMIC PRIORITY CALL ADMISSION CONTROL

ALGORITHM FOR WCDMA BASED 3G NETWORKS

BY

ANANA, IKPONGAKARASE JAMES

(PG/M.ENG/14/68105)

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE

AWARD OF MASTER OF ELECTRONIC ENGINEERING (TELECOMMUNICATION

OPTION) IN THE DEPARTMENT OF ELECTRONIC ENGINEERING, UNIVERSITY OF

NIGERIA, NSUKKA.

ANANA, IKPONGAKARASE JAMES SIGNATURE____________DATE__________

(STUDENT)

PROF. C. I. ANI SIGNATURE____________DATE__________

(SUPERVISOR)

EXTERNAL EXAMINER SIGNATURE____________DATE__________

DR. M. A. AHANAEKU SIGNATURE____________DATE__________

(HEAD OF DEPARTMENT)

PROF. E. S. OBE SIGNATURE____________DATE__________

(CHAIRMAN, FACULTY

POSTGRADUATE COMMITTEE)


CERTIFICATION

This is to certify that ANANA, IKPONGAKARASE JAMES, a postgraduate student in the Department of Electronic Engineering with registration number PG/M.ENG/14/68105, has satisfactorily completed the requirements of the course and research work for the degree of Master of Engineering in Electronic Engineering.

_______________________ _____ __________________________

PROF. C. I. ANI DR. M. A. AHANAEKU

(SUPERVISOR) (HEAD OF DEPARTMENT)

_____________________________________

PROF. E. S. OBE

(CHAIRMAN, FACULTY POSTGRADUATE COMMITTEE)


DECLARATION

I, ANANA, IKPONGAKARASE JAMES, a postgraduate student in the Department of Electronic Engineering with registration number PG/MENGR/14/68105, declare that the work contained in this report is original and has not been submitted in part or in whole for any other degree in this or any other institution.

_____________________________ __________________

ANANA, IKPONGAKARASE JAMES DATE


DEDICATION

This research work is dedicated to Almighty God, the giver of life and wisdom, and to my father, Elder Engr. J. U. Anana, the first communication engineer known to me.


ACKNOWLEDGEMENT

I acknowledge the Almighty God for the successful completion of this research work. I acknowledge my supervisor, Prof. C. I. Ani, who is the Head of Department, for his meticulous supervision to ensure this work emerges with outstanding excellence. I also acknowledge the lecturers and entire staff of the Electronic Engineering department for their contribution to the success of this work. To my parents and my very dear sisters and brothers, especially Mrs. Inemesit Okoro, who contributed and gave their unending support to ensure that this work was completed on time, I acknowledge you. To my good friends who contributed by sourcing materials, lending a helping hand and offering prayers to make this work a reality, God bless you richly.


ABSTRACT

Wideband code division multiple access (WCDMA) based 3G cellular mobile wireless networks are expected to provide a diverse range of multimedia services to mobile users with guaranteed quality of service (QoS). In order to provide the diverse quality of service required by the users of these networks, effective radio resource management (RRM) is necessary. Radio resource management is responsible for the efficient and optimal utilization of network resources while providing QoS guarantees to various applications. Call admission control is a form of radio resource allocation scheme used for QoS provisioning in a network; it restricts access to the network based on resource availability in order to prevent network congestion and consequent service degradation. This research focuses on how to maintain service continuity with quality of service guarantees and provide service differentiation according to mobile users' traffic profiles while efficiently utilizing system resources. The services are divided into four traffic classes: handoff real-time, handoff non-real-time, new call real-time and new call non-real-time, with higher priority given to the handoff traffic classes. The work uses an algorithm referred to as dynamic prioritized uplink call admission control (DP-CAC), an efficient tool that provides better performance for WCDMA based 3G networks. Beyond system utilization, revenue and grade of service as the key performance indicators, this research work also considers the queuing delay and the call blocking/dropping probability of each traffic class. From the simulation results and analysis, it is found that the new call non-real-time traffic class experiences the greatest queuing delay of 1.42E-11 at increasing traffic intensity compared to the other traffic classes in the system. It is also found that at a peak traffic intensity of 3.60E+03, handoff RT has a probability of 1.59E-02, handoff NRT a probability of 1.69E-02, new call RT a probability of 2.00E-02 and new call NRT a probability of 2.10E-02, showing that the call blocking/dropping probability of handoff and new calls under high traffic conditions is minimized. This is achieved because the model dynamically switches handoff traffic to its reserved channel and allows new calls to go through the general server, thereby providing service continuity to handoff traffic and fairness to the new call traffic classes.


TABLE OF CONTENTS

Title Page i

Approval Page ii

Declaration iii

Dedication iv

Acknowledgement v

Abstract vi

Table of Contents vii

List of Figures x

List of Tables xii

Acronyms xiii

CHAPTER ONE - INTRODUCTION

1.0 Background of Study 1

1.1 Problem Statement 6

1.2 Objectives of Study 7

1.3 Scope of Study 8

1.4 Methodology 8

1.5 Significance of the Study 9

1.6 Dissertation Outline 9

CHAPTER TWO - LITERATURE REVIEW

2.0 Evolution of Cellular Network 10

2.1 First Generation (1G) Networks 10

2.1.1 Physical Architecture 11

2.1.2 Technology 12

2.1.3 Modulation 12


2.1.4 Protocol 13

2.2 Second Generation (2G) Network 14

2.2.1 Frequency of Operation 14

2.2.2 Technology 15

2.2.3 Modulation 16

2.2.4 GSM System Physical Architecture 18

2.2.5 GSM Protocol Architecture 24

2.3 High Speed Circuit Switched Data 28

2.4 Packet Digital Cellular Systems 2.5G 29

2.4.1 GPRS Architecture 31

2.4.2 GPRS Protocol Architecture 33

2.5 Enhanced Data Rates for GSM Evolution (EDGE) 35

2.6 Third Generation Cellular Network (3G) 36

2.6.1 UMTS Radio Interface 38

2.6.2 UMTS Architecture 39

2.6.3 Universal Terrestrial Radio Access Network (UTRAN) 44

2.6.3.1 Node B 46

2.6.3.2 The Radio Network Controller 46

2.6.4 UMTS Core Network 50

2.6.5 UMTS Interfaces 53

2.6.6 UMTS Radio Interface Protocol Architecture 54

2.6.6.1 Layer 1 55

2.6.6.2 Layer 2 56

2.6.6.3 Layer 3 59

2.7 WCDMA Concepts 60

2.7.1 Power Control 63

2.7.2 Handoff 65

2.7.3 Channelization Codes 69

2.7.4 Scrambling Codes 69

2.7.5 Code Allocation 70

2.8 Radio Resource Management 71


2.8.1 Resource Allocation 74

2.8.1.1 Methods of Resource Allocation 75

2.8.2 Radio Resources 76

2.8.2.1 Types of Radio Channels 77

2.9 Call Admission Control 82

2.9.1 CAC Design Considerations 83

2.9.2 Multiple Service Types 84

2.10 Related Works 85

CHAPTER THREE – RESEARCH METHODOLOGY

3.0 Adopted Network 88

3.1 Adopted Network Architecture 90

3.2 Physical Model 92

3.3 DP-CAC Algorithm 94

CHAPTER FOUR – SIMULATION RESULT

4.0 Simulation Model 98

4.1 Results and Discussion 105

CHAPTER FIVE – CONCLUSION AND RECOMMENDATION

5.0 Conclusion 118

5.1 Contribution to Knowledge 119

5.2 Recommendation 119

References


LIST OF FIGURES

Figure 2.1: Functional architecture of a GSM system 18

Figure 2.2: GSM interfaces 23

Figure 2.3: Protocol architecture for signaling in GSM 24

Figure 2.4: GPRS architecture reference model 32

Figure 2.5: GPRS transmission plane protocol reference model 34

Figure 2.6: The UMTS physical architecture 40

Figure 2.7: UMTS network domain 41

Figure 2.8: UTRAN architecture 45

Figure 2.9: Logical role of RNC 50

Figure 2.10: UMTS core network architecture 51

Figure 2.11: UMTS radio interface protocol architecture 54

Figure 2.12: Hard handoff procedure 67

Figure 2.13: Soft handoff procedure 68

Figure 2.14: Spreading and scrambling 70

Figure 3.1: Multimedia Services 89

Figure 3.2: WCDMA Network Architecture 91

Figure 3.3: DP-CAC Physical Model 93

Figure 3.4: Node B System Model 94

Figure 3.5: Flow-chart for DP-CAC Algorithm 97

Figure 4.1: Real Time Traffic Source (Voice) 99

Figure 4.2: Non-Real Time Traffic Source (Data) 100

Figure 4.3: Real Time Traffic Source (Video) 100

Figure 4.4: Calls Arriving from Respective Sources 101


Figure 4.5: Flow of Traffic to DP-CAC Switch 102

Figure 4.6: Computational Module 103

Figure 4.7: Decision Making Model 103

Figure 4.8: Complete Simulation Model 104

Figure 4.9: Graph of system capacity utilization (revenue) against offered traffic 106

Figure 4.10: Comparison between system capacity utilization (revenue) with DP-CAC Algorithm and without DP-CAC Algorithm 108

Figure 4.11: Graph of grade of service against offered traffic 109

Figure 4.12: Comparison between grade of service Performance with DP-CAC Algorithm and without DP-CAC Algorithm 110

Figure 4.13: Graph showing queuing delay against traffic intensity for each traffic class 111

Figure 4.14: Graph of call blocking and dropping probability against traffic Intensity for handoff and new calls respectively 112

Figure 4.15: Graph of call blocking and dropping probability against traffic intensity for the respective call classes 113

Figure 4.16: Graph of call blocking and dropping probability at a server capacity of 24 channels 115

Figure 4.17: Graph of call blocking and dropping probability at a server capacity of 12 channels 116

Figure 4.18: Graph of call blocking and dropping probability at a server capacity of 6 channels 117


LIST OF TABLES

Table 2.1: Differences between WCDMA and GSM air interfaces 62

Table 3.1: Service Priority Classes 92

Table 3.2: Computation Parameters 95

Table 3.3: Traffic Model 95

Table 3.4: Performance Measures 96


ACRONYMS

AMPS Advanced Mobile Phone System

BCCH Broadcast Control Channel

BCH Broadcast Channel

BER Bit Error Rate

BMC Broadcast/Multicast Control Protocol

BoD Bandwidth on Demand

BPSK Binary Phase Shift Keying

BS Base Station

BSS Base Station Subsystem

BSC Base Station Controller

CAC Call Admission Control

CB Cell Broadcast

CBC Cell Broadcast Center

CBS Cell Broadcast Service

CCCH Common Control Channel

CCH Common Transport Channel

CDMA Code Division Multiple Access

CDPD Cellular Digital Packet Data

CM Connection Management

CN Core Network

CPCH Common Packet Channel

CPICH Common Pilot Channel

CRC Cyclic Redundancy Check

CRNC Controlling RNC


CS Circuit Switched

CTCH Common Traffic Channel

DCA Dynamic channel allocation

DCCH Dedicated Control Channel

DCH Dedicated Channel

DECT Digital Enhanced Cordless Telecommunications

DL Downlink

DNS Domain Name System

DP-CAC Dynamic Priority Call Admission Control

DPCCH Dedicated Physical Control Channel

DPDCH Dedicated Physical Data Channel

DRNC Drift RNC

DS-CDMA Direct Spread Code Division Multiple Access

DSCH Downlink Shared Channel

DSL Digital Subscriber Line

DTCH Dedicated Traffic Channel

EDGE Enhanced Data Rates For GSM Evolution

EFR Enhanced Full Rate

ETSI European Telecommunications Standards Institute

FACH Forward Access Channel

FDD Frequency Division Duplex

FDMA Frequency Division Multiple Access

FTP File Transfer Protocol

GERAN GSM/EDGE Radio Access Network

GGSN Gateway GPRS Support Node


GMSC Gateway MSC

GPRS General Packet Radio Service

GPS Global Positioning System

GSM Global System for Mobile Communication

HARQ Hybrid Automatic Repeat Request

HLR Home Location Register

HSDPA High Speed Downlink Packet Access

HSUPA High Speed Uplink Packet Access

IM Interference Margin

IMS IP Multimedia Sub-System

IMSI International Mobile Subscriber Identity

IP Internet Protocol

ISDN Integrated Services Digital Network

ITU International Telecommunications Union

Iu BC Iu Broadcast

JTACS Japanese Total Access Communication Systems

LAI Location Area Identity

LAN Local Area Network

LM Load Margin

MAC Medium Access Control

MAI Multiple Access Interference

ME Mobile Equipment

MGW Media Gateway

MIMO Multiple Input Multiple Output

MM Mobility Management


MMS Multimedia Messaging Service

MS Mobile Station

MSC/VLR Mobile Services Switching Centre/Visitor Location Register

NAS Non Access Stratum

NBAP Node B Application Part

NMT Nordic Mobile Telephone

NRT Non-Real Time

O&M Operation and Maintenance

OSS Operations Support System

OVSF Orthogonal Variable Spreading Factor

PC Power Control

PCCCH Physical Common Control Channel

PCCPCH Primary Common Control Physical Channel

PCH Paging Channel

PCPCH Physical Common Packet Channel

PCS Personal Communication Systems

PDC Personal Digital Cellular

PDCP Packet Data Convergence Protocol

PDP Packet Data Protocol

PDSCH Physical Downlink Shared Channel

PDU Protocol Data Unit

PHY Physical Layer

PI Page Indicator

PICH Paging Indicator Channel

PLMN Public Land Mobile Network


PRACH Physical Random Access Channel

PS Packet Switched

PSTN Public Switched Telephone Network

QAM Quadrature Amplitude Modulation

QoS Quality of Service

QPSK Quadrature Phase Shift Keying

RAB Radio Access Bearer

RACH Random Access Channel

RAI Routing Area Identity

RAN Radio Access Network

RANAP RAN Application Part

RB Radio Bearer

RF Radio Frequency

RLC Radio Link Control

RNC Radio Network Controller

RNS Radio Network Sub-System

RNSAP RNS Application Part

RRC Radio Resource Control

RRM Radio Resource Management

RT Real Time

SAP Service Access Point

SCCP Signalling Connection Control Part

SCH Synchronisation Channel

SDU Service Data Unit

SEQ Sequence


SF Spreading Factor

SGSN Serving GPRS Support Node

SHO Soft Handover

SIP Session Initiation Protocol

SIR Signal to Interference Ratio

SM Session Management

SMS Short Message Service

SN Sequence Number

SNR Signal to Noise Ratio

SRB Signalling Radio Bearer

SRNC Serving RNC

SRNS Serving RNS

SS7 Signalling System #7

TACS Total Access Communication System

TCH Traffic Channel

TCP Transmission Control Protocol

TD/CDMA Time Division CDMA, Combined TDMA and CDMA

TDMA Time Division Multiple Access

TDD Time Division Duplex

TE Terminal Equipment

TF Transport Format

TFCI Transport Format Combination Indicator

TFCS Transport Format Combination Set

TFI Transport Format Indicator

TMSI Temporary Mobile Subscriber Identity


UDP User Datagram Protocol

UE User Equipment

UL Uplink

UM Unacknowledged Mode

UMTS Universal Mobile Telecommunication System

URA UTRAN Registration Area

URL Uniform Resource Locator

USIM UMTS Subscriber Identity Module

UTRA Universal Terrestrial Radio Access (3GPP)

UTRAN UMTS Terrestrial Radio Access Network

VoIP Voice over IP

VPN Virtual Private Network

WAP Wireless Application Protocol

WCDMA Wideband Code Division Multiple Access


WWW World Wide Web


CHAPTER ONE

INTRODUCTION

1.0 Background of Study

For some years now, cellular telephony systems have been experiencing an unprecedented level of growth in the world of telecommunications. When the first cellular technologies were brought into

service, at the beginning of the 1980s, there was a rather slow take-off in the number of

subscribers, hardly presaging the subsequent spectacular growth [1, 2]. The slow take-off was, however, a result of incompatibility between the systems and major differences in

the use of the radio segment [2, 3]. Unfortunately, travelers who go to countries where the

technology offered by their operator is not represented find themselves suddenly deprived of their

communication tool because the subscriber management is not at all the same on the different

systems [1, 2, 4, 5]. It was therefore imperative to have a unified standard that would address these issues, and this need drove the evolution of cellular systems from the first generation analogue systems through to the fourth generation system referred to as long term evolution (LTE). Each generation of the cellular network has its own frequency of operation, modulation scheme, protocol of operation, access mode technology, and physical architecture, but one common feature is the signaling standard.

Signaling refers to the exchange of control information between components of a network

(telephones, switches) in order to establish, manage, and disconnect calls [2, 3, 5]. The purpose of

network signaling is to set up a circuit between the calling and called parties so that user traffic

(voice, fax, and analog dial-up modem, for example) can be transported bi-directionally. When a

circuit is reserved between both parties, the destination local switch places a ringing signal to alert

the called party about the incoming call. This signal is classified as subscriber signaling because it


travels between a switch (the called party's local switch) and a subscriber (the called party). A

ringing indication tone is sent to the calling party telephone to signal that the telephone is ringing.

If the called party wishes to engage the call, the subscriber lifts the handset into the off-hook

condition. This moves the call from the set-up phase to the call phase [2, 3, 4, 5].

Signaling between mobile stations and the network requires radio resources. Since a large amount of signaling traffic is exchanged across the network before communication can be established, a large amount of bandwidth and many radio channels are also required [2, 3]. This high demand for wireless communication services requires increased system capacity. The simplest solution would be to allocate more bandwidth to these services, but the electromagnetic spectrum

is a limited resource, which is increasingly scarce [6, 9]. The radio resources such as radio

(frequency) spectrum and transmit powers are generally limited due to the physical and regulatory

restrictions, as well as the interference limited nature of wireless cellular networks [4, 6, 9]. If these resources are not properly managed, there will be increased interference in the systems, which will result in poor quality of service. Therefore, to provide communication services with high

capacity and good QoS, it is imperative to employ efficient and effective methods for sharing the

radio spectrum [8, 9, 12, 24].

Spectrum sharing methods are called multiple access techniques. A multiple access technique involves radio channel allocation to the users of the system. The objective of multiple access

techniques is to provide communication services with sufficient bandwidth when the radio

spectrum is shared with many simultaneous users. The most common multiple access techniques

are frequency division multiple access (FDMA), time division multiple access (TDMA), and code

division multiple access (CDMA) [4, 6, 7, 9]. FDMA was used in the first generation (1G) of the

cellular systems such as advanced mobile phone service (AMPS) systems which were basically


analog systems. TDMA enhances FDMA by further dividing the bandwidth into channels by the

time domain as well; TDMA is used as the access technology for global system for mobile

communications (GSM), which is representative of the second generation (2G) of cellular systems

[1, 2, 4, 9]. The digital transmission techniques of the 2G mobile radio networks have already

improved upon the capacity and voice quality attained by the analog mobile radio systems of the

first generation, however, more efficient techniques allowing multiple users to share the available

frequencies are necessary. Unlike FDMA and TDMA, CDMA transmission does not work by allocating channels to each user; instead, CDMA utilizes the entire bandwidth for the transmission of each user [2, 4, 7, 9]. CDMA's access method is therefore achieved by assigning each user a distinct spreading code called a chip code. This chip code is used to transform a user's narrowband signal to a much wider spectrum prior to transmission, in a manner known as spread spectrum transmission. The enhanced CDMA access method, known as wideband code division multiple access (WCDMA), is employed in the universal mobile telecommunication system (UMTS), which is representative of the third generation (3G) of cellular systems.
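To make the chip-code idea concrete, the short Python sketch below (an illustration added here, not part of the thesis simulation; the spreading factor and code are arbitrary assumptions) spreads a stream of ±1 data bits with a ±1 chip code and then despreads the noisy received chips by correlating with the same code.

import numpy as np

# Minimal DS-CDMA illustration: spread each data bit with a chip code,
# then despread by correlating the received chips with the same code.
rng = np.random.default_rng(0)
SPREADING_FACTOR = 8                                      # chips per bit (assumed)
bits = rng.choice([-1, 1], size=16)                       # user's narrowband data
chip_code = rng.choice([-1, 1], size=SPREADING_FACTOR)    # the user's "chip code"

# Spreading: every bit is multiplied by the whole chip sequence, widening
# the transmitted bandwidth by the spreading factor.
tx_chips = np.repeat(bits, SPREADING_FACTOR) * np.tile(chip_code, bits.size)

# Despreading at the receiver: correlate with the same code, decide per bit.
rx_chips = tx_chips + 0.5 * rng.standard_normal(tx_chips.size)   # channel noise
decision = rx_chips.reshape(bits.size, SPREADING_FACTOR) @ chip_code
recovered = np.where(decision >= 0, 1, -1)
print("bit errors:", int(np.sum(recovered != bits)))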

3G mobile communication systems evolved as a response to the challenge of developing systems

that increased the capacity of the existing 2G systems [4, 5, 10]. This required that the

infrastructure be designed so that it can evolve as technology changes, without compromising the

existing services on the existing networks. Separation of access technology, transport technology,

service technology and user application from each other make this demanding requirement

possible [4, 5, 15]. UMTS was developed as a migration of the European Telecommunications Standards Institute (ETSI) 2G/2.5G GSM/GPRS (general packet radio service) systems, with the aim of facilitating as far as possible the extension of the existing networks of these worldwide systems as well as the interoperability of the new UMTS system with the previous networks [4, 8, 11]. The limits of the GSM radio interface technology are some of the main reasons why the

decision was made to reconsider the definition of a new radio technology. Ultimately, within

UMTS, this decision led to the definition of the radio interface technology that we now know as

WCDMA, while keeping the core network similar to that existing in GSM/GPRS systems [8, 9,

11].

WCDMA introduces a significant degree of complexity in the design and operation of the radio interface, as it supports spectral efficiency, general quality of service parameters, multimedia services and bit rates up to 2 Mbps, which are its distinguishing characteristics when compared to existing radio interface technologies [8, 11, 12]. Spreading is the process fundamental to the operation of the WCDMA interface, particularly direct sequence CDMA. In direct sequence spreading, the information signal is multiplied by a high-rate signature sequence, also known as a spreading code or spreading sequence, which increases the narrow bandwidth of the user signal to a wider bandwidth [8, 10, 11]. In WCDMA, all users transmit in the same frequency band in an uncoordinated fashion, referred to as an asynchronous transmission scenario, which imposes time offsets that result in multiple access interference on the uplink, while multipath interference is due to the different arrival times of the same signal via different paths at the receiver and is present on both the uplink and the downlink [8, 11, 12]. As the number of users increases, the multiple access interference increases too; thus, the capacity of WCDMA is known to be interference limited, as it is capable of accommodating additional users at the expense of a gradual degradation in the performance of the system in a fixed bandwidth [4, 8, 11].
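The following small sketch (an added illustration under simplifying assumptions: chip-synchronous users and random, non-orthogonal ±1 signature codes) shows this interference-limited behaviour numerically: as more users share the same band, despreading one user leaves a growing residue of multiple access interference and the bit error rate degrades gradually rather than abruptly.

import numpy as np

rng = np.random.default_rng(1)
SF, N_BITS = 16, 2000          # spreading factor and bits per user (assumed)

def user0_ber(n_users):
    """BER seen by user 0 when n_users transmit at once in the same band."""
    codes = rng.choice([-1, 1], size=(n_users, SF))      # one signature per user
    bits = rng.choice([-1, 1], size=(n_users, N_BITS))
    # All users transmit simultaneously in the same band; their chips simply add.
    rx = sum(np.repeat(bits[u], SF) * np.tile(codes[u], N_BITS)
             for u in range(n_users))
    # Despread user 0: the wanted term has magnitude SF, the other users' terms
    # are non-zero cross-correlations, i.e. multiple access interference.
    decision = rx.reshape(N_BITS, SF) @ codes[0]
    return np.mean(np.where(decision >= 0, 1, -1) != bits[0])

for n in (2, 4, 8, 12):
    print(f"{n:2d} simultaneous users -> user-0 BER ~ {user0_ber(n):.3f}")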

The performance of a WCDMA based cellular network depends largely on radio resource management. Radio Resource Management (RRM) is a set of algorithms, located in the mobile terminal and the Node B, that control the usage of radio resources in order to maximize the overall system capacity, satisfy a predefined quality of service (QoS) requirement level for different users according to their traffic profiles, and provide optimum utilization of the system in the cellular network [8, 9, 11]. RRM functions are realized through what is known as resource allocation (RA), which determines how resources should be assigned by a Node B to a mobile subscriber. There are, however, factors that affect efficient resource allocation mechanisms, including, but not limited to, the error-prone wireless channel, limited bandwidth and the mobility of mobile subscribers [4, 12]. Therefore, in order to study effective resource management

algorithms, it is necessary to understand and define the conditions that limit the cellular capacity;

these conditions are related to the service characteristics (voice, video, or data), the propagation

channel variations, the power control operation, and the user mobility patterns. The basic RRM

algorithms can be classified as follows: Handoff and mobility management algorithms, call

admission control (CAC) algorithms, and power control algorithms [9, 10, 11]. The call admission

control mechanism is an important component of RRM as it affects the resource allocation

efficiency and quality of service guarantees provided to users [9, 11]. The focus of this research is

on the performance evaluation of dynamic priority call admission control algorithm for WCDMA

based 3G networks.


Limitations of Cellular Radio Systems

Wireless cellular networks are relatively complex systems compared to wired networks; hence there are several factors which make it difficult to provide quality of service guarantees. These include, but are not limited to, the following:

• Resources in cellular networks are very limited due to the limited radio frequency spectrum.

• There is a limit to the maximum number of channels.

• The number of available channels that can be assigned to each cell is restricted.

• Wireless channels are inherently unreliable and prone to bursty errors due to noise, multipath fading and interference.

• Users tend to move around during a communication session, causing handoffs between adjacent cells; reduction in cell size to accommodate more users in a given area makes it difficult to deal with mobility related problems [4, 9, 12].

1.1 Problem Statement

The universal mobile telecommunication system (UMTS) is required to support a wide range of

applications (multimedia traffic) each with its own specific QoS requirement. Each traffic class has

its own application level QoS requirements in terms of delay, jitter, bit-error-rate (BER),

throughput and burstiness as well as call level requirements in terms of call blocking probability

for new calls and call dropping probability for handoff calls. This requires the media access control

(MAC) protocols and call admission control (CAC) mechanism to respectively enable application

level and call level performance guarantees for the traffic classes. This study focuses on call

admission control (CAC) with call level QoS guarantees.


1.2 Aim and Objectives

The general goal of emerging wireless cellular networks is to enable communication with a person,

at any time, at any place, and in any form. However, due to the distinct characteristics of this

network and the aforementioned limitations, the quality of service requirements, network

throughput and performance are often compromised. This study focuses on how to maintain

service continuity with quality of service guarantees and provide service differentiation according to mobile users' traffic profiles by efficiently utilizing system resources. It uses an algorithm referred to as dynamic prioritized uplink call admission control (DP-CAC), an efficient tool that provides better performance for WCDMA based 3G networks and overcomes the shortcomings of complete partitioning based algorithms. The objectives of DP-CAC are outlined thus, with a simplified sketch of the admission decision given after the list:

• At low and moderate traffic intensity, it ensures the optimum system utilization while QoS

is satisfied.

• At high traffic intensity, it ensures fairness of resource usage amongst the different traffic classes while maintaining QoS requirements.

• Support preferential treatment of higher priority calls by serving their queues first.

• To ensure best system utilization and revenue while satisfying the required QoS and

fairness.

• To minimize call dropping/blocking probability of handoff and new calls.
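As a purely illustrative aid (this is a reading of the behaviour described above and in the abstract, not the thesis's MATLAB model; the channel counts and class names are assumptions), the sketch below captures the essential admission decision: every class may use the shared pool, handoff calls may additionally fall back to a reserved pool, and calls that cannot be admitted wait in a per-class queue from which higher priority classes are served first.

from collections import deque

SHARED_CAPACITY = 24       # channels in the general server (assumed figure)
RESERVED_CAPACITY = 6      # channels reserved for handoff traffic (assumed figure)
PRIORITY = ("handoff_rt", "handoff_nrt", "new_rt", "new_nrt")   # highest first

shared_in_use = 0
reserved_in_use = 0
queues = {cls: deque() for cls in PRIORITY}

def admit(call_class, call_id):
    """Admit a call if a suitable channel is free, otherwise queue it."""
    global shared_in_use, reserved_in_use
    if shared_in_use < SHARED_CAPACITY:                  # shared pool serves every class
        shared_in_use += 1
        return True
    if call_class.startswith("handoff") and reserved_in_use < RESERVED_CAPACITY:
        reserved_in_use += 1                             # dynamic switch to reserved pool
        return True
    queues[call_class].append(call_id)                   # blocked: wait, ordered by priority
    return False

def on_channel_release():
    """When a shared channel frees up, serve the highest priority queued call."""
    global shared_in_use
    for cls in PRIORITY:
        if queues[cls]:
            queues[cls].popleft()                        # this call reuses the channel
            return
    shared_in_use -= 1                                   # nothing waiting: channel goes idle

A fuller model would also handle reserved-channel release and per-class arrival and service rates; those details belong to the simulation described in Chapter Four.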


1.3 Scope of Study

This research work presents in detail the WCDMA 3G radio interface, its physical and protocol

architecture. It explains radio resource management and various resource allocation schemes, with

its focus on dynamic resource allocation strategies. The design considerations of call admission

control schemes and quality of service parameters are explored, with more attention given to the dynamic priority call admission control scheme. The system model and simulation results are also

analyzed and presented.

1.4 Methodology

The methodology below was adopted to efficiently realize the objectives of this research work:

A. Review of the Evolution of cellular networks to the third generation.

B. Detailed review of WCDMA radio interface technology for UMTS.

C. Review of Radio Resource Management and resource allocation schemes.

D. Review of existing call admission control schemes for WCDMA radio interface uplink.

E. Propose a call admission control scheme for WCDMA uplink radio interface following the

best admission schemes in review.

F. Develop an algorithm and a simulation model for the proposed admission scheme.

G. Simulate the model and obtain data using MATLAB.

H. Analyze Data using defined key performance indicators (KPI’s).

I. Computation of QoS parameters and other performance measures of the network using a

computational model.

J. Compare Performance of the proposed scheme with other existing schemes.


1.5 Significance of the Study

The significance of this study is based on the results from the simulation which indicate the

superiority of dynamic priority call admission control as it is able to achieve a better balance

between optimum system utilization, revenue, quality of service provisioning and fairness to all

traffic classes.

1.6 Dissertation Outline

The remaining part of this research work is organized as follows: chapter two is a detailed review of the evolution of cellular networks and radio resource management strategies; chapter three explains the proposed scheme and describes the system model, traffic model and performance measures; chapter four discusses the results obtained from the simulation process; and finally, the research work is concluded in chapter five.


CHAPTER TWO

LITERATURE REVIEW

2.0 Evolution of Cellular Network

In early networks, the emphasis was to provide radio coverage with little consideration for the

number of calls to be carried. As the subscriber base grew, the need to provide greater traffic

capacity had to be addressed. The cellular wireless generation (G) generally refers to a change in

the fundamental nature of the service, non-backwards compatible transmission technology, and

new frequency bands [1, 3]. New generations have appeared roughly every ten years, since the first move in 1981 from analog (1G) to digital (2G) networks. After that came (3G) multimedia support and spread spectrum transmission, and in 2011 all-IP switched networks (4G) emerged. The last few years have witnessed phenomenal growth in the wireless industry, both in terms of mobile technology and its subscribers [2]. There has been a clear shift from fixed to mobile cellular telephony, and new mobile generations do not merely aim to improve the voice communication experience; they try to give the user access to a new global communication reality, whose aim is to reach communication ubiquity (any time, anywhere) and to provide users with a new set of services [3, 5, 2].

2.1 First Generation (1G) Networks

The first generation of cellular networks consisted of analog transmission systems for both voice

and data. A set of wireless standards was developed in the 1980s, and these different standards were used in various countries. Advanced Mobile Phone System (AMPS), also known as IS-54, operates in the 800 MHz band, involves some 832 channels, and originated in the United States [1, 4, 5]. Total Access Communication System (TACS) operates in the 900 MHz band, offers 1,000 channels, and originated in the United Kingdom, while Japanese Total Access Communication System (JTACS) works in the 800 MHz to 900 MHz band and comes from Japan. The original variation of Nordic Mobile Telephone (NMT) was at 450 MHz, offering some 220 channels; it had a very large coverage area thanks to its operation at 450 MHz, but the power levels required were so high that mobile sets were incredibly heavy [1, 2, 5]. NMT originated in Denmark, Finland, Norway, and Sweden. Other 1G standards used in Europe include Germany and Austria's C-Netz, Sweden's Comvik, NMT-F (France's version of NMT900), and France's Radiocom 2000 (RC2000).

2.1.1 Physical Architecture

The analog cellular architecture comprises three components, namely the mobile unit, the base transceiver station and the mobile telephone switching office. The base transceiver station tower transmits signals

to and from the mobile unit and is connected to the mobile telephone switching office (MTSO)

through a microwave link or wire line [1, 2]. The MTSO interfaces into the terrestrial local

exchange to complete calls over the public switched telephone network (PSTN). When a mobile

unit is on, it emits two numbers consistently: the electronic identification number and the actual

phone number of the handset, which are picked up by the transceiver stations, and depending on

the signal level, they can determine whether the mobile unit is well within the cell or transitioning

out of that cell [1, 3, 5]. If the unit's power levels start to weaken and it appears that the unit is

leaving the cell, an alert is raised that queries the surrounding base transceiver stations to see

which one is picking up a strong signal coming in. As the unit crosses the cell perimeter, it is

handed over to an adjacent frequency in the incoming cell; this process is known as handover. The

mobile unit cannot stay on the same frequency in between adjacent cells because that would create

co-channel interference (i.e., interference between cells) [1, 5].
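A tiny decision sketch of this handover step (added for illustration only; the threshold value and measurement names are hypothetical and not taken from the cited 1G standards):

HANDOVER_THRESHOLD_DBM = -100      # assumed trigger level for a weakening link

def handover_target(serving_level_dbm, neighbour_levels_dbm):
    """Pick the neighbouring cell that hears the mobile most strongly,
    or return None if the serving cell is still good enough."""
    if serving_level_dbm >= HANDOVER_THRESHOLD_DBM or not neighbour_levels_dbm:
        return None
    best = max(neighbour_levels_dbm, key=neighbour_levels_dbm.get)
    return best if neighbour_levels_dbm[best] > serving_level_dbm else None

# Example: handover_target(-105, {"cell_B": -92, "cell_C": -99}) -> "cell_B"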


2.1.2 Technology

The access mode technology used in the first generation analog system is the Frequency Division

Multiple Access (FDMA). FDMA is a process of allowing mobile stations to share radio frequency

allocation by dividing up that allocation into separate radio channels where each radio device can

communicate on a single radio channel during communication [1, 5]. A frequency band can be

divided into several communication channels using frequency division multiplexing (FDM). When

a device is communicating on an FDM system using a frequency carrier signal, its carrier channel

is completely occupied by the transmission of the device. For some FDM systems, after it has

stopped transmitting, other transceivers may be assigned to that carrier channel frequency [5].

When this process of assigning channels is organized, it is called frequency division multiple

access (FDMA). Transceivers in an FDM system typically have the ability to tune to several

different carrier channel frequencies [1, 2, 5]. 1G systems are circuit switched networks.

2.1.3 Modulation

Modulation is the process of changing the amplitude, frequency, or phase of a radio frequency

carrier signal (a carrier) in accordance with the information signal (such as voice or data). Mobile

systems use analog or digital modulation, but the first generation systems use analog modulation.

Analog modulation is a process where the amplitude, frequency or phase of a carrier signal is

varied directly in proportion or in direct relationship to the information signal [1, 3]. A voice call

gets modulated to a higher frequency of about 150MHz and up as it is transmitted between radio

towers.


2.1.4 Protocol

Cellular digital packet data (CDPD) is the protocol that was used in 1G systems. It is defined as a

connectionless, multiprotocol network service that provides peer network wireless extension to the

Internet [3, 5]. CDPD is a packet data protocol designed to work over AMPS. It was envisioned as

a wide area mobile data network that could be deployed as an overlay to existing analog systems, and also as a common standard that would take advantage of unused bandwidth in the cellular airlink, that is, the wireless connection between the service provider and the mobile subscriber. Unused

bandwidth is a result of silence in conversations as well as the moments in time when a call is

undergoing handoff between cells. These periods of no activity can be used to carry data packets

and therefore take advantage of the unused bandwidth [1, 5].

CDPD provides mobile packet data connectivity to existing data networks and other cellular

systems without any additional bandwidth requirements. However, CDPD does not use the MSC

for traffic routing. The active users are connected through the mobile data base stations (MDBS) to

the Internet via intermediate systems (MD-IS) which act as servers and routers for the data user

[22].

Problems of 1G

• Roaming not supported.

• Different standards, frequencies and frequency spacing.

• Security – eavesdropping was commonplace.

• Difficult to expand.

• Limited capacity.

• Analogue systems – a larger than required amount of frequency had to be allocated to each call.

• Extremely slow data rates – only available during silent periods [1, 5].

2.2 Second Generation (2G) Network

The second generation (2G) of cellular technology was marked by a shift from analogue to digital systems. Shifting to digital networks had many advantages: firstly, transmission in the digital format aided clarity, since the digital signal was less likely to be affected by electrical noise; secondly, transmitting data over a digital network is much easier and data could also be compressed, saving a lot of time; and finally, with the development of new multiple access techniques, the capacity of the cellular network could be increased [3, 4, 5].

The second generation standard known as the Global System for Mobile Communications (GSM) was developed and published to provide a unified standard for cellular communication which would address the incompatibility and roaming problems of the analogue systems [1, 3, 5]. The GSM

standard introduced the subscriber identity module (SIM) card, which held information about the

user and provided memory to store phone numbers and text messages. It could be shifted from one

handset to another, allowing users to choose handsets according to their fancy without having to

bother about the cellular service provider [1, 4, 5].

2.2.1 Frequency of operation

GSM systems operate in the 900 MHz and 1800 MHz bands throughout the world, with the exception of the Americas where they operate in the 1900 MHz band [1, 5]. Cellular systems allow reuse of the same channel frequencies many times within a geographic coverage area. This technique, called frequency reuse, makes it possible for a system to provide service to more customers (its system capacity) by reusing the channels that are available in a geographic area [1, 5, 7]. Two frequency bands 45 MHz apart have been reserved for GSM operation: 890–915 MHz for transmission from the mobile station to the base station, that is, the uplink, and 935–960 MHz for transmission from the base station, that is, the downlink [5, 7]. Each of these bands of 25 MHz width is divided into 124 single-carrier channels of 200 kHz width, and in each of the uplink/downlink bands there remains a guard band of 200 kHz. Each Radio Frequency Channel (RFCH) is uniquely numbered, and a pair of channels with the same number forms a duplex channel with a duplex distance of 45 MHz [1, 5].
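The channel plan implied by these figures can be written down directly; the small helper below is an added illustration (standard primary GSM 900 numbering with channels 1–124 is assumed):

def gsm900_carriers_mhz(arfcn):
    """Map a primary GSM 900 channel number (1..124) to its carrier pair."""
    if not 1 <= arfcn <= 124:
        raise ValueError("primary GSM 900 uses channel numbers 1..124")
    uplink = 890.0 + 0.2 * arfcn        # MS -> BTS carrier, 200 kHz raster
    downlink = uplink + 45.0            # BTS -> MS carrier, 45 MHz duplex distance
    return uplink, downlink

print(gsm900_carriers_mhz(1))     # (890.2, 935.2)
print(gsm900_carriers_mhz(124))   # (914.8, 959.8)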

2.2.2 Technology

The second generation networks, being digital systems, combine the Frequency Division Multiple Access (FDMA) and Time Division Multiple Access (TDMA) technologies. Time division multiple

access (TDMA) is a process of sharing a single radio channel by dividing the channel into time

slots that are shared between simultaneous users of the radio channel [3, 7]. When a mobile radio

communicates with a TDMA system, it is assigned a specific time position on the radio channel.

By allowing several users to use different time positions (time slots) on a single radio channel,

TDMA systems increase their ability to serve multiple users with a limited number of radio

channels. For example, given a bandwidth that FDMA divides into 200 carrier frequency bands, TDMA will further divide each frequency band into eight (8) time slots, allowing eight subscribers to utilize a single frequency band on their respective time slots and thereby increasing the number of supported subscribers to 1600 for the entire allocation, as worked out below. 2G still utilizes circuit switched technology [1, 3, 5, 7].
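The arithmetic behind the figure of 1600 subscribers is simply the product of the two divisions (a trivial check added for illustration):

FDMA_CARRIERS = 200        # frequency bands in the example above
TDMA_SLOTS = 8             # time slots per carrier, as in GSM
print(FDMA_CARRIERS * TDMA_SLOTS, "channels for the entire allocation")   # -> 1600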

While GSM technology was developed in Europe, Code Division Multiple Access (CDMA)

technology was developed in North America. CDMA uses spread spectrum technology to break up

speech into small, digitized segments and encodes them to identify each call as well as


distinguishes between multiple transmissions carried simultaneously on a single wireless signal. It

carries the transmissions on that signal, freeing network room for the wireless carrier and

providing interference-free calls for the user [5, 7].

2.2.3 Modulation

The modulation technique used in second generation systems is digital modulation. Digital

modulation is a process where the amplitude, frequency or phase of a carrier signal is varied by the

discrete states (on and off) of a digital signal. Amplitude Shift Keying modulation turns the carrier

signal on and off with the digital signal [2, 4, 5]. Frequency Shift Keying modulation shifts the

frequency of the carrier signal according to the on and off levels of the digital information signal.

The phase shift modulator changes the phase of the carrier signal in accordance with the digital

information signal. The standard modulation scheme used in 2G cellular network is Gaussian

Minimum Shift Keying (GMSK). Gaussian Minimum Shift Keying (GMSK) has advantages of

being able to carry digital modulation while still using the spectrum efficiently. One of the

problems with other forms of phase shift keying is that the sidebands extend outwards from the

main carrier and these can cause interference to other radio communications systems using nearby

channels [2, 6]. In view of this efficient use of the spectrum, GMSK modulation has been used in a number of radio communications applications, and possibly its most widespread use is in GSM cellular technology.
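For illustration only (not the thesis's own material), the sketch below shows the essential GMSK steps: the NRZ bit stream is smoothed by a Gaussian filter and integrated into phase with a modulation index of 0.5, giving a constant-envelope signal; BT = 0.3 is the value used in GSM.

import numpy as np

def gmsk_baseband(bits, sps=8, bt=0.3, span=3):
    """Minimal GMSK modulator sketch: Gaussian-filter the NRZ data, integrate
    it into phase (+/- pi/2 per bit) and return the complex baseband signal."""
    nrz = np.repeat(2 * np.asarray(bits) - 1, sps).astype(float)   # {0,1} -> {-1,+1}
    t = np.arange(-span * sps, span * sps + 1) / sps               # filter taps, in symbols
    sigma = np.sqrt(np.log(2)) / (2 * np.pi * bt)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()                                                   # unit-area Gaussian taps
    freq = np.convolve(nrz, g, mode="same")                        # smoothed frequency pulse
    phase = (np.pi / 2) * np.cumsum(freq) / sps                    # modulation index h = 0.5
    return np.exp(1j * phase)

iq = gmsk_baseband([1, 0, 1, 1, 0, 0, 1, 0])
print(iq.shape, bool(np.allclose(np.abs(iq), 1.0)))   # constant envelope -> True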


Protocol

The CDPD protocol was also in use in the second generation network, as the data transmission method did not change in the shift to digital [1, 5, 22].

2G Services – text messages, picture messages, Fax

Improvements

• Supports roaming through its unified standard.

• Allows for much greater penetration intensity.

• Higher spectrum efficiency than analog systems.

• It holds sufficient security for both the sender and the receiver.

• All text messages are digitally encrypted. This digital encryption allows for the transfer of data in such a way that only the intended receiver can receive and read it.

• Provides voice clarity and reduces noise in the line.

• Digital signals are considered environment friendly.

• Digital encryption has provided secrecy and safety to the data and voice calls.

• Greater network capacity.

• Fewer dropped calls [2, 3, 5, 7].

Limitation

2G systems are still circuit switched networks like the 1G systems and have a slow data rate of 9.6 kbps, which is not suitable for web browsing and multimedia applications [2, 3, 5].

2.2.4 GSM System Physical Architecture

GSM 2G cellular networks have a hierarchical, complex system architecture comprising many entities, interfaces, and acronyms, and consist of three main subsystems: the radio subsystem (RSS), the network and switching subsystem (NSS), and the operation subsystem (OSS) [6, 17]. Each subsystem will be discussed in more detail in the following sections. Generally, a GSM customer only notices a very small fraction of the whole network – the mobile stations (MS) and some antenna masts of the base transceiver stations (BTS).

Figure 2.1: Functional architecture of a GSM system [17]

Radio subsystem: The radio subsystem (RSS) comprises all radio specific entities, i.e., the mobile stations (MS) and the base station subsystem (BSS). The RSS and the NSS are connected via the A

interface and the connection to the OSS via the O interface. The A interface is typically based on

circuit-switched Pulse Code Modulation (PCM) systems, whereas the O interface uses the

Signaling System No. 7 (SS7) based on X.25 carrying management data to/from the RSS [17].

Mobile station (MS): The MS comprises all user equipment and software needed for

communication with a GSM network [3, 4, 17]. An MS consists of user-independent hardware and software and of the subscriber identity module (SIM), which stores all user-specific data that is

relevant to GSM. While an MS can be identified via the international mobile equipment identity

(IMEI), a user can personalize any MS using his or her SIM; user-specific mechanisms like charging and authentication are based on the SIM, not on the device itself. Device-specific mechanisms, e.g., theft protection, use the device-specific IMEI [3, 4, 17]. Without the SIM, only

emergency calls are possible. It also contains many identifiers and tables, such as card-type, serial

number, a list of subscribed services, a personal identity number (PIN), a PIN unblocking key

(PUK), an authentication key, and the international mobile subscriber identity (IMSI). The PIN is used to unlock the MS, while entering the wrong PIN three times will lock the SIM; in such cases, the PUK is needed to unlock the SIM. The MS stores dynamic information while logged onto the GSM system, such as the cipher key and the location information consisting of a temporary mobile subscriber identity (TMSI) and the location area identification (LAI) [3, 6, 7, 17]. Apart from the telephone interface, an MS can also offer other types of interfaces to users, with a display, loudspeaker, microphone, and programmable soft keys; further interfaces comprise computer modems, Bluetooth, etc.

Base station subsystem (BSS): A GSM network comprises many BSSs, each controlled by a base

station controller (BSC). The BSS performs all functions necessary to maintain radio connections


to a mobile station (MS), coding/decoding of voice, and rate adaptation to/from the wireless

network part. Besides a BSC, the BSS contains several base transceiver stations (BTS) [3, 7, 17].

Base transceiver station (BTS): A BTS comprises all radio equipment, which are antennas, signal

processing transceivers, and amplifiers necessary for radio transmission. A BTS can form a radio

cell or, using sectorized antennas, several cells, and is connected to the mobile station via the Um

interface (ISDN U interface for mobile use), and to the BSC via the Abis interface. The Um

interface contains all the mechanisms necessary for wireless transmission (TDMA, FDMA), while

the Abis interface consists of 16 or 64 kbit/s connections. A GSM cell can measure between some

100 m and 35 km, depending on the environment and also on the expected traffic [7, 17].

Base station controller (BSC): The BSC basically manages the BTSs. It reserves radio

frequencies, handles the handoffs from one BTS to another within the BSS, and performs paging

of the MS. The BSC also multiplexes the radio channels onto the fixed network connections at the

A interface [4, 6, 17].

Network and switching subsystem: The core of the GSM system is formed by the network and

switching subsystem (NSS). The NSS connects the wireless network with standard public

networks, performs handovers between different BSSs. It comprises functions for worldwide

localization of users and supports charging, accounting, and roaming of users between different

providers in different countries [3, 17]. The NSS consists of the following switches and databases:

Mobile services switching center (MSC): MSCs are high-performance integrated services digital network (ISDN) switches that set up connections to other MSCs and to the BSCs

via the A interface. They form the fixed backbone network of a GSM system, and also manage

several BSCs in a geographical region [3, 6, 17]. An MSC handles all signaling needed for


connection setup, connection release and handover of connections to other MSCs. The standard

signaling system No. 7 (SS7) is used for this purpose. SS7 covers all aspects of control signaling

for digital networks, reliable routing and delivery of control messages, establishing and monitoring

of calls. An MSC also performs all functions needed for supplementary services such as call

forwarding, multi-party calls, reverse charging etc [3, 17].

A gateway MSC (GMSC) has additional connections to other fixed networks, such as PSTN and

ISDN. Using additional interworking functions (IWF), it can also connect to public data networks

(PDN) such as X.25 [4, 17].

Home location register (HLR): The HLR is the most important database in a GSM system as it

stores all user-relevant information and these user-specific information elements only exist once

for each user in a single HLR, which also supports charging and accounting. This comprises static

information, such as the mobile subscriber ISDN number (MSISDN), subscribed services (e.g.,

call forwarding, roaming restrictions, GPRS), and the international mobile subscriber identity

(IMSI) [3, 4, 17]. Dynamic information is also needed in the HLR, for instance the current location

area (LA) of the MS, the mobile subscriber roaming number (MSRN), and the current VLR and MSC. As soon as an MS leaves its current LA, the information in the HLR is updated; this information is necessary to localize a user in the worldwide GSM network [6, 17]. HLRs can manage data for several million customers and contain highly specialized databases which must

fulfill certain real-time requirements to answer requests within certain time-bounds.

Visitor location register (VLR): The VLR associated with each MSC is a dynamic database which

stores all important information needed for the MS users currently in the LA that is associated to

the MSC (e.g., IMSI, MSISDN, HLR address). If a new MS comes into an LA the VLR is

responsible for, it copies all relevant information for this user from the HLR. This hierarchy of

VLR and HLR avoids frequent HLR updates and long-distance signaling of user information; some VLRs in existence are capable of managing up to one million customers [3, 17].

Operation subsystem: The third part of a GSM system, the operation subsystem (OSS), contains

the necessary functions for network operation and maintenance. The OSS possesses network

entities of its own and accesses other entities via SS7 signaling [4, 17]. The following entities have

been defined:

Operation and maintenance center (OMC): The OMC monitors and controls all other network

entities via the O interface (SS7 with X.25). Typical OMC management functions are traffic

monitoring, status reports of network entities, subscriber and security management, and accounting

and billing. OMCs use the concept of telecommunication management network (TMN) as

standardized by the ITU-T [3, 6, 17].

Authentication centre (AuC): As the radio interface and mobile stations are particularly

vulnerable, a separate AuC has been defined to protect user identity and data transmission. The

AuC contains the algorithms for authentication as well as the keys for encryption and generates the

values needed for user authentication in the HLR. The AuC may be situated in a special protected

part of the HLR [3, 4, 17].

Equipment identity register (EIR): The EIR is a database for all IMEIs, i.e., it stores all device

identifications registered for this network. It has a blacklist of stolen (or locked) devices, contains a

list of valid IMEIs (white list), and a list of malfunctioning devices (gray list) [17, 44]. As MSs are

mobile, they can be easily stolen, and with a valid SIM anyone could use the stolen MS, so

theoretically an MS is useless as soon as the owner has reported a theft, but unfortunately, the


blacklists of different providers are not usually synchronized and the illegal use of a device in

another operator’s network is possible [3, 17, 44].

GSM Interfaces

The communication relationships between the GSM network components are formally described

by a number of standardized interfaces.

Figure 2.2: GSM interfaces [44]

The A interface between BSS and MSC is used for the transfer of data for BSS management, for

connection control and for mobility management.

Within the BSS, the Abis interface between BTS and BSC and the air interface Um have been

defined.

The B interface is used by an MSC which needs to obtain data about an MS staying in its

administrative area, to request the data from the VLR responsible for this area. Conversely, the

MSC forwards to this VLR any data generated at location updates by MSs [4, 6, 44].


If the subscriber re-configures special service features or activates supplementary services, the

VLR is also informed first, which then updates the HLR. This updating of the HLR occurs through

the D interface. The D interface is used for the exchange of location-dependent subscriber data and

for subscriber management. The VLR informs the HLR about the current location of the mobile

subscriber and reports the current MSRN [4, 6]. The HLR transfers all of the subscriber data to the

VLR that is needed to give the subscriber their usual customized service access. The HLR is also

responsible for giving a cancellation request for the subscriber data to the old VLR once the

acknowledgement for the location update arrives from the new VLR. If, during location updating,

the new VLR needs data from the old VLR, it is directly requested over the G interface.

Furthermore, the identity of the subscriber or equipment can be verified during a location update; for

requesting and checking the equipment identity, the MSC has an interface F to the EIR [6, 17, 44].

2.2.5 GSM Protocol Architecture

Figure 2.3: Protocol architecture for signaling in GSM [17]


The figure above shows the protocol architecture of the GSM network with signaling protocols,

interfaces, as well as the entities of the physical architecture, but the main interest lies in the Um

interface, as the other interfaces occur between entities in a fixed network.

Layer 1, the physical layer: This handles all radio-specific functions. This includes the creation

of bursts according to the required formats, multiplexing of bursts into a TDMA frame,

synchronization with the BTS, detection of idle channels, and measurement of the channel quality

on the downlink [6, 7, 17]. The physical layer at Um uses Gaussian Minimum Shift Keying (GMSK) for digital modulation and performs encryption/decryption of data, which means that encryption is not performed end-to-end, but only between the MS and the BSS over the air interface. Synchronization also includes the correction of the individual path delay between an MS and the BTS: all MSs within a cell use the same BTS and thus must be synchronized to it, so the BTS generates the time structure of frames and slots. A problematic aspect in this context is the different round trip times (RTTs) [7, 17]. An MS close to the BTS has a very short RTT, whereas an MS about 35 km away from the BTS already exhibits an RTT of about 40% of the total time available for each slot; without correction this would require large guard spaces, therefore the BTS sends the current RTT to the MS, which then adjusts its access time so that all bursts reach the BTS within their limits [6, 7, 17].
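To make the timing adjustment concrete, the sketch below estimates the round-trip delay for a given MS-BTS distance and converts it into a GSM timing advance value in bit periods (the GSM bit period of roughly 3.69 microseconds and the 0-63 range are standard GSM figures; the function names are illustrative only).

    C = 3.0e8                 # speed of light in m/s
    BIT_PERIOD = 48e-6 / 13   # GSM bit period, about 3.69 microseconds

    def round_trip_time(distance_m: float) -> float:
        """Two-way propagation delay for a given one-way MS-BTS distance."""
        return 2.0 * distance_m / C

    def timing_advance(distance_m: float) -> int:
        """Timing advance in bit periods, clipped to the GSM range 0..63."""
        ta = round(round_trip_time(distance_m) / BIT_PERIOD)
        return min(ta, 63)

    # An MS about 35 km from the BTS needs the maximum timing advance:
    print(timing_advance(35_000))   # -> 63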

The main tasks of the physical layer comprise channel coding and error detection/correction, which is directly combined with the coding mechanisms. Channel coding makes extensive use of different forward error correction (FEC) schemes; FEC adds redundancy to user data, allowing for the detection and correction of selected errors. The power of an FEC scheme depends on the amount of redundancy, the coding algorithm and the further interleaving of data to minimize the effects of burst errors [4, 7]. FEC is also the reason why error detection and correction occur in layer one and not in layer two as in the ISO/OSI reference model: the GSM physical layer tries to correct errors, but it does not deliver erroneous data to the higher layer [7, 17].
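The principle of trading redundancy for error-correction capability can be illustrated with the simplest possible FEC, a rate-1/3 repetition code with majority-vote decoding. This is only a toy sketch of the idea, not the convolutional and block coding actually used on GSM channels.

    # Toy rate-1/3 repetition code: redundancy allows one bit error per block to be corrected.
    def encode(bits):
        return [b for b in bits for _ in range(3)]          # repeat each bit three times

    def decode(coded):
        decoded = []
        for i in range(0, len(coded), 3):
            block = coded[i:i + 3]
            decoded.append(1 if sum(block) >= 2 else 0)     # majority vote
        return decoded

    data = [1, 0, 1, 1]
    coded = encode(data)
    coded[4] ^= 1                                           # simulate a single channel bit error
    assert decode(coded) == data                            # the error is corrected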

Layer 2, LAPDm: The link access procedure D-channel modified (LAPDm) protocol has been defined at the Um interface of layer two; it offers reliable data transfer over connections, re-sequencing of data frames, and flow control. As there is no buffering between layer one and layer two, LAPDm has to obey the frame structures and recurrence patterns defined for the Um interface. Further services provided by LAPDm include segmentation and reassembly of data and acknowledged/unacknowledged data transfer for basic signaling messages [3, 6, 7, 17].

Layer three, network layer: This comprises three sub-layers, namely radio resource management (RR), mobility management (MM) and call management (CM). Only a part of this layer is implemented in the BTS; the remainder is situated in the BSC, which supports its functions via the BTS management (BTSM) [6, 7, 8, 17].

Radio Resource Management (RR): The main tasks of RR are setup, maintenance, and release of

radio channels and dedicated connections; it also directly accesses the physical layer for radio

information, offers a reliable connection to the next higher layer, and provides the following functions:

- Channel allocation

- Handover

- Timing advance

- Power control

- Frequency hopping


Mobility management (MM): This contains functions for registration, authentication,

identification, location updating, and the provision of a temporary mobile subscriber identity

(TMSI) that replaces the international mobile subscriber identity (IMSI) and which hides the real

identity of an MS user over the air interface. While the IMSI identifies a user, the TMSI is valid

only in the current location area of a VLR. MM offers a reliable connection to the next higher layer

[7, 17].

Call management (CM): This layer contains three entities: call control (CC), short message service (SMS), and supplementary services (SS). SMS allows for message transfer using certain logical control channels, while SS offers user identification, call redirection, or forwarding of ongoing calls; features such as closed user groups and multiparty communication may also be available [6, 8, 17, 44]. Closed user groups are of special interest to companies because they allow, for example, a company-specific GSM sub-network to which only members of the group have access. CC provides a point-to-point connection between two terminals and is used by higher layers for call establishment, call clearing and change of call parameters [8]. This layer also provides functions to send in-band tones, called dual tone multi-frequency (DTMF) tones, over the GSM network; these are used for the remote control of answering machines or the entry of PINs in electronic banking, and are also used for dialing in traditional analog telephone systems. Such tones cannot be sent directly over the voice codec of a GSM MS, as the codec would distort them, so they are transferred as signals and then converted into tones in the fixed network part of the GSM system [6, 8, 17].

Signaling system No. 7 (SS7) is used for signaling between an MSC and a BSC. This protocol also

transfers all management information between MSCs, HLR, VLRs, AuC, EIR, and OMC. An

MSC can also control a BSS via a BSS application part (BSSAP) [4, 17].


2.3 High Speed Circuit Switched Data

The first phase of GSM specifications provided only basic transmission capabilities for the support

of data services, with the maximum data rate in these early networks being limited to 9.6 kbps on

one timeslot [3, 7]. HSCSD was the first 2.5G enhancement that clearly increased the achievable

data rates in the GSM system; the maximum radio interface bit rate of an HSCSD configuration

with 14.4-kbps channel coding is 115.2 kbps, which is up to eight times the bit rate on the single-

slot full-rate traffic channel (TCH/F) [3, 7]. Practically, the maximum data rate is limited to 64

kbps owing to core network and A-interface limitations. The main benefit of the HSCSD feature

compared to other data enhancements introduced later is that it is an inexpensive way to implement

higher data rates in GSM networks owing to relatively small incremental modifications needed for

the network equipment. Terminals, however, need to be upgraded to support multi-slot capabilities

[7, 17]. The basic HSCSD terminals with relatively simple implementation can receive up to four

and transmit up to two timeslots and thus support data rates above 50 kbps.
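The HSCSD figures quoted above follow directly from multiplying the per-slot rate by the number of allocated timeslots; the short sketch below simply reproduces that arithmetic with the values given in the text.

    # HSCSD radio-interface rate = per-slot rate x number of timeslots (values from the text).
    def hscsd_rate_kbps(slots: int, per_slot_kbps: float = 14.4) -> float:
        return slots * per_slot_kbps

    print(hscsd_rate_kbps(8))            # -> 115.2 kbps, the maximum radio-interface rate
    print(hscsd_rate_kbps(4))            # -> 57.6 kbps, a basic four-slot receive terminal
    print(min(hscsd_rate_kbps(8), 64))   # -> 64 kbps cap imposed by the A-interface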

Two types of HSCSD configuration exist at the radio interface: symmetric and asymmetric. For both types of configuration, the channels may be allocated on either consecutive

or non-consecutive timeslots, taking into account the restrictions defined by the mobile station’s

multi-slot classes [4, 6, 7]. A symmetric HSCSD configuration consists of a co-allocated bi-

directional TCH/F channel while an asymmetric HSCSD configuration consists of a co-allocated

unidirectional or bi-directional TCH/F channel. A bi-directional channel is a channel on which the

data are transferred in both uplink and downlink directions. On unidirectional channels for

HSCSD, the data is transferred in downlink direction only [7]. The same frequency-hopping

sequence and training sequence is used for all the channels in the HSCSD configuration. In


symmetric HSCSD configuration, individual signal level and quality reporting for each HSCSD

channel is applied. For an asymmetric HSCSD configuration, individual signal level and quality

reporting is used for those channels [6, 7, 17].

The quality measurements reported on the main channel are based on the worst quality measured

among the main and the unidirectional downlink timeslots used. In both symmetric and

asymmetric HSCSD configuration, the neighboring cell measurement reports are copied on every

uplink channel used. For n channels, HSCSD requires n times signaling during handover,

connection setup and release, and each channel is treated separately [7]. The probability of

blocking or service degradation increases during handover, as in this case a BSC has to check

resources for n channels, not just one. All in all, HSCSD has been an attractive interim solution for higher bandwidth and rather constant traffic, for example file download. However, it does not make much sense for bursty internet traffic as long as a user is charged for each channel allocated for communication [3, 6, 7]. HSCSD exhibits some major disadvantages because it still uses the connection-oriented mechanisms of GSM, which are not at all efficient for computer data traffic that is typically bursty and asymmetrical; while downloading a large file may require all channels to be reserved, typical web browsing would leave the channels idle most of the time, and allocating channels is reflected directly in the service costs because, once the channels have been reserved, other users cannot use them [7, 17].

2.4 Packet Digital Cellular Systems 2.5G

The circuit-switched bearer services were not particularly well suited for certain types of

applications with a bursty nature because a circuit-switched connection has a long access time to the network, and the call charging is based on the connection time [4, 7]. In packet-switched networks, the connections do not reserve resources permanently, but make use of on-demand allocation,

which is highly efficient, particularly for applications with a bursty nature. There was therefore a need for an upgrade to a more flexible and powerful data transmission scheme that avoids the problems of HSCSD, which led to the introduction of the general packet radio service (GPRS) standard and

wireless application protocol (WAP) [1, 7]. Wireless Application Protocol (WAP) defines how

Web pages and similar data can be passed over limited bandwidth wireless channels to small

screens being built into new mobile telephones. GPRS defines how to add IP support to the

existing GSM infrastructure as well as provides both a means to aggregate radio channels for

higher data bandwidth and the additional servers required to off-load packet traffic from existing

GSM circuits [1, 7].

The general packet radio service (GPRS) provides packet mode transfer for applications that

exhibit traffic patterns such as frequent transmission of small volumes (for example, web requests) or infrequent transmission of small or medium volumes (for example, typical web responses), according to the requirement specification [6, 7, 17]. Compared to existing data transfer services, GPRS uses the

existing network resources more efficiently for packet mode applications, and provides a selection

of QoS parameters for the service requesters; it also allows for broadcast, multicast, and unicast

services. The overall goal in this context is the provision of a more efficient and, thus, cheaper

packet transfer service for internet applications that usually rely solely on packet transfer [7, 17].

Network providers support this model by charging on volume and not on connection time as is

usual for traditional GSM data services and for HSCSD. The main benefit for users of GPRS is the

‘always on’ characteristic – no connection has to be set up prior to data transfer. Clearly, GPRS was driven by the tremendous success of the packet-oriented internet, and by the new traffic models and applications [1, 7]. For the new GPRS radio channels, the GSM system can allocate between one and eight time slots within a TDMA frame; the time slots are not allocated in a fixed, pre-determined manner but on demand. All time slots can be shared by the active users; uplink and downlink are allocated separately, and allocation of the slots is based on current load and operator preferences [7, 17].

Users of GPRS can specify a QoS profile which determines the service precedence (high, normal, low), the reliability class and delay class of the transmission, and the user data throughput; the network then adaptively allocates radio resources to fulfill these user specifications.
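Such a QoS profile can be pictured as a small record of the negotiated parameters. The sketch below is only an illustration of the idea; the field names follow the text, not any particular GPRS software interface, and the example values are invented.

    # Illustrative record of a GPRS QoS profile as described in the text (values are examples).
    from dataclasses import dataclass

    @dataclass
    class GprsQosProfile:
        precedence: str          # "high", "normal" or "low" service precedence
        reliability_class: int   # reliability class of the transmission
        delay_class: int         # delay class of the transmission
        peak_throughput_kbps: float
        mean_throughput_kbps: float

    profile = GprsQosProfile(precedence="normal", reliability_class=3,
                             delay_class=1, peak_throughput_kbps=64,
                             mean_throughput_kbps=8)
    print(profile)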

2.4.1 GPRS architecture

The GPRS architecture introduces two new network elements, which are called GPRS support

nodes (GSN) and are in fact routers. All GSNs are integrated into the standard GSM architecture,

and many new interfaces have been defined [3, 6, 17].

The gateway GPRS support node (GGSN): This is the interworking unit between the GPRS

network and external packet data networks (PDN). This node contains routing information for

GPRS users, performs address conversion, and tunnels data to a user via encapsulation. The GGSN

is connected to external networks (IP or X.25) via the Gi interface and transfers packets to the serving GSN via an IP-based GPRS backbone network (the Gn interface) [6, 7, 17].

Figure 2.4: GPRS architecture reference model [17]

Serving GPRS Support Node (SGSN): The other new element is the serving GPRS support node (SGSN), which supports the MS via the Gb interface. The SGSN provides a number of functions within the UMTS network architecture [6, 7, 17]:

• Mobility management: When a UE attaches to the Packet Switched domain of the UMTS Core Network, the SGSN generates MM information based on the mobile's current location.

• Session management: The SGSN manages the data sessions, providing the required quality of service and also managing what are termed the PDP (Packet Data Protocol) contexts, i.e. the pipes over which the data is sent.

• Interaction with other areas of the network: The SGSN is able to manage its elements within the network only by communicating with other areas of the network, e.g. the MSC and other circuit switched areas.

• Billing: The SGSN is also responsible for billing. It achieves this by monitoring the flow of user data across the GPRS network. CDRs (Call Detail Records) are generated by the SGSN before being transferred to the charging entities (Charging Gateway Function, CGF) [6, 7, 17].

The SGSN is connected to a BSC via frame relay and is basically on the same hierarchy level as an

MSC. The GR, which is typically a part of the HLR, stores all GPRS-relevant data. GGSNs and

SGSNs can be compared with home and foreign agents, respectively, in a mobile IP network.

Packet data is transmitted from a PDN via the GGSN and SGSN directly to the BSS and finally to the MS. Before sending any data over the GPRS network, an MS must attach to it, following the procedures of mobility management. The attachment procedure includes assigning a temporary identifier, called a temporary logical link identity (TLLI), and a ciphering key sequence number (CKSN) for data encryption [7, 8, 17]. For each MS, a GPRS context is set up and stored in the MS and in the corresponding SGSN. This context comprises the status of the MS (which can be ready, idle, or standby), the CKSN, a flag indicating whether compression is used, and routing data, which includes the TLLI, the routing area (RA), a cell identifier, and a packet data channel (PDCH) identifier. Besides attaching and detaching, mobility management also comprises functions for authentication, location management, and ciphering, which lies between MS and SGSN [8, 17]. In idle mode an MS is not reachable and all contexts are deleted, while in the standby state only movement across routing areas is reported to the SGSN, but not changes of cell, because permanent updating would waste battery power while no updating at all would require system-wide paging; the update procedure in standby mode is therefore a compromise. Only in the ready state is every movement of the MS indicated to the SGSN [7, 8, 17].
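The idle, standby and ready behaviour described above is essentially a small state machine; a minimal sketch is given below. The event names are invented for illustration and do not correspond to specific GPRS signaling messages.

    # Minimal sketch of the GPRS mobility management states described in the text.
    TRANSITIONS = {
        ("idle",    "attach"):        "ready",
        ("ready",   "ready_timer"):   "standby",   # no data sent for a while
        ("ready",   "detach"):        "idle",
        ("standby", "send_data"):     "ready",
        ("standby", "paging_answer"): "ready",
        ("standby", "detach"):        "idle",
    }

    def next_state(state: str, event: str) -> str:
        return TRANSITIONS.get((state, event), state)   # unknown events leave the state unchanged

    state = "idle"
    for event in ("attach", "ready_timer", "send_data", "detach"):
        state = next_state(state, event)
    print(state)   # -> idle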

2.4.2 GPRS Protocol Architecture

The protocol architecture of GPRS introduces new protocols on the transmission plane which were

not available in the protocol architecture of the GSM network. All data within the GPRS backbone, that is between the GSNs, is transferred using the GPRS tunneling protocol (GTP) [17]. GTP can use two different transport protocols, either the reliable transmission control protocol (TCP) needed for reliable transfer of X.25 packets or the non-reliable user datagram protocol (UDP) used for IP packets [7, 8]. The network protocol for the GPRS backbone is IP using any lower layers. To adapt to the different characteristics of the underlying networks, the sub-network dependent convergence protocol (SNDCP) is used between an SGSN and the MS. On top of SNDCP and GTP, user packet data is tunneled from the MS to the GGSN and vice versa. To achieve a high reliability of packet transfer between SGSN and MS, a special logical link control (LLC) is used, which comprises automatic repeat request (ARQ) and FEC mechanisms for PTP services [8, 17].

Figure 2.5: GPRS transmission plane protocol reference model [17]

A base station subsystem GPRS protocol (BSSGP) is used to convey routing and QoS-related information between the BSS and SGSN; BSSGP does not perform error correction and works on top of a frame relay (FR) network [3, 6, 7, 17]. Radio link dependent protocols are needed to transfer data over the Um interface; the radio link protocol (RLC) provides a reliable link, while

the MAC controls access with signaling procedures for the radio channel and the mapping of LLC

frames onto the GSM physical channels [3, 6, 8]. The radio interface at Um needed for GPRS does

not require fundamental changes compared to standard GSM; however, several new logical channels and their mapping onto physical resources have been defined. For example, one MS can be allocated up to eight packet data traffic channels (PDTCHs) [17]. Capacity can be allocated on demand and shared between circuit-switched channels and GPRS; this is done dynamically with load supervision, or alternatively capacity can be pre-allocated. A very important factor for any application working end-to-end is that it does not notice any details of the GSM/GPRS-related infrastructure: the application uses TCP on top of IP, and IP packets are tunneled to the GGSN, which forwards them into the PDN. All PDNs forward their packets for a GPRS user to the GGSN; the GGSN asks the current SGSN for tunnel parameters and forwards the packets via the SGSN to the MS [3, 6, 7, 8, 17]. All MSs are assigned private IP addresses, which are then translated into global addresses at the GGSN. The advantage of this approach is the inherent protection of MSs from unsolicited traffic, which matters because the subscriber typically has to pay for traffic even if it originates from an attack [7, 17].

2.5 Enhanced data rates for GSM evolution (EDGE)

Enhanced data rates for GSM evolution (EDGE) is a major enhancement to the GSM data rates.

GSM networks have already offered advanced data services, like the circuit-switched 9.6-kbps data service and SMS, for some time [6, 7]. High-speed circuit-switched data (HSCSD), with multi-slot

capability and the simultaneous introduction of 14.4-kbps per timeslot data, and GPRS are both

major improvements, increasing the available data rates from 9.6 kbps up to 64 kbps (HSCSD) and

160 kbps (GPRS). EDGE is specified in a way that will enhance the throughput per timeslot for


both HSCSD and GPRS [3, 7]. The enhancement of HSCSD is called ECSD (enhanced circuit

switched data), whereas the enhancement of GPRS is called EGPRS (enhanced general packet

radio service). In ECSD, the maximum data rate will not increase from 64 kbps because of the

restrictions in the A-interface, but the data rate per timeslot will triple. Similarly, in EGPRS, the

data rate per timeslot will triple and the peak throughput, with all eight timeslots in the radio

interface, will reach 473 kbps [7, 17].

The enhancement behind tripling the data rates is the introduction of the 8-PSK (octagonal phase

shift keying) modulation in addition to the existing Gaussian minimum shift keying. An 8-PSK

signal is able to carry 3 bits per modulated symbol over the radio path, while a GMSK signal

carries only 1 bit per symbol [6, 7, 17]. The carrier symbol rate of standard GSM is kept the same

for 8-PSK, and the same pulse shape as used in GMSK is applied to 8-PSK. The increase in data

throughput does not come for free, the price being paid in the decreased sensitivity of the 8-PSK

signal. This affects the radio network planning, and the highest data rates can only be provided

with limited coverage. The GMSK spectrum mask was the starting point for the spectrum mask of

the 8-PSK signal, but during the standardization process the 8-PSK spectrum mask was relaxed by a few dB at the 400 kHz offset from the centre frequency. This was found to be a good

compromise between the linearity requirements of the 8-PSK signal and the overall radio network

performance [3, 7, 17].
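The tripling quoted above follows from the change of modulation alone: 8-PSK carries three bits per symbol where GMSK carries one, at the same symbol rate. The back-of-envelope sketch below reproduces the figures from the text; the exact per-slot rates additionally depend on the coding scheme used.

    # Back-of-envelope check of the EDGE figures quoted in the text (illustrative only).
    GMSK_BITS_PER_SYMBOL = 1
    PSK8_BITS_PER_SYMBOL = 3                        # 8-PSK carries 3 bits per modulated symbol
    modulation_gain = PSK8_BITS_PER_SYMBOL / GMSK_BITS_PER_SYMBOL   # -> 3.0, the "tripling"

    gprs_per_slot_kbps = 160 / 8                    # 160 kbps GPRS peak spread over eight slots
    egprs_per_slot_kbps = gprs_per_slot_kbps * modulation_gain
    print(round(egprs_per_slot_kbps * 8))           # -> 480, close to the quoted 473 kbps peak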

2.6 Third Generation Cellular Network (3G)

3G mobile communications systems arose as a response to the challenge of developing systems

that increased the capacity of the existing 2G systems [4, 5, 10]. This required that the

infrastructure be designed so that it can evolve as technology changes, without compromising the


existing services on the existing networks. Separation of access technology, transport technology,

service technology and user application from each other make this demanding requirement

possible [4, 5, 15].

The decision to base 3G specifications on GSM was motivated by widespread deployment of

networks based on GSM standards, the need to preserve some backward compatibility, and the

desire to utilize the large investments made in the GSM networks [10, 14, 15]. As a result, despite

its many added capabilities, the 3G core network bears significant resemblance to the GSM

network. 3G is designed to raise the data rate to 2 megabits per second (2 Mbps) – a much higher

rate than 2G and 2.5G. Specifically, 3G systems offer between 144 Kbps and 384 Kbps for high-

mobility and high coverage, and 2 Mbps for low-mobility and low coverage applications [1, 3, 10,

15]. In other words, 3G systems mandate data rates of 144 Kbps at driving speeds, 384 Kbps for outside stationary use or walking speeds, and 2 Mbps indoors. These rates support wireless web-based access, e-mail, video teleconferencing, multimedia services consisting of mixed voice and data streams, and high-speed internet access over very wide geographical areas. The frequencies allocated to 3G networks are 1885-2025 MHz for the first band and 2110-2200 MHz for the second band [1, 2, 15]. However, the indoor rate of 2 Mbps from 3G competes with high-speed 802.11 wireless LANs that offer data rates of 11 to 54 Mbps [4, 15].

The best known example of 3G is the Universal Mobile Telecommunications System (UMTS) – an

acronym used to describe a 3G system that originated in Europe with the overall idea that its users

will be able to use 3G technologies all over the world under different banners. This roaming ability

to use devices on different networks is made possible by satellite and land based networks [2, 10,

15]. UMTS provides a consistent service environment even when roaming via “Virtual Home

Environment” (VHE), a person roaming from his network to other UMTS operators experiences a


consistent set of services, independent of the location or access mode. 3G networks use a

connectionless packet-switched communications mechanism where data is split into packets to

which an address uniquely identifying the destination is appended [14, 15]. This mode of

transmission, in which communication is broken into packets, allows the same data path to be

shared among many users in the network, by breaking data into smaller packets that travel in

parallel on different channels, the data rate can be increased significantly [13, 15].

2.6.1 UMTS Radio Interface

The major objectives and requirements of the UMTS network, which include support of general quality of service, support of multimedia services and support of 2 Mb/s, made the reuse of the

existing technology of the 2G network in the context of 3G networks very difficult [8, 12]. Hence

the need for the development of an entirely new radio interface for the UMTS networks which led

to the proposal and adoption of wideband code division multiple access (WCDMA) by the third

generation partnership project (3GPP) [4, 8, 12]. WCDMA is a wideband direct sequence code division multiple access (DS-CDMA) technology, proposed as the multiple access technology in the FDD mode of the UMTS terrestrial radio access network (UTRAN) system. In comparison with the general DS-CDMA systems that have been deployed in second generation systems, WCDMA is characterized by a wide bandwidth of 5 MHz and a constant high chip rate of 3.84 Mcps, and the modulation scheme adopted is quadrature phase shift keying (QPSK) [8, 21]. The wide bandwidth is chosen because it can provide the high data rates required for 3G networks under good conditions, and it also enables better handoff mechanisms, such as soft handoff for circuit-switched bearer channels, while the wide bandwidth of the spread spectrum system resolves more multipath components and thus improves system performance [2, 8, 12, 21].
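One standard way to quantify the benefit of the wide bandwidth is the processing gain, the ratio of the 3.84 Mcps chip rate to the user bit rate. The sketch below computes it for a speech bearer and two data bearers; the relation is textbook WCDMA, and the bit rates are simply the service rates mentioned in this chapter.

    import math

    CHIP_RATE = 3.84e6   # WCDMA chip rate in chips per second

    def processing_gain_db(bit_rate_bps: float) -> float:
        """Processing gain G_p = 10*log10(chip rate / user bit rate)."""
        return 10 * math.log10(CHIP_RATE / bit_rate_bps)

    for rate in (12_200, 144_000, 384_000):   # AMR speech and two packet data rates
        print(f"{rate / 1000:7.1f} kbps -> {processing_gain_db(rate):4.1f} dB")
    # 12.2 kbps speech enjoys about 25 dB of spreading gain, 384 kbps only about 10 dB.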


2.6.2 UMTS Architecture

The basic structure of the UMTS system is split into three main components: the core network

(CN); the UMTS terrestrial radio access network (UTRAN); and the user equipment (UE). The protocols of the system are further separated into the access stratum (AS) and the non-access stratum (NAS). The access stratum

carries all of the signaling and user data messages that relate to the access technology used across a

specific interface in that part of the system [8, 12, 21]. Across the radio interface, the access

stratum protocols are the lower level protocols between the UE and the UTRAN, and between the

UTRAN and the CN. Examples of the types of signaling messages that are carried via the access

stratum are messages that control the power control loops in the system, that control the handover

procedures or that allocate channels to a user for use, for instance in a speech call [4, 8, 12]. The

non access stratum carries the signaling messages and user data messages that are independent of

the underlying access mechanism; these signaling and user data messages are passed between the UE and the CN. An example of an NAS signaling message is one associated with a call setup request and

management functions, where the call setup messages are independent of the underlying access

mechanism [8, 12, 21].

Figure 2.6: The UMTS physical architecture [8]

The UE consists of two logical entities: the Mobile Equipment (ME) is the actual radio terminal used for radio communication over the Uu interface, whereas the UMTS Subscriber Identity Module (USIM) is a smartcard that contains subscriber identity information and performs authentication algorithms. The interface between the USIM and ME is called the Cu interface.

The UTRAN consists of two logical entities and interfaces between them [12, 21]. The base stations, which are also called Node Bs, convert the data stream from the Uu interface to the Iub interface. In the first release of the UMTS standard, the only radio resource management related task of the Node B was the inner loop power control, but some of the latest features of WCDMA have introduced several new functionalities for the Node B to handle. These include, for example, packet scheduling, resource allocation, congestion control and retransmission handling in some cases [8, 17, 21]. The Radio Network Controller (RNC) is a network element that owns and controls radio resources in its domain, i.e. the base stations connected to it. In the core network, the main elements are the Home Location Register (HLR), Mobile Services Switching Centre / Visitor Location Register (MSC/VLR), Gateway MSC (GMSC), Serving GPRS Support Node (SGSN) and Gateway GPRS Support Node (GGSN) [8, 12, 21].

UMTS Network Domain

Figure 2.7: UMTS network domain [17]

User Equipment Domain

The User Equipment domain consists of the terminal that allows the user access to the mobile services through the radio interface, and it is further split into two sub-domains:

Mobile Equipment (ME) domain: This represents the physical entity, being a handset that in turn is sub-divided into the Mobile Termination (MT) entity, which performs the radio transmission and reception, and the Terminal Equipment (TE), which contains the applications [11, 17]. These two entities may be physically located at the same hardware device depending on the specific application; for example, in the case of a handset used for a speech application, both MT and TE are usually located in the handset, while if the same handset is being used for a web browsing application, the handset will contain the MT and the TE can reside in an external device that contains the web browser [11, 17, 21].

Universal Subscriber Identity Module (USIM) domain: The physical hardware device containing the USIM is a removable smart card. The USIM contains the identification of the

profile of a given user, including his identity in the network as well as information about the

services that this user is allowed to access depending on the contractual relationship with the

mobile network operator. So, the USIM is specific for each user and allows him to access the

contracted services in a secure way by means of authentication and encryption procedures

regardless of the ME that is used [8, 11, 12 , 21]. The USIM card contains all the data relating to

the subscriber, including the following:

• The International Mobile Station Identifier (IMSI);

• The Mobile Station International ISDN Number (MSISDN);

• The preferred language, used for broadcast information and for terminal menu options;

• The encryption and integrity keys for the circuit switched and packet switched domains.

• The list of forbidden networks;

• The user's temporary identities vis-a-vis the circuit switched and packet switched domains;

• The identities of the current location area and routing area of the mobile for the circuit

switched and packet switched domains respectively [2, 12, 17].

Infrastructure Domain

The infrastructure domain in the UMTS architecture contains the physical nodes that terminate the

radio interface allowing the provision of the end-to-end service to the UE. In order to separate the

functionalities that are dependent on the radio access technology being used from those that are

independent, the infrastructure domain is in turn split into two domains, namely the Access

Network and the Core Network domains separated by the Iu reference point [11, 17]. This allows

there to be a generic UMTS architecture that enables the combination of different approaches for

the radio access technology as well as different approaches for the core network. With respect to

the core network, and in order to take into account different scenarios in which the user


communicates with users in other types of networks, that is, other mobile networks, fixed

networks, Internet, etc., three different sub-domains are defined:

Home Network domain: This corresponds to the network to which the user is subscribed, so it

belongs to the operator that has the contractual relationship with the user. The user service profile

as well as the user secure identification parameters are kept in the home network and should be

coordinated with those included in the USIM at the UE [11, 17].

Serving Network domain: This represents the network containing the access network to which

the user is connected in a given moment and it is responsible for transporting the user data from

the source to the destination [11, 12, 17]. Physically, it can be either the same home network or a

different network in the case where the user is roaming with another network operator. The serving

network is then connected to the access network through the Iu reference point and to the home

network through the Zu reference point. The interconnection with the home network is necessary

in order to retrieve specific information about the user service abilities and for billing purposes [11,

17].

Transit Network domain: This is the core network part located on the communication path,

between the serving network and the remote party, and it is connected to the serving network

through the Yu reference point [4, 11]. Where the remote party belongs to the same network to

which the user is connected, the serving network and the transit network are physically the same

network. In general, the transit network may not be a UMTS network, for example, in the

case of a connection with a fixed network or when accessing the Internet [11, 17].


2.6.3 Universal Terrestrial Radio Access Network (UTRAN)

The UTRAN is composed of Radio Network Subsystems (RNSs) that are connected to the Core

Network through the Iu interface that coincides with the Iu reference point of the overall UMTS

architecture. Each RNS is responsible for the transmission and reception over a set of UMTS cells

where the connection between the RNS and the UE is done through the Uu or radio interface [8,

11, 12, 17]. The RNSs comprises a number of Nodes B and one Radio Network Controller (RNC),

connected through Iub interfaces and RNCs belonging to different RNSs are interconnected by

means of the Iur interface.

Main Requirements for UTRAN:

• The major impact on the design of UTRAN has been the requirement to support soft handover (one terminal connected to the network via two or more active cells) and the WCDMA-specific Radio Resource Management algorithms.

• The maximization of the commonalities in the handling of packet-switched and circuit-

switched data, with a unique air interface protocol stack and with the use of the same

interface for the connection from UTRAN to both the PS and CS domains of the core

network.

• The maximization of the commonalities with GSM networks, when possible.

• Use of the ATM transport as the main transport mechanism in UTRAN [7, 17, 21]


Figure 2.8: UTRAN architecture [11]

UTRAN Frequency Division Duplex mode: In this mode, the uplink and downlink transmit with

different carrier frequencies, thus requiring the allocation of paired bands. The access technique

being used is WCDMA, which means that several transmissions in the same frequency and time

are supported and can be distinguished by using different code sequences [4, 7, 11].

UTRAN Time Division Duplex mode: The uplink and downlink operate with the same carrier

frequency in this mode but in different time instants, thus they are able to use unpaired bands. The

access technique being used is a combination of TDMA and DS-CDMA, which means that

simultaneous transmissions are distinguished by different code sequences (DS-CDMA component)

and that a frame structure is defined to allocate different transmission instants (time slots) to the

different users (TDMA component) [11].


2.6.3.1 Node B

The UTRAN Node B is equivalent to the BTS in GSM networks. Its main role is to provide radio

reception and transmission for one or more of the UTRAN cells. The technical implementation and

the internal architecture of the Node B are left to the manufacturer and thus, one can conceive of

Node Bs made up of one or several cells, using omni-directional or sectorial antennae [2, 7, 11, 17,

22]. A Node B is the termination point between the air interface and the network, and it is composed of one or several cells or sectors; a cell stands as the smallest radio network entity that has its own identification number, denoted as the Cell ID. Conceptually, a cell is regarded as a UTRAN Access Point through which radio links with the UEs are established; from a functional point of view, the cell executes the physical transmission and reception procedures over the radio interface [11, 17, 22]. The Node B controls the data flow between the Uu and Iub interfaces: it performs the air interface Layer 1 processing, such as channel coding and interleaving, rate adaptation and spreading; it extracts the MAC protocol data units and transports them across the Iub interface to the RNC; and it also participates in radio resource management operations such as the inner loop power control [12, 22].

2.6.3.2 The Radio Network Controller

The Radio Network Controller (RNC) is the network element responsible for the control of the

radio resources of UTRAN where the UMTS Radio Resource Management (RRM) algorithms are

executed. On the network side, the RNC interoperates with the core network through the Iu

interface and establishes, maintains and releases the connections with the core network elements

that the UEs under its control require in order to receive the UMTS services, it also terminates the

Radio Resource Control (RRC) protocol that defines the messages and procedures between the

mobile and UTRAN. It logically corresponds to the GSM BSC [11, 12, 22].


Functions of the RNC

● Call admission control: It is very important for WCDMA systems to keep the interference below a certain level. The RNC calculates the traffic within each cell and decides whether additional transmissions are acceptable or not (see the sketch after this list).

● Congestion control: During packet-oriented data transmission, several stations share the

available radio resources. The RNC allocates bandwidth to each station in a cyclic fashion and

must consider the QoS requirements [12, 17].

● Encryption/decryption: The RNC encrypts all data arriving from the fixed network before

transmission over the wireless link and vice versa.

● ATM switching and multiplexing, protocol conversion: Typically, the connections between

RNCs, node Bs, and the core network are based on ATM. An RNC has to switch the connections

to multiplex different data streams.

● Radio resource control: The RNC controls all radio resources of the cells connected to it via a

node B. This task includes interference and load measurements. The priorities of different

connections have to be obeyed [11, 12, 17].

● Radio bearer setup and release: An RNC has to set-up, maintain, and release a logical data

connection to a UE (the so-called UMTS radio bearer).

● Code allocation: The WCDMA codes used by a UE are selected by the RNC. These codes may

vary during a transmission.

● Power control: The RNC only performs the relatively loose power control of the outer loop. This means that the RNC influences transmission power based on interference values from other cells or even other RNCs. It is not the tight and fast power control performed 1,500 times per second, which is carried out by a Node B. This outer loop of power control helps to minimize interference between neighboring cells and controls the size of a cell [11, 12, 17].

● Handover control and RNS relocation: Depending on the signal strengths received by UEs and

node Bs, an RNC can decide if another cell would be better suited for a certain connection. If the

RNC decides on a handover, it informs the new cell and the UE. If a

UE moves further out of the range of one RNC, a new RNC responsible for the UE has to be

chosen. This is called RNS relocation.

● Management: The network operator needs a lot of information regarding the current load,

current traffic, error states etc. to manage its network. The RNC provides interfaces for this task as

well [12, 17].
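Because interference-based admission control is the subject of this work, a minimal sketch of the idea behind the first function above is given here: a candidate call is admitted only if the estimated uplink load after admission stays below a planned threshold. The per-connection load increment uses the standard WCDMA uplink load-factor expression; the threshold and the service parameters in the example are illustrative values only, not parameters or results of this thesis.

    CHIP_RATE = 3.84e6   # W, WCDMA chip rate in chips per second

    def load_increment(bit_rate_bps, ebno_db, activity):
        """Uplink load-factor increment of one connection:
           L = 1 / (1 + W / ((Eb/No) * R * v)), the standard WCDMA relation."""
        ebno = 10 ** (ebno_db / 10)
        return 1.0 / (1.0 + CHIP_RATE / (ebno * bit_rate_bps * activity))

    def admit(current_load, bit_rate_bps, ebno_db, activity, threshold=0.75):
        """Admit the new call only if the predicted total load stays below the threshold."""
        return current_load + load_increment(bit_rate_bps, ebno_db, activity) <= threshold

    # Illustrative numbers: a 12.2 kbps voice call, Eb/No = 5 dB, 50% voice activity.
    print(round(load_increment(12_200, 5.0, 0.5), 4))   # small per-call load increment
    print(admit(0.70, 12_200, 5.0, 0.5))                # True: head-room remains
    print(admit(0.70, 384_000, 1.0, 1.0))               # False: a 384 kbps call is blocked here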

Logical Role of the RNC

In case one mobile-to-UTRAN connection uses resources from more than one RNS, the RNCs involved take two separate logical roles, although one RNC normally contains all of the functionality.

Controlling RNC: This is the role with respect to the Node B; the RNC controlling one Node B is

indicated as the Controlling RNC (CRNC) of the Node B which is responsible for the load and

congestion control of its own cells, and also executes the admission control and code allocation for

new radio links to be established in those cells [4, 11, 12].

Serving RNC: This role is taken with respect to the UE; the SRNC is the RNC that holds the

connection of a given UE with the CN through the Iu interface. It can be regarded as the RNC that

controls the RNS to which the mobile is connected at a given moment [8, 11, 12, 17]. When the

UE moves across the network and executes handover between different cells, it may require an SRNS (that is, the RNS having the SRNC) relocation procedure when the new cell belongs to a different RNC. This procedure requires communication between the SRNC and the new RNC


through the Iur interface in order for the new RNC to establish a new connection with the CN over

its Iu interface [8, 11, 17]. The SRNC also terminates the Radio Resource Control Signaling, that

is, the signaling protocol between the UE and UTRAN; it performs the Layer2 processing of the

data to and from the radio interface. Basic Radio Resource Management operations, such as the

mapping of Radio Access Bearer (RAB) parameters into air interface transport channel parameters,

the handover decision, and outer loop power control, are executed in the SRNC. The SRNC may also, but not always, be the CRNC of some Node B used by the mobile for connection with UTRAN

[12, 22].

Drift RNC: This role is also taken with respect to the UE and is a consequence of a specific type

of handover that exists with WCDMA systems, denoted as soft handover. In this case, a UE can be

simultaneously connected to several cells, that is, it can have radio links with several cells. When the UE moves at the border between RNSs, it is possible that it establishes new radio links with cells belonging to a new RNC while at the same time keeping the radio link with some cells of the SRNC [8, 11, 12, 22]. The new RNC takes the role of DRNC, and the connectivity with the core network is not provided through the Iu of the DRNC but still through the Iu of the SRNC, thus requiring resources for the UE to be established on the Iur interface between SRNC and DRNC. Only when all the radio links of the old RNC are released and the UE is connected only to the new RNC will the SRNS relocation procedure be executed [11, 12].


Figure 2.9: Logical role of RNC [2]

2.6.4 UMTS Core Network

While the UMTS radio interface, WCDMA, represents a big step in the radio access evolution from GSM networks, the UMTS core network did not experience major changes; both the UTRAN and the GPRS/EDGE (GERAN) based radio access networks connect to the same core network [2, 8, 12]. The UMTS core network has two domains, a Circuit Switched (CS) domain and a Packet Switched (PS) domain, to cover the need for different traffic types. The division comes from the different requirements on the data, depending on whether it is real time (circuit switched) or non-real time (packet data). However, it should be understood that several functionalities can be implemented in a single physical entity, and all entities do not necessarily exist as separate physical units in a real network [2, 12].

The Core Network is the part of the mobile network infrastructure that covers all the functionalities

that are not directly related to the radio access technology; thus it is possible to combine

different core network architectures with different radio access networks. Examples of these

functionalities are the connection and session management, which includes establishment,


maintenance and release of the connections and sessions for circuit switched and packet switched

services, as well as mobility management which includes keeping track of the area where each UE

can be found in order to route calls to it [2, 8, 11, 12]. The initial implementation of UMTS was

seen simply as an extension of the GSM/GPRS networks, because it maintained the existing GSM/GPRS core network with small modifications in order to make it compatible with the new UMTS access network; the core network is now, however, trending towards an all-IP architecture [8, 11].

Figure 2.10: UMTS core network architecture [2]

Circuit Switched domain: The circuit switched domain supports the traffic composed by

connections that require dedicated network resources, and allows the interconnection with external

CS networks like the Public Switched Telephone Network (PSTN) or the Integrated Services

Digital Network (ISDN); the Iu reference point between core and access networks for this domain is denoted as Iu_CS [2, 11, 12]. The circuit switched domain is composed of three specific entities,

namely the MSC, the GMSC and the VLR. The MSC interacts with the radio access network by

means of the Iu_CS interface and executes the necessary operations to handle circuit switched

services. This includes routing the calls towards the corresponding transit network and establishing

the corresponding circuits in the path [2, 4, 11].

The MSC is the same as that used in the GSM network, the only difference being that a

specific interworking function (IWF) is required between the MSC and the access network in

UMTS. The reason is that in GSM the speech traffic delivered to the core network by the access

network uses 64kb/s circuits while in UMTS the speech uses adaptive multi-rate technique (AMR)

with bit rates between 4.75kb/s and 12.2kb/s. These are transported in the access network with

Asynchronous Transfer Mode (ATM) technology [2, 8, 11]. This is why the term 3G MSC is

sometimes used to differentiate between the MSC from GSM system and the MSC from UMTS

networks.

The VLR is a database associated with an MSC that contains specific information, like identifiers, location information, etc., about the users that are currently in the area of this MSC. This allows certain operations to be performed without the need to interact with the HLR. The information

contained in the VLR and the HLR must be coordinated [3, 7, 11]. The GMSC is a specific MSC

that interfaces with the external circuit switched networks and is responsible for routing calls to and from the external network. To this end, it interacts with the HLR to determine the MSC through

which the call should be routed. In WCDMA, the communication between the entities of the

circuit switched domain is done by means of 64kb/s circuits and uses Signaling System No. 7

(SS7) for signaling purposes [2, 3, 8, 11].


Packet Switched domain: The PS domain supports traffic composed of packets, which are

groups of bits that are autonomously transmitted and independently routed. No dedicated resources

are required throughout the connection time, since the resources are allocated on a packet basis

only when needed [2, 11, 22]. This allows a group of packet flows to share the network resources

based on traffic multiplexing and also allows the interconnection of external PS networks, like the

Internet. The Iu reference point between core and access networks in this interface is denoted as

Iu_PS. The PS domain is composed of two specific entities, namely the SGSN and GGSN, which

perform the necessary functions to handle packet transmission to and from the UEs. The SGSN is

the node that serves the UE and establishes a mobility management context including security and

mobility information. It interacts with the UTRAN by means of the Iu_PS interface. The GGSN, in

turn, interfaces with the external data networks and contains routing information of the attached

users. IP tunnels between the GGSN and the SGSN are used to transmit the data packets of the

different users [2, 8, 11, 12, 22].

2.6.5 UMTS Interfaces

Cu interface: This is the electrical interface between the USIM smartcard and the ME. The

interface follows a standard format for smartcards.

Uu interface: This is the WCDMA radio interface, through which the UE accesses the fixed part

of the system, and is therefore probably the most important open interface in UMTS.

Iu interface: This connects UTRAN to the core network, similarly to the corresponding interfaces

in GSM.

Iur interface: The open Iur interface allows soft handover between RNCs from different

manufacturers, and therefore complements the open Iu interface.


Iub interface: The Iub connects a Node B and an RNC [8, 12, 22].

2.6.6 UMTS Radio Interface Protocol Architecture

The protocol architecture of the UMTS network that exists across the Uu interface between the UE

and the radio access network is separated into a control plane (on the left of Figure 2.11), which is responsible for the transmission of the control signaling messages, and a user plane (on the right), which is responsible for the transmission of the user data messages such as speech and packet data [4, 8, 11].

The radio interface is composed of Layers 1, 2 and 3. The lowest layer, Layer 1, is the physical layer, which is based on WCDMA technology. Layer 2 is split into four sub-layers: Medium Access Control (MAC), Radio Link Control (RLC), Packet Data Convergence Protocol (PDCP) and Broadcast/Multicast Control (BMC). Furthermore, Layer 3 is divided into a control plane and a user plane, together with parts of Layer 2 [8, 11, 12].

Figure 2.11: UMTS radio interface protocol architecture [12]


2.6.6.1 Layer 1

In the protocol model this layer takes care of the actual transmission of data across the radio path,

which is also the case in the well-known OSI (Open Systems Interconnection) reference model.

The physical layer is the lowest data transmission layer and it only provides the means of

transmitting raw bits over the physical data link. It also includes tasks like forward error correction

(channel coding), interleaving, error detection (CRC), closed loop power control and

synchronization [2, 8, 11, 12, 17, 40].

The physical layer interfaces the MAC sub-layer of Layer 2 (the data link layer) and offers so-called transport channels (TrCH) as a service to the MAC. A transport channel is characterized by how

the information is transferred over the radio interface; the MAC layer offers logical channels as a

service to the RLC sub-layer of Layer 2. A logical channel is characterized by the type of

information transferred [2, 12, 40]. It also interfaces the Radio Resource Control (RRC) layer of

Layer 3 (the network layer), which can be used for controlling the physical layer. The RRC

terminates in the UTRAN and this protocol contains all procedures to control, modify and release

Radio Bearers (RB); its messages use radio bearer services offered by Layer 2 for transport [2, 8,

12, 40].

Specific functions of this layer include RF processing, chip rate processing, symbol rate processing and transport channel combination [8, 12, 40]. In the transmit direction, the physical layer receives transport blocks from the MAC layer via transport channels and multiplexes them onto a physical channel. In the receive direction, the

physical layer receives the physical channels, extracts and processes the multiplexed data and

delivers it up to the MAC. Within the WCDMA system, the physical channels are constructed

using special codes referred to as channelization codes and scrambling codes [2, 8, 12, 40].


2.6.6.2 Layer 2

This layer comprises four protocols, namely the MAC, RLC, PDCP and BMC protocols, which are discussed in detail below.

The Medium Access Control (MAC) Protocol

The MAC provides some important functions within the overall radio interface architecture. It is

responsible for the dynamic resource allocation under the control of the RRC layer. Part of the

resource allocation requires the MAC to use relative priorities between services to control the

access to the radio interface transmission resources [2, 8, 12]. These functions comprise the

mapping between the logical and the transport channels, the transport format selection and priority

handling of data flow. It is also responsible for UE identification management in order to facilitate

transactions such as random access attempts and the use of downlink common channels [2, 8, 12].

When the RLC operates in transparent mode, that is, when data pass through the RLC layer without any header information, and ciphering is enabled, it is the MAC that actually performs the ciphering task.

The MAC is also responsible for traffic volume measurements across the radio interface. To

achieve this, it monitors the buffer levels for the different RLC instances that are delivering data to

it [4, 8, 11]. There are multiple channels entering the MAC referred to as logical channels and

there are multiple channels leaving the MAC referred to as transport channels. The number of

logical channels coming in and the number of transport channels leaving are not necessarily the same, because the MAC provides a multiplexing function that results in different logical channels being mapped onto the same transport channels [2, 8, 11, 12].


The Radio Link Control (RLC) Protocol

The RLC provides a number of different types of transport service, the transparent, the

unacknowledged, or the acknowledged mode of data transfer. Each mode has a different set of

services that define the use of that mode to the higher layers [8, 11, 12, 17]. Services provided by

the RLC include segmentation and reassembly. This allows the RLC to segment large protocol data units (PDUs) into smaller PDUs. A concatenation service is also provided to allow a number of

PDUs to be concatenated. The acknowledged mode data transfer service provides a very reliable

mechanism for transferring data between two peer RLC entities [8, 11, 12]. It also provides flow

control and in-sequence delivery of PDUs. Error correction is provided by an automatic repeat

request (ARQ) system, where PDUs identified as being in error can be requested to be

retransmitted. Flow control is the procedure by which the transfer of PDUs across the radio

interface can be governed to prevent buffer overload. For instance, at the receiving end the ARQ system that is used could result in PDUs arriving out of sequence; the sequence number in the PDU can then be used by the RLC to ensure that all PDUs are delivered in the correct order [8, 11, 12, 40].

Packet Data Convergence Protocol (PDCP)

The PDCP layer is defined for use with the PS domain only; at its inputs are the PDCP service access points (SAPs). The PDCP layer for WCDMA provides header compression

(HC) functions and support for lossless SRNS relocation. Lossless SRNS relocation is used when

the SRNC is being changed, and it is required that no data are lost; data that are not acknowledged

as being correctly received are retransmitted once the new SRNC is active [8, 11].

PDCP Functions

Header compression: Compression of redundant protocol control information (TCP/IP and RTP/UDP/IP headers) at the transmitting entity, and decompression at the receiving entity. The header compression method is specific to the particular network layer, transport layer or upper layer protocol combination, for example TCP/IP or RTP/UDP/IP [8, 12, 22].

Transfer of user data: This means that the PDCP receives a PDCP SDU from the non-access

stratum and forwards it to the appropriate RLC entity and vice versa.

Support for lossless SRNS relocation: In practice this means that those PDCP entities which are

configured to support lossless SRNS relocation have PDU sequence numbers, which, together with

unconfirmed PDCP packets are forwarded to the new SRNC during relocation. Only applicable

when PDCP is using acknowledged mode RLC with in-sequence delivery [2, 8, 12, 22].

Broadcast and Multicast Control (BMC) Protocol

The BMC provides support for the cell broadcast SMS; its messages are received on a common physical channel. The messages are periodic, with a periodicity defined by parameters that are

broadcast to the UE in system information broadcast (SIB) messages. The UE is able to select and

filter the broadcast messages according to settings defined by the user [4, 8, 11].

Storage: The BMC in RNC stores the Cell Broadcast messages received over the cell broadcast

centre (CBC)–RNC interface for scheduled transmission.

Traffic volume monitoring and radio resource request for CBS: On the UTRAN side, the BMC

calculates the required transmission rate for the Cell Broadcast Service based on the messages

received over the CBC–RNC interface, and requests appropriate CTCH/FACH resources from

RRC [8, 12, 17, 22].

Scheduling: The BMC protocol receives scheduling information together with each Cell Broadcast message over the CBC–RNC interface. Based on this scheduling information, on the UTRAN side the BMC generates schedule messages and schedules BMC message sequences accordingly. On the UE side, the BMC evaluates the schedule messages and indicates scheduling parameters to RRC, which are used by RRC to configure the lower layers for CBS discontinuous reception.

Transmission of BMC messages to UE: This function transmits the BMC messages (Scheduling and Cell Broadcast messages) according to the schedule.

Delivery of Cell Broadcast messages to the upper layer: This UE function delivers the received non-corrupted Cell Broadcast messages to the upper layer [8, 11, 12, 22].

2.6.6.3 Layer 3

The Radio Resource Control (RRC) protocol

The RRC protocol resides in Layer 3 of the architecture and is responsible for the establishment,

modification and release of radio connections between the UE and the UTRAN. The radio

connections are commonly referred to as the RRC connection, which is used to transfer RRC

signaling messages. It also provides transportation services for the higher layer protocols that use

the connections created by the protocol [8, 11, 22]. In the UE there is a single instance of the RRC

protocol, while in the UTRAN there are multiple instances of the protocol, one per UE; the RRC entity in the UE receives its configuration information from the RRC entity in the UTRAN. In

addition to establishing an RRC connection that is used by the various sources of signaling, the

RRC protocol is also responsible for the creation of user plane connections, referred to as radio access bearers (RABs); these RABs are created to transport user plane information such as speech

or packet data across the radio interface from the UE to the core network [8, 11]. The RRC

protocol provides radio mobility functions including elements such as the control of soft-handover

to same-frequency UMTS cells, hard-handover to other UMTS cells and hard-handover to other

radio access technology (RAT) cells such as GSM. Cell updates and UTRAN registration area

(URA) updates are procedures that are used to allow the UTRAN to track the location of the UE

within the UTRAN [8, 11, 12, 17].


2.7 WCDMA Concepts

WCDMA is a wideband Direct-Sequence Code Division Multiple Access (DS-CDMA) system,

where user information bits are spread over a wide bandwidth by multiplying the user data with

quasi-random bits (called chips) derived from CDMA spreading codes. In order to support very

high bit rates up to 2 Mbps, the use of a variable spreading factor and multi-code connections is

supported [8, 12, 23]. The physical aspect of the WCDMA air interface is characterized by a flow of information at 3.84 Mega chips per second (Mcps), which is divided into 10 ms radio

frames, each further divided into 15 slots of 2560 chips. The notion of chips is introduced instead

of the more typical bits, because chips are the basic information units in WCDMA, where bits

from the different channels are coded by representing each bit by a variable number of chips and

what each chip represents depends on the channel [8, 23]. The fundamental concepts in WCDMA include channelization and scrambling, channel coding, power control, and handover.

WCDMA physical layer and air interface

When comparing different cellular systems with each other, the physical layer of the radio

interface typically contains most of the differences and is therefore the most interesting part of the

study. In the OSI reference model, the physical layer is the lowest layer and it includes the

transmission of signals and the activation and deactivation of physical connections. The physical

layer has a major impact on equipment complexity with respect to the required baseband

processing power in the terminal and in the base station equipment [8, 12, 23].

WCDMA technology also introduces new challenges to the implementation of the physical layer. As third generation systems are wideband from the service point of view as well, the physical layer needs to be designed to support various different services, and more flexibility is also needed for future service introduction [23, 41].


The physical layer offers data transport services to higher layers and it is designed to support

variable bit rate transport channels, to offer so-called bandwidth-on-demand services and to be

able to multiplex several services within the same Radio Resource Control (RRC) connection [8,

23, 43].

The basic idea in WCDMA is that the signal to be transferred over the radio path is formed by

multiplying the original baseband digital signal with another signal, which has a much greater bit

rate. This operation is called channelization and the number of chips per data symbol is called the

Spreading Factor (SF) [12, 23]. We need to make a clear separation between the different kinds of bits in WCDMA: one bit of the baseband digital signal, the actual information, is called a symbol, while one bit of the code signal used for the multiplication is called a chip. The code signal bit rate, i.e. the chip rate, is fixed in WCDMA at 3.84 million chips per second (3.84 Mcps) [23, 41, 43]. The symbol rate indicates how many data symbols are transferred over the radio path and it is expressed in kilosymbols per second (ksps).
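As a simple arithmetic cross-check of these figures (an illustrative sketch only, not part of the cited material), the frame structure and the relation between chip rate, spreading factor and symbol rate can be written out as follows:

    # Cross-check of the WCDMA frame arithmetic and the relation
    # symbol rate = chip rate / spreading factor (illustrative values).
    CHIP_RATE = 3.84e6        # chips per second (3.84 Mcps)
    FRAME_DURATION = 10e-3    # one radio frame lasts 10 ms
    SLOTS_PER_FRAME = 15
    CHIPS_PER_SLOT = 2560

    chips_per_frame = SLOTS_PER_FRAME * CHIPS_PER_SLOT      # 38 400 chips per frame
    assert chips_per_frame == CHIP_RATE * FRAME_DURATION    # consistent with 3.84 Mcps

    for sf in (4, 8, 64, 256):                               # example spreading factors
        symbol_rate = CHIP_RATE / sf                         # symbols per second
        print(f"SF = {sf:3d} -> symbol rate = {symbol_rate / 1e3:7.1f} ksps")

For instance, the smallest listed spreading factor of 4 corresponds to 960 ksps, while a spreading factor of 256 corresponds to 15 ksps.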

Differences between WCDMA and Second Generation Air Interfaces

The second generation systems were built mainly to provide speech services in macro cells [4, 12].

To understand the background to the differences between second and third generation systems, we

will look at the new requirements of the third generation systems which are listed below:

• Bit rates up to 2 Mbps;

• Variable bit rate to offer bandwidth on demand;

• Multiplexing of services with different quality requirements on a single connection, e.g.

speech, video and packet data;

• Delay requirements from delay-sensitive real time traffic to flexible best-effort packet data;

• Quality requirements from 10 % frame error rate to 10⁻⁶ bit error rate;


• Co-existence of second and third generation systems and inter-system handovers for

coverage enhancements and load balancing;

• Support of asymmetric uplink and downlink traffic, e.g. web browsing causes more loading

to downlink than to uplink;

• High spectrum efficiency;

• Co-existence of FDD and TDD modes [12, 23, 41, 43].

Table 2.1: Differences between WCDMA and GSM air interfaces [12]

                                WCDMA                                             GSM
Carrier spacing                 5 MHz                                             200 kHz
Frequency reuse factor          1                                                 1–18
Power control frequency         1500 Hz                                           2 Hz or lower
Quality control                 Radio resource management algorithms              Network planning (frequency planning)
Frequency diversity             5 MHz bandwidth gives multipath diversity         Frequency hopping
                                with Rake receiver
Packet data                     Load-based packet scheduling                      Time slot based scheduling with GPRS
Downlink transmit diversity     Supported for improving downlink capacity         Not supported by the standard, but can be applied

WCDMA Service Capability

WCDMA does not use the same terminal class mark principle as GSM. Upon connection set-up, the terminal sends the network a set of parameters indicating its radio access capabilities. These determine the maximum user data rate supported in a particular radio configuration, given independently for the uplink and downlink directions [12, 23, 41].


• 32 kbps class. This is intended to provide a basic speech service, including AMR speech as

well as some limited data rate capabilities up to 32 kbps.

• 64 kbps class. This is intended to provide a speech and data service, with simultaneous data

and AMR speech capability.

• 144 kbps class. This class has the air interface capability to provide, for example, video

telephony or various other data services [12, 23].

• 384 kbps class is being further enhanced from 144 kbps and has, for example, multicode

capability, which points toward support of advanced packet data methods provided in

WCDMA.

• 768 kbps class has been defined as an intermediate step between 384 kbps and 2 Mbps

class.

• 2 Mbps class. This is the state-of-the-art class; it has been defined mainly for the downlink direction, although it is also possible for the uplink [8, 12, 23].

2.7.1 Power Control

In WCDMA technology, power control is critical because it ensures that just enough power is used

to close the links, either downlink, from the Node B to the mobile device, or uplink, from the

mobile to the Node B. Of the two links, the uplink is more critical because it ensures that all

instances of UE are detected at the same power by the cell; thus each UE contributes equally to the

overall interference and no single UE will overpower and consequently desensitize the receiver [8,

12, 23]. Without power control, a single UE transmitting at full power close to the Node B would

be the only one detected while the others would be drowned out by the strong signal of the close

user who creates a disproportionate amount of interference. On the downlink, power control serves


a slightly different purpose, because the Node B’s power must be shared among common channels

and the dedicated channels for all active users [12, 23, 41]. In addition, all downlink channels are orthogonal to each other with the exception of the synchronization channel; thus the signal, or power, from

any channel is not seen as interference. Ideally, the other channels do not affect the sensitivity;

however, power control is still required to ensure that a given channel is using only the power that

it needs, which increases the power available for other users, effectively increasing the capacity of

the system. Conceptually, two steps are required for power control:

• Estimate the minimum acceptable quality.

• Ensure that minimum power is used to maintain this quality.

Outer loop power control handles the first step while inner loop handles the second [8, 12, 23].

Ideally, the outer loop should monitor the Block Error Rate (BLER) of any established channel and

compare it to the selected target. If they differ, the quality target, estimated in terms of Signal-to-

Interference Ratio (SIR), is adjusted. The closed loop power control can then compare, on a slot-

by-slot basis, the measured and target SIR, and send power-up or power-down commands. Power

control processes run independently in the uplink and downlink, each signaling to the other the

required adjustment by means of Transmit Power Control (TPC) bits [8, 23]. The outer loop, on

the other hand, is not as strictly controlled by the standard and is thus implementation-dependent:

neither its rate nor the step sizes are signaled to the other end. Moreover, although the purpose of the outer loop is to ensure that the BLER target is met, the implementation may be based on other

measurements such as SIR, or passing or failing the Cyclic Redundancy Check (CRC) [8, 12, 23,

43].
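To make the interaction of the two loops concrete, the following is a minimal simulation sketch (not taken from the cited references; the channel model, step sizes and targets are illustrative assumptions): the outer loop adjusts the SIR target whenever the running BLER deviates from its target, and the inner loop compares the measured SIR with that target once per slot and steps the transmit power up or down accordingly.

    import random

    TPC_STEP_DB = 1.0       # inner-loop power step per slot (assumed value)
    OUTER_STEP_DB = 0.5     # outer-loop adjustment of the SIR target (assumed value)
    BLER_TARGET = 0.01      # example quality target

    sir_target_db = 6.0     # initial SIR target (assumed)
    tx_power_db = 0.0       # UE transmit power relative to an arbitrary reference

    def measured_sir(power_db):
        """Toy channel: received SIR follows transmit power plus random fading."""
        return power_db + random.gauss(2.0, 1.5)

    def block_in_error(sir_db, required_sir_db=7.0):
        """Toy link model: blocks fail with increasing probability below about 7 dB."""
        return random.random() < 1.0 / (1.0 + 10 ** ((sir_db - required_sir_db) / 2.0))

    blocks = errors = 0
    for slot in range(1500):                        # 1500 slots of 0.667 ms = 1 second
        sir_db = measured_sir(tx_power_db)

        # Inner (fast closed) loop: one TPC command per slot.
        tx_power_db += TPC_STEP_DB if sir_db < sir_target_db else -TPC_STEP_DB

        # Outer loop: evaluated once per 10 ms frame (every 15 slots) on the BLER.
        if slot % 15 == 14:
            blocks += 1
            errors += block_in_error(sir_db)
            bler = errors / blocks
            sir_target_db += OUTER_STEP_DB if bler > BLER_TARGET else -OUTER_STEP_DB

    print(f"final SIR target {sir_target_db:.1f} dB, Tx power {tx_power_db:.1f} dB")

In this sketch the slow outer loop only changes the target, never the power directly, which mirrors the division of responsibilities described above.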


2.7.2 Handoff

Handoff basically involves change of radio resources from one cell to another adjacent cell. From

a handoff perspective, it is important that a free channel is available in a new cell whenever

handoff occurs so that undisrupted service is available [12, 23, 24]. It takes place either within the

same node B, inter node B within the same RNC, inter RNC within the same MSC or between

different MSCs, and there are different reasons for the handover to become necessary [6, 8, 12].

Handoff is as important for UMTS as any other form of cellular telecommunications system and it

is essential that UMTS handoff is performed seamlessly so that the user is not aware of any

change. Any failures within the UMTS handoff procedure will lead to dropped calls which will in

turn result in user dissatisfaction and ultimately it may lead to users changing networks, thereby

increasing the churn rate [8, 24]. A RAKE receiver is a form of radio receiver, made feasible in many areas by the use of digital signal processing, which supports handoff in the UMTS network. It is often used to overcome the effects of multipath propagation, and achieves this by using several sub-receivers known as "fingers", each of which is assigned a particular multipath component.

Each finger then processes its component and decodes it [6, 12]. The resultant outputs from the

fingers are then combined to provide the maximum contribution from each path. In this way rake

receivers and multipath propagation can be used to improve the signal to noise performance.
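The finger-combining idea can be illustrated with a small numerical sketch (the channel gains, noise level and BPSK signalling below are illustrative assumptions, not measured WCDMA parameters): each finger is de-rotated by the conjugate of its path's channel estimate and the fingers are summed, giving a lower error rate than the strongest path alone.

    import numpy as np

    rng = np.random.default_rng(1)

    symbols = rng.choice([-1.0, 1.0], size=2000)            # BPSK data symbols
    paths = np.array([0.9 * np.exp(1j * 0.4),               # assumed complex gains of
                      0.5 * np.exp(1j * 2.1),               # three multipath components
                      0.3 * np.exp(-1j * 1.0)])
    noise_std = 0.6

    def noisy(n):
        """Complex Gaussian noise added independently in each finger."""
        return noise_std * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

    # Each RAKE finger observes the same symbols through its own path plus noise.
    fingers = np.array([h * symbols + noisy(symbols.size) for h in paths])

    # Maximal ratio combining: weight each finger by the conjugate of its channel gain and sum.
    combined = np.sum(np.conj(paths)[:, None] * fingers, axis=0)

    def bit_error_rate(decision_statistic):
        return np.mean(np.sign(decision_statistic.real) != symbols)

    print("BER, strongest finger only:", bit_error_rate(np.conj(paths[0]) * fingers[0]))
    print("BER, after RAKE combining :", bit_error_rate(combined))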

Hard Handoff: The name hard handoff indicates that there is a "hard" change during the handoff

process usually a “break before make”. For hard handoff the radio links are broken and then re-

established. Although hard handoff should appear seamless to the user, there is always the

possibility that a short break in the connection may be noticed by the user [6, 11, 17, 24].

The basic methodology behind a hard handoff is relatively straightforward. There are a number of

basic stages of a hard handoff:


• The network decides a handoff is required dependent upon the signal strengths of the

existing link, and the strengths of broadcast channels of adjacent cells.

• The link between the existing node B and the UE is broken.

• A new link is established between the new node B and the UE.

Although this is a simplification of the process, it is basically what happens. The major problem is

that any difficulties in re-establishing the link will cause the handoff to fail and the call or

connection to be dropped [6, 11, 24].

UMTS hard handoffs may be used in a number of instances:

• When moving from one cell to an adjacent cell that may be on a different frequency.

• When implementing a mode change, e.g. from FDD to TDD mode.

• When moving from one cell to another where there is no capacity on the existing channel

and a change to a new frequency is required [11, 24].

One of the issues facing UMTS hard handoffs, as also experienced in GSM, is that when usage levels are high, the capacity of a particular cell that a UE is trying to enter may be insufficient to support a new user. To overcome this, it may be necessary to reserve some capacity for new users.

This may be achieved by spreading the loading wherever possible - for example UEs that can

receive a sufficiently strong signal from a neighboring cell may be transferred out as the original

cell nears its capacity level [6, 11, 24].


Figure 2.12: Hard handoff procedure [24]

Soft Handoff: Soft handoff is a form of handoff that was enabled by the introduction of CDMA

which occurs when a UE is in the overlapping coverage area of two cells. Links to the two base

stations can be established simultaneously and in this way the UE can communicate with two base

stations, by having more than one link active during the handoff process. This provides a more

reliable and seamless way in which to perform handoff [8, 12, 24].

In view of the fact that soft handover uses several simultaneous links, the adjacent cells must be operating on the same frequency or channel, since the UEs do not have the multiple transmitters and receivers that would be necessary if the cells were operating on different frequencies.

When the UE and node B undertake a soft handoff, the UE receives signals from the two node B’s

and combines them using the RAKE receiver capability available in the signal processing of the

UE [12, 24].

In the uplink the situation is more complicated because the signal combining cannot be accomplished in the node B, as more than one node B is involved. Instead, combining is accomplished on a frame

by frame basis; the best frames are selected after each interleaving period. The selection is

accomplished by using the outer loop power control algorithm which measures the signal to noise


ratio (SNR) of the received uplink signals. This information is then used to select the best quality

frame. Once the soft handoff has been completed, the links to the old node B are dropped and the

UE continues to communicate with the new node B [8, 12].

As can be imagined, soft handoff uses more network resources than a normal link, or even a hard handoff. However, this is compensated for by capacity maximization and by the improved reliability and performance of the handoff process [8, 11, 23].

Figure 2.13: Soft handoff procedure [24]

Softer Handoff: A form of handoff referred to as softer handoff is really a special form of soft

handoff. It occurs when the new radio links that are added are from the same node B. These may

occur when several sectors are served from the same node B, thereby simplifying the combining, as it can be achieved within the node B and does not require linking further back into the network [8, 12].

UMTS softer handoff is only possible when a UE can hear the signals from two sectors served by

the same node B. This occurs as a result of the sectors overlapping, or more commonly as a result

of multipath propagation resulting from reflections from buildings, etc [8, 12].

In the uplink, the signals from the two sectors received by the node B can be routed to the same RAKE receiver and then combined to provide an enhanced signal.


In the downlink, it is a little more complicated because the different sectors of the node B use

different scrambling codes. To overcome this, different fingers of the RAKE receiver apply the

appropriate de-spreading or de-scrambling codes to the received signals. Once this has been done,

they can be combined as before [12].

In view of the fact that a single transmitter is used within the UE, only one power control loop is

active. This may not be optimal for all instances but it simplifies the hardware and general

operation [8, 12, 23].

2.7.3 Channelization Codes

The channelization codes are sequences of chips that are applied to the data to be transmitted to

produce a stream of chips that have the data superimposed upon them. The channelization codes

are of relatively short length that can vary depending upon the desired transmitted data rate, and

are made from something referred to as an orthogonal function or waveform [8, 12, 23]. It is the

properties of orthogonality that are particularly important for the channelization code. It comprises

a sequence of 1s and 0s; the duration of these 1s and 0s is known as the chip period, and the number of chips per second is the chip rate, which is 3.84 Mcps. In the uplink, the channelization

code is used to control the data rate that a user transmits to the cell. In the downlink, it is used to

separate users within a cell and also to control the data rate for that user [8, 23].
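The orthogonality property that channelization codes rely on can be illustrated with the OVSF (Orthogonal Variable Spreading Factor) tree construction usually described for WCDMA. The sketch below (an illustration, not a standards-accurate code allocator) builds the codes of a given length by repetition and negation, verifies that distinct codes of the same spreading factor have zero cross-correlation, and spreads a few data symbols into chips.

    def ovsf_codes(sf):
        """Return all OVSF channelization codes of length sf (sf must be a power of two)."""
        codes = [[1]]
        while len(codes[0]) < sf:
            next_level = []
            for c in codes:
                next_level.append(c + c)                  # child 1: (c, c)
                next_level.append(c + [-x for x in c])    # child 2: (c, -c)
            codes = next_level
        return codes

    codes = ovsf_codes(8)
    for a in codes:
        for b in codes:
            correlation = sum(x * y for x, y in zip(a, b))
            # Distinct codes of the same length are orthogonal (zero correlation).
            assert correlation == (len(a) if a == b else 0)

    # Spreading: every data symbol is replaced by SF chips of the selected code.
    data_symbols = [1, -1, 1]
    code = codes[3]
    chips = [s * c for s in data_symbols for c in code]
    print(len(chips), "chips produced from", len(data_symbols), "symbols at SF", len(code))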

2.7.4 Scrambling codes

In scrambling operation, a scrambling code is applied to the signal, which makes signals from

different sources separable from each other. Scrambling is used on top of spreading and it

separates terminals or base stations from each other. The symbol rate is not affected by the

scrambling operation [23, 41]. In contrast to channelization codes, scrambling codes are quite long

and are created from streams that are generally referred to as pseudo-noise sequences. In the


uplink the scrambling code is used for two main reasons; the first is to separate the users on the

uplink with each user being assigned a scrambling code that is unique within the UTRAN. The

second reason is to provide a mechanism to control the effects of interference, both from within the cell (intra-cell interference) and from other adjacent cells (inter-cell interference), while on the downlink the scrambling code is used to control the effects of interference [8, 23].

The use of channelization codes and scrambling codes is different on the uplink and the downlink.

This means that the radio interface is asymmetrical, with different functions provided by the codes

in the different directions of the links [8, 23, 41].

Figure 2.14: Spreading and scrambling [41]

2.7.5 Code allocation

For the uplink, the channelization code is selected by the UE based on the amount of data required

for transmission. The specifications define a relationship between required data rate and code

selection. The scrambling code on the uplink is selected and assigned by the UTRAN when the

physical channel is established or possibly when it is re-configured. For the downlink, the

channelization code is allocated by the UTRAN from the ones that are available in that cell at that

point when an allocation is required. The scrambling code on the downlink is used within that cell

and by more than one UE in the cell. The scrambling code is likely to be assigned as part of the


radio planning function, but a cell receives the allocation from the operations and maintenance

entity [8, 12, 23].

2.8 Radio Resource Management

Radio resource management (RRM) is the system level control of co-channel interference and

radio transmission characteristics in a cellular network. It is a set of algorithms that control the usage

of radio resources. Its management techniques are used to improve the utilization of radio

resources in order to provide maximum system capacity of the cellular network [9, 11, 24]. A

Radio Resource Unit (RRU) is defined as a set of basic physical transmission parameters necessary

to support a signal waveform transporting end user information corresponding to a reference

service [9, 11]. Particularly:

In Frequency Division Multiple Access (FDMA), a radio resource unit is equivalent to a certain

bandwidth within a given carrier frequency, for example, in Total Access Communication System

(TACS), a radio resource unit is a 25 kHz portion in the 900 MHz band.

In Time Division Multiple Access (TDMA), a radio resource unit is equivalent to a pair consisting

of a carrier frequency and a time slot. For example, in GSM a radio resource unit is a 0.577 ms

time slot period every 4.615 ms on a 200 kHz carrier in the 900 MHz, 1800 MHz or 1900 MHz

bands [11, 24].

In Wideband Code Division Multiple Access (WCDMA), a radio resource unit is defined by a

carrier frequency, a code sequence and a power level. The main difference arising here with respect to other techniques is that the required power level necessary to support a user connection is not fixed, but depends on the interference level, which makes the capacity of the WCDMA network interference limited [9, 11].


Radio Resource and QoS management functionalities are very important in the framework of

WCDMA based systems because the system relies on them to guarantee a certain target QoS,

maintain the planned coverage area and offer a high capacity. Objectives which tend to be

contradictory for instance, capacity may be increased at the expense of a coverage reduction;

capacity may be increased at the expense of a QoS reduction, etc. Radio network planning

provides a thick tuning of these elements, while RRM will provide the fine tuning mechanisms that

allow a final matching [9, 11]. In WCDMA, users transmit at the same time and frequency by

means of different spreading sequences, which in most of the cases are not perfectly orthogonal.

Consequently, there is a natural coupling among the different users that makes the performance of

a given connection much more dependent on the behavior of the rest of the users sharing the radio

interface compared with other multiple access techniques like FDMA or TDMA [9, 11, 12]. In this

context, RRM functions are crucial in WCDMA because there is not a constant value for the

maximum available capacity, since it is tightly coupled to the amount of interference in the air

interface. Although an efficient management of radio resources may not involve an important

benefit for relatively low loads, when the number of users in the system increases to a critical

number, good radio resource management will be absolutely necessary [9, 11, 12]. RRM functions can be implemented with many different algorithms, and these impact the overall system efficiency and the operator infrastructure cost, so RRM strategies play an important role in the WCDMA UMTS scenario.

In general terms, real time services have more stringent QoS requirements compared to non real

time applications and, consequently, the former will require more investment by the network

operator than the latter. Nevertheless, if the amount of available radio resources is too low, non

real time users may experience a non-satisfactory connection, usually in terms of an excessive


delay. Then, it will be necessary for the network operator to set some target QoS values for non

real time applications as well [11, 23].

Objectives of RRM

• Maximize the performance of all users within the available coverage and capacity

• Guarantee the quality of service for different applications

• Maintain planned coverage

• Maximize system capacity

RRM Algorithms

The basic RRM algorithms can be classified as follows:

• Handoff and mobility management algorithm,

• Call admission control (CAC) algorithm, and

• Power control algorithm.

When a new call requests access to the network, the CAC algorithm will make a decision to accept or reject it according to the amount of available resources versus the user's QoS requirements, and the effect on the QoS of existing calls that may occur as a result of the new call [9, 23]. If the call is

accepted, the following have to be decided: the transmission (bit) rate, the node B and channel assignment, and the transmission power. Most of these resources have to be dynamically controlled during the

transmission. For example, the node B assignment has to be changed as the UE moves further

away from the node B. The handoff algorithm takes care of the re-assignment of node Bs. When

moving closer to the node B, the same received signal strength (RSS) can be upheld for a lower

transmitted power [9, 11, 12]. Thus, an efficient power control algorithm is needed to reduce the

transmission power and to keep the interference levels at a minimum in the system, in order to


provide the required QoS and to increase the system capacity. Since available bandwidth (radio

resource spectrum) in cellular communication is limited, it is important to utilize it efficiently. For

this reason, frequencies are reused in different cells in the system. This can be done as long as

different users are sufficiently spaced apart, to ensure that interference caused by transmission by

other users will be negligible [9, 11, 12, 23].

2.8.1 Resource Allocation

As the number of users keeps growing, the wireless network should serve as many users as possible given the limited resources, an example being the bandwidth. On the other hand, as

various service types, such as voice, video, and data, are being offered, quality of service (QoS) is

highly demanded [9, 17, 21]. Obviously, these two requirements are competing against each other,

and to strike a good balance between these two competing requirements, resource allocation plays

a key role. Essentially, resource allocation is responsible for efficient utilization of network

resources while providing QoS guarantees to various applications. However, this goal is not easy

to achieve, since resource allocation faces more difficulties in mobile wireless networks than in wired networks, such as the error-prone wireless channel, limited bandwidth, and user mobility [9, 17, 21, 24].

Traffic channel allocation in a cellular system is important from the performance point of view,

which usually covers how a node B should assign traffic channels to the UEs. As the channels are

managed only by the node B of a cell, a user attempting to make a new call needs to submit a

request for a channel, and the node B can grant such an access to the UE provided that a channel is

readily available for use by the node B [17, 21, 24]. If this is possible most of the time, the probability that a new call will be blocked, that is, the blocking probability for a call originating in the cell, can be minimized. One way to make it more likely that such a radio resource is free is to increase the number

of channels per cell, if this is done, then every cell would expect to have a larger number of

channels. However, because a limited frequency band is allocated for wireless cellular networks,

there is a limit to the maximum number of channels. Therefore, there is restriction to the number of

available traffic channels that can be assigned to each cell, especially because of the interference

limited nature of WCDMA based systems [17, 21, 24]. Channel allocation implies that a given

radio spectrum is to be divided into a set of disjoint channels, which can be used simultaneously by

different UEs, while interference in adjacent traffic channels could be minimized by having good

separation between traffic channels.

2.8.1.1 Methods of Resource Allocation

There are basically three methods of resource allocation, which include:

• Fixed Channel Allocation (FCA)

• Dynamic Channel Allocation (DCA) and

• Hybrid Channel Allocation (HCA)

Fixed Channel Allocation

In FCA schemes, a set of traffic channels is permanently allocated to each cell of the system. If the

total number of available channels in the system is divided into sets, the minimum number of

channel sets required to serve the entire coverage area is related to the frequency reuse distance.

One approach to address increased traffic of originating and handoff calls in a cell is to temporarily

borrow free traffic channels from neighboring cells [21, 24]. There are many possible channel-

borrowing schemes, from simple to complex, and they can be selected based on employed

controller software and the feasibility of borrowing under given conditions.


Dynamic Channel Allocation

DCA implies that traffic channels are allocated dynamically as new calls arrive in the system. It is

achieved by keeping all free channels in a central pool, which means that when a call is completed,

the channel currently being used is returned to the central pool [21, 24]. In this way, it is fairly

straightforward to select the most appropriate channel for any new call with the aim of minimizing

the interference. Since the allocation of the different traffic channels to the current traffic is known, a DCA scheme overcomes the problem of an FCA scheme. A free channel can be allocated to

any cell, as long as interference constraints in that cell can be satisfied [21, 24]. The selection of a

channel could be very simple or could involve one or more considerations, including future

blocking probability in the vicinity of the cell, reuse distance, usage frequency of the candidate

channel, average blocking probability of the overall system, and instantaneous channel occupancy

distribution [21, 24].

Hybrid Channel Allocation

HCA schemes are a combination of FCA and DCA schemes, with the traffic channels divided into

fixed and dynamic sets. This means that each cell is given a fixed number of channels that is

exclusively used by the cell. A request for a channel from the dynamic set is initiated only when a cell has exhausted all channels in its fixed set. A channel from the dynamic set can be

selected by employing any of the DCA schemes [9, 21, 24].
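A minimal sketch of the hybrid idea described above (pool sizes, cell names and the data structures are illustrative assumptions): a cell first draws on its own fixed set and turns to the shared dynamic pool only once the fixed set is exhausted, returning channels to the appropriate pool when calls end. Interference constraints, which a real DCA/HCA scheme must also check, are omitted for brevity.

    class HybridChannelAllocator:
        """Toy hybrid channel allocation: per-cell fixed sets plus one shared dynamic pool."""

        def __init__(self, fixed_per_cell, dynamic_channels, cells):
            self.fixed_free = {cell: set(range(fixed_per_cell)) for cell in cells}
            self.dynamic_free = set(dynamic_channels)

        def allocate(self, cell):
            # First try a channel from the cell's own fixed set.
            if self.fixed_free[cell]:
                return ("fixed", self.fixed_free[cell].pop())
            # Fixed set exhausted: request a channel from the shared dynamic pool.
            if self.dynamic_free:
                return ("dynamic", self.dynamic_free.pop())
            return None                                    # call blocked

        def release(self, cell, channel):
            kind, number = channel
            pool = self.fixed_free[cell] if kind == "fixed" else self.dynamic_free
            pool.add(number)

    allocator = HybridChannelAllocator(fixed_per_cell=2, dynamic_channels=(100, 101, 102), cells=("A", "B"))
    print([allocator.allocate("A") for _ in range(6)])     # the sixth request is blocked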

2.8.2 Radio Resources

Radio resources in wireless cellular networks such as radio frequency spectrum (bandwidth),

transmit powers, transmission bit rate and base stations are generally limited due to the physical

and regulatory restrictions and also the interference-limited nature of wireless cellular networks [9,


12, 24]. Thus, to provide communication services with high capacity and good QoS, it is

imperative to employ efficient and effective methods for sharing the radio spectrum. Spectrum

sharing methods are called multiple access techniques. A multiple access technique involves radio channel allocation to the users of the system [9, 12, 24]. The objective of multiple access techniques is

to provide communication services with sufficient bandwidth when the radio spectrum is shared

with many simultaneous users. A channel can be thought of as a portion of the radio spectrum that

is temporarily allocated for a specific purpose, such as a user's phone call [8, 9, 24].

2.8.2.1 Types of Radio Channels

There are different types of radio channels, which enable a flexible architecture that allows the provision of services by making use of different configurations of the radio interface. Thus it becomes possible to accommodate different degrees of quality of service [8, 11, 12, 23]. They are the logical, transport and physical channels. By using these channels it is possible to carry the control and payload data in a structured manner and provide efficient and effective communications. The 3G UMTS channels are therefore an essential element of the overall system.

Logical channels: These channels allow communication between the RLC and MAC layers, and

they are characterized by the type of information that is being transferred across these layers. As a

result, there are logical channels for the transfer of user traffic, and also logical channels for the

transfer of control information, which can be either dedicated to specific users or common to a set

or to all of them [8, 11, 12, 17].

Transport channels: These are defined between MAC and PHY layers and they specify how the

information from logical channels should be adapted to get access to the radio transmission

medium. Therefore, they define the format used for the transmission in terms of channel coding,

interleaving or bit rate [8, 11, 12]. Different transport channels are defined, mainly distinguishing


between transport channels operating in dedicated mode i.e. allocated to a specific user and in

common mode i.e. users should contend for the access to such channels whenever they have some

information to be transmitted [11, 17].

Physical channels: They are defined in the physical layer and specify the nature of the signals that

are transmitted either in the uplink or in the downlink direction. These include code, time and

frequency multiplexed with the signals coming from other users and node Bs. Physical channels also include physical signals, which serve as a support for the transmission on the physical

channels e.g. supporting the random access procedures but do not contain information from upper

layers [11, 12, 17].

UMTS Logical Channels

The 3G logical channels include:

• Broadcast Control Channel (BCCH) (downlink): This channel broadcasts information

to UEs relevant to the cell, such as radio channels of neighboring cells, etc.

• Paging Control Channel (PCCH) (downlink): This channel is associated with the PICH

and is used for paging messages and notification information [11, 12].

• Dedicated Control Channel (DCCH) (up and downlinks): This channel is used to carry

dedicated control information in both directions.

• Common Control Channel (CCCH) (up and downlinks): This bi-directional channel is

used to transfer control information.

• Shared Channel Control Channel (SHCCH) (bi-directional): This channel is bi-

directional and only found in the TDD form of WCDMA / UMTS, where it is used to

transport shared channel control information.


• Dedicated Traffic Channel (DTCH) (up and downlinks): This is a bidirectional channel

used to carry user data or traffic.

• Common Traffic Channel (CTCH) (downlink): A unidirectional channel used to

transfer dedicated user information to a group of UEs [11, 12].

UMTS Transport Channels

The 3G transport channels include:

• Dedicated Transport Channel (DCH) (up and downlink). This is used to transfer data to

a particular UE. Each UE has its own DCH in each direction.

• Broadcast Channel (BCH) (downlink). This channel broadcasts information to the UEs

in the cell to enable them to identify the network and the cell.

• Forward Access Channel (FACH) (downlink). This channel carries data or

information to the UEs that are registered on the system. There may be more than one

FACH per cell as they may carry packet data [8, 11, 12].

• Paging Channel (PCH) (downlink). This channel carries messages that alert the UE to

incoming calls, SMS messages, data sessions or required maintenance such as re-

registration.

• Random Access Channel (RACH) (uplink). This channel carries requests for service

from UEs trying to access the system.

• Uplink Common Packet Channel (CPCH) (uplink). This channel provides additional

capability beyond that of the RACH, including support for fast power control.

• Downlink Shared Channel (DSCH) (downlink).This channel can be shared by several

users and is used for data that is "bursty" in nature such as that obtained from web

browsing etc [8, 11, 12].


UMTS Physical Channels

The 3G UMTS physical channels include:

• Primary Common Control Physical Channel (PCCPCH) (downlink). This channel

continuously broadcasts system identification and access control information.

• Secondary Common Control Physical Channel (SCCPCH) (downlink). This channel carries the Forward Access Channel (FACH), providing control information, and the Paging Channel (PCH) with messages for UEs that are registered on the network.

• Physical Random Access Channel (PRACH) (uplink). This channel enables the UE to

transmit random access bursts in an attempt to access a network.

• Dedicated Physical Data Channel (DPDCH) (up and downlink). This channel is used to

transfer user data [8, 11, 12].

• Dedicated Physical Control Channel (DPCCH) (up and downlink): This channel carries

control information to and from the UE. In both directions the channel carries pilot bits and

the Transport Format Combination Identifier (TFCI). The downlink channel also includes

the Transmit Power Control and FeedBack Information (FBI) bits.

• Physical Downlink Shared Channel (PDSCH) (downlink): This channel shares control

information to UEs within the coverage area of the node B.

• Physical Common Packet Channel (PCPCH): This channel is specifically intended to

carry packet data. In operation the UE monitors the system to check if it is busy, and if not

it then transmits a brief access burst. This is retransmitted if no acknowledgement is gained

with a slight increase in power each time. Once the node B acknowledges the request, the

data is transmitted on the channel [8, 11, 12].


• Synchronization Channel (SCH): This channel is used in allowing UEs to synchronize

with the network.

• Common Pilot Channel (CPICH): This channel is transmitted by every node B so that the

UEs are able to estimate the timing for signal demodulation. Additionally it can be used as

a beacon for the UE to determine the best cell with which to communicate.

• Acquisition Indicator Channel (AICH): The AICH is used to inform a UE about the Data

Channel (DCH) it can use to communicate with the node B. This channel assignment

occurs as a result of a successful random access service request from the UE.

• Paging Indication Channel (PICH): This channel provides the information to the UE to

be able to operate its sleep mode to conserve its battery when listening on the Paging

Channel (PCH). As the UE needs to know when to monitor the PCH, data is provided on

the PICH to assign a UE a paging repetition ratio to enable it to determine how often it

needs to 'wake up' and listen to the PCH [8, 11, 12].

• CPCH Status Indication Channel (CSICH): This channel, which only appears in the

downlink, carries the status of the CPCH and may also be used to carry some intermittent or

"bursty" data. It works in a similar fashion to PICH.

• Collision Detection/Channel Assignment Indication Channel (CD/CA-ICH): This

channel, present in the downlink, is used to indicate whether the channel assignment is

active or inactive to the UE.


2.9 Call Admission Control

The CAC is an algorithm that manages radio resources in order to adapt to traffic variations. CAC

is always performed when a mobile initiates communication in a new cell, either through a new call or a handoff; furthermore, admission control is performed when a new service is added during

an active call. CAC makes a decision to accept or reject a new call according to the amount of

available resources versus user QoS requirements, and the effect on the QoS of existing calls that

may occur as a result of the new call [9, 11, 45].

A connection is accepted if resources are available and the requested QoS can be met, and if other

existing connections and their agreed upon QoS will not be adversely affected. Moreover, the

admission control algorithm ensures that the interference created after adding a new call does not

exceed a pre-specified threshold. The purpose of an admission control algorithm is to regulate

admission of new users into the system, while controlling the signal quality of the already serviced

users without leading to call dropping [9, 11, 45]. The admission control algorithm will then

balance between high capacity and interference. Another goal of admission control is to optimize

the network revenue. This can, for example, be done by maximizing the instantaneous reward

achievable when a new service request arrives. The reward associated with each QoS level is

assumed to increase with the amount of resources required for the service [9, 11, 45].

In a WCDMA scenario, where there is no hard limit on the system capacity, admission control

must operate dynamically depending on the amount of interference that each radio access bearer

adds to the rest of the existing connections.
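To make this interference-coupled admission decision concrete, the sketch below uses the commonly quoted WCDMA uplink load-factor expression, in which a connection with bit rate R, activity factor v and Eb/N0 requirement contributes ΔL = 1 / (1 + W / (Eb/N0 · R · v)) to the cell load; a new request is admitted only if the predicted total load, including an assumed other-cell interference ratio, stays below a planned threshold. All numerical values below are illustrative assumptions.

    CHIP_RATE = 3.84e6        # W, chips per second
    OTHER_CELL_RATIO = 0.55   # i, other-cell to own-cell interference ratio (assumed)
    LOAD_THRESHOLD = 0.75     # planned maximum uplink load (assumed)

    def load_increment(eb_no_db, bit_rate, activity):
        """Load contribution of one connection: 1 / (1 + W / (Eb/No * R * v))."""
        eb_no = 10 ** (eb_no_db / 10)
        return 1.0 / (1.0 + CHIP_RATE / (eb_no * bit_rate * activity))

    def admit(existing_connections, new_connection):
        """Accept the request only if the predicted uplink load stays below the threshold."""
        own_cell_load = sum(load_increment(*c) for c in existing_connections + [new_connection])
        total_load = (1 + OTHER_CELL_RATIO) * own_cell_load
        return total_load <= LOAD_THRESHOLD, total_load

    # Example: twenty 12.2 kbps voice users (Eb/No 5 dB, activity 0.67) plus one 144 kbps request.
    voice_users = [(5.0, 12200, 0.67)] * 20
    accepted, load = admit(voice_users, (1.5, 144000, 1.0))
    print(f"admit 144 kbps user: {accepted}, predicted uplink load: {load:.2f}")

Because each admitted connection raises the load of every other connection through interference, the decision threshold plays the role that a fixed channel count plays in hard-capacity systems.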

From the performance point of view, there are different indicators to evaluate and compare

admission control algorithms. Typically, the admission probability, that is, the probability that a new connection is accepted, or equivalently the blocking probability, that is, the probability that the new connection is rejected, is used as a measurement of the accessibility to the system provided by a certain algorithm [9, 11, 45]. Therefore, admission control algorithms must take into

consideration that the amount of radio resources needed for each connection request will vary;

similarly, the QoS requirements in terms of real time or non real time transmission should also be

considered in an efficient admission control algorithm. Clearly, admission conditions for non real

time traffic can be more relaxed on the assumption that the additional radio resource management

mechanisms complementing admission control will be able to limit non real time transmissions

when the air interface load is excessive [9, 11, 45].

2.9.1 CAC Design Considerations

When designing a call admission control (CAC) scheme, several issues are taken into

consideration. First, handoff connection requests need to be given higher priority than new

connection requests. As it is well known, a handoff request occurs when a user engaged in a call

connection moves from one cell to another. To keep the QoS contract agreed during the connection

setup stage, the network should provide uninterrupted service to the previously established

connection [9, 11, 45]. However, if the new cell does not have enough resources, the ongoing

connection will be forced to terminate before normal completion. Since mobile users are more

sensitive to the termination of an ongoing connection than the blocking of a new call connection,

handoff call connections are usually given higher priority over new call connections.

Second, since the various services offered by the network have inherently different traffic

characteristics, their QoS requirements may differ in terms of bandwidth, delay, and connection

dropping probabilities. It is the network’s responsibility to assign different priorities to these

services in accordance with their QoS demands and traffic characteristics [9, 11, 45]. Finally, when


there are multiple types of services coexisting in the network, it is critical that the network can

provide fairness among those services in addition to satisfying their specific QoS requirements.

Thus, the network needs to fairly allocate network resources among different users such that

differentiated QoS requirements can be satisfied for each type of service independent of the others

[9, 11, 45]. Accordingly, the interest lies in two connection-level QoS metrics, namely the new call blocking probability (CBP), which is the system capacity measurement, and the handoff call dropping probability (CDP), which is the system quality measurement. Blocking occurs when a new user is denied access to the system, while call dropping means that a call of an existing user is terminated; call dropping is considered to be more costly than blocking. In addition, the system utilization is also considered.
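The two metrics can be illustrated analytically for a simple guard-channel (reservation) policy; this is a generic sketch with assumed parameter values, not the DP-CAC scheme developed later in this work. Treating the cell as a one-dimensional birth-death chain in which new calls are admitted only below a reservation threshold, the steady-state probabilities directly give the CBP and the CDP.

    def guard_channel_metrics(channels, guard, lam_new, lam_handoff, mu):
        """Steady-state CBP and CDP of a guard-channel policy (M/M/C-type birth-death chain)."""
        threshold = channels - guard              # new calls admitted only below this occupancy

        def admitted_rate(k):                     # total admitted arrival rate in state k
            return lam_new + lam_handoff if k < threshold else lam_handoff

        # Unnormalised state probabilities: p[k+1] = p[k] * admitted_rate(k) / ((k + 1) * mu)
        weights = [1.0]
        for k in range(channels):
            weights.append(weights[-1] * admitted_rate(k) / ((k + 1) * mu))
        total = sum(weights)
        p = [w / total for w in weights]

        cbp = sum(p[threshold:])                  # new call blocked when occupancy >= threshold
        cdp = p[channels]                         # handoff dropped only when all channels are busy
        return cbp, cdp

    # Assumed example: 20 channels, 2 reserved for handoff, rates in calls per minute.
    cbp, cdp = guard_channel_metrics(channels=20, guard=2, lam_new=8.0, lam_handoff=3.0, mu=0.6)
    print(f"new call blocking probability: {cbp:.3f}, handoff dropping probability: {cdp:.4f}")

Reserving more guard channels lowers the CDP at the cost of a higher CBP, which is exactly the trade-off that motivates dynamic and queue-based schemes.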

2.9.2 Multiple Service Types

A variety of applications, such as voice, video, and data, are to be supported with QoS guarantee in

WCDMA networks. Due to their different traffic characteristics, they may have different QoS

requirements in terms of delay, delay jitter and bit error rate (BER) [9, 11, 24, 45]. In UMTS, four

traffic classes are supported:

Conversational: This class of traffic has stringent requirements on delay and delay jitter, although it is not very sensitive to BER. Typical applications include voice and video telephony.

Streaming: Real-time streaming video belongs to this class. Usually, it is less sensitive to delay or

delay jitter than the conversational class.

Interactive: Applications belonging to this class include web browsing and database retrieval.

Typically, the response time they require should be within a certain range and the BER should be

very low since the payload content should be preserved.


Background: It may include data services like email or file transfer. For these services, the

destination can tolerate delays ranging from seconds to minutes. However, data to be transferred

has to be received error-free [9, 11, 24].

2.10 Related works

The existing resource sharing schemes are based on two main categories, complete sharing and

complete partitioning. In complete sharing a user is always offered access to the network provided

that there are sufficient resources at the time of request, and all traffic classes share the resources

indiscriminately. In complete partitioning, the available channels or resources are partitioned such

that for each call class, only a fixed section of the resource is available. Therefore the calls are

accepted whenever there are available resources in their corresponding partition otherwise they are

blocked or queued [9, 21, 24, 41].

Several uplink CACs designed for 3G WCDMA have been proposed in the literature [25-38].

These CACs can be classified based on the admission criterion into the following four categories:

power-based CAC, throughput-based CAC, interference-based CAC and signal-to-interference ratio (SIR) based CAC. For power-based CAC algorithms the total received power is monitored, while

throughput-based CACs monitor the system load. Interference-based CAC algorithms monitor the

total received interference, and SIR-based CACs monitor the SIR figure experienced by every

user. A reserved capacity for WCDMA is defined as a fraction of cell capacity, either in terms of the total interference, referred to as the interference margin (IM), or in terms of total load, referred to as the load margin (LM).

A CAC algorithm using multiple power-based thresholds for multiple services was proposed in

[31, 32]. By setting a higher admission threshold for voice traffic, voice is given a higher priority compared to data traffic. An interference-based admission control strategy with multiple interference margins (IM), where only two classes of traffic were considered, was analyzed in [33,

34]. A throughput-based admission control with multiple load margin (LM) was proposed where

four classes of traffic were considered in [35]. Recently, dynamic-threshold schemes have been

discussed in the literature to improve the QoS guarantees for higher priority calls [36, 38]. A

throughput-based algorithm that allows different adaptive LM for newly originating and handoff

calls was proposed in [36], and the LM value is adapted using the arrival rates and the estimation

of the blocking probability. The IM needed for high priority calls is estimated by using the signal-to-noise ratio (SNR) and distance information of mobile users in neighbouring

cells as seen in [37]. Radio resource management (RRM) in each node B estimates the amount of

IM by considering traffic load in its current cell as well as traffic conditions in neighbouring cells

[38]. All these schemes introduce a large communication and processing overhead in order to keep up-to-date information about the neighbouring cells; moreover, queuing techniques were not used.

The above CAC techniques have several disadvantages; the focus is mainly on prioritization using

different fixed or dynamic threshold values for interference margin or load margin without using

buffering techniques. The major limitation of fixed threshold schemes is that the reserved capacity

for higher priority classes may remain unutilized while lower priority classes are being blocked.

Most of the dynamic schemes rely on changing the threshold value based on periodic estimation in

order to decrease the failure of higher priority handoff calls at the expense of lower priority new

calls; this introduces a large communication and processing overhead in order to keep up-to-date

information about the state of neighboring cells and therefore limits the scalability. In addition this

will increase the threshold when the estimates indicate high handoff traffic loads without giving a


more balanced performance between new calls and handoff calls. Finally, these schemes do not provide a detailed classification of calls based on traffic type, that is, real time and non-real time, and request type, that is, originating and handoff calls, and no attempt is made to employ queuing for all classes of calls.

The performance of complete sharing based CAC and complete partitioning based CAC was

compared with the performance of dynamic prioritized uplink call admission control by calculating

total current usage load occupied by each connected call class and also by calculating the dynamic

priority value for the respective call classes present in the queue [39]. It also presented the

utilization and grade of service for real-time and non-real-time calls in the system. Maximum Shannon capacity is estimated, alongside the outage probability and signal-to-interference ratio, which are employed in single-cell and multi-cell admission control schemes [46]. This does not specifically address the handoff traffic class.

This research work focuses on DP-CAC for handoff and new calls, which are divided into four traffic classes: handoff real-time, handoff non-real-time, new-call real-time and new-call non-real-time, with higher priority given to the handoff traffic classes. DP-CAC uses FIFO queues for the different traffic classes in order to minimize losses, and also uses channel reservation for handoff calls in order to reduce their dropping probability. The simulation model is built using MATLAB; at high traffic conditions the model switches handoff traffic to its reserved channel and allows new calls to go through the general server, which provides fairness to the new-call traffic class and reduces its blocking probability. Besides evaluating the QoS metrics, system utilization, revenue and grade of service, this research work also considers the queuing delay and the call blocking/dropping probability of handoff and new calls in evaluating the performance of the Dynamic Priority Call Admission Control algorithm.


CHAPTER THREE

RESEARCH METHODOLOGY

3.0 ADOPTED NETWORK

The network adopted for this research is a typical WCDMA network, which is designed to meet the objectives and requirements of a 3G system. These objectives include support of general quality of service (QoS), support of multimedia services, and support of data rates up to 2 Mbps.

Support of general QoS

QoS in general is defined in terms of three quantities namely: data-rate, delay and error

characteristics. Data-rate is usually defined in terms of average data rate and peak data-rate (both

measured over some defined time period). It is the objective of the 3G network to offer a service to

a user that lies within the requested data rate. If the service were a constant data rate service such

as a 57.6 kb/s modem access, then the peak and the average data-rates would tend to be the same.

If, on the other hand, the service is an Internet access data service, then the peak and average data

rates could be quite different.

Delay: The second element of QoS is some measure of delay. In general, delay comprises two

parts. First, there is the type of delay, and second there is some measure of the magnitude of the

delay. The type of delay defines the time requirements of the service, such as whether it is a real-

time service (such as voice communications) or a non-real-time service (such as e-mail delivery).

The magnitude of the delay defines how much delay can be tolerated by the service. Bi-direction

services such as voice communications in general require low delay, while uni-directional services

such as e-mail delivery can accept higher delays, measured in terms of seconds. The objective for

the delay component of QoS, therefore, is to match the user’s service requirements for the delay to

the delay that can be delivered by the network. In a system operating correctly, the delay of the

data should correspond to the delay specified within the QoS.

Error characteristics: The final component of a typical QoS definition is the error characteristics of the end-to-end link (by end-to-end it is assumed that the QoS is defined for the link between two communicating users). The error characteristic defines items such as the bit error rate (BER) or the frame error rate (FER). This is a measure of how many errors can be introduced across the link before the service degrades below a level defined to be acceptable. The error characteristic is variable and dependent upon the service. Services such as speech, for example, have been found to be quite tolerant to errors. Other services, such as packet data for Web access, are very sensitive to errors.

Support of multimedia services

The second requirement of the WCDMA network is to provide multimedia services to the users. Multimedia is simply a collection of data streams between the user and some other end user's application. The data streams that comprise this multimedia connection will, in general, have differing QoS characteristics.

Figure 3.1 Multimedia services [8]

It is the objective of the network to allow the user to have such a multimedia connection, which

comprises a number of such data streams with different QoS characteristics, but all of them

multiplexed onto the same physical radio interface connection.

Support of 2Mbps

The final objective of the WCDMA network is the ability to provide data rates of up to 2 Mbps. This high data rate is required for certain types of application such as high-quality video transmission and high-speed Internet access.

3.1 ADOPTED NETWORK ARCHITECTURE

The WCDMA network architecture adopted for this research work comprises three major sections,

the mobile station, the radio access network and the core network. The mobile station comprises

the mobile equipment and the universal subscriber identity module (USIM). The Mobile

Equipment (ME) is the radio terminal used for radio communication over the Uu interface. The

USIM is a smartcard that holds the subscriber identity, performs authentication algorithms, and

stores authentication and encryption keys and some subscription information that is needed at the

terminal. The RAN generally is responsible for functions that relate to access, such as the radio

access, radio mobility and radio resource utilization. The Node B converts the data flow between

the Iub and Uu interfaces. It also participates in radio resource management. The Radio Network

Controller (RNC) owns and controls the radio resources in its domain (the Node Bs connected to

it). RNC is the service access point for all services provided by the core network, for example,

management of connections to the UE. The core network is responsible for the higher layer

functions such as user mobility, call control, session management and other network centric

functions such as billing and security control.


Figure 3.2 WCDMA network architecture [42]

This research work considers only one node in the WCDMA network architecture, the node B. It is the first access point that communicates directly with the mobile station and the

RAN. Several activities take place in this node ranging from signalling, scheduling, load and

overload control, QoS provision and dynamic resource allocation. These activities constitute the

focus of this research work which makes the node B the central focus of the DP-CAC algorithm.

The Objectives of Dynamic Priority CAC Algorithm

• Ensure best system utilization and revenue while satisfying the required QoS and fairness.

At low and moderate traffic load, it ensures the best system utilization while QoS is

satisfied. At high load, it ensures the fairness of resource usage amongst the different classes.

• Provide a scalable and easy to implement RRM procedure.

• Eliminate the requirement for traffic estimation and communication with neighbouring

cells.

• Support preferential treatment to higher priority calls by serving their queues first.


3.2 Physical Model

The physical model adopted for a WCDMA cellular network supporting heterogeneous traffic

assumes two types of services; real-time service (RT) such as conversational and streaming traffic

class and non-real-time service (NRT) such as interactive and background traffic. The priority

classes of incoming call requests are divided into four types, these types are: RT service handoff

request; NRT service handoff requests; newly originating RT calls; and newly originating NRT

calls.

Table 3.1 Service priority classes [12]

Class 1: RT, symbol λh1 – Conversational and Streaming, handoff call
Class 2: NRT, symbol λh2 – Interactive and Background, handoff call
Class 3: RT, symbol λn1 – Conversational and Streaming, new call
Class 4: NRT, symbol λn2 – Interactive and Background, new call

The capacity of a WCDMA cell is defined in terms of the cell load, where the load factor, ɳ, is the instantaneous resource utilization and ɳmax is the maximum cell capacity. By using the concepts

of threshold and queuing techniques, each call class has its own FIFO queues with finite capacities.

A call class request is placed in its corresponding queue if it cannot be serviced upon its arrival and

assigned a resource when available based on its calculated priority.
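As an illustration only (this is not part of the Simulink model itself), the finite per-class FIFO queues and the priority order in which they are served can be sketched in plain MATLAB as follows; the queue capacities and the example call are assumed values.

% Minimal sketch (assumed values): four finite FIFO queues, one per call class.
% Class index: 1 = handoff RT, 2 = handoff NRT, 3 = new-call RT, 4 = new-call NRT.
Qmax   = [10 10 10 10];            % assumed finite queue capacities
queues = {[], [], [], []};         % FIFO buffers of waiting call identifiers

k = 3; callId = 101;               % example: a new-call RT request arrives
if numel(queues{k}) < Qmax(k)
    queues{k}(end+1) = callId;     % place the request at the tail of its queue
else
    fprintf('class %d queue full: call %d blocked\n', k, callId);
end

% When a channel becomes free, serve the head of the highest-priority
% non-empty queue (class 1 first, class 4 last).
for k = 1:4
    if ~isempty(queues{k})
        served = queues{k}(1);
        queues{k}(1) = [];         % FIFO: remove from the head
        fprintf('serving call %d from class %d\n', served, k);
        break
    end
end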


Figure 3.3: DP-CAC physical model

The DP-CAC model employs queuing, prioritization, and channel reservation. Queuing is applied when the power level received by the node B in the current cell reaches a certain threshold, the handoff threshold; a call from the neighbouring cell is placed in the queue and remains there until either an available channel in the new cell is found or the power received by the node B in the current cell drops below a second threshold, called the receiver threshold. Since the handoff request arrival process is Poisson, that is, the inter-arrival times are exponentially distributed, queuing is effective, particularly when traffic is high in a densely populated area. Queuing is also very beneficial in macro-cells with a cell radius exceeding 35 km, since the UE can wait for handoff before the signal quality drops to an unacceptable level.

Prioritization involves channel assignment strategies that allocate channels to handoff requests

more readily than to new calls [45]. In channel reservation, a number of channels are reserved


exclusively for handoff calls in a cell, while the remaining channels are shared among the new and handoff calls, as can be seen in the node B system model below (figure 3.4). This method not only minimizes the dropping of handoff calls, but also increases the total carried traffic and provides optimal resource utilization.

Figure 3.4 node B system model

3.3 DP-CAC Algorithm

Resource utilization and individual QoS requirement can be improved by using the Dynamic

Priority Call Admission Control (DP-CAC) algorithm. The resource allocation to a traffic class can

be dynamically adjusted according to the traffic load variations and QoS requirements, while most

of the free capacity from the under-loaded traffic classes can be utilized by the over-loaded traffic

classes. Dynamic priority is implemented to protect from resource starvation.

The DP-CAC algorithm attempts to manage resource allocations amongst the different call classes,

and to efficiently utilize the resources while satisfying the QoS requirements. Similar to [31, 34,

36], only the uplink direction is considered in this research work where it is assumed that

whenever the uplink channel is assigned the downlink is established. To implement the admission

control for the WCDMA cellular network, an estimate of the total cell load must first be computed and then employed in the decision process of accepting or rejecting new connections. The analysis also assumes perfect power control, where the UE and its home node B use only the minimum power needed to achieve the required performance. The algorithm is used as a tool to maintain service continuity with QoS guarantees and also to provide service differentiation to mobile users.

Table 3.2 Computation parameters [39]

1. Load factor increment: ∆ɳi = 1 / (1 + Gi/ei), where Gi = processing gain and ei = bit energy-to-noise density.
2. Processing gain: Gi = W/Ri, where W = chip rate (3.84 Mcps) and Ri = bit rate of service class i.
3. Bit energy-to-noise density: ei = Eb/No, where Eb = bit energy and No = noise density.
4. Current total load factor: ɳc = (1 + f) Σi αi Bi ∆ɳi, where αi = activity factor, Bi = number of already connected class-i calls, and f = interference from adjacent cells.
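As a numerical illustration of how the quantities in Table 3.2 combine, a minimal MATLAB sketch is given below; it assumes the reconstructed equation forms above, takes the bit rates and Eb/No targets from Table 3.3, and uses assumed values for the activity factors, the adjacent-cell interference ratio f and the numbers of connected calls.

% Minimal sketch of the cell-load computation (equations of Table 3.2;
% bit rates and Eb/No values from Table 3.3, other values assumed).
W     = 3.84e6;                 % chip rate [chips/s]
R     = [12.2e3 256e3];         % bit rates: RT, NRT [bps]
e     = [5 2];                  % Eb/No targets: RT, NRT (linear)
alpha = [0.67 1.0];             % assumed activity factors: RT, NRT
f     = 0.55;                   % assumed adjacent-cell interference ratio

G    = W ./ R;                  % processing gain per class
dEta = 1 ./ (1 + G ./ e);       % load factor increment per connection
B    = [10 2];                  % assumed numbers of connected RT and NRT calls
etaC = (1 + f) * sum(alpha .* B .* dEta);   % current total load factor

fprintf('load increments: RT %.4f, NRT %.4f, current load %.3f\n', ...
        dEta(1), dEta(2), etaC);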

Table 3.3 Traffic Model

Arrival processes Poisson Arrival rates λh1, λh2, λn1, and λn2

Total arrival λ= λh1+ λh2+λn1+λn2

Channel holding time 1/μ

Traffic Intensity ρ = ��

Eb/No 5 and 2 for RT and NRT service respectively

R (Bit-rate) 12.2kbps and 256kbps for RT and NRT service

respectively.
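For illustration only, Poisson arrival streams with the structure of Table 3.3 can be generated by drawing exponential inter-arrival times; the rates and simulation horizon below are assumed example values, not those used in the Simulink model.

% Minimal sketch (assumed rates): Poisson arrival streams for the four classes.
rng(1);                                       % reproducible example
lambda = [0.5 0.3 0.8 0.6];                   % assumed λh1, λh2, λn1, λn2 [calls/s]
T      = 3600;                                % simulated horizon [s]

arrivals = cell(1, 4);
for k = 1:4
    t = 0; times = [];
    while true
        t = t - log(rand) / lambda(k);        % exponential inter-arrival time
        if t > T, break, end
        times(end+1) = t;                     %#ok<AGROW>
    end
    arrivals{k} = times;
end
fprintf('total offered calls: %d (aggregate rate %.2f calls/s)\n', ...
        sum(cellfun(@numel, arrivals)), sum(lambda));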


Table 3.4 Performance measures [39]

1. System Utilization: U = Σi (ni × bi), where ni = average number of connections of each traffic class that the system can accept and bi = bandwidth of a connection = ∆ɳi.
2. System Revenue: r = Σi (ni × ri), where ri = revenue per connection = bi.
3. Grade of Service: GoS = 1 − PB, where PB = call blocking probability.
4. Blocking Probability: PB = ((λt)^n / n!) e^(−λt), where n = number of arrivals in the interval, λ = number of arrivals per unit time, and t = time in seconds.


Flow chart for DP-CAC algorithm

Figure 3.5 Flow chart for DP-CAC algorithm


CHAPTER FOUR

SIMULATION RESULT

4.0 Simulation Model

Real Time Traffic Source (Voice): This module was realized by modelling Real Time traffic

class as Markov-Modulated Poisson traffic to cater for the active and silent behaviour of voice

using MATLAB. This module, as shown in fig. 4.1, comprises a Time Based Entity Generator which generates calls at a transmission rate of 2.048 Mbps using intergeneration times drawn from a statistical (Poisson) distribution. Each generated call passes through an Enable Gate regulated by a function-call subsystem, which is in turn controlled by an Entity Departure Event To Function Call block; its basic function is to allow the carrier signal generated by the Time Based Entity Generator, serving as an envelope for the calls generated, to provide a silent-and-active traffic pattern for the voice traffic.


Figure 4.1: Real Time Traffic Source (Voice)

Non-Real Time Traffic Source (Data): This module was realized by modelling Non-Real time

traffic class also as Markov-Modulated Poisson traffic to cater for the ON-OFF behaviour of data

traffic using MATLAB. This module, as shown in fig. 4.2, comprises a Time Based Entity Generator which generates traffic (containing the data to be transmitted) at a transmission rate of 10 Mbps using intergeneration times drawn from a statistical distribution (Poisson, with an exponential mean that is varied in this case). The generated traffic passes through an Enable Gate regulated by a function-call subsystem, which is in turn controlled by an Entity Departure Event To Function Call block; its basic function is to allow the carrier signal generated by the Time Based Entity Generator, serving as an envelope for the traffic generated, to provide an ON-OFF traffic pattern for the data traffic.


Figure 4.2 Non-Real Time Traffic Source

Figure 4.3: Real Time Traffic Source (Video)
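The Simulink blocks themselves are not reproduced here, but the ON-OFF (two-state, Markov-modulated) behaviour they create can be sketched in plain MATLAB as follows; the ON/OFF durations and the arrival rate during the ON state are assumed values chosen only for illustration.

% Minimal sketch (assumed parameters) of a two-state ON-OFF traffic source:
% arrivals are generated as a Poisson stream only while the source is ON.
rng(2);
T        = 600;                     % simulated time [s]
meanOn   = 1.0;  meanOff = 1.35;    % assumed mean ON/OFF durations [s]
lambdaOn = 50;                      % assumed arrival rate while ON [arrivals/s]

t = 0; on = true; events = [];
while t < T
    dwell = -log(rand) * (on*meanOn + ~on*meanOff);   % exponential state duration
    if on
        tt = t - log(rand)/lambdaOn;                  % arrivals inside the ON burst
        while tt < t + dwell
            events(end+1) = tt;                       %#ok<AGROW>
            tt = tt - log(rand)/lambdaOn;
        end
    end
    t  = t + dwell;
    on = ~on;                                         % toggle ON <-> OFF
end
fprintf('generated %d arrivals, average rate %.1f arrivals/s\n', ...
        numel(events), numel(events)/T);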


The flowchart model is developed into a simulation model using MATLAB/Simulink. The simulation model comprises four bursty sources of handoff and new calls, λh1, λh2, λn1, and λn2, which represent the four traffic classes, classified as real-time and non-real-time traffic respectively. The different traffic sources are combined through a path combiner and a switch which separates handoff calls from new calls and carries them through separate channels; handoff calls pass through a further switch and are then combined with new calls in a path combiner before reaching the node B, where they access radio resources. Once the maximum traffic-load threshold of the general server in the node B is reached, the switch that outputs handoff calls to the path combiner is alerted to block that output and to switch to the channel reserved for handoff calls only. This reduces the calls in the general server and allows new calls to be served even during high traffic intensity, which is where the fairness comes in.

Step-by-Step Conversion of Flowchart Modules

Start → Initialize ɳMax → Wait for call arrivals → Call arrivals?

The Start step in the flow chart implies that the simulation model is run and the maximum server capacity ɳMax of the node B is initialized while awaiting calls arriving from the different traffic sources. The arriving calls are combined in a path combiner with four inputs and a single output, as shown in fig 4.4.

Figure 4.4 Calls arriving from respective sources


If there is no call arrival the system returns to the initialization stage.

Handoff and new call → Compute ɳc + ∆ɳi → Is ɳc + ∆ɳi ≤ ɳmax?

The single output of combined calls (handoff and new calls) from the path combiner is passed through an

output switch of single input and double output, which is used to separate the handoff calls from the new

calls. The separated calls are passed through their respective get attribute block, which outputs the value of

number of call arrivals (λh1&h2, and λn1&n2) that have departed from this block since the start of the

simulation. Since handoff calls have higher priority, they pass through a DP-CAC switch while the new calls proceed directly to the path combiner with two inputs and a single output, as shown in fig 4.5.

Figure 4.5: Flow of traffic to DP-CAC switch

This model employs resource sharing by handoff and new calls in the node B, prioritization, and

channel reservation for handoff calls. Hence the DP-CAC switch has two outputs. One output

routes handoff calls to resources in the general server used by both traffic classes, while DP-CAC

switch monitors the p signal’s value throughout the simulation and reacts to changes by selecting

the corresponding entity output port to switch handoff calls to the reserved channel. Before these

calls can access resources in the node B, the system computes the load factor increment ∆ɳi for a


new call request i and the current system load ɳc of the calls in the general server. This is computed using a computational module in the system, which uses the number of calls that have arrived in the general server at the node B and the predefined simulation parameters in tables 3.2 and 3.3 respectively.

Figure 4.6 computational module

At the start of the simulation, ɳc + ∆ɳi is less than the maximum channel capacity ɳmax of the general server; therefore calls arriving from handoff and new call requests are allowed access to the general server.

Figure 4.7 decision making model

Figure 4.7 above is the decision-making module of the DP-CAC algorithm. From the flowchart model it is shown in fig 4.7 that handoff and new calls arrive in the system through the path combiner. The current total load factor ɳc in the general server and the load factor increment are computed. The sum of the two factors is compared with the maximum load margin ɳmax, which is the maximum capacity of the general server. If the value is less than ɳmax, then all calls are admitted into the general server; but if the value is greater than or equal to ɳmax, then the handoff calls are separated from the new calls and switched to the channels reserved for handoff calls only. This process is triggered by the auto-system functional block acting on the DP-CAC switch, and it ensures service continuity for handoff calls, which in turn reduces the call dropping probability. The new calls, on the other hand, are placed in their respective finite-capacity queues while waiting for a free channel in the general server. Queuing the new calls also helps to minimize the call blocking probability: as soon as there are free channels in the general server, the queued new calls are served first, thereby providing fairness to all traffic classes.
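The decision step just described can be summarised in a few lines of MATLAB; this is a simplified sketch of the logic of figure 4.7, with the capacities, the reserved load and the arriving request all set to assumed example values.

% Minimal sketch of the DP-CAC admission decision (assumed values throughout).
etaMax      = 0.75;     % maximum load of the general server
etaReserved = 0.15;     % load reserved exclusively for handoff calls
etaC        = 0.70;     % current total load in the general server
dEta        = 0.08;     % load increment of the arriving request
isHandoff   = true;     % request type of the arriving call

if etaC + dEta <= etaMax
    fprintf('admit call into the general server\n');
    etaC = etaC + dEta;
elseif isHandoff && dEta <= etaReserved
    fprintf('general server full: switch handoff call to the reserved channel\n');
    etaReserved = etaReserved - dEta;
else
    fprintf('place the call in its class queue until a channel is free\n');
end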

Figure 4.8 complete simulation model


4.1 Simulation Results and Discussion

The DP-CAC algorithm simulation is run using the complete simulation model and the traffic

model was employed to observe the rate of call arrival from the different traffic classes. The

channel holding time of the system and the departure rate of calls from the system after being

served were also recorded. The parameters obtained during the simulation of the model were

computed using a computational tool and the equations in the tables in chapter three were also

used where applicable.

System Utilization

The system utilization is computed using its equation and the results are plotted against the total

offered traffic. From figure 4.9 graph of system utilization (revenue), the results show that for DP-

CAC, at low and moderate offered traffic the system is utilized by all traffic classes even though it

is not the optimum capacity that is being utilized, which is expected for all systems at low and

moderate traffic conditions. At high offered traffic situation, DP-CAC dynamically controls the

priority level of queued calls and thus preventing new call traffic class from being adversely

affected by handoff call traffic class. The DP-CAC switch in the simulation model automatically

switches handoff calls to its reserved channel and admits the new calls into the general server

whenever there is an available channel. The system utilization increases as the offered traffic

increases. From figure 4.9 the results show clearly that at an offered traffic of 320 users the capacity of the system is only 30% utilized, and as the offered traffic increases to 800 users the capacity utilization increases to 80%. As the offered traffic approaches its peak, known as the congestion period, at an offered traffic of 1520 users, the capacity utilization attains its optimum of 95% and remains constant with further admission of users into the system.

Figure 4.9: Graph of system capacity utilization (revenue) against offered traffic

Revenue

The revenue is also computed using the appropriate equation, and it is discovered that it has the same curve as the system utilization. It therefore follows that the higher and more efficient the system capacity utilization, the higher the revenue generated by the system, indicating also that the higher the traffic offered the higher the revenue generated.

Figure 4.10 shows the comparison between the results obtained for system capacity utilization

when DP-CAC algorithm is implemented and when it is not implemented in the system. Without

DP-CAC the different traffic classes are assigned separate channels of fixed capacity. Since traffic

conditions for handoff calls and new calls vary at every instance, the unutilized loading limits of

handoff calls cannot be used by the new call class and vice versa, therefore capacity is wasted

which results in inefficient total system utilization and loss of revenue. But when different traffic

classes are allowed to share radio resources, at low traffic condition the total cell capacity is

available for all arrived call classes to enhance the resource utilization while at high traffic

condition the dynamic priority is used to differentiate between handoff and new calls in order to

ensure fairness amongst traffic classes.

The graph comparing system capacity utilization with and without the DP-CAC algorithm (figure 4.10) shows that DP-CAC increases the utilization of the system. For instance, at an offered traffic of 800 users, the system capacity utilization without the DP-CAC algorithm is still below 50%, precisely at 48%. The reason is that the different traffic classes are served using separate fixed channel capacities while having varying load intensities at every instant; if the new calls have a higher load intensity than the handoff calls at a particular time, the new-call blocking probability increases while there are still available channels, unutilized by the handoff call traffic class, that cannot be used by the overloaded new-call traffic class. This results in inefficiency, under-utilization of the total system capacity and loss of revenue, as shown in figure 4.10.

Figure 4.10: Comparison between system capacity utilization (revenue) with DP-CAC algorithm and without DP-CAC algorithm

Grade of Service

The system grade of service was computed using its equation and the results plotted against offered traffic, as seen in figure 4.11, the graph of grade of service against offered traffic. The graph shows that at low and moderate traffic the system offers a higher grade of service, which decreases as the offered traffic increases, but not below an average acceptable threshold. For instance, at traffic within 500 users the grade of service is still optimum at 0.936; as the offered traffic increases the grade of service decreases. At an offered traffic of 1200 users, the grade of service reduces to a minimum of 0.695 and remains constant with increasing traffic. This is regulated by the DP-CAC admission control algorithm to maintain the desired QoS guarantee.

Figure 4.11 Graph of grade of service against offered traffic

Figure 4.12 below shows the comparison between the system grade of service with the DP-CAC algorithm and without DP-CAC. The results show that without DP-CAC the grade of service falls below the acceptable and agreed-upon service requirement (threshold). It can be seen from figure 4.12 that at an offered traffic of 1000 users the grade of service falls to 0.600, and even crashes further to 0.500 at an offered traffic of 2000 users; this is because the DP-CAC algorithm is not implemented to regulate and maintain the desired QoS guarantee.

Figure 4.12 Comparison between grade of service performance with DP-CAC algorithm and without DP-CAC algorithm

Queuing Delay

The average delay experienced by the different traffic classes in the network lies within a constant value when the system has attained a maximum traffic load condition at increasing traffic intensity. Particularly, the new-call non-real-time traffic class experiences greater queuing delay compared to the other traffic classes, while the handoff traffic classes experience very minimal queuing delay. The result from figure 4.13, the graph showing queuing delay against traffic intensity for each traffic class, illustrates this. The graph shows that the handoff RT traffic class experiences a delay of 1.08E-12 at low traffic intensity and, as the traffic intensity increases, a delay of 1.15E-12. The handoff NRT class experiences a delay of 2.40E-12 at low traffic intensity and 2.50E-12 at increasing traffic intensity. The new-call RT class experiences a delay of 7.60E-12 at low traffic intensity and 7.80E-12 at increasing traffic intensity. The new-call NRT class experiences a delay of 1.39E-11 at low traffic intensity and 1.42E-11 at increasing traffic intensity.

Figure 4.13 Graph showing queuing delay against traffic intensity for each traffic class

Call Blocking and Dropping Probability

The call blocking and dropping probabilities were computed using their equation and the results plotted against traffic intensity, as seen in figure 4.14 below, the graph of call blocking and dropping probability against traffic intensity for handoff and new calls respectively. The result shows that both the call dropping and blocking probabilities are relatively constant at low and moderate traffic intensity, but as the traffic intensity increases there is a rise in both the dropping and blocking probability. For instance, at a traffic intensity of 2.25E+03 the handoff call dropping probability starts to rise with a probability of 9.69E-04 and steadily rises with increasing traffic intensity to a probability of 1.59E-02 at 3.60E+03. The new call blocking probability follows the same trend; at a traffic intensity of 2.25E+03 it starts to rise with a probability of 1.80E-03 and steadily rises with increasing traffic intensity to a probability of 2.00E-02 at 3.60E+03. It is also observed that the blocking probability of new calls is higher than the dropping probability of handoff calls. This

is because handoff calls have a higher priority than the new calls, and the DP-CAC algorithm is

used as a tool to maintain service continuity for handoff calls while still ensuring fairness for lower

priority traffic class in the system.

Figure 4.14 Graph of call blocking and dropping probability against Traffic Intensity for handoff

and new calls respectively.

Figure 4.15 below shows the call blocking and dropping probability for the four different traffic

classes. The results show that the handoff RT and NRT traffic classes have lower call dropping

probability while the new call RT and NRT traffic classes have higher blocking probability, all of which have been sufficiently minimized with the use of the DP-CAC algorithm. From the graph it is

observed that during congestion period the different traffic classes have the following probabilities,

at peak traffic intensity of 3.60E+03 handoff RT has a probability of 1.59E-02, handoff NRT a

probability of 1.69E-02, new call RT a probability of 2.00E-02 and new call NRT a probability of

2.10E-02. This again shows the priority level of each traffic class; the handoff RT traffic class has the highest priority while the new-call NRT traffic class has the least priority.

Figure 4.15: Graph of call blocking and dropping probability against Traffic Intensity for the respective traffic classes

Call Blocking and Dropping Probabilities with varying Server Capacity

For further performance evaluation of the DP-CAC algorithm on QoS requirements and service delivery for WCDMA based 3G networks, there is a need to vary the general capacity of the node B in order to investigate its effect on the call dropping and blocking probability of both handoff and

new calls. The three graphs following, that is, figure 4.16, figure 4.17, and figure 4.18 present the

behaviour of the system and the resultant effect on the call dropping and blocking probability.

Figure 4.16, the graph of call blocking and dropping probability at a server capacity of 24 channels, presents the call blocking and dropping probability of handoff and new calls at that capacity. The result shows that there is a very small difference between the two probabilities at low and moderate traffic, the difference being 1.7E-04 at a traffic intensity of 1.0E+03, because all traffic classes are admitted into the system at low and moderate traffic. At a traffic intensity of 2.25E+03 the difference between the dropping probability of handoff calls and the blocking probability of new calls increases to 8.61E-04, and at a traffic intensity of 3.15E+03 the difference further increases to 3.0E-03. From the analysis of this result, it is observed that the difference between call blocking and dropping probability increases as the traffic intensity increases, which again buttresses the point that the system gives higher priority to handoff calls to ensure service continuity for users.

Figure 4.16: Graph of call blocking and dropping probability at a server capacity of 24 channels

Figure 4.17, the graph of call blocking and dropping probability at a server capacity of 12 channels, presents the call blocking and dropping probability of handoff and new calls at a server capacity of 12 channels. The result shows that there is a significant difference in both probabilities at low and moderate traffic, the difference being 7.40E-04 at a traffic intensity of 1.0E+03. At a traffic intensity of 2.25E+03 the difference between the dropping probability of handoff calls and the blocking probability of new calls increases to 1.11E-03, and at a traffic intensity of 3.15E+03 the difference between both probabilities further increases to 3.0E-03. From this analysis it is observed that there is a significant difference between the call dropping and blocking probability of handoff and new calls at the same traffic intensity compared to the result in figure 4.16. This is as a result of the reduction in the general channel capacity of the system.

Figure 4.17: Graph of call blocking and dropping probability at a server capacity of 12 channels

Figure 4.18, the graph of call blocking and dropping probability at a server capacity of 6 channels, presents the call blocking and dropping probability of handoff and new calls at a server capacity of 6 channels. The result shows that there is a dynamic difference in both probabilities at low and moderate traffic, the difference being 1.64E-03 at a traffic intensity of 1.0E+03. At a traffic intensity of 2.25E+03 the difference between the dropping probability of handoff calls and the blocking probability of new calls increases to 3.0E-03, and at a traffic intensity of 3.15E+03 the difference between both probabilities further increases to 0.4E-02. From this analysis it is also observed that there is a dynamic difference between the call dropping and blocking probability of handoff and new calls at the same traffic intensity compared to the result in figure 4.17. This is as a result of the further reduction in the general channel capacity of the system.

Figure 4.18: Graph of call blocking and dropping probability at a server capacity of 6 channels

It can be seen from the trend in figure 4.16, figure 4.17 and figure 4.18 that by varying the server capacity, both the call dropping and blocking probability generally increase with decreasing channel capacity. From the above investigation, it is observed that the blocking probability of new calls increases significantly when the channel capacity is reduced, while the dropping probability of handoff calls increases relatively less because handoff traffic has a higher priority level than the new-call traffic class.

CHAPTER FIVE

CONCLUSION AND RECOMMENDATION

5.0 Conclusion

The objective of this research is to maintain service continuity with quality of service guarantees and to provide service differentiation to mobile users' traffic profiles by efficiently utilizing system resources. In order to achieve this, the DP-CAC algorithm is used and a simulation

model is developed whose aim is to ensure optimal system capacity utilization, maintain agreed

upon grade of service and QoS requirement and also to minimize call dropping and blocking

probability of handoff and new calls respectively.

This work followed the designed methodology step by step and has arrived at its end by presenting the results and investigations from the research. It is on the basis of these results and investigations that the following conclusion is stated.

This research work concludes by stating that the design of QoS-aware CAC is a critical issue for

WCDMA based cellular networks supporting heterogeneous traffic. As shown by the results, DP-

CAC algorithm provides an acceptable QoS for each traffic class and prevents the traffic class with

higher priority from overwhelming the traffic class with lower priority in order to enhance fairness

at high traffic conditions. DP-CAC also provides service continuity with QoS guarantees to users

by ensuring that handoff calls are serviced even during congestion period by dynamically

switching the calls to its reserved channel. Further results from investigation show that the call

dropping and blocking probability of handoff and new calls are minimized by DP-CAC algorithm

which generally improves the throughput of the network. Results also show that the call dropping

and blocking probability increase with reduced channel capacity; therefore, base station configuration and upgrades for WCDMA based 3G networks require a standard channel capacity in order to meet QoS requirements and deliver optimum service.

5.1 Contribution to Knowledge

The DP-CAC algorithm is often used to evaluate the performance of a system based on the capacity utilization and grade of service for real-time and non-real-time services, even though these are often further classified into handoff and new-call traffic classes respectively.

This research work is concerned with how to maintain service continuity with quality-of-service guarantees and how to provide service differentiation to mobile users' traffic profiles by efficiently utilizing system resources. It has been able to investigate the behaviour of the call dropping and blocking probability of handoff and new calls with the DP-CAC algorithm and with varying general server capacity. It has also been able to investigate the delay experienced by the different traffic classes in the system, as a contribution to knowledge in the area of call admission control for WCDMA networks. These are areas that are usually mentioned only in passing in most literature and papers on call admission control, but this work has been able to consider them in detail.

5.2 Recommendation

This research work recommends through its results and findings that future planning or upgrade of

WCDMA based 3G networks should be carried out with standard base station configuration using

improved node B cabinets. This will have a minimum capacity of 24 radio transceiver channels for

optimum quality of service guarantees. Implementing DP-CAC algorithm for its optimal capacity

utilization will generate optimum revenue for mobile network operators. Further research work

may be done on overlay network deployments required for 4G data services. Having these base

stations installed and operated by mobile operators will ensure the right equipment form factor for

the right situation to meet the ever-growing need for greater capacity.


REFERENCES

1. Harte Lawrence; “Introduction to Mobile Telephone Systems 1G, 2G, 2.5G, and 3G Wireless

Technologies and Services”. United States : ALTHOS Publishing Inc, (2006).

2. Pierre Lescuyer; “Computer Communications and Networks; UMTS: Origins, Architecture and the Standard”. London : Springer-Verlag, (2004).

3. Gunnar Heine; “GSM Networks:Protocols,Terminology, and Implementation”. London : Artech House, INC., (1999).

4. Gottfried Punz; “Evolution of 3G Networks; The Concept, Architecture and Realization of Mobile Networks Beyond UMTS”. Germany : Springer-Verlag/Wien, (2010).

5. Sharma Pankaj; “Evolution of Mobile Wireless Communication Networks-1G to 5G as well as Future Prospective of Next Generation Communication Network”. International Journal of Computer Science and Mobile Computing (IJCSMC), August 2013 Issue. 8 Vol. 2,, pp. pg.47 – 53.

6. Eberspächer Jörg, et al. “GSM – Architecture, Protocols and Services” Third Edition. United Kingdom : John Wiley & Sons Ltd, (2009).

7. Halonen T., Romero J. & Melero J.; “GSM, GPRS and EDGE Performance; Evolution Towards 3G/UMTS”. England : John Wiley & Sons Ltd, (2003).

8. Andrew Richardson; “WCDMA Design Handbook”. United Kingdom : Cambridge University Press , (2005).

9. Azzedine Boukerche; “Handbook of Algorithms for Wireless Networking and Mobile Computing” Chapman & Hall/CRC Taylor & Francis Group, Boca Raton, (2006).

10. Hanzo L., Blogh J .S., & Dr. Ni S.; “3G, HSPA and FDD versus TDD Networking”, (2008).

11. Perez-Romero J., Oriol S. & Diaz-Guerra M. A.; “Radio Resource Management Strategies in UMTS”, John Wiley and Sons, Inc. (2005).

12. Holma H. & Toskala A.; “WCDMA for UMTS: Radio Access for Third Generation Mobile Communications”, John Wiley and Sons, Ltd, England, (2004).

13. Mohamed H., “Call Admission Control in Wireless Networks: a Comprehensive Survey”, IEEE communications Surveys & Tutorials, , pp. 50-69, First Quarter ( 2005).

14. Liers F., & Mitschele-Thiel A.; “UMTS data capacity improvements employing dynamic RRC timeouts”. In PIMRC, (2005).


15. Minoru Etoh, “Next Generation Mobile Systems 3G and Beyond”, John Wiley & Sons Ltd England (2005).

16. A Survey Report on “Generations of Network: 1G, 2G, 3G, 4G, 5G”

17. Joschen Schiller; “Mobile communications” second edition Addison Wesley Pearson Education Limited (2003).

18. Parameswaran R.; “Dynamic resource allocation schemes during handoff for mobile multimedia wireless networks”, IEEE Journal on selected areas in communications, July (1999).

19. Paschos G. S., Poltis I. D. & Kotsopoulos S. A.; “A Quality of Service Negotiation-based Admission Control Scheme for WCDMA Mobile Wireless Multiclass Services”, IEEE Transaction On Vehicular Technology., vol. 54, no. 5, sept. (2005).

20. Gerber Z. M., Mao S. S., & Spatscheck O.; “Tail Optimization Protocol for Cellular Radio Resource Allocation”, F. Qian, Z. Wang, A. In ICNP, (2010).

21. Jianxin Y., Lei X., Chun N. , David C. W. & Yong H. C.; “Resource allocation for end-to-end QoS provisioning in a hybrid wireless WCDMA and wireline IP-based DiffServ network”; European Journal of Operational Research, July (2007) pp 1139–1160.

22. Mooi C. C. & Qinqing Z.; “Design and Performance of 3G Wireless Networks and Wireless LAN”, Springer Science-l-Business Media, Inc (2006).

23. Christophe Chevallier et al.; “WCDMA (UMTS) Deployment Handbook”: Planning and Optimization Aspects John Wiley & Sons Ltd, England (2006).

24. Agrawal D. P., Qing-An Z.; “Introduction to Wireless and Mobile Systems”, 3rd Edition Cengage Learning USA (2006).

25. Mohamed H.; “Call admission control in wireless networks”: a comprehensive survey, IEEE communications Surveys and Tutorials, first quarter (2005).

26. Leong C. W., Zhuang W., Cheng Y., & Wang L.; “Optimal resource allocation and adaptive call admission control for voice/data integrated cellular networks”, IEEE Transactions on Vehicular Technology 55 (2) (2006) 654-669.

27. Ma X. , Liu Y. & Trivedi K. S.; “Modeling performance analysis for soft handoff schemes in CDMA cellular systems”, IEEE Transactions on Vehicular Technology 55 (2) (2006) 670-680.

28. Ghaderi M., & Bautaba R.; “Call admission control for voice/data integration in broadband wireless networks”, IEEE Transactions on Mobile Computing 5 (3) (2006) 193-207.


29. Shin S. M., Cho C., & Sung D. K.; "Interference-based channel assignment for DS-CDMA cellular systems", IEEE Transactions on Vehicular Technology (1999) 233-239.

30. Liu Z. & Zarki M. E.; “SIR-based call admission control for DS-CDMA cellular systems”, IEEE journal on Selected Area in communications 12 (4) (1994) 638-644.

31. Kuri J. & Mermelstein P.; "Call admission on the uplink of a CDMA system based on the total received power", in: Proceedings of ICC, vol. 3, Vancouver, BC, Canada, (1999) pp. 1431-1436.

32. Fantacci R., Mennuti G. & Tarchi D., “A priority based admission control strategy for WCDMA systems”, IEEE International Conference on Communications (2005).

33. Kuenyoung K. & Youngnam H.; “A call admission control with the thresholds for multi-rate traffic in CDMA systems” in: Vehicular Technology Conference Proceedings, IEEE 51 (2) (2000) pp 830-834.

34. Wang Y., Weidong W., Zhang J. & Ping Z.; “Admission control for multimedia traffic in CDMA systems”, Communication Technology Proceedings, International Conference,9-11 (2) (2003) pp 799-802.

35. Yu O., Saric E., Li A.; "Adaptive prioritized admission over CDMA", IEEE Wireless Communications and Networking Conference 2 (2005) pp 1260-1265.

36. Yu O., Saric E., Anfei L.; “Fairly adjusted multimode dynamic guard bandwidth admission control over CDMA systems”, IEEE Journal on Selected Areas in communications 24 (3) (2006) pp 579-592.

37. Huan Chen, Kumar S. , Jay C. C.; “Dynamic call admission control scheme for QoS priority handoff in multimedia cellular systems”, IEEE Wireless Communications and Networking Conference (WCNC2002) 1 (2002) 114-118.

38. Huan C., Sunil K. & Jay C. C.; “QoS-aware radio resource management scheme for CDMA cellular networks based on dynamic interference margin (IGM)”, Computer Networks, 46 (2002) 867-879.

39. Salman, A. A. & Mahmoud, A.S.; “Dynamic radio resource allocation for 3G and beyond mobile wireless networks”, Journal of Computer Communications (2006).

40. 3GPP (2007), “Technical Specification TS 25.201, Physical layer – general description”, version 7.3.0 Release 7. Retrieved August 1st, 2007, from http://www.3gpp.org/ftp/Specs/html-info/25201.htm.

41. Holma, H & Toskala, A; “WCDMA for UMTS, Radio Access for Third Generation Mobile Communications”, 4th edition, John Wiley & Sons Ltd, Chichester. (2007) ISBN: 978-0-470-31933-8.


42. Holma, H & Toskala, A; “HSDPA / HSUPA for UMTS”, John Wiley & Sons Ltd, Chichester.

(2006), ISBN: 978-0-470-01884-2.

43. Soldiani D., Li M. & Cuny R; “QoS and QoE Management in UMTS Cellular Systems”, John Wiley & Sons Ltd, Chichester. (2006) ISBN: 978-0-470-01639.

44. Li-Hsing Yen; “Global System for Mobile Communication (GSM)”, Dept. of CSIE, Chung Hua University.

45. Sanchita G. and Amit K; “Call Admission Control in Mobile Cellular Networks”, Springer Heidelberg New York (2013).

46. Ashraf T., Sufian Y., & Sattar A; “Performance Evaluation of Quality Metrics for Single and Multi Cell Admission Control with Heterogeneous Traffic in WCDMA Networks”, International Journal of Engineering and Technology Volume 4 No. 1, January, (2014).

47. Ferdinand A. and Jeyakumar M.K.; “Vertical Handoff and Admission Control Strategy in 4G Wireless Network Using Centrality Graph Theory”, Research Journal of Applied Sciences, Engineering and Technology 7(22): May (2014).

48. Barry Stern; “Next-Generation Wireless Network Bandwidth and Capacity Enabled by Heterogeneous and Distributed Networks” © 2012, 2013 Freescale Semiconductor, Inc.

49. Romano F, Giada M & Daniele T, “A Priority Based Admission Control Strategy for WCDMA Systems” IEEE explore June (2005).