
Performance Evaluation of

Base Station Application

Optimizer

The Interdisciplinary Center, Herzlia Efi Arazi School of Computer Science

M.Sc. Dissertation

Submitted by Nana Ginzbourg

Under the supervision of Dr. Ronit Nossenson

Internal Supervision of Dr. Tami Tamir

November, 2011.


The Interdisciplinary Center, Herzlia

Efi Arazi School of Computer Science

Final project submitted as part of studies toward an M.Sc. degree, on the topic:

Performance Evaluation of

Base Station Application Optimizer

Nana Ginzbourg 312015761

Under the supervision of Dr. Ronit Nossenson

Internal supervisor: Dr. Tami Tamir

November 2011


Acknowledgments

I would like to deeply thank my advisor, Dr. Ronit Nossenson, for her professional contribution, patience and guidance, without which this project could not have reached its current status. I would also like to thank Dr. Tami Tamir for her valuable and helpful comments.


Contents

1. INTRODUCTION
2. LTE NETWORK AND BASE STATION APPLICATION OPTIMIZER
3. NS 2 SIMULATOR
4. BASE STATION OPTIMIZER SIMULATION IMPLEMENTATION
4.1. NETWORK TOPOLOGY GENERATION
4.2. TRAFFIC GENERATION
4.3. CACHE IMPLEMENTATION
4.4. CONFIGURATION
4.5. TRACE ANALYSIS
4.6. SIMULATION INSTALLATION AND USAGE
5. PERFORMANCE EVALUATION AND RESULTS
5.1. SIMULATION DESCRIPTION AND ESSENTIAL RESULTS
5.2. THE IMPACT OF THE CACHE HIT RATE
5.3. THE IMPACT OF THE BUFFER SIZE AT THE BASE STATION
6. CONCLUSIONS AND FUTURE WORK
7. REFERENCES
8. APPENDIX A - NS-2 INSTALLATION ON UBUNTU


Index of Figures

Figure 1 - LTE network architecture
Figure 2 - LTE network with Base Station Application Optimizer [2]
Figure 3 - Basic architecture of NS-2
Figure 4 - Mobile data usage broken down by top applications [8]
Figure 5 - Traffic flow with/without hit
Figure 6 - NS-2 trace file format
Figure 7 - Total transferred KB with 20, 50 & 100 concurrent users
Figure 8 - Data reduction per application type
Figure 9 - Data reduction over time


Index of Tables

Table 1 - Data Reduction Factor
Table 2 - Packet Drop Statistics
Table 3 - Packet Average Delay
Table 4 - The Hit Rate Impact on Data Reduction Factor
Table 5 - The Hit Rate Impact on Drop Packet Percentages
Table 6 - The Hit Rate Impact on Packet Average Delay
Table 7 - The Buffer Size Impact on Data Reduction Factor
Table 8 - The Buffer Size Impact on Drop Packet Percentages
Table 9 - The Buffer Size Impact on Packet Average Delay


1. Introduction

Cellular operators are competing with traditional broadband operators by offering mobile broadband access and IP services such as rich multimedia (e.g., video-on-demand, music download, video sharing) to laptops, PDAs, smart-phones and other advanced handsets. They offer these services through access networks such as High-Speed Packet Access (HSPA) and Long-Term Evolution (LTE) [3]. These new technologies offer mobile operators significantly improved data speeds, shorter latency and increased capacity.

Traditionally, the backhaul lines connecting the cell sites with the core network use TDM (E1, T1) lines, each providing up to 2 Mbps of capacity. Though acceptable for voice and low-rate data applications, E1 capacity is inadequate for higher data rates. The direct result of the backhaul bottleneck is low utilization of the radio channels and an unsatisfying user experience. An enormous backhaul upgrade to new technologies such as PON or microwave is required to satisfy the high bandwidth demand. This upgrade is expected to be extremely expensive, and its cost casts real doubt on the profitability of enhanced network deployment. The biggest cost challenge facing wireless service providers today is the backhaul network [1]. As a result, operators are seeking data reduction solutions integrated with their network upgrades.

The basic idea behind the Base Station Application Optimizer [2] solution is to replace the traditional Base Station entity with a fast and smart entity capable of analyzing and optimizing the user data at the application level. In particular, such a unit can use its location in the operator network to prevent unnecessary data from travelling through the backhaul and core networks. Note that the suggested optimization is discussed here in the context of LTE networks ("eNode-B"), but it can be performed on any base station or access point (e.g., IEEE 802.16, IEEE 802.11).

The purpose of this project is to implement the Base Station Optimizer solution on an LTE network in the NS-2 simulator and analyze its performance. Performance evaluation results indicate that the base station application optimizer has large potential to improve network performance through substantial data reduction on backhaul and core network links.


For a realistic mixture of applications (based on [4]), we show that under modest assumptions on cache hit rates, the total transferred-bytes reduction factor is 1.2-1.8, the average packet delay is significantly reduced (by up to 75%) and the packet loss percentage drops from 2% to less than 0.05%. In additional simulations we study the impact of the application hit rates, and of the size of the buffers (queues) at the base station, on the performance parameters. As expected, a higher cache hit rate led to a higher data reduction factor and lower end-to-end delays. An interesting result is that even a low hit rate significantly reduces packet drop events. The impact of the buffer size was proportional to the increase or decrease in its value.


2. LTE network and Base Station Application Optimizer

This section provides a short description of the LTE network. For additional information on LTE networks, see [3].

LTE is the next major step in mobile radio communications and is introduced in 3rd Generation Partnership Project (3GPP) Release 8. LTE uses Orthogonal Frequency Division Multiplexing (OFDM) as its radio access technology, together with advanced antenna technologies.

When the evolution of the radio interface started, it soon became clear that the system architecture would also need to evolve. Therefore, in addition to LTE, 3GPP is also defining an IP-based, flat network architecture: System Architecture Evolution (SAE), as presented in Figure 1. The LTE-SAE architecture and concepts have been designed for efficient support of mass-market usage of any IP-based service. The architecture is based on an evolution of the existing GSM/WCDMA core network, with simplified operations. In the User Plane (UP), for instance, there are only two types of nodes (Base Stations and Gateways), while in current hierarchical networks there are four (Node B, RNC, SGSN, GGSN). The gateway consists of two logical UP entities, the Serving Gateway (S-GW) and the Packet Data Network Gateway (PDN-GW). A flat architecture with fewer nodes involved reduces latencies and improves performance.

Figure 1 - LTE network architecture


Another simplification is the separation of the Control Plane (CP), with a separate Mobility Management Entity (MME). A key difference from current networks is that the architecture is defined to support packet-switched traffic only.

The only node in the Evolved Universal Terrestrial Radio Access Network (eUTRAN) is the eUTRAN Node-B (eNode-B, eNB in Figure 1). It is a radio base station that is in control of all radio-related functions in the fixed part of the system. Typically, the eNode-Bs are distributed throughout the network's coverage area, each residing near the actual radio antennas. The interface between the eNode-B and the gateways is S1-U; the interface between the eNode-B and the MME is S1-C. The interface between peer eNode-Bs is X2. The backhaul links implement these three interfaces and any required aggregation. To provide application-layer optimization capabilities, a software package is installed at the base station [2], as presented in Figure 2 below.

The Policy and Charging Rules Function (PCRF) is the network element responsible for Policy and Charging Control (PCC). It makes decisions on how to handle services in terms of QoS and provides information to the PDN-GW, and if applicable also to the S-GW, so that appropriate bearers and policing can be set up. The Home Subscriber Server (HSS) is the subscription data repository for all permanent user data. It also records the location of the user at the level of the visited network control node, such as the MME. The IP Multimedia Sub-system (IMS) is the service machinery that the operator may use to provide services based on the Session Initiation Protocol (SIP).

Figure 2 - LTE network with Base Station Application Optimizer [2]


3. NS 2 Simulator

NS 2 [5] is an open-source event-driven simulator designed specifically for research in computer communication networks. An event-driven simulation is initiated and run by a set of events; a list of all scheduled events is maintained and updated throughout the simulation process. NS 2 provides substantial support for simulating various protocols (e.g., TCP, UDP and routing protocols) over wired and wireless networks, and allows full analysis of network performance by collecting statistics on parameters like average delay and jitter. Due to its flexibility and modular nature, NS 2 has steadily gained popularity in the networking research community since its birth in 1989. Ever since, several revolutions and revisions have marked the growing maturity of the tool, thanks to substantial contributions from players in the field. Among these are the University of California and Cornell University, who developed the REAL network simulator, the foundation on which NS is based. Since 1995, the Defense Advanced Research Projects Agency (DARPA) has supported the development of NS through the Virtual InterNetwork Testbed (VINT) project. Currently, the National Science Foundation (NSF) has joined the development effort. Last but not least, the group of researchers and developers in the community is constantly working to keep NS 2 strong and versatile.

Figure 3 - Basic architecture of NS-2 [5]


NS 2 is written in C++ and the Object-oriented Tool Command Language (OTcl). The simulator's core modules are written in C++; the frontend, i.e., the configuration and usage of these modules, is written in OTcl. Therefore, simulations, which consist of network topology and traffic generation, should be written in OTcl. The OTcl script sets up the simulation by assembling and configuring the objects as well as scheduling discrete events. The C++ and the OTcl are linked together using TclCL. Mapped to a C++ object, a variable in the OTcl domain is sometimes referred to as a handle. Conceptually, a handle (e.g., n as a Node handle) is just a string in the OTcl domain and does not contain any functionality. Instead, the functionality (e.g., receiving a packet) is defined in the mapped C++ object. In the OTcl domain, a handle acts as a frontend that interacts with users and other OTcl objects. The simulator supports a class hierarchy in C++ (the compiled hierarchy) and a similar class hierarchy within the OTcl interpreter (the interpreted hierarchy). The two hierarchies are closely related to each other; from the user's perspective, there is a one-to-one correspondence between a class in the interpreted hierarchy and a class in the compiled hierarchy. The root of this hierarchy is the class TclObject.

Users create new simulator objects through the interpreter; these objects are instantiated within the interpreter and closely mirrored by a corresponding object in the compiled hierarchy. The interpreted class hierarchy is automatically established through methods defined in the class TclClass. User-instantiated objects are mirrored through methods defined in the class TclObject.

Network topology and traffic generation are written in a .tcl simulation script. The NS 2 shell executable, ./ns, receives the .tcl simulation script as input and initiates the simulation. Once run, the simulation leaves a text trace file which can be replayed via the Network AniMator (NAM) or used to display graphs via XGraph. An additional trace file contains all statistics at different layers with 1 millisecond resolution. This trace file should be analyzed in order to retrieve performance parameters like delay, drop statistics and transferred data. NS 2 runs best on Linux and requires installation and configuration; we installed it on Ubuntu 11.04, a free Linux distribution. Full installation instructions for Windows 7 with an Ubuntu Linux client are part of the contribution of this project and are included in Appendix A. The Base Station Optimizer simulation is described in detail in Chapter 4.


4. Base Station Optimizer Simulation Implementation

The targets of this work are to implement the Base Station Optimizer in the NS 2 simulator, to run various scenarios, to analyze the traces and to draw conclusions about the performance of the suggested optimizer.

The base station application optimizer is integrated with the eNode-B in an LTE network; the simulation is therefore based on the LTE/SAE model for the NS 2 network simulator [4]. However, the implementation was done from scratch to enable a configurable and flexible model. The code of the implementation, together with the trace analyzers, is available as open source at [7]. Future research on LTE could easily be simulated using this implementation.

The major modifications of the network model include extending the eNode-B to support different applications such as web, video, file sharing and VoIP; distributing users and traffic according to application; integrating an optimizer into the eNode-B; and configuring the end-user traffic according to various hit rates.

The simulation consists of four stages:

1. Network topology definition.

2. Traffic generation.

3. Simulations over different parameters.

4. Trace analysis.

All parameters are configurable and each piece of functionality is separated into its own procedure; this implementation of the LTE network therefore allows various simulations to be performed in a convenient and flexible way.


4.1. Network Topology Generation

The network topology generation consists of defining the nodes and the links between them. The eNode-B (eNB), Serving Gateway (S-GW), PDN-GW and the server nodes are defined as wired nodes using the DropTail queuing algorithm. The link between the eNB and the S-GW is the S1 interface. The nodes use DropTail, but the queue size is in fact defined on the links themselves. The links on the S1 interface have 10 Mb bandwidth and 150 ms delay.

User equipments (UE) are defined as wireless nodes connected to the eNB using the DropTail queuing algorithm. The links between a UE and the eNB have 10 Mb bandwidth and 10 ms delay. The queue size between the base station and each UE is 10 packets; the oversubscription factor at the S1 interface is 1/3. The number of end users is set dynamically based on the simulation arguments. The users are divided into four groups according to the application they use, based on the application usage distribution from the mobile trends report by Allot Communications [8]: 26% web usage, 37% video, 30% file sharing and 7% VoIP. According to Allot [8], 3% of the traffic is consumed by other applications, but since VoIP traffic is continuously growing, we increased the VoIP share to 7% of the global bandwidth.

Since the number of end users is configurable, the user distribution is calculated in every simulation (using the calculateUserDistribution{} procedure in utilities.tcl).

The network topology is defined in createTopology.tcl.
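The per-group user counts described above can be sketched as follows. The actual implementation is the calculateUserDistribution{} procedure in utilities.tcl (OTcl); the Python below is only an illustrative restatement of the same idea, and the function and key names are our own:

```python
def calculate_user_distribution(n_users, shares=None):
    """Split n_users into application groups by bandwidth share.

    Default shares follow the Allot mix used in the simulation:
    26% web, 37% video, 30% file sharing, 7% VoIP."""
    if shares is None:
        shares = {"web": 0.26, "video": 0.37, "file_sharing": 0.30, "voip": 0.07}
    counts = {app: int(n_users * s) for app, s in shares.items()}
    # Assign any users lost to integer rounding to the largest group,
    # so the group sizes always sum to n_users.
    remainder = n_users - sum(counts.values())
    counts[max(counts, key=counts.get)] += remainder
    return counts
```

With 100 concurrent users this yields 26 web, 37 video, 30 file sharing and 7 VoIP users, matching the distribution above.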


4.2. Traffic Generation

The traffic generator enables the creation of a realistic mixture of different traffic types. We used the Allot traffic report [8] for a realistic traffic distribution and simulated four types of applications (see Figure 4):

1) Web Browsing refers to HTTP traffic associated with website browsing, or other HTTP traffic which is not downloading or streaming. In addition, web browsing also includes apps delivering real-time updates and statistics over HTTP. Web browsing is 26% of the global mobile bandwidth.

2) Video Streaming refers to communication directed through video sites, including either user generated content (UGC) such as YouTube or content provided by sites such as Hulu. Video streaming is the world's most dominant application type, with 37% of the global mobile bandwidth.

3) File Sharing refers to HTTP download services, in particular from One-Click hosting sites such as RapidShare and Megaupload, and P2P applications such as BitTorrent and eMule. File sharing is 30% of the global mobile bandwidth.

4) VoIP & IM (Instant Messaging) account for 7% of the global mobile bandwidth. VoIP refers to software applications that allow users to deliver communication content over IP networks in general and the internet in particular. Internet VoIP is provided by a range of applications such as Skype and GoogleTalk, and also by most popular Instant Messaging applications like Yahoo! Messenger and Windows Live, which allow real-time text-based, video-conferencing and voice communications between two or more users over the internet.

The traffic generation module creates VoIP traffic using a Constant Bit Rate source, with small packets transmitted over UDP. Web-browsing traffic is created using an ON/OFF process with a Pareto distribution, with large packets transmitted over TCP. In addition, it creates video and file sharing traffic using smoother Pareto ON/OFF processes, with large packets transmitted over TCP.

X is a random variable from a Pareto distribution if:

f(x) = a * b^a / x^(a+1) for x >= b,    with E(X) = a * b / (a - 1) if a > 1

where a is the Pareto "shape parameter" (shape_) and b is the Pareto "scale parameter". The scale is computed using the packet size and the ON/OFF intervals.
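The relation between shape, scale and mean can be illustrated numerically. The sketch below (Python, with our own naming; the simulation itself relies on NS-2's built-in Pareto ON/OFF application) inverts the mean formula to obtain the scale from a desired mean ON period, and samples the distribution by inverse transform:

```python
import random

def pareto_scale_for_mean(mean, shape):
    # Invert E(X) = shape * scale / (shape - 1); valid only for shape > 1.
    return mean * (shape - 1.0) / shape

def pareto_sample(shape, scale, rng):
    # Inverse-transform sampling: if U ~ Uniform(0,1], then
    # scale / U**(1/shape) has density shape * scale**shape / x**(shape+1).
    u = 1.0 - rng.random()          # in (0, 1], avoids division by zero
    return scale / u ** (1.0 / shape)

# Web-browsing profile: mean ON period of 200 ms with shape 1.3.
rng = random.Random(42)
scale = pareto_scale_for_mean(200.0, 1.3)
on_periods = [pareto_sample(1.3, scale, rng) for _ in range(1000)]
```

Note that every sample is at least the scale b, consistent with the support x >= b above; with shapes between 1 and 2 (as used here), the distribution is heavy-tailed and its variance is infinite.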


Figure 4 - Mobile data usage broken down by top applications [8]

The simulator models four different traffic generator functions:

1) The VoIP traffic generator is a Constant Bit Rate application at 64 kbps over a UDP connection with a 200-byte packet size.

2) The Web browsing traffic generator is a Pareto application over TCP with a packet size of 1040 bytes, an average burst time (ON period) of 200 ms, an average idle time (OFF period) of 2000 ms, a rate of 300 kbps and a shape parameter (alpha) of 1.3.

3) The video traffic generator is also a Pareto application, with a shape parameter of 1.5, an average ON period of 2000 ms, a packet size of 1300 bytes, an average idle time (OFF period) of 2000 ms and a rate of 600 kbps.

4) The file sharing traffic generator is also a Pareto application, with a shape parameter of 1.7, an average ON period of 2000 ms, a packet size of 1500 bytes, an average idle time (OFF period) of 200 ms and a rate of 100 kbps.

Procedures defining Pareto or CBR traffic are defined in createTraffic.tcl and reused in main.tcl with different parameters. Once the network topology is defined, the end users are allocated the generated traffic according to the application group they belong to. Each application has its own flow id for tracing and performance analysis. The CBR traffic generator sends packets of a defined size at a fixed interval, computed from the rate and the packet size.

The Pareto traffic generator sends packets of a defined size during the ON periods. The interval between the packets is computed from a Pareto random variable using the shape parameter.
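For the CBR source, the fixed inter-packet interval follows directly from the rate and the packet size. A one-line Python sketch (illustrative only; NS-2's CBR application performs the equivalent computation internally):

```python
def cbr_interval(packet_size_bytes, rate_bps):
    # One packet of packet_size_bytes * 8 bits is sent every
    # packet_bits / rate seconds to sustain the target bit rate.
    return packet_size_bytes * 8.0 / rate_bps

# VoIP profile from this section: 200-byte packets at 64 kbps
# gives a 25 ms gap, i.e. 40 packets per second.
interval = cbr_interval(200, 64000)
```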


4.3. Cache Implementation

To implement the Base Station Application Optimizer, the eNodeB is extended with an application cache. To measure the performance of the optimizer in total and per traffic type, we implement a separate cache per traffic type. An object hit results in a traffic flow between the UE and the eNodeB; an object miss results in a traffic flow between the UE and the web server (see Figure 5 below). Within each application group, the UEs are divided according to a configurable hit rate distribution. The different flows are indicated by a flow id per application for tracing and performance analysis. For example, in Figure 5, a video stream with a cache hit is indicated as flow number 3: since there was a hit and the object could be retrieved from the cache, the flow stops at the Optimized eNodeB. Another video streaming flow goes all the way to the server and is indicated as flow number 4. Flow number 6 is a file sharing flow without a hit.

The distribution of the users according to the hit rate is performed by the calculateHitRateUsersDistribution{} procedure in utilities.tcl. The default hit rates are 20% for web browsing and 40% for video streaming and file sharing. These parameters can be configured either in configuration.tcl or on the command line. The impact of the hit rate on the performance of the Base Station Application Optimizer is discussed in Chapter 5.
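The hit-rate split of each application group can be sketched as follows. The real logic lives in the calculateHitRateUsersDistribution{} procedure in utilities.tcl (OTcl); the Python function and names below are illustrative assumptions:

```python
def split_by_hit_rate(users, hit_rate):
    """Partition an application group's users into a 'hit' subset, whose
    flows terminate at the eNodeB cache, and a 'miss' subset, whose flows
    continue to the server. A deterministic prefix split keeps runs
    reproducible."""
    n_hit = round(len(users) * hit_rate)
    return users[:n_hit], users[n_hit:]

# Default rates from this section: 20% web, 40% video / file sharing.
video_users = list(range(37))                 # e.g. 37 video users
hits, misses = split_by_hit_rate(video_users, 0.40)
```

With 37 video users and a 40% hit rate, 15 users are served from the cache and 22 continue to the server.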

Figure 5 - Traffic flow with/without hit


4.4. Configuration

All simulation parameters are defined in configuration.tcl with default values. The following parameters can be overridden by command line arguments passed to main.tcl, the main simulation activation script:

- number of end users;
- link bandwidth, DropTail queue delay and queue size;
- a flag indicating whether the eNode-B should be optimized or not;
- application usage distribution and hit rate per application.

4.5. Trace analysis

To evaluate the Base Station Application Optimizer performance we collect the following statistics:

1) Total data transferred on the S1 interface - the sum of bytes transferred on the S1 interface during the simulation. In the optimized scenario, part of the traffic terminates at the eNB because of cache hits, and those packets' sizes are not counted. In the standard scenario all traffic travels from the end equipment all the way to the server, so all packets cross the S1 interface. By comparing the totals in both scenarios, we can analyze how much bandwidth the Base Station Application Optimizer saves.

2) Data transferred on the S1 interface per second - similar to the previous statistic, but calculated per second rather than as a total over the whole simulation. This statistic is used to show the bandwidth reduction over the entire duration of the simulation.

3) End-to-end delay - the time it takes a packet to reach its destination. We measure the end-to-end delay per packet and the average delay of each application. In a simulation with cache optimization, a packet needs to be transferred between the eNB and the end equipment; in a simulation without cache optimization, a packet needs to be transferred between the end equipment and the server.

4) Percentage of dropped packets - calculated as the number of dropped packets divided by the number of sent packets.
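The delay and drop bookkeeping described above can be sketched as follows. Our analyzers are written in GNU Awk [7]; this Python rendition is illustrative, and the simplified (event, packet id, time) tuples stand in for full NS-2 trace lines:

```python
def summarize(events):
    """Compute average end-to-end delay and drop percentage from
    (event, packet_id, time) tuples, where event is 's' (sent at the
    source), 'r' (received at the destination) or 'd' (dropped),
    mirroring the NS-2 trace event codes."""
    sent, recv, dropped = {}, {}, set()
    for ev, pid, t in events:
        if ev == "s":
            sent.setdefault(pid, t)   # first send time of this packet
        elif ev == "r":
            recv[pid] = t             # last receive = destination arrival
        elif ev == "d":
            dropped.add(pid)
    delays = [recv[p] - sent[p] for p in recv if p in sent]
    avg_delay = sum(delays) / len(delays) if delays else 0.0
    drop_pct = 100.0 * len(dropped) / len(sent) if sent else 0.0
    return avg_delay, drop_pct
```

For example, two sent packets of which one arrives after 0.3 s and one is dropped give an average delay of 0.3 s and a 50% drop percentage.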


Figure 6 - NS-2 trace file format

NS 2 records and outputs all traffic event statistics in the format presented in Figure 6; it does not provide any trace analyzers. Each statistic should be analyzed both per application type and in total. Since the simulation time was relatively long (30 minutes), the trace files contained a lot of data, up to 8 GB in the case of 100 end users. We therefore had to deal with performance problems and optimize the trace analyzers as much as possible.

We used GNU Awk to implement the trace analyzers; they are part of the contribution of this work [7] and could be used in future research on different applications. Awk automatically applies a block of commands to every line of the input file and allows selecting specific fields in each line. This allowed us to define logically complicated analysis based on flow id, packet sequence id, source nodes and destination nodes.

4.5.1. Total bandwidth analyzer

The total bandwidth analyzer is implemented in the totalBandwith.sh shell script, which invokes Awk. For each line we sum up the packet sizes on the S1 interface (downlink and uplink), identified by the source and destination node ids. Using the flow id we separate the results according to the different


applications and hit rate scenarios. The output of this script is the total number of bytes per application transferred over the S1 interface.

Usage: ./totalBandwith.sh <trace_fileName>
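The core of the script can be sketched as a small Awk filter. The field positions follow the standard NS-2 wired trace layout shown in Figure 6 (event, time, from node, to node, packet type, size, flags, flow id, source address, destination address, sequence number, packet id). The S1 endpoint node ids used below (1 and 2) are placeholders; the actual ids depend on the simulated topology, so this is an illustrative sketch rather than the exact script from the repository [7].

```shell
# Sketch of the total-bandwidth analysis (not the exact totalBandwith.sh).
# Counts each packet once, at its receive ("r") event on the S1 link,
# and sums packet sizes per flow id.
s1_total_bytes() {
    awk '
        # NS-2 wired trace fields: $1 event, $2 time, $3 from, $4 to,
        # $5 type, $6 size, $7 flags, $8 flow id, ..., $12 packet id.
        $1 == "r" && (($3 == 1 && $4 == 2) || ($3 == 2 && $4 == 1)) {
            bytes[$8] += $6     # placeholder node ids 1 and 2 for S1
        }
        END {
            for (fid in bytes)
                printf "flow %s: %d bytes\n", fid, bytes[fid]
        }
    ' "$1"
}
```

Separating the results per application then reduces to mapping each flow id to its application type, as the thesis scripts do.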

4.5.2. Bandwidth per second analyzer

The bandwidth per second analyzer is implemented in Awk. An array of the simulation length in seconds is created and initialized to 0. Each trace line is analyzed by source node, destination node, flow id, packet size and time. All the packet sizes transferred on the S1 interface are summed up as in the total bandwidth analyzer, but per second – the time indicates the index in the array.

Usage: awk -f bandwithPerSecond.awk <trace_fileName> <endUsers_number>

Note that endUsers_number is used in the results filename, so this argument can also be used to distinguish the simple scenario from the optimized one.
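The per-second bucketing can be sketched as follows; the node ids 1 and 2 are placeholders standing in for the S1 endpoints, not the ids from the actual topology:

```shell
# Sketch of the per-second bandwidth analysis: the integer part of the
# event time selects the bucket, so bucket t holds the bytes received
# during second [t, t+1) on the (placeholder) S1 link.
s1_bytes_per_second() {
    awk '
        $1 == "r" && (($3 == 1 && $4 == 2) || ($3 == 2 && $4 == 1)) {
            sec[int($2)] += $6      # $2 = event time, $6 = packet size
        }
        END {
            for (t in sec)          # awk iteration order is arbitrary,
                print t, sec[t]     # so sort numerically afterwards
        }
    ' "$1" | sort -n
}
```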

4.5.3. End to End delay and Dropped statistics

This analysis is more complicated, since specific packets must be followed through the network using the packet id. An array is allocated for each application type; the packet id indicates the index of this packet's data in the array. The packet id, flow id, source node, destination node, time and event type are all used.

When a packet is sent, an enqueue event is created. If, according to the source node, the packet was sent from an end user or the server, the time of the event is recorded in the corresponding cell of the relevant application array as the start time. The counter of sent packets per application type is updated.

When a receive event occurs, if, according to the source and destination nodes, the packet was received at the server or by the UE, the time of the event is recorded in the corresponding cell of the relevant application array as the end time. The counter of received packets per application type is updated.

When a packet is dropped, we check where in the network it was dropped. The number of dropped packets is updated and, most importantly, the corresponding cell in the relevant application array is set to end time = -1, so that dropped packets are not counted in the end to end delay. The counter of dropped packets per application type is updated.

At the end of the run over all lines in the trace file, we have four application arrays with the start and end times of all packets (or -1 where a packet was dropped). We calculate the overall


average delay, the start time and delay of each packet, and the dropped packet statistics. The percentage of dropped packets is computed from the two per-application counters updated during the analysis – the number of dropped packets divided by the number of sent packets. Dropped packets are not counted in the end to end delay statistics.

Usage: note that the trace analysis differs between the simple and the optimized scenario.

For the simple scenario:

awk -f endToEndDelay.awk <trace_fileName> <endUsers_number>

For the optimized scenario:

awk -f endToEndDelayCache.awk <trace_fileName> <endUsers_number>
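The bookkeeping described in this section can be condensed into a single Awk pass. The following is a simplified sketch, not the repository scripts: it keys one global array by the unique packet id (the last trace field), whereas the thesis analyzers keep one array per application type and also check the source node, and the destination node id is passed in as a parameter.

```shell
# Simplified sketch of the end-to-end delay / drop analysis. start[]
# records the first enqueue ("+") time of each packet id; end[] records
# the receive time at the destination, or -1 if the packet was dropped
# ("d"), so dropped packets are excluded from the delay average.
delay_and_drops() {
    awk -v dest="$2" '
        $1 == "+" && !($12 in start) { start[$12] = $2; sent++ }
        $1 == "r" && $4 == dest      { end[$12] = $2 }
        $1 == "d"                    { end[$12] = -1; dropped++ }
        END {
            sum = n = 0
            for (id in end)
                if (end[id] != -1 && id in start) {
                    sum += end[id] - start[id]; n++
                }
            if (n) printf "avg delay: %.3f\n", sum / n
            printf "dropped: %d of %d sent\n", dropped + 0, sent + 0
        }
    ' "$1"
}
```

The actual endToEndDelay.awk and endToEndDelayCache.awk in the repository [7] additionally split these arrays per application and verify the source and destination nodes per scenario.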

4.6. Simulation Installation and Usage

The Base Station Application Optimizer is hosted at Google Projects [7]. An SVN client is required for downloading the source code. The simulation does not require any installation; the only prerequisite is a validated installation of NS-2.

The root folder, Base-station-application-optimizer, contains the following important folders:

simulation – all the .tcl scripts that define the topology, traffic and configuration, and the main activation script

traceAnalyzers – all the trace analysis scripts together with their activation shell scripts

In order to run a simulation:

view simulation/configuration.tcl and check the configurable simulation parameters

run: ns main.tcl

If no arguments are passed, all the parameters are taken from configuration.tcl. Two files are created: out.nam and <trace_fileName>.out. <trace_fileName>.out should be used with the trace analyzers; out.nam can be replayed graphically by running: nam out.nam.


5. Performance Evaluation and Results

We performed intensive simulations to evaluate the potential improvement of the performance parameters when using the Base Station Application Optimizer. A description of the simulation and the essential results is presented in Section 5.1. A study of the impact of the cache hit rate on the performance parameters is given in Section 5.2. Finally, the impact of the size of the buffers at the base station on the performance parameters is elaborated in Section 5.3.

5.1. Simulation Description and Essential Results

The data reduction on the backhaul link is measured by comparing the sum of the bytes (uplink and downlink) transferred on the optimized S1 interface (between the base station and the gateway) with the sum of the bytes transferred on the standard S1 interface under the same traffic generation. Other performance parameters, such as average delay and packet loss, are measured using the corresponding statistics described in Chapter 4.

Traffic scenarios include 20, 50 and 100 UEs with different traffic QoS classes according to the application they use. The assumed hit rates for the web cache, video and file sharing are 20%, 40% and 40%, respectively. Note that we used a constant hit rate regardless of the number of users in the cell. Usually the hit rate increases as the number of users increases, so our performance estimate is conservative. The queue size between the base station and each UE is 10 packets; the oversubscription factor at the S1 interface is 1/3.

The results regarding the total data reduction are presented in Table 1 and Figure 7. When 20 concurrent users are connected to the base station, the application optimizer achieves a data reduction of 45%; with 50 users it achieves a 37% reduction, and with 100 users a 19% reduction. Figure 8 plots the total data reduction per application type for the 20 concurrent users scenario. An interesting observation is that the VoIP traffic, which obviously cannot be optimized, actually shows negative data reduction percentages (data reduction factor < 1) in the 100 concurrent users scenario. Since the queues are emptier and fewer packets are dropped, the CBR application can actually send more packets successfully. As can be seen, the data reduction factor is high, meaning the potential cost reduction is high.
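The headline percentages follow directly from the totals in Table 1. For the 20-user scenario, for example, the reduction factor and reduction percentage can be checked with a one-line computation (the two constants are the Table 1 totals):

```shell
# Sanity check of the 20-user totals from Table 1: reduction factor
# = traditional / optimized; data reduction % = 1 - optimized/traditional.
optimized=847242      # total KB on S1, optimized scenario (Table 1)
traditional=1526713   # total KB on S1, traditional scenario (Table 1)

awk -v o="$optimized" -v t="$traditional" 'BEGIN {
    printf "reduction factor: %.2f\n", t / o               # -> 1.80
    printf "data reduction:   %.0f%%\n", 100 * (1 - o / t) # -> 45%
}'
```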


Figure 7 - Total transferred KB with 20, 50 & 100 concurrent users.

[Bar chart: y-axis: total KB transferred at the S1 interface; x-axis: number of users in the cell (20, 50, 100); series: Optimized, Traditional.]

Users  Application  Optimized KB  Traditional KB  Reduction Factor

20 Web 152487 181265 1.19

Video 402263 803161 2.00

File Share 249302 499097 2.00

VoIP 43190 43190 1.00

Total 847242 1526713 1.80

50 Web 414002 519829 1.26

Video 1100899 1896456 1.72

File Share 742733 1241815 1.67

VoIP 129570 129566 1.00

Total 2387204 3787667 1.59

100 Web 727680 964349 1.33

Video 2176196 2505661 1.15

File Share 1490109 2029426 1.36

VoIP 302259 294793 0.98

Total 4696243 5794228 1.23

Table 1 - Data Reduction Factor


Figure 8 - Data reduction per application type

Further study of the data reduction over time is presented in Figure 9. As can be seen from this

figure, the data reduction factor is consistent over time.

Figure 9 - Data reduction over time

The results regarding packet drops are presented in Table 2. It can be seen that when the network is loaded, in the 100 concurrent users scenario, the number of packet drops is reduced significantly. As mentioned before, this allows the CBR (that is, VoIP) application to send more packets successfully. The results regarding the average packet delay are presented in Table 3. As expected, applications with a high hit rate gain lower delays.

[Figure 8 chart: y-axis: KB transferred at the S1 interface; x-axis: application type (Web, Video, File sharing, VoIP); series: Optimized, Traditional.]

[Figure 9 chart: y-axis: KBps transferred at the S1 interface; x-axis: time (seconds); series: TRD and OPT for 20, 50 and 100 UEs.]


Users  Application  Traditional #Drop  Traditional %Drop  Optimized #Drop  Optimized %Drop

20 Web 0 0.00% 0 0.00%

Video 0 0.00% 0 0.00%

File share 0 0.00% 0 0.00%

VoIP 19 0.05% 0 0.00%

Total 19 0.00% 0 0.00%

50 Web 114 0.07% 109 0.07%

Video 291 0.05% 188 0.02%

File share 249 0.06% 105 0.03%

VoIP 65 0.06% 62 0.06%

Total 719 0.06% 464 0.03%

100 Web 5954 1.96% 165 0.05%

Video 12333 1.57% 603 0.03%

File share 9528 1.50% 374 0.05%

VoIP 12503 4.96% 263 0.10%

Total 40318 2.04% 1405 0.04%

Table 2 - Packet Drop Statistics

Users Application Traditional Optimized Reduction %

20 Web 0.4623 0.3904 15.56%

Video 0.4621 0.1139 75.35%

File Share 0.4622 0.2360 48.93%

VoIP 0.4612 0.4609 0.06%

50 Web 0.4636 0.3688 20.45%

Video 0.4633 0.1440 68.92%

File Share 0.4634 0.2809 39.39%

VoIP 0.4628 0.4617 0.23%

100 Web 0.4713 0.3480 26.16%

Video 0.4710 0.1448 69.26%

File Share 0.4710 0.2832 39.87%

VoIP 0.4691 0.4651 0.86%

Table 3 - Packet Average Delay


5.2. The impact of the cache hit rate

To understand the impact of the cache hit rate on the performance parameters, we performed an additional set of simulations with application hit rates higher by 10% and lower by 10%. Table 4 presents the impact of the application hit rate on the data reduction factor. As expected, a higher hit rate usually results in a higher data reduction factor. However, the increase in the data reduction factor is more moderate than the increase in the hit rate.

The impact of the application hit rate on the packet drop statistics when the network is loaded, in the 100 concurrent users scenario, is presented in Table 6. The results are proportional to the increase/decrease in the hit rate. Interestingly, even a very low hit rate significantly reduces the packet drop events (by a factor of 12).

Finally, the impact of the application hit rate on the average packet delay in the 100 concurrent users scenario is demonstrated in Table 5. As expected, applications with a high hit rate gain lower delays.

Users  Application  Reduction Factor (Low HR / Med HR / High HR)

20 Web 1.21 1.19 1.53

Video 1.60 2.00 1.99

File Share 2.01 2.00 2.01

VoIP 1.00 1.00 1.00

Total 1.61 1.80 1.87

50 Web 1.27 1.26 1.27

Video 1.58 1.72 1.89

File Share 1.68 1.67 1.89

VoIP 1.00 1.00 1.00

Total 1.53 1.59 1.72

100 Web 1.19 1.33 1.27

Video 1.15 1.15 1.27

File Share 1.28 1.36 1.52

VoIP 0.98 0.98 0.98

Total 1.19 1.23 1.33

Table 4 - The Hit Rate Impact on Data Reduction Factor


Application  %Reduction (Low HR / Med HR / High HR)

Web 19.89% 26.16% 25.24%

Video 67.37% 69.26% 73.32%

File Share 36.55% 39.87% 46.57%

VoIP 0.40% 0.86% 0.98%

Table 5 - The Hit Rate Impact on Packet Average Delay

Application  %Drop (Traditional / Low HR / Med HR / High HR)

Web 1.96% 0.26% 0.05% 0.15%

Video 1.57% 0.12% 0.03% 0.05%

File Share 1.50% 0.20% 0.05% 0.12%

VoIP 4.96% 0.46% 0.10% 0.23%

Total 2.04% 0.17% 0.04% 0.09%

Table 6 - The Hit Rate Impact on Drop Packet Percentages


5.3. The impact of the buffer size at the base station

To understand the impact of the buffer size at the base station on the performance parameters, we performed an additional set of simulations with buffers larger by 25% and smaller by 25% (with the medium hit rates described in Section 5.1). We found that the size of the buffer at the base station has no impact on the data reduction factor (see Table 7). The impact of the buffer size on the packet drop statistics when the network is loaded, in the 100 concurrent users scenario, is presented in Table 9. The results are proportional to the increase/decrease in the buffer size.

Finally, the impact of the buffer size on the average packet delay in the 100 concurrent users scenario is demonstrated in Table 8. The results show that, under the medium hit rate, increasing/decreasing the buffer sizes has an impact on the delay similar to that of increasing/decreasing the application hit rate (Tables 5 and 8).

Users  Application  Reduction Factor (Small Buffer / Med Buffer / Large Buffer)

20 Web 1.53 1.19 1.53

Video 1.99 2.00 1.99

File Share 2.01 2.00 2.01

VoIP 1.00 1.00 1.00

Total 1.87 1.80 1.87

50 Web 1.27 1.26 1.27

Video 1.91 1.72 1.90

File Share 1.89 1.67 1.89

VoIP 1.00 1.00 1.00

Total 1.73 1.59 1.72

100 Web 1.26 1.33 1.28

Video 1.30 1.15 1.28

File Share 1.51 1.36 1.57

VoIP 0.98 0.98 0.98

Total 1.33 1.23 1.34

Table 7 - The Buffer Size Impact on Data Reduction Factor


Application  %Drop (Small Buffer TRD / Small Buffer OPT / Large Buffer TRD / Large Buffer OPT)

Web 2.17% 0.09% 0.11% 0.05%

Video 1.70% 0.05% 0.16% 0.05%

File Share 1.75% 0.12% 0.24% 0.09%

VoIP 4.09% 0.21% 0.17% 0.16%

Total 2.10% 0.08% 0.18% 0.07%

Table 9 - The Buffer Size Impact on Drop Packet Percentages

Application  %Reduction (Small Buffer / Med Buffer / Large Buffer)

Web 25.31% 26.16% 25.20%

Video 73.48% 69.26% 73.30%

File Share 46.57% 39.87% 46.55%

VoIP 1.08% 0.86% 1.00%

Table 8 - The Buffer Size Impact on Packet Average Delay


6. Conclusions and future work

In this project, a simulation of the Base Station Application Optimizer over an LTE network was implemented. Using the implemented simulator, various scenarios were run and analyzed, overall and per application type: data reduction, end to end delays, drop statistics and more. The simulator enables a researcher to measure the impact of many configurable parameters in different scenarios in an LTE network. The implementations of the network, the simulations and the trace analysis are open source, available at [7].

For a realistic mixture of applications we show that, under modest assumptions on the cache hit rates, the total transferred bytes are reduced by a factor of 1.2-1.8, the average packet delay is reduced significantly (by up to 75%) and the packet loss percentage drops from 2% to less than 0.05%. In additional simulations we studied the impact of the application hit rates and of the buffer sizes at the base station on the performance parameters. As expected, a higher cache hit rate led to a higher data reduction factor and to lower end to end delays. An interesting result was that even a low hit rate significantly reduced the packet drop events. The impact of the buffer size was proportional to the increase/decrease in the buffer size.

The simulation can be further extended to support user mobility. User mobility requires handling the handover process between different base stations. LTE handovers are not yet implemented in the NS-2 simulator. There are three types of handovers in LTE, owing to the existence of networks with different radio technologies. A Base Station Application Optimizer that supports user mobility must handle two deployment scenarios: full deployment of the optimizer at all base stations in the network, or a hierarchical structure when the optimizer is not installed at some base stations. When a user moves from one eNB to another during an optimization process, there are also several options for handling it: closing the optimization process, or transferring the optimized data to the destination eNodeB. In order to identify the specific data of the moving user, we need to be able to generate specific objects per application type. This is not supported by the current implementation of the Pareto application in NS-2, so another traffic generator would have to be added to NS-2.


7. References

1. Patrick Donegan, "Backhaul Strategies for Mobile Carriers", Heavy Reading, Vol. 4, No. 4, 2006.

2. Ronit Nossenson, "Base Station Application Optimizer", The International Conference on Data Communication Networking (DCNET 2010), Athens, Greece.

3. Harri Holma and Antti Toskala, LTE for UMTS – OFDMA and SC-FDMA Based Radio Access, John Wiley & Sons Ltd, United Kingdom, 2009.

4. Qin-long Qiu, Jian Chen, Ling-di Ping, Qi-fei Zhang and Xue-zeng Pan, "LTE/SAE Model and its Implementation in NS 2", Fifth International Conference on Mobile Ad-hoc and Sensor Networks, Fujian, China, 2009, pp. 299-303. LTE implementation: http://code.google.com/p/lte-model/

5. The Network Simulator (NS 2), http://isi.edu/nsnam/ns/

6. Teerawat Issariyakul and Ekram Hossain, Introduction to Network Simulator NS 2, Springer Science+Business Media, LLC, 2009.

7. Project source code and results: http://code.google.com/p/base-station-application-optimizer/

8. Allot Mobile Trends, Global Mobile Broadband Traffic Report, H1/2011.

9. GNU Awk, http://www.gnu.org/s/gawk/manual/gawk.html


8. Appendix A - NS-2 Installation on Ubuntu

1. Install Ubuntu 11.04.

If you are interested in running Windows and Ubuntu in dual boot, you can use the following installer: http://www.ubuntu.com/download/ubuntu/windows-installer

2. Download and extract ns-2.34 to a folder in your home directory.

An NS-2 release suitable for Ubuntu 11.04 and for these instructions can be downloaded from: http://sourceforge.net/projects/nsnam/files/ns-2/2.34/

3. Install the development files for X Windows plus the g++ compiler:

sudo apt-get install xorg-dev g++ xgraph

4. In a similar way, install the following packages: autoconf, automake, gcc-c++, gcc-4.4, g++-4.4, libX11-devel, xorg-x11-proto-devel, libXt-devel, libXmu-devel.

Please note that step 4 is recommended but not mandatory. The NS-2 installation will succeed without these packages, but with warnings. I have not tried running the validation of the installation without them.

5. Fix the error in the linking of otcl by editing line 6304 of otcl-1.13/configure so that it

reads

SHLIB_LD="gcc -shared"

instead of

SHLIB_LD="ld -shared"

6. Edit the file ns-2.34/tools/ranvar.cc and change the line 219 from

return GammaRandomVariable::GammaRandomVariable(1.0 + alpha_, beta_).value() *

pow (u, 1.0 / alpha_);

to

return GammaRandomVariable(1.0 + alpha_, beta_).value() * pow (u, 1.0 / alpha_);

7. Change the lines 183 and 185 in file ns-2.34/mobile/nakagami.cc to read

resultPower = ErlangRandomVariable(Pr/m, int_m).value();

and

resultPower = GammaRandomVariable(m, Pr/m).value();

8. Change line 270 in tcl8.4.18/unix/Makefile.in that reads

CC = @CC@

so that it appends the version parameter for version 4.4:

CC = @CC@ -V 4.4

Make sure it is a capital V.

9. Run ./install from ns-allinone-2.34 top folder

10. Run ./validate from ns-allinone-2.34 top folder
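The manual source edits in steps 5-8 can also be applied mechanically. The following is a convenience sketch, not part of the thesis procedure: it assumes GNU sed (for -i), a stock ns-allinone-2.34 tree, and that the original nakagami.cc lines use the explicitly qualified constructor form (ErlangRandomVariable::ErlangRandomVariable, GammaRandomVariable::GammaRandomVariable) that newer g++ versions reject.

```shell
# Sketch: apply the fixes from steps 5-8 with sed instead of a text
# editor. Pass the path of the extracted ns-allinone-2.34 folder.
patch_ns234() {
    root="$1"

    # Step 5: link otcl with gcc instead of ld
    sed -i 's/SHLIB_LD="ld -shared"/SHLIB_LD="gcc -shared"/' \
        "$root/otcl-1.13/configure"

    # Steps 6-7: drop the qualified constructor calls, e.g.
    # GammaRandomVariable::GammaRandomVariable(...) -> GammaRandomVariable(...)
    sed -i 's/GammaRandomVariable::GammaRandomVariable(/GammaRandomVariable(/' \
        "$root/ns-2.34/tools/ranvar.cc"
    sed -i -e 's/ErlangRandomVariable::ErlangRandomVariable(/ErlangRandomVariable(/' \
           -e 's/GammaRandomVariable::GammaRandomVariable(/GammaRandomVariable(/' \
        "$root/ns-2.34/mobile/nakagami.cc"

    # Step 8: append the version parameter (capital V) for gcc 4.4
    sed -i 's/^CC = @CC@$/CC = @CC@ -V 4.4/' \
        "$root/tcl8.4.18/unix/Makefile.in"
}
```

Run it once from the directory containing ns-allinone-2.34 (for example, patch_ns234 ns-allinone-2.34) before ./install.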


Abstract

Today, new technologies enable cellular operators to offer users advanced services such as file sharing, streaming and video-on-demand, at good quality and high speed, directly to sophisticated end devices over networks such as Long-Term Evolution (LTE). However, the infrastructure connecting the base stations that serve these advanced end devices to the core of the network (and ultimately to the content servers) is outdated and limited in rate and capacity. Replacing this infrastructure with a more advanced one, or moving to technologies such as microwave, is very expensive and complicated. As a result, cellular operators are looking for solutions that reduce the amount of data transferred between the base stations and the content servers and lessen the need for infrastructure upgrades.

The Base Station Application Optimizer is an application-level solution that can meet these requirements. It proposes replacing the traditional base station with an advanced one that is capable of analyzing and optimizing user requests and data at the application level. The optimization can include a significant reduction in the amount of data transferred between the base station and the server, lower delay, and fewer packets dropped due to congestion, thereby relieving the bottleneck that forms on the backhaul lines and providing a better level of service to end devices.

In this work we implemented the Base Station Application Optimizer over an LTE network using the NS-2 simulator. The implementation is open source, hosted at Google Projects. The project includes an implementation of an LTE network; generation of the network topology according to the number of end users; traffic generation for different applications such as VoIP, web browsing, file sharing and video streaming; and the running of simulations and analysis of their results. The optimization is modeled by a cache above the base station.

We ran many simulations and examined the performance of the Base Station Application Optimizer while varying the hit rate, the number of end users and the buffer sizes at the base stations. The traffic generation was based on a report by the Allot company that represents a realistic distribution of application usage in cellular networks. Our experiments show that the number of bytes transferred between the base station and the server (the S1 interface) was reduced by a factor of 1.2-1.8, the percentage of dropped packets fell from 2% to 0.05%, and the average delay was reduced by up to 75%. The simulation results show that the Base Station Application Optimizer has high potential for improving network performance and coping with the backhaul bottleneck problem.


Performance Evaluation of a Base Station Optimization Component

The Interdisciplinary Center, Herzliya
Efi Arazi School of Computer Science

Submitted as the final project report for the M.Sc. degree

by Nana Ginzbourg

The work was carried out under the supervision of Dr. Ronit Nossenson
Internal supervisor: Dr. Tami Tamir