Deliverable Horizon2020 EUJ-01-2016 723171 5G-MiEdge D3.4
Date : February 2019 Public Deliverable
5G-MiEdge Page 1
5G-MiEdge
Millimeter-wave Edge Cloud as an Enabler for 5G Ecosystem
EU Contract No. EUJ-01-2016-723171
Contractual date: M32
Actual date: M32
Authors: See list
Work package: D3.4 User/application centric orchestration of mmWave edge cloud
Security: Public
Nature: Report
Version: 1
Number of pages: 79
Abstract
This deliverable is the final report of Task 3.3. It reports on the latest activities of Work
Package 3 on the user/application centric orchestration to realize 5G liquid edge cloud.
In particular, the deliverable describes algorithms for jointly optimal allocation of radio,
and computation resources, data prefetching, load distribution, and resilient design
against the drawbacks of mmWave communications.
Keywords
Resource allocation, 5G mobile, Multi-Access edge computing, computation offloading,
data prefetching, load distribution, dynamic resource management
All rights reserved.
This document is the property of the 5G-MiEdge consortium members. No copy or distribution, in any form or by any means, is allowed without the prior written agreement of the owner of the property rights.
This document reflects only the authors’ view. The European Community is not liable for any use that may be made of the information contained herein.
Authors
Sapienza University of Rome: Sergio Barbarossa, Francesca Cuomo, Stefania Sardellitti, Mattia Merluzzi
CEA-LETI: Nicola di Pietro [email protected]
Tokyo Institute of Technology: Gia Khanh Tran, Kei Sakaguchi
Intel: Valerio Frascolla, Robert Zaus
Table of contents
Abbreviations and acronyms
Executive Summary
1 Introduction
2 Jointly optimal allocation of radio/computation/storage resources
2.1 Optimal assignment and resource allocation for computation offloading
2.1.1 Jointly optimal assignment and resource allocation in static scenarios
2.1.2 Jointly optimal resource allocation in dynamic scenarios with power consumption and average delay trade-off
2.2 Data prefetching algorithm
2.2.1 Overview
2.2.2 Performance indices
2.2.3 Numerical results
3 Load distribution and clustering via distributed pricing mechanisms
3.1 Load distribution
3.1.1 State of the art
3.1.2 Contribution
3.1.3 Scenario and problem description
3.1.4 Single user case
3.1.5 Joint allocation of computation load and radio resources: Multi-user case
3.1.6 Numerical results
3.2 Dynamic ON/OFF strategies
3.2.1 Data traffic demand forecast for load distribution and clustering
3.2.2 Optimization problem
3.2.3 Numerical analysis
4 Resilient design and detection of network criticalities
4.1 Robust design based on multi-link communications and block erasure coding against blocking
4.1.1 Overview of the contributions
4.1.2 Multi-link communications
4.1.3 Block-erasure-correcting codes for robust multi-link communications
4.2 Multi-route multiplexing on mmWave mesh backhauling against overloaded edge cloud
4.2.1 System architecture
4.2.2 Optimization problem
4.2.3 Numerical analysis
5 Relevance of the proposed algorithms with the project use cases
5.1.1 Omotenashi services
5.1.2 Moving hotspot
5.1.4 Outdoor dynamic crowd
5.1.5 Automated driving
6 Summary
7 References
Abbreviations and acronyms
Acronym Description
3GPP 3rd Generation Partnership Project
5G 5th (fifth) Generation
5G-MiEdge Millimeter-wave Edge Cloud as an Enabler for 5G Ecosystem
5QI 5G QoS Identifier
AF Application Function
AMF Access and Mobility management Function
ANDSF Access Network Discovery and Selection Function
AP Access Point
API Application Programming Interface
AS Application Server
BS Base Station
BSSID Basic Service Set Identification
CDN Content Delivery Network
C-Plane Control Plane
CPN Connectivity Provider Network
C-RAN Centralized RAN or Cloud RAN
C/U split Control/User-plane split
D2D Device-to-Device
DC Dual Connectivity
D-RAN Distributed RAN
DN Data Network
DP Data Plane
eMBB Enhanced Mobile Broadband
EPC Evolved Packet Core
ETSI European Telecommunications Standards Institute
GBR Guaranteed Bit Rate
GUI Graphic User Interface
HD High Definition
HetNet Heterogeneous Network
HomoNet Homogeneous Network
ICN Information-Centric Networks
IEEE The Institute of Electrical and Electronics Engineers
IoT Internet of Things
LADN Local Area Data Network
LCM Life Cycle Management
LoA Levels of Automation
M2M Machine-to-Machine
MAB Multi-armed Bandit
ME Mobile Edge or Multi-access Edge
ME app Mobile Edge application
MEC Mobile Edge Computing or Multi-access Edge Computing
MEH Mobile Edge Host
MEO Mobile Edge Orchestrator
MEP Mobile Edge Platform
MEPM Mobile Edge Platform Manager
MgNB Master gNodeB
MiEdge mmWave Edge cloud
mmWave Millimeter Wave
MSF MEC Service Function
N3IWF Non-3GPP Interwork Function
NEF Network Exposure Function
NFV Network Functions Virtualization
NR New Radio
NSSAI Network Slice Selection Assistance Information
OBU On-Board Unit
OSS Operations Support System
PCF Policy Control Function
PDU Protocol Data Unit
QFI QoS Flow Identifier
QoE Quality of Experience
QoS Quality of Service
RAT Radio Access Technology
RAN Radio Access Network
RL Reinforcement Learning
RNI Radio Network Information
RSS Received Signal Strength
RSU Road Side Unit
sBS Base Station for small cell
SCA Successive Convex Approximation
SDN Software-Defined Network
SgNB Secondary gNodeB
SINR Signal to Interference-plus-Noise Ratio
S-MEH Source ME host
SMF Session Management Function
TA Tracking Area
T-MEH Target ME host
UDM Unified Data Management
UE User Equipment
UOF User plane Optimization Function
UPF User Plane Function
U-Plane User Plane
uRLLC Ultra-Reliable & Low Latency Communications
V2V Vehicle-to-Vehicle
V2X Vehicle-to-Everything
VM Virtual Machine
WP Work Package
Executive Summary
The 5G-MiEdge project aims to merge Multi-Access edge computing and millimeter-wave (mmWave) communications to enable the 5G ecosystem. Work Package 3 (WP3) is one of the technical WPs and focuses on the design of the 5G liquid edge cloud for user/application centric orchestration. Among all the tasks of WP3, this deliverable reports the results related to Task 3.3: “User/application centric orchestration to realize 5G liquid edge cloud”. The objective is to develop new algorithms for resource allocation and orchestration. In particular, Task 3.3 is divided into three subtasks dealing with
- joint allocation of radio/computation/storage resources,
- load distribution and clustering,
- resilient design of mobile edge computing.
The deliverable is organized in 6 sections and describes algorithms for the following
objectives:
a. Resource allocation for computation offloading
b. Data prefetching
c. Computational load distribution among MEHs
d. Dynamic ON/OFF strategies
e. Robust design analysis of mobile edge computing over mmWave links
f. Multi-route multiplexing on mmWave mesh backhauling against overloaded
edge cloud.
1 Introduction
The goal of the EU-Japan funded project 5G-MiEdge (Millimeter-wave Edge Cloud as an Enabler for 5G Ecosystem) is to create a synergy between mmWave communications (Radio Access Network) and Multi-Access Edge Computing. In this holistic view,
radio, computation and storage resources have to be managed jointly, in order to
provide a good experience to the end users, especially in terms of latency and energy
efficiency. This deliverable is part of WP3, which runs from month 4 until month 32.
In particular, it is related to task 3.3 of the project, which focuses on the
user/application centric orchestration to realize 5G liquid edge cloud. More
specifically, this deliverable elaborates on the development of different novel
algorithms regarding the orchestration of the edge cloud resources, namely Radio
Access Points and Mobile Edge Hosts.
This deliverable is organized in six sections and describes algorithms for the following
objectives:
a. Resource allocation for computation offloading
b. Data prefetching
c. Computational load distribution among MEHs
d. Dynamic ON/OFF strategies
e. Robust design analysis of mobile edge computing over mmWave links
f. Multi-route multiplexing on mmWave mesh backhauling against overloaded
edge cloud
Section 2 presents new algorithms for the joint allocation of radio and computation resources for computation offloading, together with a novel data prefetching algorithm.
Section 3 presents the problem of load distribution and clustering, and describes dynamic ON/OFF strategies for improving the energy efficiency of the edge cloud.
In Section 4, we first present a block erasure channel coding analysis as a way to counteract the blocking events typical of mmWave links. Then we deal with a multi-route multiplexing strategy that avoids overloaded nodes in the edge cloud while reducing the energy consumption. Each section corroborates its algorithms with numerical results showing their performance.
In Section 5, a mapping between the proposed algorithms and the 5G-MiEdge project use cases is proposed.
Finally, in Section 6, we draw the conclusions of the deliverable.
2 Jointly optimal allocation of radio/computation/storage resources
In this section we present some recent results on the optimal joint allocation of radio, computation and storage resources. First, computation offloading strategies are presented in a static scenario, with a novel assignment algorithm to associate user equipment (UE) with millimeter-wave (mmWave) access points (APs) and mobile edge hosts (MEHs). Then, the results are extended to a dynamic case, devising an algorithm based on stochastic optimization. Finally, a novel data prefetching algorithm is presented.
2.1 Optimal assignment and resource allocation for computation offloading
We now describe resource allocation strategies for computation offloading. Computation offloading is one of the services enabled by MEC, as described in detail in [MEC002]: it allows resource-poor devices to run sophisticated applications by transferring the computation from UEs to MEHs. Consider the edge cloud scenario depicted in Fig. 2.1, showing a set of UEs, APs and MEHs. We call ‘MiEdge resources’ two main sets of items, i.e. communication resources (transmit power and data rate from UEs to mmWave APs) and computation resources on MEHs. The MEHs operate in multi-tasking mode, running a set of virtual machines (VMs) that serve the applications offloaded from the UEs. In the following, the computation resources are measured as the percentage of CPU cycles dedicated by an MEH to a specific UE. In this subsection, we describe the optimal assignment of those resources to APs and MEHs, in terms of UE power consumption, and we devise low-complexity algorithms for real-time applications, in static and dynamic scenarios. We start in Section 2.1.1 with the static optimization, and then we generalize the approach to the dynamic case, which includes scheduling, in Section 2.1.2.
Fig. 2.1 - Edge cloud scenario
2.1.1 Jointly optimal assignment and resource allocation in static scenarios
In this section, we present a novel algorithm for joint assignment of UEs to mmWave
APs and MEHs to run a certain application, together with a joint optimization of radio
and computation resources with the aim of minimizing the UE power consumption
under latency constraints. We first present a brief state of the art and then our novel
approach. These results are mainly based on [Sar18].
State of the art
Several works investigated computation offloading optimization strategies in Multi-
Access Edge Computing (MEC) systems in the multi-user case [Sar15], [You17],
[Zhao17], [Chen16]. In [Sar15], a joint optimization of radio and computation
resources is investigated, in a multi-user MIMO scenario, taking into account inter-
cell interference. In [You17], the authors aim to minimize the overall energy
consumption at the UE side, in the case of TDMA and OFDMA systems, while the authors of [Zhao17] propose a joint optimization of the offloading decision and of the allocation of computation and communication resources. In [Chen16], the MEC computation offloading decision is formulated as a computation offloading game. Only a few works focus on the association of users to APs and MEC servers. In [Sar14], we propose a sub-optimal association strategy minimizing the UE energy consumption, taking into account radio and computation parameters jointly. The server selection problem is studied in [Zhao15] for a multiuser system, to decide whether to offload computation either to the edge server or to the central cloud. In [Ge12], the server selection over multiple MEC servers is formulated as a congestion game. Another approach, tailored to Cloud Radio Access Networks (C-RAN), is presented in [Li17], based on matching theory.
Contributions
We consider the mmWave edge cloud scenario, composed of multiple APs and
multiple MEHs concurring to serve multiple UEs, as depicted in Fig. 2.1. The
association of a UE to a pair of AP and MEC server depends not only on radio channel
parameters, but also on the availability of computational resources at the MEC server
and the state of the backhaul network. A UE can get radio access from a certain AP,
but its application can run on a MEC server located elsewhere, exploiting wired or
wireless backhaul. We formulate the offloading problem as the jointly optimal
association between UEs, APs and MEHs, and allocation of mobile radio and
computational resources. To solve the resulting mixed-binary problem (as described in
2.1.1.4) with affordable complexity, we propose two alternative sub-optimal strategies:
i) a method based on successive convex approximation (SCA) techniques, as
developed in [Scu17], which extends our previous approach [Sar14] by incorporating
the penalty method recently proposed in [Zhang17];
ii) a method based on matching theory [Roth92], extending the approach of [Saad14]
to deal with computation offloading.
Scenario description and notation
Let us consider a mmWave based cloud access network where multiple users may get
radio access through multiple APs and multiple MEHs. In particular, we consider a
system composed of 𝑁𝑏 mmWave APs, 𝑁𝑐 MEHs and 𝐾 mobile users. Denote by ℐ ≜ {1, … , 𝐾} the set of users asking for computation offloading of their applications to a set of MEHs. From the offloading point of view, we simplify the classification of applications by assuming that each of them is characterized through the following parameters: i) the number of bits 𝑏𝑘 to be transmitted from the mobile user to the MEH to transfer the program execution; ii) the number of CPU cycles 𝜔𝑘 needed to run the application. We denote by 𝐿𝑘 the end-to-end latency requested by
the 𝑘-th UE to run its application. In case of offloading, the overall latency experienced
by the 𝑘-th UE for accessing the network through the AP 𝑛 when served by MEH 𝑚,
is given by

𝐿𝑘𝑛𝑚 = 𝑇𝑚𝑘exe + 𝑇𝑘𝑛tx + 𝑇𝑘𝑛rx + 𝑇𝐵𝑛𝑚.

The first term is the server execution time

𝑇𝑚𝑘exe = 𝜔𝑘/𝑓𝑚𝑘,     (1)

where 𝜔𝑘 is the number of CPU cycles to be executed and 𝑓𝑚𝑘 is the number of CPU cycles/second allocated by the 𝑚-th MEH to the 𝑘-th UE. The second term 𝑇𝑘𝑛tx is the time spent to send the program state and input (encoded with 𝑏𝑘 bits) from the 𝑘-th UE to the 𝑛-th AP. The third term 𝑇𝑘𝑛rx is the time necessary for the server to send the result back to the 𝑘-th UE. Finally, the fourth term 𝑇𝐵𝑛𝑚 is the backhaul delay between AP 𝑛 and MEC server 𝑚, which is supposed to be constant regardless of the size of the application; this delay enables the transfer of the program execution from the UE to the MEC server. More specifically, the time 𝑇𝑘𝑛tx necessary for the 𝑘-th UE to transmit 𝑏𝑘 bits over a channel of bandwidth 𝐵 to the 𝑛-th AP is

𝑇𝑘𝑛tx = 𝑐𝑘/𝑟𝑘𝑛(𝑝𝑘𝑛),

where 𝑐𝑘 = 𝑏𝑘/𝐵 and 𝑟𝑘𝑛(𝑝𝑘𝑛) is the spectral efficiency, which, in the interference-free regime, assumes the form

𝑟𝑘𝑛(𝑝𝑘𝑛) = log₂(1 + 𝛼𝑘𝑛𝑝𝑘𝑛),
where p𝑘𝑛 is the transmit power of user 𝑘 and 𝛼𝑘𝑛 is an equivalent channel coefficient.
We assume mmWave communications for the radio access and, under Line Of Sight
(LOS) conditions, we use Friis formula to model the path loss. Each pair of UE and
AP is supposed to be equipped with, respectively, 𝑛𝑇 transmit antennas and 𝑛𝑅 receive
antennas. We also denote by 𝑑𝑘𝑛 the distance between UE 𝑘 and AP 𝑛. In a LOS condition with a single path and isotropic array elements, the channel matrix 𝑯𝑘𝑛 ∈ ℂ^(𝑛𝑅×𝑛𝑇) between UE 𝑘 and AP 𝑛 is rank one. In this case, the channel coefficient is 𝛼𝑘𝑛 = 𝜈𝑘𝑛² 𝜉𝑘𝑛/𝜎𝑛², where 𝜉𝑘𝑛 is the positive eigenvalue of the rank-one matrix 𝑯𝑘𝑛𝑯𝑘𝑛^𝐻, 𝜎𝑛² is the noise variance, and the coefficient 𝜈𝑘𝑛 incorporates the path loss. Within this
edge-cloud scenario, the association of a UE to a pair of AP and MEH depends not
only on the radio channel parameters, but also on the computation resources
availability of the MEHs. Therefore, by extending our previous approach in [Sar14],
in the next section we propose an optimization strategy to jointly find the optimal
computation and communication resources allocation and the optimal association
between UEs, APs and MEHs.
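To make the model concrete, the latency terms above and the minimum transmit power implied by the latency budget can be sketched in a few lines of code. All function names and numerical values below are illustrative assumptions, not project parameters; as in the text, the downlink term 𝑇𝑘𝑛rx is absorbed into the budget 𝐿𝑘.

```python
import math

def r_kn(p, alpha):
    """Interference-free spectral efficiency r_kn(p) = log2(1 + alpha * p)."""
    return math.log2(1.0 + alpha * p)

def offload_latency(b_k, B, omega_k, f_mk, p, alpha, T_B):
    """Uplink time + server execution time + backhaul delay (T_rx absorbed in L_k)."""
    c_k = b_k / B                       # seconds per unit of spectral efficiency
    return c_k / r_kn(p, alpha) + omega_k / f_mk + T_B

def min_tx_power(b_k, B, omega_k, f_mk, alpha, T_B, L_k):
    """Invert the latency constraint at equality: smallest p meeting the budget L_k."""
    c_k = b_k / B
    r_min = c_k / (L_k - T_B - omega_k / f_mk)  # minimum spectral efficiency
    return (2.0 ** r_min - 1.0) / alpha

# Example with assumed values: 1 Mbit over 1 GHz of bandwidth, 1e9 CPU cycles
# served at 3 GHz, 1 ms backhaul delay, latency budget of 400 ms.
p_min = min_tx_power(b_k=1e6, B=1e9, omega_k=1e9, f_mk=3e9,
                     alpha=1e3, T_B=1e-3, L_k=0.4)
L = offload_latency(b_k=1e6, B=1e9, omega_k=1e9, f_mk=3e9,
                    p=p_min, alpha=1e3, T_B=1e-3)
```

Plugging the inverted power back into the latency expression returns exactly the budget 𝐿𝑘, which is a quick consistency check of the model.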
Algorithm development
Our goal is now to devise an optimal strategy to assign each UE to an AP and to a
MEH, while jointly optimizing the radio and computation resources allocation. The
objective is to minimize the transmit power consumption of all users, under power
budget and latency constraints. The assignment is performed by properly selecting the
binary values 𝑎𝑘𝑛𝑚 for 𝑘 = 1, … , 𝐾 , 𝑛 = 1, … , 𝑁𝑏 , 𝑚 = 1, … , 𝑁𝑐 , where the
subscripts 𝑘, 𝑛, and 𝑚 denote, respectively, UE, AP, and MEH indexes. For the sake
of simplicity, we assume that each UE is served by a single AP and a single MEH.
Therefore, for each 𝑘, 𝑎𝑘𝑛𝑚 = 1 if the 𝑘-th UE accesses the network through AP 𝑛
and it is served by the 𝑚-th MEH, while 𝑎𝑘𝑛𝑚 = 0 otherwise. The objective function
we aim to minimize is the sum of the powers spent by all UEs:

𝑓(𝒑, 𝒂) = Σ𝑘 Σ𝑛 Σ𝑚 𝑎𝑘𝑛𝑚 𝑝𝑘𝑛,

where 𝒑 and 𝒂 collect, respectively, the transmit powers 𝑝𝑘𝑛 and the binary variables 𝑎𝑘𝑛𝑚. The resulting optimization problem is:

𝒫:  min(𝒑,𝒇,𝒂) 𝑓(𝒑, 𝒂)
  s.t.  i) 𝑔𝑘𝑛𝑚(𝑝𝑘𝑛, 𝑓𝑚𝑘, 𝑎𝑘𝑛𝑚) ≤ 0, ∀𝑘, 𝑛, 𝑚;
        ii) Σ𝑛 Σ𝑚 𝑎𝑘𝑛𝑚 𝑝𝑘𝑛 ≤ 𝑃𝑘, ∀𝑘;
        iii) ℎ𝑚(𝒇, 𝒂) ≜ Σ𝑘 Σ𝑛 𝑎𝑘𝑛𝑚 𝑓𝑚𝑘 − 𝐹𝑚 ≤ 0, ∀𝑚;
        iv) Σ𝑛 Σ𝑚 𝑎𝑘𝑛𝑚 = 1, 𝑎𝑘𝑛𝑚 ∈ {0,1}, ∀𝑘, 𝑛, 𝑚;

where we define the function

𝑔𝑘𝑛𝑚(𝑝𝑘𝑛, 𝑓𝑚𝑘, 𝑎𝑘𝑛𝑚) ≜ 𝑎𝑘𝑛𝑚 (𝑐𝑘/𝑟𝑘𝑛(𝑝𝑘𝑛) + 𝜔𝑘/𝑓𝑚𝑘 + 𝑇𝐵𝑛𝑚) − 𝐿𝑘.
The above constraints have the following meaning: i) the overall latency of the 𝑘-th UE must be lower than the maximum value 𝐿𝑘; ii) the total power spent by the 𝑘-th UE must be lower than a fixed total power budget 𝑃𝑘; iii) the sum of the computational rates 𝑓𝑚𝑘 assigned by each MEH cannot exceed the server computational capability 𝐹𝑚; iv) each UE should be served by exactly one AP-MEH pair. For simplicity, we have incorporated the term 𝑇𝑘𝑛rx in the latency limit 𝐿𝑘. The latency expression highlights the interplay between radio access and computational aspects; this relationship calls for a joint optimization of the radio resources, i.e. the transmit powers 𝒑 of the UEs, and of the computational rates 𝒇. Unfortunately, problem 𝒫 is a mixed-binary problem and, in general, NP-hard. To handle it with affordable complexity, in the following we propose two alternative suboptimal strategies.
SCA-based optimization strategy
In this section we propose a suboptimal optimization strategy to solve problem 𝒫 ,
combining our previous approach in [Sar14] with the SCA strategy proposed in
[Scu17], and incorporating an efficient penalty term, recently proposed in [Zhang17],
to relax the binary variables to be real while driving the solution towards the situation
where each UE is served by a single AP and a single MEH. More specifically, the
penalty method in [Zhang17] is based on the fact that, given the problem

min𝒙≥0 Σ𝑖=1,…,𝑁 (𝑥𝑖 + 𝜖)^𝑞  s.t. Σ𝑖=1,…,𝑁 𝑥𝑖 = 1,

with 𝑞 ∈ (0,1) and 𝜖 > 0, the optimal solution is binary, i.e. only one element is one and all the others are zero. Moreover, the minimum value of the objective function is (1 + 𝜖)^𝑞 + (𝑁 − 1)𝜖^𝑞. Therefore, we relax our binary variables 𝑎𝑘𝑛𝑚 to be real and belonging to the following convex set:

𝒮 ≜ {𝒂 : 𝑎𝑘𝑛𝑚 ∈ [0,1] ∀𝑘, 𝑛, 𝑚;  Σ𝑛 Σ𝑚 𝑎𝑘𝑛𝑚 = 1 ∀𝑘},

and we add a penalty to the objective function, so that our relaxed optimization problem becomes

𝒫𝜎:  min(𝒑,𝒇,𝒂) 𝑓(𝒑, 𝒂) + 𝜎𝑃𝜖(𝒂)  s.t. constraints i)-iii) of 𝒫 and 𝒂 ∈ 𝒮,

where 𝜎 > 0 is a penalty parameter and

𝑃𝜖(𝒂) ≜ Σ𝑘 Σ𝑛 Σ𝑚 (𝑎𝑘𝑛𝑚 + 𝜖)^𝑞

is the penalty function; 𝒫 becomes 𝒫𝜎 by introducing this penalty. However, even by relaxing the
binary variables 𝒂, problem 𝒫𝜎 is still non-convex, since the objective function and the constraints i) and ii) are non-convex. In what follows, we exploit the structure of problem 𝒫𝜎 and, building on some recent advances in SCA techniques [Scu17], we devise an efficient iterative penalty SCA algorithm (PSCA) converging to a local minimum. To solve the non-convex problem 𝒫𝜎 efficiently, we adopt an SCA-based algorithm in which the original problem is replaced by a sequence of strongly convex problems. To do this, we start by finding a suitable convex approximation of the non-convex objective function, which is the sum of the non-convex term 𝑓(𝒑, 𝒂) and the concave function 𝑃𝜖(𝒂). Let 𝒙 ≜ (𝒑, 𝒇, 𝒂) be the set of variables and 𝒳 the feasible set of problem 𝒫𝜎. Moreover, we denote by 𝒙𝜈 ≜ (𝒑𝜈, 𝒇𝜈, 𝒂𝜈) the set of variables at iteration 𝜈 of the SCA. Following [Scu17], the main idea is to approximate the original non-convex, non-separable term with a strongly convex function, say 𝑓̃𝑃𝜎(𝒙, 𝒙𝜈), that has the same first-order behaviour as the original objective function around the current iterate 𝒙𝜈 ∈ 𝒳. To find a convex approximant of the objective
function, observe that 𝑓(𝒑, 𝒂) has a bilinear structure, since it is the sum of the terms 𝑠𝑘𝑛𝑚(𝑝𝑘𝑛, 𝑎𝑘𝑛𝑚) ≜ 𝑝𝑘𝑛𝑎𝑘𝑛𝑚. Therefore, as suggested in [Scu17], 𝑠𝑘𝑛𝑚 can be written as a difference of convex (DC) functions:

𝑠𝑘𝑛𝑚 = (1/2)(𝑝𝑘𝑛 + 𝑎𝑘𝑛𝑚)² − (1/2)(𝑝𝑘𝑛² + 𝑎𝑘𝑛𝑚²).

A valid convex upper approximation of 𝑠𝑘𝑛𝑚, for any given (𝑝𝑘𝑛𝜈, 𝑎𝑘𝑛𝑚𝜈) ∈ ℝ², is then obtained by linearizing the concave part at the iterate:

𝑠̃𝑘𝑛𝑚(𝑝𝑘𝑛, 𝑎𝑘𝑛𝑚; 𝑝𝑘𝑛𝜈, 𝑎𝑘𝑛𝑚𝜈) = (1/2)(𝑝𝑘𝑛 + 𝑎𝑘𝑛𝑚)² − 𝑝𝑘𝑛𝜈𝑝𝑘𝑛 − 𝑎𝑘𝑛𝑚𝜈𝑎𝑘𝑛𝑚 + (1/2)((𝑝𝑘𝑛𝜈)² + (𝑎𝑘𝑛𝑚𝜈)²).

Finally, the concave function 𝑃𝜖(𝒂) can be approximated by its first-order approximation at the iterate 𝒂𝜈, i.e.,

𝑃̃𝜖(𝒂; 𝒂𝜈) = 𝑃𝜖(𝒂𝜈) + ∇𝑃𝜖(𝒂𝜈)ᵀ(𝒂 − 𝒂𝜈).

Then, a convex approximation of the objective can be defined as

𝑓̃𝑃𝜎(𝒙, 𝒙𝜈) = Σ𝑘 Σ𝑛 Σ𝑚 𝑠̃𝑘𝑛𝑚(𝑝𝑘𝑛, 𝑎𝑘𝑛𝑚; 𝑝𝑘𝑛𝜈, 𝑎𝑘𝑛𝑚𝜈) + 𝜎𝜈𝑃̃𝜖(𝒂; 𝒂𝜈) + (𝜏/2)‖𝒙 − 𝒙𝜈‖²,

where we added quadratic regularization terms to make 𝑓̃𝑃𝜎(𝒙, 𝒙𝜈) strongly convex with respect to 𝒙. Note that, in the above approximation, we use a monotonically increasing penalty sequence {𝜎𝜈}𝜈 to guarantee that the obtained solution 𝒂 is binary [Zhang17]. Now, we show how to reduce the non-convex constraint 𝑔𝑘𝑛𝑚(𝑝𝑘𝑛, 𝑓𝑚𝑘, 𝑎𝑘𝑛𝑚) to a convex form. To do so, we can observe that, at any feasible point (𝒑, 𝒇, 𝒂), 𝑟𝑘𝑛(𝑝𝑘𝑛) > 0, 𝑓𝑚𝑘 > 0 and 𝐿𝑘 − 𝑇𝐵𝑛𝑚 − 𝜔𝑘𝑎𝑘𝑛𝑚/𝑓𝑚𝑘 > 0, for all 𝑘, 𝑛, 𝑚. Then, the constraint 𝑔𝑘𝑛𝑚(𝑝𝑘𝑛, 𝑓𝑚𝑘, 𝑎𝑘𝑛𝑚) ≤ 0 can be rewritten as

𝑐𝑘𝑎𝑘𝑛𝑚 / (𝐿𝑘 − 𝑇𝐵𝑛𝑚 − 𝜔𝑘𝑎𝑘𝑛𝑚/𝑓𝑚𝑘) − 𝑟𝑘𝑛(𝑝𝑘𝑛) ≤ 0,

which is the sum of the convex term −𝑟𝑘𝑛(𝑝𝑘𝑛) and a convex function of (𝑎𝑘𝑛𝑚, 𝑓𝑚𝑘). Finally, the non-convex bilinear constraint ℎ𝑚(𝒇, 𝒂) ≤ 0 can be replaced by a convex approximation obtained by upper-bounding each bilinear term 𝑎𝑘𝑛𝑚𝑓𝑚𝑘 with the same DC-based linearization used above.
We can now introduce the proposed convex approximation of the nonconvex problem 𝒫𝜎: given the feasible point 𝒙𝜈 ∈ 𝒳, at iteration 𝜈 we replace the objective with 𝑓̃𝑃𝜎(𝒙, 𝒙𝜈) and the non-convex constraints with their convex surrogates derived above, and we denote by 𝒙̂(𝒙𝜈) ≜ (𝒑̂(𝒙𝜈), 𝒇̂(𝒙𝜈), 𝒂̂(𝒙𝜈)) the unique solution of the resulting strongly convex problem 𝒫𝜈. The proposed solution method consists in solving problem 𝒫𝜈 iteratively, starting from a feasible point 𝒙0. First, we find an optimal solution 𝒙̂ of 𝒫𝜈 by setting the penalty coefficient 𝜎 to zero. Then, taking this optimal solution as the initial point, we iteratively solve 𝒫𝜈 with an increasing penalty coefficient 𝜎𝜈. In Algorithm 1, we provide a formal description of the procedure.
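Two building blocks of the PSCA iterations can be sanity-checked numerically: (i) the concave penalty Σ(𝑎 + 𝜖)^𝑞 with 𝑞 ∈ (0,1) is minimized over the simplex at binary points, and (ii) the convex surrogate of the bilinear term 𝑝𝑎 upper-bounds it everywhere and is tight at the current iterate. A minimal sketch under toy assumptions (not the project code):

```python
import random

def penalty(x, q=0.5, eps=1e-3):
    """Concave penalty sum_i (x_i + eps)**q that promotes binary simplex points."""
    return sum((xi + eps) ** q for xi in x)

def s_tilde(p, a, p_nu, a_nu):
    """Convex upper bound of the bilinear term p*a: keep the convex part
    0.5*(p+a)**2 of the DC decomposition and linearize -0.5*(p**2 + a**2)
    at the iterate (p_nu, a_nu)."""
    return (0.5 * (p + a) ** 2 - p_nu * p - a_nu * a
            + 0.5 * (p_nu ** 2 + a_nu ** 2))

random.seed(0)
vertex = [1.0, 0.0, 0.0, 0.0]                    # binary point on the simplex
for _ in range(200):
    w = [random.random() for _ in range(4)]      # random interior simplex point
    interior = [wi / sum(w) for wi in w]
    assert penalty(vertex) <= penalty(interior)  # binary points are preferred

    p, a = random.uniform(0, 2), random.uniform(0, 1)
    p_nu, a_nu = random.uniform(0, 2), random.uniform(0, 1)
    assert s_tilde(p, a, p_nu, a_nu) >= p * a - 1e-12                   # upper bound
    assert abs(s_tilde(p_nu, a_nu, p_nu, a_nu) - p_nu * a_nu) < 1e-12   # tight at iterate
```

The gap of the surrogate is (1/2)(𝑝 − 𝑝𝜈)² + (1/2)(𝑎 − 𝑎𝜈)², which is why it vanishes exactly at the iterate, as the SCA framework requires.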
Matching theory based optimization strategy
In this section, we propose an alternative approach to overcome the combinatorial
complexity of the assignment problem by devising an optimization strategy based on
matching theory [Roth92]. Inspired by [Saad14], which uses matching theory for the uplink selection of APs, we generalize that approach to computation offloading. The assignment problem is formulated as a matching game in which UEs
and AP-MEH pairs rank one another using suitable preference functions associated to
the transmit power used by each UE, to implement computation offloading under
latency constraints. Matching theory is a powerful and simple tool to associate agents
of two different sets using suitable preference lists. A typical matching problem is the
college admission problem [Gale62], where students apply to colleges based on their
preference lists and are accepted based on colleges' preference lists. Each college
cannot accept more students than a certain number, defined as its quota 𝑞. The aim of
matching theory algorithms is to find a stable assignment. An assignment of applicants
to colleges is called unstable if there are two applicants 𝛼 and 𝛽 who are assigned to
colleges 𝐴 and 𝐵 , respectively, although 𝛽 prefers 𝐴 to 𝐵 and 𝐴 prefers 𝛽 to 𝛼 .
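For intuition, the student-proposing deferred acceptance scheme used to find such a stable assignment can be sketched as follows; the toy names and preferences are hypothetical, this is not the project's assignment code, and the sketch assumes each college ranks every student:

```python
def deferred_acceptance(student_prefs, college_prefs, quota):
    """Student-proposing deferred acceptance for the college admission problem.
    student_prefs: student -> ordered list of colleges (most preferred first).
    college_prefs: college -> ordered list of students (most preferred first).
    quota: college -> capacity. Returns a stable student -> college matching."""
    rank = {c: {s: i for i, s in enumerate(pl)} for c, pl in college_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}   # next college each student proposes to
    held = {c: [] for c in college_prefs}         # tentatively accepted students
    free = list(student_prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue                              # s has exhausted its list, stays unmatched
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda x: rank[c][x])    # college keeps its favourites
        if len(held[c]) > quota[c]:
            free.append(held[c].pop())            # least preferred student is rejected
    return {s: c for c, students in held.items() for s in students}

# Toy instance: three students, two colleges with one seat each.
match = deferred_acceptance(
    student_prefs={"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B", "A"]},
    college_prefs={"A": ["s2", "s1", "s3"], "B": ["s1", "s3", "s2"]},
    quota={"A": 1, "B": 1},
)
```

On this instance the procedure leaves s3 unassigned, and the result is stable: no student-college pair prefers each other to their assigned partners.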
Matching theory has been extensively used in economics, and recently introduced in
wireless networks [Han17]. In the context of C-RAN, the authors in [Li17] find an
assignment of UEs to Radio Remote Head (RRH), Base Band Unit (BBU) and
computing resources to minimize the refusal ratio, i.e. the portion of requests that
cannot meet their deadlines. The preference function is based on the expected latency
that a user would experience choosing a certain triple RRH, BBU, and computing
resource. In [Gale62], the Deferred Acceptance (DA) algorithm is presented and
proved to converge to a stable matching. In the DA algorithm, students apply to their
preferred college, which subsequently accept students based on their preference lists,
rejecting the least preferred ones. Applying matching theory, and in particular the DA
algorithm to problem 𝒫 is not straightforward due to inter-dependencies of utility
functions necessary to build preference lists. In fact, while UEs get accepted by a pair
AP-MEH, the convenience of being assigned to that pair changes due to the need for
resource sharing. As pointed out in [Saad14], in case of interdependent preferences,
the general college admission game becomes complex. In [Saad14], matching is used
only for the uplink selection of AP, and the 𝑅-factor, a parameter that incorporates both
the delay and the packet success rate, is used as utility function. To overcome the
problem of interdependent preferences, the authors propose to divide the problem into
two interdependent subgames: a matching game, where UEs build their preferences
based on the potential 𝑅-factor guarantees (supposing that each AP 𝑛 fills up its quota
𝑞𝑛), and a second subgame, where UEs can request to be transferred to another AP to
improve their 𝑅-factors. Generalizing this approach to our assignment problem, we
first need to define a utility function to build UEs' preference lists. In our joint
allocation of communication and computation resources, we incorporate both
communication and computation parameters in the preference function. For the sake
of simplicity, we assume perfect beamforming and interference-free channels. In
particular, every UE is supposed to be served with the same frequency band at the
same time. We focus instead on the delay caused by computation resource sharing. To
define the utility function, we consider constraint i) of problem 𝒫. Even though we do
not have any a priori information on allocated resources, we can get an approximate
estimation of the minimum transmit power that a user would experience choosing a
certain pair AP-MEH using the delay constraint. To do this, we compute an expected
minimum transmit power in case of a disjoint allocation. In particular, given a certain
allocation of computation resources, the minimum transmit power necessary to meet
the latency constraint can be easily found. As we do not know a priori the assignment
of UEs to each pair AP-MEH, initially we assume that each MEH 𝑚 serves all UEs, as long as it does not exceed its quota 𝑞𝑚, in order to consider the maximum computation delay. Thus, for the first assignment, we compute 𝑓𝑚𝑘, for UE 𝑘 and MEH 𝑚, with a proportional rule as follows:

𝑓𝑚𝑘 = 𝐹𝑚𝜔𝑘 / Σ𝑗 𝜔𝑗,

where the sum runs over the UEs provisionally served by MEH 𝑚. Replacing the above equation in the execution delay expression given in (1), the minimum rate needed to meet the latency constraint 𝐿𝑘 can be written as

𝑟𝑘𝑛min = 𝑐𝑘 / (𝐿𝑘 − 𝑇𝐵𝑛𝑚 − 𝜔𝑘/𝑓𝑚𝑘).

Inverting the above equation, the associated minimum transmit power is then

𝑝̃𝑘𝑛𝑚 = (2^(𝑟𝑘𝑛min) − 1)/𝛼𝑘𝑛.
We define the utility function for UE 𝑘 accessing AP 𝑛 and MEH 𝑚 as
Based on this utility function, each UE builds its preference list. Similarly, each AP builds its preference list based on the best SNR. For simplicity, we assume that all MEHs can accept an unlimited number of UEs. However, this assumption can lead to a solution very far from the optimum, since a single MEH has limited resources. Indeed,
a first stage for the assignment is not sufficient due to interdependency of the
preference functions of all UEs. For this reason, as in [Saad14], we perform a second
stage with a coalitional game to transfer UEs, given the new conditions, to a more
desirable coalition. A coalition 𝒞_nm is the set of all users associated to AP n and MEH m. Once UEs are assigned with the deferred acceptance (DA) algorithm, the new proportional disjoint allocation of computation resources can be computed as follows:
Computing the new approximate computation delays, we can compute the expected
minimum transmit powers towards all links and build the new preference lists. Now,
UEs can request to be transferred from one coalition to another one, based on the new
utility functions. In particular, as in [Saad14], UE 𝑘 requests to be transferred to
coalition 𝒞𝑛′𝑚′ from coalition 𝒞𝑛𝑚 if 𝑈𝑘𝑛′𝑚′ > 𝑈𝑘𝑛𝑚. If more UEs request to be
transferred to a certain coalition, only the UE with the highest SNR is considered for
the transfer. Each transfer is accepted if the following two conditions hold [Saad14]:
1. MEH 𝑚′ does not exceed its quota 𝑞𝑚′
2. The social welfare, represented by the sum of the utility functions of the two coalitions, is improved.
Formally, the second condition can be written as follows:
where
and 𝒞𝑛𝑚\{𝑘} is the set obtained by removing UE 𝑘 from 𝒞𝑛𝑚. This stage stops if there
are no more transfer requests or the social welfare is not improved by any transfer. In
[Saad14] it is proved that, given any initial assignment, this second game will converge
to a Nash-stable partition, where no user has any incentive to execute a transfer. Once
the assignment has been performed, for every MEH, we optimize the radio and
computation resources jointly as in 𝒫, but considering the assignment as given by the
matching algorithm. Note that the difference between PSCA and the matching algorithm is that PSCA performs the assignment and the joint allocation at the same time, while the matching algorithm first performs the assignment, and then the resources are jointly allocated.
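As an illustration, the first (deferred-acceptance) stage of the matching described above can be sketched as follows; the utilities, SNR rankings, and quotas below are illustrative placeholders (the actual utility expression of the deliverable and the transfer stage are omitted):

```python
# Sketch of the deferred-acceptance stage of the UE / (AP, MEH) assignment.
# UEs propose to AP-MEH pairs in decreasing order of utility; an overloaded
# pair keeps only its best-SNR proposers, up to its quota.

def deferred_acceptance(utility, snr, quota):
    """utility[k][p]: UE k's utility for pair p; snr[k][p]: AP-side ranking;
    quota[p]: max UEs per pair. Returns dict pair -> set of assigned UEs."""
    prefs = {k: sorted(range(len(u)), key=lambda p: -u[p])
             for k, u in utility.items()}
    next_choice = {k: 0 for k in utility}
    match = {p: set() for p in quota}
    free = set(utility)
    while free:
        k = free.pop()
        if next_choice[k] >= len(prefs[k]):
            continue  # UE k exhausted its list and stays unassigned
        p = prefs[k][next_choice[k]]
        next_choice[k] += 1
        match[p].add(k)
        if len(match[p]) > quota[p]:
            # pair p keeps its quota[p] best-SNR UEs, rejects the worst
            worst = min(match[p], key=lambda j: snr[j][p])
            match[p].discard(worst)
            free.add(worst)
    return match

# Illustrative instance: 3 UEs, 2 AP-MEH pairs, quota 2 per pair.
utility = {0: [3.0, 1.0], 1: [2.0, 1.5], 2: [2.5, 0.5]}
snr     = {0: [10.0, 4.0], 1: [8.0, 6.0], 2: [12.0, 3.0]}
match = deferred_acceptance(utility, snr, quota={0: 2, 1: 2})
```

All three UEs prefer pair 0; its quota keeps the two best-SNR proposers (UE 0 and UE 2), and UE 1 falls back to pair 1.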
Numerical results
To test the effectiveness of the proposed offloading strategy, in Fig. 2.1.1-1 we report
the optimal total transmit power consumption vs. the maximum latency 𝐿𝑘, assumed
equal for all users. To test the effectiveness of the proposed algorithms, we compare
their performance with the optimal results achieved with the exhaustive search. We
consider a network composed of K = 4 UEs and a number of APs equal to the number of MEHs, i.e. N_b = N_c = 2. The other parameters are set as follows: F_1 = 2.7 ⋅ 10^9, F_2 = 3 ⋅ 10^8, P_k = 1.35 ⋅ 10^-1, q = 0.7, where q is the order of the norm used in the penalty function. We can observe that both the PSCA and the matching game
algorithms provide results very close to the exhaustive search algorithm whose
complexity is exponential. Additionally, we consider as a benchmark the SNR-based association method, in both the cases where the radio and computational resources are jointly and disjointly optimized. It can be noted that both proposed approaches
yield considerable power savings with respect to SNR-based methods, taking
advantage of the optimal assignment of each user to a cloud through the most
convenient base station. It has to be remarked that the complexity of the matching-based algorithm grows polynomially with the number of players (UEs and AP-MEH pairs), although the final solution reached could be suboptimal, as the preference lists are built on approximate a priori knowledge. To further test the
effectiveness of the matching algorithm, in Fig. 2.1.1-2 we show the ratio 𝜌 between
the overall power consumptions achieved with two different association rules (SNR
and matching) and the global optimal solution (exhaustive search), averaged over the
channel realizations. It is interesting to note from Fig. 2.1.1-2 that ρ remains quite close to 1 for the proposed matching algorithm.
Fig. 2.1.1-1. – Overall UE transmit power vs. L
Fig. 2.1.1-2 – Average ratio 𝝆 vs. L
Use case specific system architecture and signaling
For the scenario studied in the previous sections we assume a system architecture as
shown in Fig. 2.1.1-3 below. This is based on the system architecture defined in D1.3,
for the outdoor dynamic crowd use case where the wireless backhaul meshed network
is a non-3GPP network. Note that compared to D1.3 we are adding an NL1' control
interface between UE and the MEHs located in the liquid RAN.
The NL1' interface can be used by the UE to inform the MEH (Mesh master) about the
UE's address and about SNR measurement results of neighbour APs, and subsequently
to request from the MEH (slave) the start of the MEC service. (In section 2.1.1.3, this
request is referred to as "send(ing) of the program state".) In the downlink direction it
can be used by the MEH (Mesh master) to inform the UE about the optimum AP and
the address of the MEH (slave) it shall use for accessing the network and receiving
MEC service.
Fig. 2.1.1-3 – Modified System Architecture of ODC (non-3GPP case)
Therefore, the 3GPP network provides the control plane for the UE to request the MEC service from the MSF and to trigger the corresponding configuration of the MEHs in the non-3GPP network, whereas the non-3GPP network provides the data plane to the UE and the control plane for the actual activation of the MEC service.
Fig. 2.1.1-4 shows the signaling procedure in more detail. Upon receipt of the request
for the MEC service, the MSF provides the MEH (Mesh master) with information
required for the local resource optimization, including e.g. the application for which
the MEC service type is requested. Furthermore, the MSF provides the UE with MEC
service info/assist info, including an address which the UE can use to send an Access
Request to the MEH (Mesh master) after performing association with a first AP. The
MEH (Mesh master) collects data, e.g. regarding the availability of computation
resources in the slave MEHs and the SNR associated to the signal sent by the UE,
when received by the associated APs.
When the MEH (Mesh master) performs local optimization, it determines the optimum
association between UE, AP and MEH (slave), configures the APs and MEHs
accordingly and informs the UE about the optimum AP the UE shall use.
The UE then performs re-association with the optimum AP and starts exchanging user
data with the application server (AS). The MEH (slave) is looped into the user plane
and waits for the request from the UE to activate MEC services.
Fig. 2.1.1-4 – Signaling for local optimization of ODC (non-3GPP case)
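Purely as an illustration, the message sequence of Fig. 2.1.1-4 can be encoded as an ordered flow; the entity and message names below are paraphrases of the figure, not standardized identifiers:

```python
# Illustrative encoding of the signaling flow of Fig. 2.1.1-4 as an ordered
# list of (sender, receiver, message) steps.
FLOW = [
    ("UE", "MSF", "MEC service request (application type)"),
    ("MSF", "MeshMasterMEH", "optimization config (application parameters)"),
    ("MSF", "UE", "MEC service info / assist info (master address)"),
    ("UE", "FirstAP", "association with a first AP"),
    ("UE", "MeshMasterMEH", "Access Request (address, neighbour-AP SNRs)"),
    ("MeshMasterMEH", "MeshMasterMEH", "local optimization (UE-AP-MEH)"),
    ("MeshMasterMEH", "UE", "optimum AP and slave MEH address"),
    ("UE", "OptimumAP", "re-association"),
    ("UE", "SlaveMEH", "activate MEC service (send program state)"),
]

def entities(flow):
    """Return the set of entities appearing in the flow."""
    return {e for sender, receiver, _ in flow for e in (sender, receiver)}
```

Such a trace makes explicit that the MSF is only involved at service setup, while the per-UE optimization loop runs entirely between the UE and the mesh master.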
Information to be exchanged
It is now useful to summarize the information that needs to be exchanged among the involved entities to enable the optimization of the communication and computation
resources when an application is offloaded to a MEH. First of all, let us clarify which
entities are involved in the overall process:
1. UEs
2. APs
3. MEHs
These entities must exchange UE/application related parameters and access-/network-
side related parameters as described below:
UE/application related parameters:
a. The amount of information 𝑛𝑏 (number of bits) necessary to transfer the
application execution from the UE to the MEH, i.e. to activate the MEC service;
b. The computational burden of the application, i.e. the number of CPU cycles
required for its execution, say 𝑤 (CPU cycles)
c. The SNR associated to the signal sent by the UE, when received by the AP;
d. The latency requirement for the application to be offloaded, say L (ms), measuring the overall delay experienced by the UE between the moment it launches an application remotely and the moment it receives the result back from the MEH.
Although these parameters are UE or application related, they do not need to be
signalled by the UE in this format. For example, the computational burden of
compressing a video depends on the processor running the task, specifically on the
support of special vector operations or dedicated hardware accelerators. This
information is generally not known to the UE, but it can be stored locally in the MEH
(Mesh master) for each application for which the MiEdge RAN is supporting MEC
services. The same table can also include the number of bits required for the transfer
of application execution from the UE to the MEH (slave) and the latency requirement.
So it is sufficient for the UE to signal the type of application for which MEC services are requested to the MSF only once, at the beginning. The MSF forwards this information to the MEH (Mesh master) when it configures the MEH (Mesh master) to perform resource optimization.
The only parameter that needs to be determined for each UE individually is c), the
SNR associated to the signal sent by the UE, when received by any of the APs. In
section 2.1.1.8 it is assumed that the SNR measurement results of neighbour APs made
by the UE and reported to the MEH (Mesh master) can be used to derive these
parameters.
Access and Network related parameters
If we consider a deployment consisting of sets of MEHs and APs, we assume that a
cluster head (MEH) has an overview of all the available computational and radio
resources of the MEHs and the mmWave APs belonging to its mesh. In that case, the
relevant parameters to be collected are:
a. Current computational load at each MEH (maximum clock frequency and
current computation load);
b. Backhaul delay between each mmWave AP and each MEH;
c. Quota of each MEH, i.e. the maximum number of users that each MEH can
accept.
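As an illustration, the per-application table kept at the MEH (Mesh master) and the network-side state listed above could be represented as simple records; all field names here are assumptions, chosen to mirror the parameters in the text:

```python
# Sketch of the context tables assumed above: a per-application profile
# (stored once at the Mesh master) and the per-MEH state it collects.
from dataclasses import dataclass

@dataclass
class AppProfile:
    n_b: int            # bits to transfer to activate the MEC service
    w: float            # CPU cycles required to execute the application
    latency_ms: float   # end-to-end latency requirement L

@dataclass
class MehState:
    f_max: float        # maximum clock frequency (CPU cycles/s)
    load: float         # current computation load (CPU cycles/s)
    quota: int          # maximum number of users the MEH can accept

    def available_cycles(self) -> float:
        """Computation capacity currently left on this MEH."""
        return max(0.0, self.f_max - self.load)

# Illustrative values only.
profile = AppProfile(n_b=10**6, w=5e8, latency_ms=50.0)
meh = MehState(f_max=5e9, load=2e9, quota=10)
```

The backhaul delay of item b) is a property of each AP-MEH link rather than of a single MEH, so it would naturally live in a separate per-link table.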
The above information has to be kept updated in a rather dynamic way, so as to maintain fresh knowledge of the relevant parameters. We assume that extracting network-relevant parameters, such as backhaul delays, occurs at a time scale longer than the time scale implicit in the optimization of resource allocation.
In general, in the static formulation of computation offloading, the amount of signaling to be exchanged to enable offloading is much smaller than the amount of data to be transmitted. In fact, the signaling only includes the few parameters mentioned above, which need to be exchanged only when an offloading request takes place. In Section 2.1.2.6, we will consider how much the signaling increases in the dynamic formulation.
2.1.2 Jointly optimal resource allocation in dynamic scenarios with power
consumption and average delay trade-off
In this section, we elaborate further on the previous topic, introducing a dynamic formulation of the problem and considering a scenario where applications continuously create data to be processed. For instance, let us consider a face recognition program that receives a video stream from a camera and processes it. Obviously, in this case, data is continuously generated and the scenario is highly dynamic. Moreover, as explained in this section, we do not assume any knowledge of the statistics of the data generation process or of the radio channels. In particular, data is stored in a local queue at the UE before being transmitted to a MEH through an AP. Similarly, data is stored in a computation queue at the MEH side before being processed. Here we introduce the concept of a total queue, composed of the queue at the UE side plus the queue at the MEH side. To limit the delay of the application, we introduce bounds on the
average total-queue length and on the out-of-service probability, defined as the
probability that the total-queue length exceeds a certain threshold. We now briefly
introduce the state of the art to then present our contribution. This work is mainly
presented in [Mer19] and [Mer19-2]. A computation offloading strategy with UE
assignment based on matching theory was already presented in [D2.4], as part of
[Mer19-3].
State of the art
The dynamic formulation is investigated in [Mao17], where the authors aim to
minimize the long-term average power consumption under constraints on the mean
rate stability of the computation queues with a single MEH. The contribution [Mao16] investigates the same problem, introducing energy harvesting devices. In [Mer19-3],
the authors extend the work [Mao17], to the case of multiple APs and MEHs, devising
an algorithm based on matching theory with transfers with a penalty function
discouraging frequent handovers. In [Yang18] the authors consider a fog-enabled D2D
scenario and propose a strategy to associate mobile devices and offload tasks among
each other. The authors of [Sun17] address the problem of user assignment, with the
aim of minimizing the average delay under energy constraints, while introducing a
penalty function that discourages frequent handovers, and using multi-armed bandit techniques to learn the optimal penalty parameter.
None of the aforementioned works addresses the problem of dynamic computation offloading while keeping the computation queues under a certain threshold in order to
limit the service delay, which is the approach proposed in this deliverable. To the best of our knowledge, only a few works deal with this problem. In fact, latency-constrained dynamic computation offloading was first addressed in
[Chen17], where the authors introduce a probabilistic constraint on the computation
queues, written as a bound on the probability of exceeding a certain value, handling it
with extreme value theory. Finally, the work in [Chen18] extends [Chen17] by
considering a scenario with multiple APs and MEHs, and introducing a UE assignment strategy based on matching theory.
Contribution
We propose a novel algorithm for dynamic computation offloading, aimed at
minimizing the long-term average power consumption under an average latency
constraint and a bound on the out-of-service probability, defined as the probability that the overall service time (including communication and computation times) exceeds a certain value. The algorithm defines a policy for radio resource allocation and
scheduling at the MEH, based on the current state of communication and computation
queues. We consider the scenario where a UE offloads all its computations to a MEH
and there is no concurrent (UE/MEH) computation, to avoid continuous back and forth
exchange of program status between the UE and the MEH. We impose constraints on
the sum of the local queues (data to be transmitted from the UEs) and the remote
queues at the MEH (computations to be performed). This sum represents a proper
measure of the overall service delay. Our approach differs from what is proposed in
[Chen17], [Chen18], where constraints on local and remote queues are imposed
separately, and not jointly as in our case. In our case, we provide a truly joint
optimization of radio and computation resources in a dynamic fashion and we are able
to satisfy a constraint on the overall out-of-service probability. The proposed method
requires the solution of a convex problem in each time slot, so that it can be
implemented through efficient numerical tools [Boyd04]. Numerical results assess the performance of our solution, illustrating how, despite its simplicity, it guarantees the out-of-service probability and average delay constraints.
Scenario and problem formulation
Let us consider a scenario where 𝐾 UEs wish to offload computations to a MEH,
connected to a mmWave AP via a high-capacity backhaul, as in the example of Fig. 2.1.2-1. Since we are dealing with a dynamic problem, time is divided into slots of equal duration Δ. In each time slot t, new computation requests are randomly generated at
the UE side; the radio channel, denoted by ℎ𝑘(𝑡), can also vary over time.
Fig. 2.1.2-1 - Scenario
Then, letting p_k(t) be the transmit power of UE k, the maximum data rate between UE k and the AP is given by:

R_k(t) = β_k(t) B log₂(1 + p_k(t)|h_k(t)|² / (N₀ β_k(t) B)),

where β_k(t) is the portion of the bandwidth allocated to UE k, B is the total available
bandwidth, and 𝑁0 is the noise power spectral density. We consider a local (at the UE)
queue of bits to be transmitted and a remote (at the MEH) computation queue for each
UE (cf. Fig. 2.1.2-1). The local data queue of UE k, say Q_k^l(t), takes on input the new data arrivals A_k(t), generated with random arrival times, and it is drained by transferring data to the MEH via the mmWave AP, thus evolving as:

Q_k^l(t+1) = max(0, Q_k^l(t) − Δ R_k(t)) + A_k(t).
Similarly, the remote computation queue, say Q_k^r(t), is fed by the data arriving from the UEs and drained by the computation power of the MEH, and it evolves as follows:

Q_k^r(t+1) = max(0, Q_k^r(t) − Δ f_k(t) J_k) + min(Q_k^l(t), Δ R_k(t)),
where 𝑓𝑘(𝑡) is the total computation power (in CPU cycles/s) assigned to UE 𝑘 during
time slot 𝑡; and 𝐽𝑘 denotes the number of bits per CPU cycle, a parameter that depends
on the specific application required by UE k. The overall delay is then associated to the sum of the time needed to send the data in the local data queue and the time to run all computation requests associated to the remote computation queue:

Q_k^tot(t) = Q_k^l(t) + Q_k^r(t).
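Under the queue dynamics described above (local queue drained by transmission, remote queue fed by what the local queue releases and drained by the CPU), one slot of the evolution can be sketched as follows; the numerical values are purely illustrative:

```python
# One-UE sketch of the coupled local/remote queue dynamics: the local queue
# drains at rate R (bits/s), the data it releases feeds the remote queue,
# which drains at f*J bits per second (f in CPU cycles/s, J in bits/cycle).
def step(Ql, Qr, R, f, J, A, delta):
    """Advance both queues by one slot of duration delta (seconds)."""
    served_local = min(Ql, delta * R)          # bits actually transmitted
    Ql_next = Ql - served_local + A            # new arrivals A (bits)
    Qr_next = max(0.0, Qr - delta * f * J) + served_local
    return Ql_next, Qr_next

# Illustrative run: the link clears ~5e5 bits/slot against 4e5 arriving bits,
# so the total queue settles to a small steady value.
Ql, Qr = 1e6, 0.0
for _ in range(100):
    Ql, Qr = step(Ql, Qr, R=5e7, f=5e9, J=0.1, A=4e5, delta=0.01)
Qtot = Ql + Qr  # the quantity constrained in problem T
```

The `min(Ql, delta * R)` term is what couples the two queues: the remote queue can never receive more bits than the local queue actually held.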
Our goal is then to find an optimal resource allocation strategy in order to minimize
the long-term average power consumption at each UE, under constraints on the
maximum average queue length (which can be directly related to the average delay by
Little's law [Lit11]) and the out of service probability, i.e. the probability that 𝑄𝑘tot(𝑡)
exceeds a certain value. The problem can be formulated as follows:
where Ψ(t) = [{p_k(t)}_k, {f_k(t)}_k, {β_k(t)}_k]; Q_k^avg and Q_k^max are the upper bounds on the average queue length and on the maximum queue length for the out-of-service probability, respectively; ε_k is the target out-of-service probability, while P_k and f_max are the
UE transmit power budget and the computational power of the MEH, respectively. The
constraints have the following meaning: (a) imposes that the average queue length (i.e.,
the average delay) of each UE does not exceed a certain value; (b) ensures that the
probability for the total queue 𝑄𝑘tot to exceed a maximum value does not exceed the
required out-of-service probability; (c) ensures that the transmit power of each UE is non-negative and does not exceed a maximum power budget; (d) ensures that the fraction of the bandwidth allocated to each user is non-negative and at most 1; (e) guarantees that the sum of the bandwidth allocated to all UEs does not exceed the available bandwidth; (f) forces the computation resources allocated to each UE to be
non-negative and not greater than the computation power of the MEH 𝑓max ; (g)
guarantees that the sum of the computation resources allocated to each UE is at most
equal to the computational power of the MEH.
Algorithm development
We tackle problem 𝒯 using tools from stochastic optimization [Nee10]. The starting
point, as in [Nee10], is to introduce virtual queues corresponding to constraints (a) and
(b) in 𝒯. The virtual queues quantify the degree of violation of the imposed constraints.
Denoting by Z_k(t) the virtual queue of UE k associated to the first constraint, we can write its evolution as follows:

Z_k(t+1) = max(0, Z_k(t) + Q_k^tot(t+1) − Q_k^avg).
To introduce the second virtual queue, we recast constraint (b) in the following equivalent form:

lim_{T→∞} (1/T) Σ_{t=1}^{T} E[𝟏{Q_k^tot(t) > Q_k^max}] ≤ ε_k,  ∀k,

where 𝟏{⋅} is the indicator function. Since the indicator function can be rewritten as

𝟏{Q_k^tot(t) > Q_k^max} = u{Q_k^tot(t) − Q_k^max},
where u{⋅} denotes the unit step function, the virtual queue Y_k(t) associated to the second constraint of 𝒯 evolves as

Y_k(t+1) = max(0, Y_k(t) + μ (u{Q_k^tot(t+1) − Q_k^max} − ε_k)),
where μ is a step-size used to speed up the convergence of the algorithm. Note that the use of the step size does not change the problem, since it merely amounts to multiplying both sides of the constraint by the factor μ. Having introduced the
virtual queues Z_k(t) and Y_k(t) for each UE k, the constraints (a) and (b) of 𝒯 can be substituted by mean-rate stability constraints of the virtual queues as follows:

lim_{T→∞} E[Z_k(T)]/T = 0,  lim_{T→∞} E[Y_k(T)]/T = 0,  ∀k.
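The two virtual-queue updates, as we read them from the definitions above, can be sketched as follows (Z_k accumulates violations of the average-queue constraint, Y_k accumulates indicator-minus-ε_k violations scaled by the step-size μ); the exact expressions in [Mer19-2] may differ in detail:

```python
# Per-slot virtual-queue updates for one UE, following the constraint
# recasting above. Q_tot is the current total queue length (bits).
def update_virtual_queues(Z, Y, Q_tot, Q_avg, Q_max, eps, mu):
    """Return the next (Z, Y) pair given the current slot's total queue."""
    Z_next = max(0.0, Z + Q_tot - Q_avg)            # constraint (a)
    indicator = 1.0 if Q_tot > Q_max else 0.0       # u{Q_tot - Q_max}
    Y_next = max(0.0, Y + mu * (indicator - eps))   # constraint (b)
    return Z_next, Y_next

# Illustrative slot: the total queue violates both bounds.
Z, Y = update_virtual_queues(Z=0.0, Y=0.0, Q_tot=6e6,
                             Q_avg=3e6, Q_max=5e6, eps=1e-2, mu=1000.0)
```

When both virtual queues are mean-rate stable, the time-average violations vanish, which is exactly what constraints (a) and (b) require.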
The algorithmic solution passes through the definition of the Lyapunov function

L(Θ(t)) = ½ Σ_k [Z_k²(t) + Y_k²(t)],

where Θ(t) = [Z(t), Y(t)] collects the virtual queues of all UEs. Then, the Lyapunov drift is defined as [Nee10]:

Δ(Θ(t)) = E[L(Θ(t+1)) − L(Θ(t)) | Θ(t)],
where the expectation is taken with respect to the channel and arrival rate realizations,
and it depends on the control policy. The Lyapunov drift defined above leads to the mean-rate stability of the virtual queues [i.e., (a) and (b) above], but it can also lead to an unnecessary power consumption. To balance the mean-rate stability and the long-term average power consumption, we introduce the drift-plus-penalty function [Nee10], which comprises the Lyapunov drift and a term including the objective function (the transmit power in this case):

Δ_p(Θ(t)) = Δ(Θ(t)) + V Σ_k E[p_k(t) | Θ(t)],
where 𝑉 is a control parameter used to balance the power consumption and the
Lyapunov drift. Using a stochastic optimization approach, our algorithm is based on
the concept of opportunistically minimizing an upper bound of the drift-plus-penalty
function in a per slot fashion. It can be shown that an upper bound of the drift-plus-
penalty function is given by [Mer19-2]:
where C is a positive constant. Since the step function appearing in the bound is non-convex, we exploit its closest convex upper bound, which is reminiscent of the hinge loss used in support vector machines [Sim18]. Then, using this upper bound, we obtain:
where 𝛥𝑅𝑘,max and 𝐴𝑘,max are upper bounds on the data rate and the data arrivals,
respectively. Thus, the algorithm proceeds by greedily minimizing instantaneous
values of the upper bound, thus obtaining the following dynamic control policy:
where 𝒵 is the set of feasible actions according to the constraints (c)-(g) of problem 𝒯.
It is easy to prove that problem 𝒪 is a convex optimization problem, albeit with a non-differentiable objective function. To handle the non-differentiability, we first perform a simple change of variable, in order to use the data rate R_k(t) as a variable instead of the transmit power p_k(t). In particular, inverting the rate expression, the transmit power can be written as

p_k(t) = (N₀ β_k(t) B / |h_k(t)|²) (2^{R_k(t)/(β_k(t)B)} − 1).
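The change of variable amounts to inverting the Shannon-type rate expression; a minimal sketch follows, with symbols as in the text (beta = bandwidth fraction, B = total bandwidth, N0 = noise power spectral density, h2 = |h_k(t)|²) and the formula itself being our reconstruction:

```python
# Transmit power needed to sustain a chosen rate R (bits/s) on a channel
# with gain h2, obtained by inverting R = beta*B*log2(1 + p*h2/(N0*beta*B)).

def power_for_rate(R, beta, B, N0, h2):
    """Return the transmit power (W) supporting rate R on this channel."""
    Bk = beta * B                        # bandwidth allocated to the UE (Hz)
    return (N0 * Bk / h2) * (2.0 ** (R / Bk) - 1.0)

# Illustrative values: zero rate costs zero power; positive rate costs
# exponentially more as R approaches and exceeds the allocated bandwidth.
p0 = power_for_rate(0.0, 0.5, 2e8, 4e-21, 1e-10)
p1 = power_for_rate(1e8, 0.5, 2e8, 4e-21, 1e-10)
```

The exponential dependence of p on R is the perspective-function term that keeps the recast problem convex in (R_k, β_k).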
Then, exploiting the equivalent epigraph form [Boyd04], it is possible to show that
problem 𝒪 can be equivalently recast as [Mer19-2]
where Ω(t) = [{p_k(t)}_k, {f_k(t)}_k, {β_k(t)}_k, {ξ_k(t)}_k, {Γ_k(t)}_k, {Φ_k(t)}_k], and δ_k = A_{k,max} + ΔR_{k,max}(t) − Q_k^max + 1. It is easy to see that problem 𝒫 is convex and differentiable, and can be solved using powerful numerical tools such as interior-point methods [Boyd04]. In fact, almost all functions in 𝒫 are linear, except for the exponential term, which is a perspective function and is known to be convex [Boyd04]. The overall dynamic procedure is described in Algorithm 2.
Numerical results
In this section, we show the performance of our algorithm through numerical results
obtained by simulation in the MATLAB environment, using the fmincon function from
the optimization toolbox. Since problem 𝒫 is convex, fmincon converges to the global
optimal solution very efficiently. We consider a mmWave link with a path loss as in
[SAK15], an available bandwidth of 200 MHz, a noise power spectral density of -174
dBm/Hz, and a mmWave AP at the center of a square of size 100 m. The single MEH
has a computational power f_max = 5 × 10^9 CPU cycles/s, and the parameter J_k is set to 10^-1 bits/CPU cycle for all k. The maximum transmit power of each user is P_k = 500 mW, and each terminal is endowed with a planar array of 4 antennas. At the
receive side, the AP has an array of 16 elements. In Fig. 2.1.2-2, we show the tradeoff
between the average user queue length and the average user transmit power, comparing
our algorithm with the algorithm proposed in [Mao17], which requires only mean rate
stability of the sum of the computation queues. In this evaluation we considered a
scenario with 15 UEs with an arrival rate uniformly distributed between 0 and A_{k,max} = 6 × 10^5 bits. The requirements are Q_k^avg = 3 × 10^6 bits, Q_k^max = 5 × 10^6 bits and ε_k = 10^-2. Simulations are run for 2000 slots with Δ = 10 ms, and are averaged over 100 channel realizations, given by different positions of the UEs. For the virtual queue Y_k(t), we used a step-size μ = 1000. The power/delay tradeoff is
explored by letting the parameter 𝑉 vary along the curves reported in Fig. 2.1.2-2 (as
𝑉 decreases, the average power increases). As we can notice from Fig. 2.1.2-2, the
proposed method obtains a considerable gain with respect to the strategy in [Mao17]
in terms of queue length/power tradeoff. In particular, with the proposed method, the
average queue length approaches the maximum average requirement as V increases, whereas the algorithm in [Mao17] incurs a much longer total user queue length for a given power. Note that, since we imposed constraints on the average delay and on
the maximum queue length, our strategy does not arbitrarily decrease the power
consumption as 𝑉 increases, but it reaches a minimum power such that these
constraints are satisfied. The only drawback of increasing V, and thus finding the minimum power value, is a longer convergence time. On the contrary, the algorithm
proposed in [Mao17] can arbitrarily decrease the transmit power consumption at the
cost of a larger average queue length.
Fig. 2.1.2-2 – Average user sum queue length vs. long-term average power
consumption
As a further example, in Fig. 2.1.2-3 we show the behavior of the reliability function
defined as 1 − CDF(𝑄𝑘tot(𝑡)), where CDF(⋅) is the cumulative distribution function.
We consider 3 users with different Q_k^avg, Q_k^max, and ε_k, running the simulation for 150000 slots, and considering V = 4 × 10^16. Each solid curve shows the probability
that 𝑄𝑘tot(𝑡) is greater than the value on the abscissa, while the dotted vertical lines
represent the maximum requirements 𝑄𝑘max, 𝑘 = 1,2,3. From Fig. 2.1.2-3, looking at
the intersections between the curves and the vertical lines, we can notice that all the
users meet the required constraint on the out-of-service probability. For instance, for the blue curve, the requirement is not to exceed Q_1^max more than once every 10 time slots on average (an outage probability of 10^-1). Indeed, the intersection between the blue curve and the green vertical line occurs exactly at 10^-1, i.e. the probability is 10^-1 as required. The same observation holds for all other UEs. Finally, in Fig. 2.1.2-4, we
show the instant value of the sum queue length for the 3 UEs with the same simulation
parameters of Fig. 2.1.2-3. In this figure we can notice the effectiveness of the
algorithm in terms of average queue length and, at the same time, the effect of the
bound on the out-of-service probability. Indeed, while the first UE requires a relatively loose out-of-service probability ε_1, so that its queue often exceeds the prescribed threshold Q_1^max, UE 2 and UE 3 present far fewer peaks exceeding their thresholds, since they require a much lower value of ε_k. At the same time, the bound on the average queue length is always met by all UEs.
Fig. 2.1.2-3 – Probability that the user sum queue length exceeds the value on
the abscissa
Fig. 2.1.2-4 – Instant value of the (sum) queue lengths vs. iteration index
Information to be exchanged for dynamic computation offloading
The dynamic formulation requires more exchange of information than the static case.
In the dynamic case, all information about the bits to be transmitted and the number of
computations to be run is embedded in the communication and computation queues.
There are a few parameters (mainly depending on the specific application being run) that need to be exchanged from the UE to the network only when a new offloading request arrives:
1. Delay constraints of each application, i.e. Q_k^avg and Q_k^max
2. The required out of service probability (𝜖𝑘)
3. The number of bits per CPU cycle (𝐽𝑘)
Then, if computation offloading takes place, within each time slot the parameters to be
exchanged between the UE and the edge cloud are the following:
1. The channel states (ℎ𝑘(𝑡))
2. The updated local queue lengths (𝑄𝑘𝑙 (𝑡))
All other information is available in the edge cloud and need not be exchanged over the radio interface. Note that the two parameters above have to be exchanged at the beginning of each time slot. The overall amount of signaling is in any case much smaller than the amount of data to be exchanged.
2.2 Data prefetching algorithm
2.2.1 Overview
We assume a scenario composed of heterogeneous networks with limited backhaul resources. In such an environment, the method used to allocate the limited resources greatly affects the performance of the network. Prefetching data to the MEHs in advance
will reduce access delay significantly, as will be seen in Section 2.2.3, which is
important for latency-sensitive applications.
Prefetching process
This section explains our approach to prefetching user data. It is important to select appropriate indices of small cell base station (BS) s, user u and traffic n. These are decided based on context information collected via the control plane (C-plane) of the macrocell BS. Fig. 2.2-1 shows the area surrounding a small cell BS s, where the dotted line denotes the backhauling toward the small cell BS, on which content data are prefetched in advance during a time window Tp before UE1 and UE2 actually arrive at their expected locations, marked by red dots.
Fig. 2.2-1 - The area surrounding a small cell s
The prefetching process steps are as follows:
1) Get user destination information via context information management
framework in the MEC service function (MSF proposed in D1.3).
2) Predict the traffic and the to-be-connected small cell BS s at the destination, defined as the BS that will maximize the UE's SINR among the BSs in the vicinity of the expected destination of the UE. We assume that the SINR values (communication area) can be predicted through measurements and stochastic analyses to build a power map beforehand.
3) Pick users within the time window Tp as prefetching targets as they approach their destinations.
4) Select user 𝑢 and traffic 𝑛 by dedicated prefetching algorithms explained later.
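Steps 2) and 3) above can be sketched as follows, assuming a precomputed SINR power map and a list of predicted demands; data shapes and names are illustrative:

```python
# Sketch of prefetching steps 2)-3): pick the small-cell BS maximizing the
# predicted SINR near a UE's expected destination, then keep only demands
# whose generation time falls within the prefetching window Tp.

def predicted_bs(dest, power_map):
    """power_map: {bs_id: {location: predicted_SINR_dB}}; return best BS."""
    return max(power_map,
               key=lambda bs: power_map[bs].get(dest, float("-inf")))

def prefetch_targets(demands, now, Tp):
    """demands: list of (user, traffic, t_un); keep those due within Tp."""
    return [(u, n) for (u, n, t_un) in demands if now < t_un <= now + Tp]

# Illustrative data: BS "s1" covers the expected destination best, and only
# the demand due at t=5.0 falls inside the window Tp=10.
power_map = {"s1": {"loc": 12.0}, "s2": {"loc": 7.0}}
best = predicted_bs("loc", power_map)
targets = prefetch_targets([("u1", "n1", 5.0), ("u2", "n2", 30.0)],
                           now=0.0, Tp=10.0)
```

Step 4), the actual selection among the remaining candidates, is handled by the prefetching algorithms described next.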
Prefetching algorithm
In step 4) of the prefetching process mentioned above, a combination of user u and
traffic n is selected by a specific prefetching algorithm. Fig. 2.2-2 shows the traffic
demand at the small cell s. The horizontal and vertical axes show time and traffic
demand, respectively. A user u demands data n whose size is L_{u,n} at time t_{u,n}. Now, we
consider to which traffic the backhaul resource C_B should be allocated at time t.
Fig. 2.2-2 - Traffic demand at the small cell s
In our simulation we compare two algorithms. The first one is round robin (RR) which
randomly selects user u and traffic n. The second one is our proposed weighted
proportional fairness (WPF) algorithm, which sets an objective function considering
user context information and selects user u and traffic n to maximize it. The objective
function Ou,n(t) is defined as
O_{u,n}(t) = [w_{u,n}(t)]^α · L_{u,n} / B_u(t),   w_{u,n}(t) = T_p / (t_{u,n} − t),
where B_u(t) is the backhaul resource allocated to user u until time t, and w_{u,n}(t) is a
weight coefficient taking into account the generation time t_{u,n} of traffic n of user u at
time t; w_{u,n}(t) is the ratio of T_p to the margin time, defined as the difference between t_{u,n}
and t. The exponent α is called the proportional fair (PF) coefficient and changes the
priority of the weight coefficient. The objective function is selected to balance the
trade-off between the prefetching priority of large-volume data and that of highly
urgent traffic, e.g. a UE really approaching its expected destination.
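As an illustration, the WPF selection rule of step 4) can be sketched as follows. The data structures and function names are hypothetical, not taken from the project simulator, and α = 1 is assumed by default:

```python
def wpf_objective(L_un, B_u, t_un, t_now, T_p, alpha=1.0):
    """O_{u,n}(t) = [w_{u,n}(t)]^alpha * L_{u,n} / B_u(t),
    with w_{u,n}(t) = T_p / (t_un - t_now)."""
    w = T_p / (t_un - t_now)  # urgency weight: grows as t_now approaches t_un
    return (w ** alpha) * L_un / B_u

def wpf_select(candidates, t_now, T_p, alpha=1.0):
    """Return the (user, traffic) key maximizing the WPF objective.
    `candidates` maps (u, n) -> (L_un, B_u, t_un)."""
    return max(candidates,
               key=lambda k: wpf_objective(candidates[k][0], candidates[k][1],
                                           candidates[k][2], t_now, T_p, alpha))
```

With α = 0 the rule reduces to plain proportional fairness on L/B; larger α gives more priority to traffic whose generation time is imminent.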
2.2.2 Performance indices
We define two indices to evaluate performance of the proposed prefetching algorithm
toward the MEHs.
System rate
System rate R is defined as the total rate of all macro and small cells as follows:

R = Σ_{j_M=1}^{J_M} Σ_{u∈ℳ_{j_M}} min( W_M C^M_{u,j}, L^rem_u / T_s ) + Σ_{s=1}^{N_S} Σ_{j_S=1}^{J_S} Σ_{u∈𝒮_{s,j_S}} min( W_S C^S_{u,s,j}, D^rem_{u,s} / T_s ),

where W_M and W_S are the available bandwidths at the macro cell and the small cells;
C^M_{u,j} and C^S_{u,s,j} are the link capacities for user u at the macro cell and at small
cell s, respectively; T_s is the timeslot width; L^rem_u is the instantaneous remaining
traffic demand of user u; N_S is the total number of small cell BSs; J_M and J_S are the
numbers of sectors at the macro cell BS and at each small cell BS; ℳ_{j_M} is the set of
users belonging to sector j_M of the macro cell BS, and 𝒮_{s,j_S} is the set of users
belonging to sector j_S of small cell BS s. D^rem_{u,s} is the data stored in storage for user
u at small cell s, which is expressed in detail as

D^rem_{u,s} = min( C_B T_{u,s}, L^rem_u ),
where T_{u,s} is the total timeslot allocated to user u at small cell s, decided by the
presented prefetching algorithm. Namely, without prefetching, the limited backhaul
restricts the small cell rate and the system rate decreases. If prefetching is applied,
instead, the mmWave high speed access is released from the backhaul bottleneck and
can operate at its full capability. Therefore, the system rate is expected to increase.
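To make the role of prefetching in the rate formula concrete, the per-user small-cell term can be sketched as below (illustrative names; the macro-cell term is analogous with L^rem_u in place of D^rem_{u,s}):

```python
def small_cell_user_rate(W_S, C_S, C_B, T_us, L_rem, T_s):
    """Per-user small-cell rate term: min(W_S * C_S, D_rem / T_s),
    with D_rem = min(C_B * T_us, L_rem) the data prefetched to the MEH."""
    D_rem = min(C_B * T_us, L_rem)   # more prefetching slots -> more stored data
    return min(W_S * C_S, D_rem / T_s)
```

With few prefetching timeslots T_us, the stored data D_rem caps the rate (the backhaul bottleneck); with enough prefetching, the mmWave access link term W_S·C_S becomes the binding one.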
Delay on access
Delay on access τ is defined as the gap between the time t_{u,n} at which a user demands
traffic and the time t^end_{u,n} at which all of the demanded traffic is delivered. The formula is

τ = t^end_{u,n} − t_{u,n}.

However, because a timeout is introduced in the traffic model, the delay cannot exceed
the timeout value.
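The clamped delay can be computed as in this trivial sketch, where `timeout` stands for the traffic-model timeout mentioned above:

```python
def access_delay(t_req, t_end, timeout):
    """tau = t_end - t_req, clamped to the traffic-model timeout."""
    return min(t_end - t_req, timeout)
```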
2.2.3 Numerical results
Numerical simulation is conducted using our developed simulator presented in D4.1.
Fig. 2.2-3 shows the system rate achieved by the different algorithms defined in the
previous section, for each backhaul capacity, under the condition that the number of
users, the storage limit and the time window are 200, infinity and 500 s, respectively.
In the figure, the red circles, green triangles and blue squares show the WPF
algorithm, RR and the case without prefetching, respectively. With zero or very small
backhaul capacity, traffic can hardly be offloaded via the small cell BSs, so the system
rate is equivalent to the macrocell rate alone, which is about 100 Mbps. With 10 Gbps
or more, the system rate even without prefetching achieves the maximum rate. This
means that the prefetching function is not needed when the backhaul is sufficient;
however, such capacity is unlikely given the currently low optical fiber penetration
rate worldwide. The most noteworthy point is at 1 Gbps backhaul: the system rate with
prefetching is much higher than without prefetching. The result proves that the
deterioration of the system rate can be mitigated thanks to the effect of prefetching and
storage. In addition, the system rate with the WPF algorithm is higher than that with
RR and achieves about 95% of the maximum rate achieved without prefetching at
10 Gbps backhaul. The results show the benefit of applying the proposed algorithm.
Fig. 2.2-3 - System rate of the different algorithms
Fig. 2.2-4 - Avg. access delay of the different algorithms
Fig. 2.2-4 shows the average access delay of the different algorithms, as defined in the
previous section, for each backhaul capacity. The macro cell alone (zero backhaul
capacity) cannot accommodate most of the large traffic demand expected in the next
10 years, and the delay becomes equal to the timeout mentioned above. In particular,
WPF with 1 Gbps backhaul capacity reduces the delay to about 33% of that without
prefetching.
3 Load distribution and clustering via distributed pricing mechanisms
This section includes contributions on load distribution and clustering algorithms. In
particular, the first part presents novel algorithms to distribute the computational load
among different MEHs, grouping them in small clusters/federation in order to meet
latency requirements of UEs with low power consumption. Then, we will describe an
algorithm for the dynamic ON/OFF of the mmWave edge cloud, to switch off APs
when necessary to minimize the system’s energy consumption.
3.1 Load distribution
This subsection introduces novel algorithms for the computational load distribution
among MEHs. Our goal is to develop optimal strategies to group the APs, endowed
with MEHs and connected among them through a mmWave backhaul network, into
clusters that can efficiently execute the computation tasks offloaded by UEs in a
parallel and distributed way. MEH federation within a cluster has to face many
challenging limitations such as radio and computational resource availability, delay
constraints, and power consumption. We then tackle the issue of joint computational
load distribution and communication resource allocation within the MEC clusters to
efficiently execute the computational tasks. For the single user case, we first devise
alternative clustering strategies aiming at jointly minimizing the serving time, the
cluster size and the transmit power consumption. Then we investigate the multi-user
offloading scenario by showing the considerable performance gain ensured by jointly
optimizing the computational and communication resources and the MEHs federation
in clusters.
3.1.1 State of the art
The selection of computing nodes that are federated for computation offloading or
computation caching can significantly influence not only the execution delay but also
the power consumption of the computing nodes. The impact of the MEC cluster size
(i.e., the number of small cells endowed with cloud functionalities performing
computing), of its topology and of the capacity of the backhaul link on both the
execution latency of the offloaded application and the power consumption for
computation offloading has been analyzed in [OUEIS14]. The paper shows that
increasing the number of computing nodes does not always shorten the execution
delay. Since the computational resources of MEC servers are limited, a strategy to
enhance the computational capabilities is to group AP-MEHs into computation
clusters. If the cloud resources of the AP-MEH serving a UE are not enough for
computing the offloaded task, the serving AP-MEH has the possibility of distributing
the computation load among neighboring AP-MEHs. Many offloading strategies and
methodologies focus on application partitioning for offloading decision purposes.
Graph-based models are used to partition the computation tasks [SMIT12],
[WANG04], [VERB13]. In [KHAl], [AZIM16] map-reduce type call graphs are used
to split a computation workload into mapping tasks. Some works [GARG11],
[NATH10], [VERMA08] proposed heuristic algorithms to solve the VM placement
problem. A further approach
is to formulate the offloading problem as a Markov Decision Process (MDP)
[LIANG12], [LIANG12-2], [DIVA14].
3.1.2 Contribution
To the best of our knowledge, only a few works dealt with the computation partitioning
problem in the multi-user case [YANG_CAO15], [YANG12], [YANG12-2], [JIA16],
[YAO17]. Our contribution is to propose strategies for intra-cluster joint
communication and resource management for computation offloading. Resource
management consists of computation load distribution, radio resource allocation, and
computational capacity assignment. In particular, we consider a scalable architecture
where computations can be performed either locally, at the UE if it is endowed with
sufficient computing capabilities, or in a cluster of AP-MEHs, depending on resource
availability, latency constraints and energy consumption. Enabling the formation of
federations or clusters of computing resources allows the computational load to be
distributed among several MEHs, which further reduces computing latency. We
assume that AP-MEH nodes communicate and exchange data through mmWave links
enabling high speed wireless backhauling among APs to guarantee short serving times.
3.1.3 Scenario and problem description
Let us consider the edge-cloud network illustrated in Fig. 3.1-1, composed of densely
deployed mmWave AP-MEH pairs. The nodes can communicate through point-to-
point wireless backhaul connections, as assumed in D1.3 of this project, and each of
them is able to provide both radio access and computation resources to a set of UEs.
Each UE requests the offloading of a computational task to its Serving AP-MEH node
(SAP-MEH), which can serve multiple UEs. We do not tackle the problem of user
association with the SAP-MEH, so UEs are assumed to be already connected to one
serving AP-MEH node to which they can send their computation requests.
Furthermore, we do not deal with
the offloading decision process at the UE side, and, therefore, we suppose that the UEs
have already taken the computation offloading decision. We assume, as in [KHAl],
[AZIM16], that the application to be offloaded is splittable into constituent subtasks to
be processed in parallel by using map-reduce type call graphs. The computation task
is characterized by a set of bits/instructions to be processed/computed under some
latency constraint dictated by the handled application. The goal of each SAP-MEH is
to solve the computation task request without violating the latency constraint.
Depending on the system state and its available resources, each SAP-MEH may decide
to either compute UE’s computational task locally (i.e. using its own computational
resources), or build a cluster of helper MEHs in order to distribute computation among
them. An AP-MEH node contributing in a computation cluster for a task coming from
another SAP-MEH is referred to as helper MEH. Each SAP-MEH has to set up a
computation cluster to accommodate the set of computation tasks coming from
different UEs.
Fig 3.1-1 – Multi-user mmWave edge cloud scenario
3.1.4 Single user case
In this section we focus on the single user scenario, where a UE offloads a computation
task to its serving SAP-MEH node. We consider a set 𝒩 ≜ {1, …, N} of N AP-MEHs,
each one endowed with a total computational capacity F_n [CPU cycles/sec], for
n = 1, …, N. The serving AP-MEH s receives a computation task for running W CPU
cycles. The maximum time within which the UE wishes to run the application is
denoted by 𝛥_app. The SAP-MEH may choose either to compute the request locally,
or to establish a computation cluster. The computation request, defined by the pair
(W, 𝛥_app), can be satisfied by only using local resources on the SAP-MEH if the
following latency constraint holds:

W / F_s ≤ 𝛥_app,

where the left-hand side represents the minimum computation time that can be
achieved at the serving node. In this case, the overall computational capacity F_s of the SAP-MEH
should be allocated for the computation of the request. On the other hand, if the above
equation does not hold, then the SAP-MEH tries to form a computation cluster in order
to distribute the computation load. Each of the AP-MEHs in the cluster is accorded a
fraction of the computation load. However, to guarantee service delivery to the mobile
user, resources should be adequately optimized. Therefore, the SAP-MEH has to: i)
choose which AP-MEH nodes to include in the computation cluster and, then,
distribute the computational load among these Helpers; ii) allocate computational
resources at each Helper; iii) manage communication resources for sending and
retrieving necessary data from SAP-MEHs to Helpers and vice versa. Nevertheless,
the SAP-MEH may optimize a clustering process to compute the offloaded task even
if the condition is verified, depending on the strategy adopted for computing each UE’s
request. In the sequel, we propose several strategies for AP-MEH computation load
partitioning, which are able to cover different types of applications and application
requirements.
Latency minimization
Our goal in this section is to find the optimal computation load distribution among the
AP-MEHs in the cluster in order to minimize the service latency. In general, the overall
service latency is measured from the time the request is received by the SAP-MEH
until all data are computed and received back at the SAP-MEH. Then,
𝛥 = max_{n∈𝒩} ( 𝛥^{sn}_{comm} + 𝛥^{sn}_{comp} ),

where N is the number of helper MEHs that can be part of the computation cluster;
𝛥^{sn}_{comm} = 𝛥^{sn}_{UL} + 𝛥^{sn}_{DL} is the sum of the time 𝛥^{sn}_{UL} needed for transferring the program
execution from the SAP-MEH s to Helper n, plus the time 𝛥^{sn}_{DL} necessary to send the
result back to SAP-MEH s. We assume that when the computation runs on the SAP-
MEH (i.e. n = s), there is no communication delay, so that 𝛥^{ss}_{comm} = 0. The delays
𝛥^{sn}_{DL} and 𝛥^{sn}_{UL} depend respectively on the numbers of bits N^n_{DL} and N^n_{UL} to be sent and
received at helper MEH n. These numbers are related to the computation load W_n
allocated to each helper MEH n through the equations N^n_{UL} = θ_UL W_n and N^n_{DL} = θ_DL W_n,
where θ_DL and θ_UL are coefficients specific to the application that is going to be
offloaded: a small value of these coefficients corresponds to applications that require
the transfer of few bits for a given computational load W_n. Clearly, offloading is more
effective for those applications for which θ_DL and θ_UL are small. In the sequel, we
suppose, for simplicity, that both the SAP-MEH and the helper MEH transmit with the
same power p_sn and that channel reciprocity holds. Additionally, we assume high data rate
mmWave backhaul connecting the AP-MEHs nodes, and, under Line Of Sight (LOS)
conditions, we use Friis formula to model the path loss [MUD09]. Then, the overall
transmission time can be written as:

𝛥^{sn}_{comm} = θ W_n / ( B_sn log(1 + a_sn p_sn) ),

where θ = (θ_UL + θ_DL)/(1 − PER); B_sn is the bandwidth allocated for transmitting
data between SAP-MEH s and helper MEH n; p_sn is the transmit power, which here
we assume equal to its maximum value p_max. The channel response is

a_sn = ν_sn |h_sn|² (d_0/d_sn)²,

where h_sn and d_sn are the channel coefficient and the distance between the SAP-MEH
and helper MEH n, respectively; d_0 is the far field reference distance; the coefficient
ν_sn is defined as

ν_sn = ζ λ² e^{−β d_sn} / ( (4π d_0)² Γ(BER) N_0 ),

where ζ incorporates some efficiency terms and the antenna gains, λ is the wavelength
associated to the carrier frequency, and β is the atmospheric absorption coefficient;
Γ(BER) represents the SNR margin for meeting a target BER, and N_0 the noise power.
Finally, 1/(1 − PER) denotes the average number of retransmissions for ensuring a
target packet error rate PER, by assuming independent errors on each packet. The
packet error rate depends on the bit error rate BER and the transmission packet size l_s
according to the equation

PER = 1 − (1 − BER)^{l_s}.
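Under the independent-bit-error assumption stated above, the packet error rate and the resulting average number of transmissions can be computed as in this sketch (PER = 1 − (1 − BER)^{l_s} is the standard form implied by the text):

```python
def packet_error_rate(ber, l_s):
    """PER = 1 - (1 - BER)^l_s under independent bit errors."""
    return 1.0 - (1.0 - ber) ** l_s

def avg_transmissions(per):
    """Expected number of transmissions, 1/(1 - PER), for geometric retries."""
    return 1.0 / (1.0 - per)
```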
Finally, the delay 𝛥^{sn}_{comp} in the latency term represents the time spent by helper
MEH n to execute W_n CPU cycles. This term depends on the load distribution in the
cluster, and on the computational capacity allocated at each node of the cluster.
Denoting by f_n the computational capacity allocated by AP-MEH n, the computation
time 𝛥^{sn}_{comp} is defined as

𝛥^{sn}_{comp} = W_n / f_n.
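Putting the pieces together, the per-helper latency term 𝛥^{sn} = 𝛥^{sn}_{comp} + 𝛥^{sn}_{comm} can be sketched as follows; a base-2 logarithm is assumed for the rate expression, and the names are ours:

```python
import math

def per_helper_latency(W_n, f_n, theta, B_sn, a_sn, p_sn, is_serving=False):
    """Delta^{sn} = W_n/f_n + theta*W_n / (B_sn * log2(1 + a_sn*p_sn));
    the communication term vanishes when n = s (the serving node)."""
    comp = W_n / f_n                       # computation time
    if is_serving:
        return comp
    comm = theta * W_n / (B_sn * math.log2(1.0 + a_sn * p_sn))
    return comp + comm
```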
The strategy we propose in this section aims at finding the optimal load distribution
among AP-MEHs involved in the computation in order to minimize the cluster latency.
This kind of strategy could be requested by the UE to increase performance without
imposing power consumption constraints or cluster size limitations. For this
reason, the system is forced to include all of the active and reachable AP-MEHs in
the computation cluster. This cluster latency depends on the computational load
through the computation time at the involved AP-MEHs, and on the channel quality
through the communication latency. For this strategy, we assume that the SAP-MEH
communicates with the helper MEHs in the cluster by using the same maximum
transmission power, i.e. 𝑝𝑠𝑛 = 𝑝𝑚𝑎𝑥. This implies that all transmission links are fully
used in order to maximize the effective throughput and decrease the total experienced
latency. Then, the optimization problem can be formulated as follows:

𝒫ℬ1:   min_{𝑾 ≥ 0}  max_{n∈𝒩} A_n W_n   s.t.   Σ_{n∈𝒩} W_n = W,

where we define A_n ≜ 1/f_n + θ/( B_sn log(1 + a_sn p_sn) ) if s ≠ n, and A_n ≜ 1/f_n
for n = s. The solution of 𝒫ℬ1 leads to a load distribution among all active
computation nodes, in a way that makes the experienced latency uniform at each node.
This is intuitive, since if two AP-MEHs do not experience the same latency, then we
can always adjust the load distribution in order to decrease the higher latency and
increase the lower one to have
a smaller maximal value. Problem 𝒫ℬ1 is a non-smooth problem. However, to find its
optimal solution, we can introduce the auxiliary slack (real positive) variable
t ≜ max_{n∈𝒩} A_n W_n, and then solve the following equivalent problem:

𝒫ℬ1̄:   min_{t, 𝑾 ≥ 0}  t   s.t.   A_n W_n ≤ t, ∀n ∈ 𝒩;   Σ_{n∈𝒩} W_n = W,

which admits a closed form solution, as stated in the following theorem:

Theorem 3.1-1. The convex problem 𝒫ℬ1̄ admits the optimal solution

W*_n = (W/A_n) / ( Σ_{m∈𝒩} 1/A_m ),   ∀n ∈ 𝒩.

The proof can be found in [OUEIS19]. Note that the proposed latency minimization
strategy may in some cases result in assigning very small computation loads to those
helpers that experience a very bad communication channel quality with the SAP-MEH,
and, then, take a long time for receiving and transmitting data. Even if energy
consumption is not our goal, this kind of situation where a lot of energy is spent for a
very small amount of computation could be avoided by finding the optimal energy-
latency tradeoff or, alternatively, by adding a pre-selection step that limits the number
of participating Helpers. However, if the main goal is to guarantee quality of the
service and to serve the UEs’ tasks regardless of the energy cost, 𝒫ℬ1 is able to deliver
the optimal solution in closed form.
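A minimal sketch of the resulting closed-form split, which equalizes the products A_n W_n across nodes consistently with the min-max objective (variable names are ours):

```python
def min_latency_load_split(W, A):
    """W_n* = W * (1/A_n) / sum_m (1/A_m): allocates more load to nodes
    with a smaller latency coefficient A_n, equalizing A_n * W_n."""
    inv = [1.0 / a for a in A]
    total = sum(inv)
    return [W * x / total for x in inv]
```

Nodes with a large A_n (slow CPU or poor channel) receive little load, which is exactly the behavior that motivates the sparsification step discussed next in the deliverable.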
Cluster sparsification
In this section we aim at developing a selection strategy to reduce the cluster size, and
consequently its energy consumption, by removing helper MEHs that would execute
very small computational tasks. The approach we follow can be seen as a sparsification
of the solution of problem 𝒫ℬ1, since we force to zero the computational
load of some helper MEHs. To reduce the cluster size we choose as cost function the
𝑙0 norm of the load vector 𝑾 = [W_1, …, W_N], which associates zero cost to every
unused AP-MEH and unit cost to those involved in the computation cluster. Then the
optimization problem can be cast as follows:

𝒫ℬ2:   min_{𝑾 ≥ 0}  ‖𝑾‖_0   s.t.   A_n W_n ≤ 𝛥_app, ∀n ∈ 𝒩;   Σ_{n∈𝒩} W_n = W,

where the first constraints guarantee that the maximum latency dictated by the
application is met by each MEH, while the other constraints ensure that the whole task
will be computed. Although problem 𝒫ℬ2 is non-convex due to the non-convexity of
the 𝑙0-norm, it admits a closed form solution, as shown in detail in [OUEIS19]. In
particular, the solution tends to include in the cluster the helper MEHs that experience
lower latencies and can therefore support larger computation tasks.
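The sparsification tendency described above can be illustrated with a greedy sketch: nodes are considered in increasing order of their latency coefficient A_n, each absorbing at most 𝛥_app/A_n load. This is only a heuristic reading of the l0 solution under the assumption that the total capacity suffices; the exact closed form is given in [OUEIS19]:

```python
def sparsify_cluster(W, Delta_app, A):
    """Fill nodes in increasing order of A_n (lowest latency per unit load
    first), each capped at Delta_app / A_n, until the whole task W fits."""
    order = sorted(range(len(A)), key=lambda n: A[n])
    loads = [0.0] * len(A)
    remaining = W
    for n in order:
        if remaining <= 0.0:
            break
        cap = Delta_app / A[n]          # max load node n can take in time
        loads[n] = min(cap, remaining)
        remaining -= loads[n]
    return loads  # zero entries = nodes excluded from the cluster
```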
Minimization of Cluster Power Consumption
The strategies proposed in the previous sections aim at minimizing the latency or the
cluster size without taking into account the cluster power consumption. Power
consumption is indeed an important issue in MEC networks, where the edge cloud
servers are typically femtocell base stations. Our main goal in this section is to exploit
the latency-power consumption trade-off in order to minimize the transmit power in
the cluster while keeping a good quality for the service. Communication power
consumption can be optimized depending on channel quality, computational capacity
offered by each MEH, and application latency constraints. In the following we jointly
find the optimal transmit powers p_sn and the fractions of computation load W_n
accorded to each helper MEH, which minimize the cluster power consumption. We
assume that the SAP-MEH is assigned the highest computation load allowed by its
computational capacity, at no communication cost (p_ss = 0). In the case where
computational resources at the SAP-MEH are sufficient for computing the whole
request without violating the latency constraint, i.e. if f_s 𝛥_app ≥ W, the whole load
is accorded to the SAP-MEH. Otherwise, if f_s 𝛥_app < W, the SAP-MEH load will be
equal to W_s = f_s 𝛥_app. Therefore, to allocate the remaining computational load
W − W_s, the following optimization problem is solved:

𝒫ℬ3:   min_{𝒑, 𝑾 ≥ 0}  Σ_{n∈𝒩, n≠s} p_sn   s.t.   W_n/f_n + θ W_n/( B_sn log(1 + a_sn p_sn) ) ≤ 𝛥_app, ∀n ≠ s;   Σ_{n≠s} W_n = W − W_s;   0 ≤ p_sn ≤ p_max, ∀n ≠ s.

In [OUEIS19] the optimization problem and its solution are described in detail. In this
case, the solution tends to assign high computation loads to Helpers with larger
computational capacities and better communication channels.
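For intuition on the latency-power trade-off exploited by this strategy, the minimal transmit power meeting the latency constraint for a fixed load W_n can be obtained by inverting the latency expression. This is a sketch under a base-2 rate model, with names of our choosing:

```python
def min_power_for_load(W_n, f_n, theta, B_sn, a_sn, Delta_app):
    """Smallest p_sn satisfying W_n/f_n + theta*W_n/(B*log2(1+a*p)) <= Delta,
    obtained by solving the constraint with equality."""
    slack = Delta_app - W_n / f_n              # time left for communication
    if slack <= 0.0:
        raise ValueError("load infeasible even with infinite power")
    rate_needed = theta * W_n / (B_sn * slack)  # required log2(1 + a*p)
    return (2.0 ** rate_needed - 1.0) / a_sn
```

Using the whole latency budget (equality) yields the lowest power, which is why the power-minimizing strategies achieve no latency gain in the numerical results below.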
Minimization of the maximum transmit power
The previous optimization strategy aims at minimizing the overall communication
power consumption. However, it could happen that helper MEHs with very high
computational capacity suffer a high power consumption. In this problem, we address
selfish minimization of the transmit power consumption under application latency
constraints. The optimization problem can be formulated as follows:

𝒫ℬ4:   min_{𝒑, 𝑾 ≥ 0}  max_{n∈𝒩, n≠s} p_sn   s.t.   W_n/f_n + θ W_n/( B_sn log(1 + a_sn p_sn) ) ≤ 𝛥_app, ∀n ≠ s;   Σ_{n∈𝒩} W_n = W;   0 ≤ p_sn ≤ p_max, ∀n ≠ s.

The solution of this problem will tend to accord to all helper MEHs an equal power
consumption. If any helper MEH has a greater power consumption than the others, the
load distribution can be modified to decrease the maximal power consumption value.
Note that this strategy will most likely increase the overall cluster power consumption
compared to 𝒫ℬ3.
3.1.5 Joint allocation of computation load and radio resources: Multi-user
Case
In this section, we consider the more general case of edge cloud networks where the
AP-MEHs serve multiple users. While almost all previous works assumed that the
cloud computational capacities are sufficient to meet the users' computing tasks, such
assumptions, made for the single user case, do not hold for multiple users in edge
clouds, where APs are femto base stations with limited power and computational
capacities. When there are many users, SAP-MEHs
may receive concurrent requests at the same time and this requires a joint, optimal
allocation of the computational and radio resources among all users in order to
guarantee the application requirements. In this section, we investigate the problem of
the multi-user computation load partitioning and radio/computational resources
allocation to minimize the cluster power consumption under application latency
constraints. The wireless mmWave links among AP-MEHs allow us to consider
interference free transmission. We denote by 𝒦 the set of 𝐾 active users and by 𝒮 the
set of SAP-MEHs. Each user k is served by a SAP-MEH s ∈ 𝒮. Each SAP-MEH s
serves the users in set 𝒦_s, so that 𝒦 = ⋃_{s=1}^{|𝒮|} 𝒦_s. Each user k ∈ 𝒦 sends a computation
request defined by (𝑊𝑘, 𝛥𝑘 ) to its SAP-MEH 𝑠 ∈ 𝒮, where 𝛥𝑘 denotes the maximum
latency for user k. We denote by w^k_n and f^k_n, respectively, the computation load and
the computation capacity allocated at AP-MEH n for computing user k's request, and
by p^k_sn the transmit power used to exchange the computational data of user k between
SAP-MEH s and Helper n. Denote by 𝒑 ≜ (p^k_sn)_{∀k, n≠s}, 𝒇 ≜ (f^k_n)_{∀k,n},
𝒘 ≜ (w^k_n)_{∀k,n} the transmit powers, the computational rates and the loads allocated
to each UE, respectively.
The optimization problem can be formulated as follows:

𝒫𝑐:   min_{𝒑, 𝒇, 𝒘 ≥ 0}  Σ_{k∈𝒦} Σ_{n≠s} p^k_sn   s.t.   w^k_n/f^k_n + θ_k w^k_n/( B_sn log(1 + a_sn p^k_sn) ) ≤ 𝛥_k, ∀k, n;   Σ_{n∈𝒩} w^k_n = W_k, ∀k;   Σ_{k∈𝒦} f^k_n ≤ F_n, ∀n;   0 ≤ p^k_sn ≤ p_max,

where θ_k is the communication overhead coefficient of user k's application. Problem
𝒫𝑐 is convex and can be easily solved via efficient numerical tools. From the solution
𝒘 of problem 𝒫𝑐 it is possible to understand how the clusters are formed. In particular,
only the MEHs n with w^k_n different from 0 are included in the cluster serving user k,
while the others do not take part in the computation. Then, for each user, the cluster
of MEHs performing the computation is determined by the solution of problem 𝒫𝑐.
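Reading the clusters off the solution 𝒘 is straightforward; a sketch with hypothetical names, where `w[k][n]` is the load of user k at MEH n:

```python
def clusters_from_loads(w):
    """Cluster of user k = set of MEHs n with nonzero allocated load w[k][n]."""
    return {k: [n for n, load in enumerate(row) if load > 0.0]
            for k, row in enumerate(w)}
```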
3.1.6 Numerical results
In this section we present some numerical results to assess the effectiveness of the
proposed optimization strategies. We first focus on the single user case to investigate
the minimum latency, minimum power and cluster size optimization strategies. Then,
we consider the multi-user case, showing that the joint optimization of the
computation load and of the radio and computational resources leads to significant
performance improvements with respect to disjoint optimization approaches.
Single user case
As simulation scenario in our numerical experiments, we considered a street canyon
of 20 m width and 100 m length, where the AP deployment on each side of the road
follows a homogeneous Poisson point process of intensity λ. The SAP-MEH is chosen
as the AP nearest to the centroid of the overall set of APs. Helper MEHs whose distance
from the SAP-MEH is less than 40 meters are classified as Near Helpers, whereas the
remaining ones are referred to as Far Helpers. The selected values of the simulation
parameters are: P_max = 0.1 W, θ = 90, Δ_app = 8 msec, W_k ∈ [10^4, 2 × 10^4],
F_n ∈ [10^6, 2 × 10^6]. As path loss model for the mmWave links, we used the
measurement-based outdoor propagation model introduced in [SAK15], [WEI14]. We
averaged the results over 100 random realizations of the APs' positions.
Resource partition among helpers
In Fig. 3.1-2 we compare the latency and power minimizing strategies, solving
problems 𝒫ℬ1 and 𝒫ℬ3, respectively, by showing how much load can be allocated to
the SAP-MEH and Helpers. We consider as performance metric the fraction of the
computation load η_w ≜ Σ_{n∈𝒩̄} W_n / W, where 𝒩̄ denotes, alternatively, the SAP-MEH
index s, or the set of active Near or Far Helpers, defined as ℋ_n and ℋ_f, respectively.
In Fig. 3.1-2 we plot the average coefficient η̄_w versus the average percentage 𝒳_near of
Near Helpers among all active Helpers, defined as 𝒳_near ≜ E[|ℋ_n|/|𝒩|]. It can be
observed that the solution of 𝒫ℬ1 tends to take better advantage of all active Helpers,
giving larger computation tasks to both Far and Near Helpers than 𝒫ℬ3 does.
Furthermore, 𝒫ℬ1 assigns a higher computation load to Near Helpers in order to
achieve a lower maximum latency, since they usually have better channel conditions.
The solution of 𝒫ℬ3, whose objective is to minimize the cluster power consumption,
assigns as much computation load as possible to the SAP-MEH, because its transmit
power consumption is zero.
Fig. 3.1-2– Load distribution on SAP-MEH, near and far helpers for latency and
power minimization algorithms
As further figure of merit, we consider the latency gain with respect to the maximum
latency 𝛥_app, defined as G_L ≜ (𝛥_app − 𝛥)/𝛥, where 𝛥 is the overall latency. In
Fig. 3.1-3 we plot the averaged latency gain Ḡ_L versus the coefficient 𝒳_near. It can be
noted that 𝒫ℬ1
has the largest latency gain, and the optimization strategies whose goal is minimizing
the power consumption, i.e. 𝒫ℬ3 and 𝒫ℬ4, do not achieve any latency gain. In fact,
these strategies take advantage of all the available time window in order to further
reduce power consumption by pushing the latency-power consumption trade-off to its
limits. Fig. 3.1-3 also shows that when we tend to sparsify the solution in order to
eliminate Helpers with very low computation tasks by solving 𝒫ℬ2, we lose in terms
of latency. However, this loss is traded with power consumption, as can be seen in
Fig. 3.1-4, where we plot the averaged transmit power consumption gain
G_p ≜ Σ_i (p_max − p^(i)) / (N p_max) versus the coefficient 𝒳_near.
Fig. 3.1-3 - Latency gain of the proposed algorithms
Fig. 3.1-4 - Power consumption gain for different strategies
We can notice that the power gain achieved by 𝒫ℬ2 is considerable for all Helper
distributions. Since in the case of 𝒫ℬ2 we assumed that the transmission power is
constant and equal to p_max, the gain in power consumption comes only from the
reduction of the cluster size. For 𝒫ℬ3 and 𝒫ℬ4, the transmission power can be
controlled, so the power consumption gain in these cases is a result of both
transmission power adaptation and cluster size reduction.
Multi-user case
The solution of problem 𝒫𝑐 jointly forms the computation clusters for all users. To
evaluate the performance of the joint clustering optimization, we compare it to the case
where all requests are handled by the SAP-MEH (No Clustering). Fig. 3.1-5 shows the
average power consumption per user in the computation clusters versus the maximum
number of users per SAP-MEH. We can observe that the average power consumption
increases with the number of users to be served, since each user is forced to use less
of the SAP-MEH computational capacity and to offload more computation to Helpers.
Furthermore, if we increase the minimum latency Δ_min, defined as Δ_min = min_k Δ_k,
then our clustering strategy is able to achieve very low power consumption. It can be
observed that the transmit power consumption of the No clustering strategy is zero
since it assigns the user computation load to the serving SAP-MEH.
Fig. 3.1-5 – Average user power consumption vs. maximum number of users per
SAP-MEH
However, this power gain is traded with a lower number of accommodated users as
shown in Fig. 3.1-6, where we plot the percentage of satisfied users versus the
maximum number of users per SAP-MEH. A user is satisfied if its computation request
result is delivered without violating the imposed latency constraint. In order to evaluate
this percentage, we try to solve the optimization problem with the total number of
active users in the network. In case of failure of reaching a solution, users that request
higher computation load are eliminated one by one until all considered users are
satisfied. The satisfaction ratio is evaluated for an increasing number of possible active
users per AP-MEH. In Fig.3.1-6 we show as by increasing Δ , the satisfaction ratio
improves as well, since a higher number of users can be served. Furthermore, the
proposed joint optimization strategy performs better than the No clustering method
when the latency constraint is more stringent by taking advantage of the computational
resources of the clustered helpers.
Fig. 3.1-6 – User satisfaction ratio vs. maximum number of users per SAP-MEH
3.2 Dynamic ON/OFF strategies
In this section, we consider the network architecture shown in Fig. 3.2-1, where the
mmWave edge cloud can be switched on and off to adapt to the forecasted data
traffic demand. The architecture is a heterogeneous network (HetNet) composed of
several mmWave small cells overlaid on top of a conventional macrocell deployment.
The macrocell BS collects context information such as user mobility and traffic in the
control plane (C-plane) and deals with small and real-time traffic in the user plane
(U-plane). The small cell BS deals with large traffic in the U-plane, which requires
mmWave high-speed access. The optimization presented in this section helps to find
a suitable clustering of users to be served by the macro cell’s MEH or by a small
cell’s MEH. Furthermore, small cells’ MEHs can be switched off when possible to
minimize the system’s energy consumption while respecting the UEs’ prescribed
latency constraints.
Fig. 3.2-1 - Illustration of 5G cellular network using mmWave edge cloud
3.2.1 Data traffic demand forecast for load distribution and clustering
User traffic measurements are used to forecast the statistics of the traffic demand of
the UEs in the actual environment. The measurement data were provided by an
operator as follows. The total hourly traffic, L, is measured in 100 m × 100 m areas:

L((x, y), T) in kbps, for x = 0, 100, 200, ..., 2000,
                          y = 0, 100, 200, ..., 3000,
                          T = 0, 1, ..., 23.     (3–1)

An example of the measured total traffic in one hour, from 10:00 to 10:59 (T = 10), is
shown in Fig. 3.2-2. A few high-load areas are recognizable in the whole area.
Fig. 3.2-2 - An example of the measured total traffic in one hour
By interpolation on the hourly total traffic measurements, the total traffic distribution
at each time instant is estimated as:

L((x, y), t) in kbps, for t ∈ [0:00, 23:59].     (3–2)

The number of UEs in a given area at the considered time instant, N_UE((x,y), t), is
estimated based on the instantaneous total traffic load, L((x,y), t), and the statistics of
the traffic load of each UE. The traffic load of each UE can be considered either fixed
(equal to the average traffic load) or dynamic (a stochastic process according to the
model below). Accordingly, the instantaneous number of UEs can be modeled either
statically or dynamically, as shown in Fig. 3.2-3.
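As an illustration, the interpolation of (3–2) and the static UE-count estimate can be sketched as follows. This is a minimal sketch, assuming linear interpolation (the text does not specify the interpolation method) and hypothetical load values:

```python
import numpy as np

def interpolate_traffic(hourly_load_kbps, t):
    """Estimate L((x,y),t) of Eq. (3-2) from the 24 hourly measurements
    L((x,y),T), T = 0..23, by linear interpolation, wrapping at midnight."""
    hours = np.arange(25)                                  # 0..24, 24 == next day's 0
    samples = np.append(hourly_load_kbps, hourly_load_kbps[0])
    return float(np.interp(t % 24, hours, samples))

def estimate_num_ues(load_kbps, avg_ue_load_kbps):
    """Static model: N_UE((x,y),t) = area load / average per-UE load."""
    return max(1, round(load_kbps / avg_ue_load_kbps))
```

For example, with hourly loads 0, 100, ..., 2300 kbps, the interpolated load at t = 10.5 lies halfway between the T = 10 and T = 11 measurements.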
Fig. 3.2-3 - The number of UEs in the 100 m × 100 m square area at
point (1100, 600)
The packet generation of each UE is considered to follow a Poisson process with an
average inter-packet time of 8 s, and the length of each packet follows a Gamma
distribution plus a constant bias. The probability density function (PDF) of the
Gamma distribution is defined as follows:

f(x) = x^{k−1} e^{−x/θ} / (Γ(k) θ^k),     (3–3)

where the parameters of the PDF of the packet lengths are listed in Table 3.2.1.

Table 3.2.1 - Parameters of the packet length distribution
Shape parameter, k: 0.2892
Scale parameter, θ: 2.012 × 10^5
Traffic bias: 4 kbps
Figure 3.2-3 shows the number of UEs in the 100 m × 100 m square area
located at point (1100, 600) for both the dynamic and static models.
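The packet-level model above can be simulated directly. The sketch below is illustrative and assumes packet lengths are drawn in bits and that the constant 4 kbps bias is added separately (the units are not fully specified in the text):

```python
import numpy as np

# Table 3.2.1: Gamma shape k and scale theta of the packet length distribution
K_SHAPE, THETA_SCALE = 0.2892, 2.012e5

def generate_packets(duration_s, mean_interval_s=8.0, seed=0):
    """Return (arrival_time, packet_length) pairs: Poisson arrivals
    (exponential inter-arrival times, mean 8 s) with Gamma-distributed lengths."""
    rng = np.random.default_rng(seed)
    packets, t = [], rng.exponential(mean_interval_s)
    while t < duration_s:
        packets.append((t, rng.gamma(K_SHAPE, THETA_SCALE)))
        t += rng.exponential(mean_interval_s)
    return packets
```

With these parameters the mean packet length is kθ ≈ 5.8 × 10^4, so each UE generates on average about kθ/8 ≈ 7.3 kbit/s before the bias is added.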
Traffic demand’s latency constraints
We present the delay tolerance of each traffic type in this section. Each UE’s traffic
demand (measured in bps), generated based on the fitting model explained above, is
associated with a specific delay tolerance. The value of the delay tolerance is taken
from the QCI (QoS Class Identifier) values defined in LTE. More specifically, based
on its volume, the generated traffic is classified into one of five categories (i.e. VoIP,
video call, gaming, streaming and other TCP-based applications), with the associated
delay tolerance values defined in [TS23.303]. One result of the generated traffic
classification, in ascending order of traffic volume, is visualized in Fig. 3.2-4, where
the x-axis represents the different traffic demands of different UEs and the y-axis shows the
value of the corresponding generated traffic. The color represents the corresponding
traffic category, based on the aforementioned classification process.
Fig. 3.2-4 - Traffic classification and corresponding delay tolerance.
3.2.2 Optimization problem
The optimization presented in this section finds a suitable clustering of users to be
served by the macro cell’s MEH or by a small cell’s MEH; furthermore, small cells’
MEHs can be switched off when possible to minimize the system’s energy
consumption within the UEs’ prescribed latency constraints. The objective is to
improve the system rate over the consumed energy of the system. As for the BS
deployment, a general hexagonal structure with three-sector macro cells is assumed,
where the macro BS is located at the center of the hexagonal structure and the
mmWave edge clouds are overlaid randomly on the macro cells. The optimal cell
association, to the MEH in the edge cloud or to that at the macro cell, helps to
improve spectral efficiency, while the optimal ON/OFF status of the MEHs helps to
improve energy efficiency. In order to maximize the system performance, the system
rate over consumed energy ρ in bits/J, defined as the ratio between the total system
rate and the total power consumption of all BSs, is introduced as follows:
ρ = [ ∑_{u∈𝒮_M} min(W_M C_{M,u}, L_u) + ∑_{s=1}^{n_S} ∑_{u∈𝒮_s} min(W_S C_{s,u}, L_u) ] / (P_B + n_S P_S1)     (3-4)
where W_M and W_S are the available bandwidths of the macro cell and of the small
cells, respectively, C_{M,u} and C_{s,u} are the link capacities from the macro BS and
from the s-th small cell BS to user u, and 𝒮_M and 𝒮_s denote the sets of users
associated with the macro BS and with the s-th small cell BS, respectively. This
system rate definition expresses
the balance between achievable rate and traffic demand. If the achievable rate is much
higher than the traffic demand, the user rate equals the traffic demand, and vice versa.
n_S ≤ N_S is the number of activated small cell BSs and N_S is the total number of
small cell BSs deployed within one macro cell area. L_u, in bps, is the predicted traffic
demand of user u. P_B = P_M1 + P_M0 + N_S P_S0 is the network’s baseline power
consumption, which comprises P_M1 (the macro BS’s surplus power consumption when
activated), P_M0 (the macro BS’s power consumption when idle), and P_S0 (a small
cell BS’s power consumption when idle). P_S1 is the surplus power consumption when
a small cell BS is activated, so that P_S = P_S1 + P_S0 is the power consumption of an
activated edge cloud.
From the offloading point of view, the MEHs should accommodate the appropriate
UEs to improve the system rate efficiently, e.g. an edge cloud MEH should
accommodate UEs with high traffic demand. From the energy-saving point of view,
as many small cell BSs as possible should be switched off, at the expense of degrading
the system rate. For a fixed number of activated small cell BSs, the denominator of ρ
becomes a constant. Therefore, the joint optimization problem can be decomposed
into finding the optimal user association (clustering) that maximizes the total system
rate for a fixed set of activated small cell BSs, and finding the optimal set of activated
BSs that maximizes the system rate over consumed energy.
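For concreteness, the metric ρ of (3-4) can be evaluated as below for a given user association and ON/OFF pattern. The function and the toy numbers in the example are illustrative, not part of the deliverable:

```python
def system_rate_over_energy(assoc, C_macro, C_small, L, W_M, W_S, P_B, P_S1, active):
    """Evaluate rho of Eq. (3-4): each served user contributes
    min(bandwidth * link capacity, demand L_u); the denominator is the
    baseline power P_B plus the surplus power of the n_S active small cells.
    assoc[u] = -1 for macro association, s >= 0 for small cell s."""
    total_rate = 0.0
    for u, s in enumerate(assoc):
        if s < 0:
            total_rate += min(W_M * C_macro[u], L[u])
        elif active[s]:
            total_rate += min(W_S * C_small[u][s], L[u])
    return total_rate / (P_B + sum(active) * P_S1)
```

Switching a small cell off removes its users' contribution from the numerator but also removes its surplus power P_S1 from the denominator, which is exactly the trade-off the decomposed optimization explores.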
Furthermore, to guarantee the UEs’ latency constraints, for each cluster of UEs
associated with the MEH in an activated edge cloud (small cell BS), we optimize the
time resources so that the UEs’ demanded traffic is delivered within the required
latency tolerance. For each sub-problem at the s-th activated MEH, determined by the
maximization of (3-4), this is done by minimizing the following “local traffic
consumption” evaluation function:
F_s = ∑_{u∈ℳ_s} (L_u − (τ_u/T_M) W_M C_{M,u}) + ∑_{u∈𝒮_s} (L_u − (τ_u/T_S) W_S C_{s,u})     (3-5)
subject to a certain set of constraints [GIA18], where τ_u denotes the time resource
assigned to the u-th UE, T_M and T_S are the lengths of one time frame of the macro
BS and of the small cell BS, respectively, and ℳ_s and 𝒮_s denote the two clusters of
UEs within the area of the s-th small cell associated with the macro MEH and with the
edge cloud MEH, respectively. Similarly to the ON/OFF status of the edge clouds, the
decision on the two clusters depends on the maximization of (3-4). The optimization
problem in (3-5) attempts to minimize the gaps between the demanded traffic and the
available access link throughput.
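A simple heuristic for this time-resource step, which scales each user's required fraction of the frame when the frame is over-subscribed, can be sketched as follows. This is only a sketch; the actual constrained problem solved in [GIA18] may differ:

```python
def allocate_time(L, WC, T):
    """Heuristically assign time resources tau_u within one frame of length T
    to close the per-user gaps L_u - (tau_u / T) * W * C_u of Eq. (3-5).
    L[u]: demanded traffic of user u, WC[u]: bandwidth times link capacity."""
    need = [T * l / wc for l, wc in zip(L, WC)]        # time to fully serve u
    total = sum(need)
    scale = min(1.0, T / total) if total > 0 else 0.0  # shrink if over-subscribed
    tau = [n * scale for n in need]
    gap = sum(max(0.0, l - (t / T) * wc) for l, wc, t in zip(L, WC, tau))
    return tau, gap
```

When the frame is long enough, every user's gap closes to zero; otherwise the residual gap measures how much demanded traffic cannot be delivered in this frame.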
3.2.3 Numerical analysis
Simulation setup
Macro BSs are deployed on a hexagonal grid and use the 2 GHz band. Small cell BSs,
on the other hand, are deployed non-uniformly at random in the macro cell area, so
that they tend to be near the hotspot areas of the day. Each small cell BS has three
sectors and employs frequency reuse with reuse factor 3. Users are dropped randomly
into the evaluated network coverage. The average traffic demand of each user is
assumed to be 1000 times higher than the current traffic, i.e. 62 Mbps. The remaining
simulation parameters are listed in Table 3.2.2.
Table 3.2.2 - Simulation parameters

Parameter | Value
Bandwidth (Macro / 60 GHz) | 10 MHz / 2 GHz
Number of macro cells | 7 (1 evaluated, 6 interfering)
Number of macro sectors | 3
Number of edge cloud BSs (per macro cell) | 15 (5 per macro sector)
Number of UEs (per macro cell) | time-varying
Number of BS antennas (Macro / 60 GHz) | 4 / 8
Number of UE antennas (Macro / 60 GHz) | 2 / 1
Macro ISD | 500 m
BS antenna height (Macro / 60 GHz) | 25 m / 4 m
UE antenna height | 1.5 m
Tx power (Macro / 60 GHz) | 43 dBm / 19 dBm
Additional consumption power of BS (Macro / 60 GHz) | ON: 835 W / 60 W; OFF: 19 W / 2 W
Pathloss model (Macro) | PL(d) = 128.1 + 37.6 log10(d [km]) dB
Pathloss model (60 GHz) | PL(d) = 82.02 + 20 log10(d) dB for d ≤ d_ref; PL(d) = 82.02 + 23.6 log10(d) dB for d > d_ref
Channel model (Macro / 60 GHz) | 3GPP SCME urban macro scenario / measurement-based Rician fading model
Noise power density | -174 dBm/Hz
Average traffic demand | 62 Mbps
Numerical results
The effectiveness of the proposed algorithm can be observed in Fig. 3.2-5, which
depicts the optimization results (BS ON/OFF status) of the 60 GHz small cell BSs for
two representative scenarios: low traffic load at 3AM and peak hour at 3PM. As seen
in the evaluated macro cell area of the figure, more small cell BSs are deactivated at
3AM than at 3PM.
Fig. 3.2-5 - ON/OFF status (left: 3AM, right: 3PM).
The optimization problem that reduces the system’s energy consumption by
deactivating a certain set of small cells, as shown in the above figure, might violate
the UEs’ latency constraints. Fig. 3.2-6 reveals that such violations can be mitigated
by applying (3-5). The vertical axis evaluates the satisfaction ratio of user traffic
demand as a KPI of the system, defined as the percentage of UEs whose achievable
user rates are higher than their traffic demands. The black line in the figure shows the
results of a homogeneous network (HomoNet: a network architecture of only macro
cells, without small cells) for reference. As seen in the figure, the performance of this
KPI can be roughly ranked in ascending order as follows: HomoNet, 60 GHz HetNet
(3-4), 60 GHz HetNet (3-4 and 3-5). With only macro BSs, the conventional HomoNet
is obviously not able to support the future 1000x traffic demand, as only roughly 20%
of users are satisfied. The mmWave HetNet that merely maximizes (3-4) has
unsatisfactory performance, since it only maximizes the system’s energy efficiency
without considering the latency tolerance of the UEs’ traffic demands. On the other
hand, our proposed algorithm, which additionally minimizes (3-5), can satisfy the
UEs’ demands most of the time.
Fig. 3.2-6 – Satisfaction ratio performance.
4 Resilient design and detection of network criticalities
In this section, we first introduce an analysis for the resilient design of mobile edge
computing against mmWave blocking events, based on block erasure channel coding.
Then, we present a method to control the mmWave meshed backhaul for the efficient
operation of a mmWave edge cloud overlay HetNet. One main feature of this algorithm
is backhauling route multiplexing for overloaded mmWave edge cloud base stations
(SC-BSs).
4.1 Robust design based on multi-link communications and block erasure
coding against blocking
One of the major drawbacks of mmWave communications is that they are prone to
blocking events due to human bodies and obstacles [AC13], [SMM11]. In this section,
we propose a way to compensate for blocking effects, based on multi-link
communications between a UE and the edge cloud and on error-correcting codes for
block-erasure channels.
4.1.1 Overview of the contributions
Throughout Section 4.1, we focus on latency-constrained uplink communications
from UEs to APs of the edge cloud. In subsection 4.1.2, we briefly introduce the
strategy of exploiting simultaneous multi-link communications to minimize the uplink
information transmission costs at the UE’s side. Next, in subsection 4.1.3, we combine
multi-link communications with error-correcting techniques to counteract the effect of
blocking events.
In general, we can differentiate between long-term blocking events, whose duration is
almost as long as the uplink transmission time of the offloading procedure (or even
more, up to a few seconds [Mac17]), and short-term blocking events that instead last
much less. The latter can be caused, for example, by a bicycle or a car rapidly crossing
the communication path between the UE and an AP.
When short-term blocking happens, a mmWave channel suffers from high
attenuation that temporarily decreases the achievable rate to almost 0. In essence, a
mmWave link exhibits an “on/off” behavior depending on the absence or presence of
a physical obstacle interrupting the communication path. Thus, brief blocking events
essentially make communication intermittent. To counteract this effect, we presented
in [D2.1] an approach that was first introduced in [BCM17], [BCMC17]. This idea is
based on overprovisioning of radio resources to guarantee an actual average
information transmission rate that takes into account blocking probabilities and
compensates possible information losses.
In Section 4.1.3, we deal with long-term (or “slow”) blocking, whose duration has at
least the same order of magnitude as the latency constraint. For this kind of blocking
events, we propose a coding strategy over multiple links to increase the probability of
recovering the information lost over blocked channels, thanks to the other received
blocks. We define and analyze the properties of the asymmetric block-erasure channel
and we discuss the tradeoff between the benefit of error-correcting codes for this kind
of channel and the related energy costs due to the transmission of redundant bits.
4.1.2 Multi-link communications
We consider uplink communications from a UE to the MEH and we suppose that the
UE needs to send 𝑛𝑏 total bits to a MEC AP within a fixed maximum delay, i.e. with
an imposed latency constraint. In order to do this, the UE also aims at minimizing its
transmit power to save as much energy as possible. If we call 𝑅 the information
transmission rate employed by the UE and 𝐵 the available bandwidth, then the
communication delay can be written as 𝐷 = 𝑛_𝑏/(𝐵𝑅). Calling 𝐿 the latency constraint,
the UE looks for the minimum transmit power 𝑝 that guarantees 𝐷 ≤ 𝐿. Notice that
the latter is equivalent to imposing

𝑅 ≥ 𝑛_𝑏/(𝐵𝐿) =: 𝑅_𝑚𝑖𝑛.
That is, the latency constraint can be translated into a constraint on the minimum
acceptable information transmission rate. If we write the uplink UE-AP channel
capacity as 𝐶 = 𝐵 log2(1 + 𝑎𝑝), for some positive constant 𝑎 [dMC+18], and if we
assume that the modulation and coding scheme is properly chosen to achieve 𝐶, then
it is straightforward to show that the goal of minimizing the transmit power 𝑝 while
meeting the rate/latency constraint is achieved by setting 𝑅 = 𝑅_𝑚𝑖𝑛 and

𝑝_𝑚𝑖𝑛 = (2^{𝑅_𝑚𝑖𝑛} − 1)/𝑎.
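The single-link power minimum is straightforward to compute; the sketch below simply evaluates the two closed-form expressions above (the parameter values in the example are arbitrary, chosen only for illustration):

```python
def min_transmit_power(n_b, B, L, a):
    """Single-link minimum power under the latency constraint:
    R_min = n_b / (B * L) in bit/s/Hz, p_min = (2**R_min - 1) / a."""
    R_min = n_b / (B * L)
    return (2.0 ** R_min - 1.0) / a
```

Note that p_min grows exponentially with R_min, which is what makes splitting the rate over several links attractive in the first place.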
If the communication between the UE and the MEH happens in a single link, the above
transmit power 𝑝𝑚𝑖𝑛 cannot be further decreased without violating the latency
constraint. Nonetheless, an additional reduction is possible by introducing a new
degree of freedom into our scenario. In [dMC18], an analysis of the convenience of
exploiting simultaneous multi-link communications between the UE and the MEH is
proposed. From now on, when we speak of multi-link communications, we mean that
the UE can send different information to different APs via different mmWave links,
over all the available UE-AP links simultaneously. This requires the use of digital
beamforming. We also assume that all the APs can communicate among themselves
with negligible latency through an ideal high-capacity backhaul. In this way, one AP
endowed with a MEH can collect all the information sent by the UE within a negligible
delay. This scenario is consistent with a cloud-RAN architecture, where the APs are
simple RRHs and the information is processed in the cloud. This architecture is
represented in Fig. 4.1-1 for the case of two links.
Fig. 4.1-1 - Two-link communication between a UE and the edge cloud.
A detailed description of multi-beam technologies in mmWave communications with
fixed subarray and full multi-beam antennas is provided in [Hon17]. Although the
perspective therein lies on the AP's side, [dMC18] considers the case of UEs capable
of exploiting these (or equivalent) technologies. One of the aims of [dMC18] is to show
that multi-link communications can be convenient, because they make it possible to
reduce the overall transmit power with respect to the single-link case under the same
latency constraint (see also [D2.1] and [D4.1]). Let us recall the following definition,
which will be used in the next subsection:
Definition:
Consider 𝑁 communication links between a UE and 𝑁 APs used to communicate with
a MEH; let us call 𝐶_𝑖 = 𝐵 log2(1 + 𝑎_𝑖 𝑝_𝑖) the capacity of the 𝑖-th link, with
𝑎_1 ≥ 𝑎_2 ≥ ⋯ ≥ 𝑎_𝑁 > 0. We denote by 𝑁∗, with 1 ≤ 𝑁∗ ≤ 𝑁, the number of links
that minimizes the transmit power of the latency-constrained communication. In other
words, the minimum transmit power is achieved by multi-link communication over
the best 𝑁∗ links (out of 𝑁).
[dMC+18] provides the detailed characterization of 𝑁∗ and the related optimal
strategy to split the total 𝑛𝑏 information bits over 𝑁∗ links; for the purposes of this
document, we just need to know that 𝑁∗ exists and can be explicitly and precisely
computed whenever the 𝑎𝑖 and 𝑅𝑚𝑖𝑛 are given. When we talk about the “power-
optimal communication strategy” in the following, we mean the multi-link
communication strategy over 𝑁∗ links that minimizes the total transmit power under
the latency constraint corresponding to 𝑅𝑚𝑖𝑛.
The scenario and the principles stated above are used for the analysis proposed in the
next subsection.
4.1.3 Block-erasure-correcting codes for robust multi-link communications
In deliverable D2.1 [D2.1], we introduced multi-link communication strategies to
tackle short-term blocking events, i.e. blocking events that last much less than the
duration of the application. Conversely, when blocking events last longer,
overprovisioning is not effective anymore and other solutions need to be explored.
This can happen when obstacles slowly cross the line-of-sight path between the UE
and the AP and obstruct the link for “long” time intervals, even as long as a few seconds
[Mac17] or more. When this happens, waiting for the channel to be open again takes
too much time. One solution may be to complete the offloading procedure by restarting
it over other links, but this takes time and typically violates the latency constraint. To
overcome this problem, in this section we define and analyze a theoretical framework
to combine error-correcting-coding techniques with multi-link mmWave
communications, to simultaneously perform computation offloading and counteract
long-term blocking events that start after the beginning of the offloading procedure,
without the need for retransmissions. Suppose that we apply the power-wise optimal
multi-link communication strategy proposed in Section 4.1.2 and in [dMC18] over 𝑁 links,
transmitting 𝑛_𝑖 bits over the 𝑖-th link at rate 𝑅_𝑖, with ∑_{𝑖=1}^{𝑁} 𝑛_𝑖 = 𝑛_𝑐 and
𝑛_1 ≥ 𝑛_2 ≥ ⋯ ≥ 𝑛_𝑁. The 𝑖-th channel is the communication link between the UE and its 𝑖-th
closest AP, situated at distance 𝑑𝑖. Let us call 𝑃𝑖 the blocking probability of the 𝑖-th
link and let us assume that the distances are ordered in decreasing sense, so that 𝑃1 ≥
𝑃2 ≥ ⋯ ≥ 𝑃𝑁. This is a realistic assumption, because longer line-of-sight paths have a
higher chance to be blocked. Consider, for simplicity, that blocking events are mutually
independent on any two links. In this case, the problem of offloading 𝑛𝑐 bits over
𝑁 links without losing information is equivalent to the problem of transmitting a word
of length 𝑛𝑐 bits over an asymmetric block-erasure channel, for which the 𝑛𝑐 bits are
split into 𝑁 blocks of length 𝑛𝑖 bits and each block has erasure probability 𝑃𝑖 .
Whenever one link is blocked, we suppose that all the bits of the corresponding block
are lost (erased) and this happens independently from block to block. This model is
our generalization [dMC18] of the block-erasure channel described in [Fab06]. We call
it “asymmetric” because we allow all the 𝑛𝑖’s and the 𝑃𝑖’s to be different from each
other. Our idea is to apply block-erasure-coding to multi-link communications to
counteract blocking effects and we start by generalizing and enriching the results of
[Fab06].
Formally, let 𝒞 ⊆ {0,1}^{𝑛_𝑐} be an error-correcting code for the asymmetric block-
erasure channel, of rate 𝑅_𝒞 = log2|𝒞| / 𝑛_𝑐. Here, 𝑛_𝑐 is the total number of (coded)
transmitted bits, 𝑛_𝑏 denotes the number of uncoded information bits, and
𝑅_𝒞 = 𝑛_𝑏/𝑛_𝑐 is the coding rate in the case of linear codes. The codewords of 𝒞 are
written as 𝒙 = (𝒙_1|𝒙_2|⋯|𝒙_𝑁), where 𝒙_𝑖 is the block of 𝑛_𝑖 coordinates transmitted
over the 𝑖-th link.
Let us define an erasure pattern 𝒆 as the vector 𝒆 = (𝑒1, 𝑒2, … , 𝑒𝑁) ∈ {0,1}𝑁 such that
𝑒𝑖 = 1 if the 𝑖-th block of a codeword is erased (i.e. if the 𝑖-th UE-AP link is blocked)
and 𝑒𝑖 = 0 otherwise. Thus, 𝒫{𝑒𝑖 = 1} = 𝑃𝑖. For a given 𝒆, we define
𝒞(𝒆) = {𝒙 ∈ 𝓒 ∶ if 𝑒𝑖 = 0 then 𝒙𝑖 = 0 ∀𝑖 = 1, … , 𝑁}.
𝒞(𝒆) is the set of codewords of 𝒞 whose non-zero blocks are only among the erased
blocks identified by 𝒆 . If 𝒞 is a linear code, then we can suppose without loss of
generality that the asymmetric block-erasure channel input is the all-zero codeword 0.
For every given erasure pattern 𝒆 , all the codewords of 𝒞(𝒆) will give the same
channel output as 0. Assuming that a maximum likelihood decoder does not give
priority to any of the codewords of 𝒞(𝒆), the word error probability caused by the
erasure pattern 𝒆 is
𝑃_𝑒^𝑤(𝒆) = 1 − 1/|𝒞(𝒆)|.
In particular, if 𝒞(𝒆) = {𝟎} and |𝒞(𝒆)| = 1 , the decoder is capable of correctly
decoding the erasure pattern 𝒆. Therefore, for linear codes, the word error probability
associated with the 𝑃𝑖′𝑠 equals
𝑃_𝑒^𝑤 = 𝑃_𝑒^𝑤(𝑃_1, …, 𝑃_𝑁) ≔ 𝔼_𝒆[𝑃_𝑒^𝑤(𝒆)] = 𝔼_𝒆[1 − 1/|𝒞(𝒆)|],
where the expected value is computed with respect to the distribution of the erasure
pattern.
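For a small linear code, this word error probability can be computed exactly by enumerating erasure patterns. The function below is a brute-force sketch, feasible only for small N and small codes:

```python
from itertools import product

def word_error_probability(codewords, block_sizes, P):
    """Exact P_e^w over the asymmetric block-erasure channel: average over all
    erasure patterns e of 1 - 1/|C(e)|, where C(e) is the set of codewords
    whose non-zero blocks all lie within the erased blocks."""
    N = len(block_sizes)
    bounds = [sum(block_sizes[:i]) for i in range(N + 1)]
    blocks = [[cw[bounds[i]:bounds[i + 1]] for i in range(N)] for cw in codewords]
    p_ew = 0.0
    for e in product([0, 1], repeat=N):
        prob = 1.0
        for ei, Pi in zip(e, P):
            prob *= Pi if ei else (1.0 - Pi)
        ce = sum(1 for b in blocks
                 if all(e[i] == 1 or not any(b[i]) for i in range(N)))
        p_ew += prob * (1.0 - 1.0 / ce)   # ce >= 1: the all-zero word is in C(e)
    return p_ew
```

For example, a length-2 repetition code split into two one-bit blocks fails only when both blocks are erased, and then with probability 1/2.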
We give the following definition of diversity: the block-diversity of a code 𝒞 is defined
as
𝛿 = min_{𝒙,𝒚∈𝒞, 𝒙≠𝒚} |{𝑖 ∈ {1, 2, …, 𝑁} : 𝒙_𝑖 ≠ 𝒚_𝑖}|.
Notice that for every erasure pattern 𝒆 such that ∑_{𝑖=1}^{𝑁} 𝑒_𝑖 < 𝛿, there will be no
ML-decoding errors. Therefore, we are interested in designing codes with the biggest
possible diversity. It is clear that, in general, 𝛿 ≤ 𝑁, and we say that a code has full
diversity if 𝛿 = 𝑁. An upper bound for 𝛿 is given by our generalization of the
Singleton bound of [Fab06], which was defined for the case where all blocks have the
same length. In our more general setup, we have:
Theorem (Singleton bound [dMC18]):
Let 0 < 𝑅_𝒞 ≤ 1 and let ℓ ∈ {1, …, 𝑁} be the only integer such that

∑_{𝑖=ℓ+1}^{𝑁} 𝑛_𝑖 < 𝑛_𝑐 𝑅_𝒞 ≤ ∑_{𝑖=ℓ}^{𝑁} 𝑛_𝑖.

Let us call 𝑀 = (1/(𝑁−ℓ+1)) ∑_{𝑖=ℓ}^{𝑁} 𝑛_𝑖 the average length of the last 𝑁−ℓ+1
blocks of a codeword. Then,

𝛿 ≤ ⌊1 + 𝑁 − (𝑛_𝑐 𝑅_𝒞)/𝑀⌋ =: 𝛿_𝑆𝐵.
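The bound δ_SB is straightforward to evaluate numerically. A sketch, assuming the block lengths are given in non-increasing order as in the theorem:

```python
import math

def singleton_bound(n, R_C):
    """Generalized Singleton bound delta_SB for block lengths
    n = [n_1 >= ... >= n_N] and coding rate R_C (0 < R_C <= 1)."""
    N, k = len(n), sum(n) * R_C           # k = n_c * R_C
    # unique l (1-based) with sum_{i=l+1}^N n_i < k <= sum_{i=l}^N n_i
    l = next(i for i in range(1, N + 1) if sum(n[i:]) < k <= sum(n[i - 1:]))
    M = sum(n[l - 1:]) / (N - l + 1)      # average length of last N-l+1 blocks
    return math.floor(1 + N - k / M)
```

For equal block lengths this reduces to the classical bound δ ≤ 1 + ⌊N(1 − R_𝒞)⌋ of [Fab06].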
Now, let us define the outage probability as the probability that, due to blocking events,
the received number of bits is less than 𝑛_𝑏 = 𝑛_𝑐 𝑅_𝒞 (the number of information bits):

𝑃_𝑜𝑢𝑡 = 𝒫{∑_{𝑖=1}^{𝑁} (1 − 𝑒_𝑖) 𝑛_𝑖 < 𝑛_𝑐 𝑅_𝒞}.
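For small N, P_out can be computed exactly by enumerating the 2^N independent erasure patterns; a brute-force sketch:

```python
from itertools import product

def outage_probability(n, P, R_C):
    """P_out = P{ sum_i (1 - e_i) * n_i < n_c * R_C } for independent
    block erasures with probabilities P[i], by exhaustive enumeration."""
    n_c = sum(n)
    p_out = 0.0
    for e in product([0, 1], repeat=len(n)):
        received = sum(ni for ni, ei in zip(n, e) if ei == 0)
        if received < n_c * R_C:
            prob = 1.0
            for ei, Pi in zip(e, P):
                prob *= Pi if ei else (1.0 - Pi)
            p_out += prob
    return p_out
```

This exhaustive evaluation is the same procedure used for the numerical results of Fig. 4.1-2 below, where N = 15 keeps the 2^N enumeration tractable.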
In general, in case of outage, correct decoding is impossible, regardless of the goodness
of the code. Hence, 𝑃𝑒𝑤 ≥ 𝑃𝑜𝑢𝑡 and we are interested in designing coding techniques
that minimize 𝑃𝑜𝑢𝑡, in order to aim at as low 𝑃𝑒𝑤 as possible. The following upper and
lower bounds for the outage probability can be proved:
Theorem [dMC18]:
Let 0 < 𝑅_𝒞 ≤ 1, let ℓ ∈ {1, …, 𝑁} be the only integer such that

∑_{𝑖=ℓ+1}^{𝑁} 𝑛_𝑖 < 𝑛_𝑐 𝑅_𝒞 ≤ ∑_{𝑖=ℓ}^{𝑁} 𝑛_𝑖

and, analogously, let 𝑗 ∈ {0, …, 𝑁−1} be the only integer such that

∑_{𝑖=1}^{𝑗} 𝑛_𝑖 < 𝑛_𝑐 𝑅_𝒞 ≤ ∑_{𝑖=1}^{𝑗+1} 𝑛_𝑖.
The outage probability is bounded as follows:
∑_{𝑢=0}^{𝑗} (𝑁 choose 𝑢) ∏_{𝑖=1}^{𝑁−𝑢} 𝑃_𝑖 ∏_{𝑖=𝑁−𝑢+1}^{𝑁} (1 − 𝑃_𝑖) ≤ 𝑃_𝑜𝑢𝑡 ≤ ∑_{𝑢=0}^{𝑁−ℓ} (𝑁 choose 𝑢) ∏_{𝑖=1}^{𝑢} (1 − 𝑃_𝑖) ∏_{𝑖=𝑢+1}^{𝑁} 𝑃_𝑖.
Several models exist that quantify the blocking probability of mmWave signals
[TBH16], [Gap16], [ABK17], [QHW17], [GDC17]. Let us recall the model proposed
in Corollary 1.1 of [Bai14], which we will exploit for the simulation results presented
in the following: in the bidimensional space, obstacles are assumed to be rectangles
with random length 𝑋, width 𝑊, and centers randomly distributed according to a
Poisson point process with density 𝜇. Then, the probability that the line-of-sight
communication path between the UE and an AP at distance 𝑑 is not obstructed is:

𝑃_𝑜𝑛(𝜇, 𝑑) = exp(−𝛽𝑑 − 𝑞),

where 𝛽 = 2𝜇𝜋⁻¹(𝔼[𝑊] + 𝔼[𝑋]) and 𝑞 = 𝜇𝔼[𝑊]𝔼[𝑋].
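This model translates directly into code. The helper below returns the blocking probability 1 − P_on; the default obstacle dimensions of 2 m match the values used for Fig. 4.1-2, and the units (meters, obstacles per square meter) are assumptions for the example:

```python
import math

def p_blocked(mu, d, mean_W=2.0, mean_X=2.0):
    """Blocking probability 1 - P_on(mu, d) for the rectangle-obstacle model:
    P_on = exp(-beta*d - q), beta = 2*mu*(E[W] + E[X])/pi, q = mu*E[W]*E[X]."""
    beta = 2.0 * mu * (mean_W + mean_X) / math.pi
    q = mu * mean_W * mean_X
    return 1.0 - math.exp(-beta * d - q)
```

As expected from the exponent, the blocking probability grows both with the obstacle density μ and with the link distance d.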
Fig. 4.1-2 - Average outage probability as a function of the density of obstacles,
for different values of 𝑅_𝒞.
In Fig. 4.1-2, we show the behavior of the outage probability as a function of the
obstacle density 𝜇. This result is obtained with the blocking probability model
described above, with 𝔼[𝑋] = 𝔼[𝑊] = 2 m. The outage probability is computed by
exhaustive evaluation over all possible erasure patterns 𝒆, for 𝑅_𝑚𝑖𝑛 = 8 bit/s/Hz,
and is averaged over random realizations of a deployment with 𝑁 = 15 APs randomly
distributed in a square region of size 300 m. For every deployment and for every fixed
𝑅_𝒞, the power-optimal number of links used for offloading is chosen as intended in
Section 4.1.2, with

𝑎_𝑖 = 𝐺_𝑅 𝐺_𝑇 (𝜆_𝑤/(4𝜋𝜎_𝑛 𝑑_𝑖))²,
where 𝐺_𝑅 = 128 and 𝐺_𝑇 = 32 are the antenna gains at the receiver and transmitter
side, 𝜆_𝑤 = 5 mm is the signal wavelength, 𝜎_𝑛 = −82.96 dBm is the Gaussian noise
standard deviation, and 𝑑_𝑖 is the distance between the UE and the 𝑖-th closest AP. The
values of 𝔼[𝑋], 𝔼[𝑊], and 𝑅_𝑚𝑖𝑛 will remain constant for all the simulation results,
unless stated otherwise. As expected, the outage probability decreases with 𝑅_𝒞 and
grows with 𝜇.
Fig. 4.1-3 is obtained with the same simulation parameters as Fig. 4.1-2, but its goal
is to show the maximum coding rate that maintains the outage probability below a
fixed value. As intuition suggests, 𝑅_𝒞 needs to decrease when 𝜇 increases if we want
to guarantee a bounded outage probability.
Fig. 4.1-3 The maximum allowed coding rate needed to guarantee that the
outage probability is smaller than a given fixed value.
To code or not to code?
This subsection addresses the following question: assuming that optimal codes, whose
word error probability achieves the outage probability, can be designed for the
asymmetric block-erasure channel, in what circumstances is it worth using them for
power- and latency-constrained computation offloading? Some considerations and
numerical simulations are provided in the sequel.
Employing a code of rate 𝑅_𝒞 to fight blocking over 𝑁 links implies an increase in the
number of transmitted bits by a factor 𝑅_𝒞⁻¹: if 𝑛_𝑏 information bits are sent in the
uncoded case, they become 𝑛_𝑐 = 𝑛_𝑏 𝑅_𝒞⁻¹ ≥ 𝑛_𝑏 after encoding. Since we are
considering communication scenarios with an imposed latency constraint that cannot
be violated, any block-erasure-correcting technique entails the transmission of more
bits than an uncoded transmission within the same fixed amount of time. This means
that we need to increase the spectral efficiency and, consequently, that the power cost
of a coded transmission strategy is higher. This also implies that the power-wise
optimal number of links will in general be higher. When
the error-correcting code is well-designed, this setup achieves the main goal of
allowing the loss of information on some links (due to long-term blocking events),
without compromising the offloading procedure. However, the need to transmit more
bits clearly yields a cost in terms of transmission power that needs to be taken into
account.
Now, in the uncoded scenario, the outage probability equals the probability that at least
one link is blocked and the information sent over it is lost. Therefore, over 𝑁 channels,
the outage probability of the uncoded transmission scheme is

𝑃_𝑜𝑢𝑡^𝑢𝑛𝑐(𝑁) = 1 − ∏_{𝑖=1}^{𝑁} (1 − 𝑃_𝑖).

Notice that 𝑃_𝑜𝑢𝑡^𝑢𝑛𝑐(𝑁) is a strictly increasing function of 𝑁, because for every 𝑖,

𝑃_𝑜𝑢𝑡^𝑢𝑛𝑐(𝑖) > 𝑃_𝑜𝑢𝑡^𝑢𝑛𝑐(𝑖 − 1) ⇔ ∏_{𝑗=1}^{𝑖−1} (1 − 𝑃_𝑗) > ∏_{𝑗=1}^{𝑖} (1 − 𝑃_𝑗) ⇔ 1 > 1 − 𝑃_𝑖,
and the latter is always true. Hence, when we restrict ourselves to the uncoded
transmission scheme, we face two completely opposite requirements: the necessity to
keep low (ideally to 1) the number of links to control the outage probability and the
need for increasing it (up to 𝑁∗) to minimize the transmit power. We will show through
numerical results in what terms coding for the block-erasure channel provides
beneficial compromises between the two previous contrasting requisites. In this
perspective, we claim that a fair assessment of the advantages of error-correcting codes
in this scenario needs to consider the tradeoff between transmit power consumption
and achievable outage probability, rather than focusing on each of these two separately.
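The uncoded outage expression above is easy to check numerically. A minimal sketch follows; the per-link blocking probabilities 𝑃ᵢ are arbitrary example values, not taken from the deliverable's scenario.

```python
def outage_uncoded(blocking_probs):
    """P_out^unc(N) = 1 - prod_{i=1..N} (1 - P_i): an uncoded multi-link
    transmission fails as soon as ANY of the N links is blocked."""
    survival = 1.0
    for p in blocking_probs:
        survival *= (1.0 - p)
    return 1.0 - survival

P = [0.01, 0.02, 0.05, 0.10]   # example per-link blocking probabilities
outages = [outage_uncoded(P[:n]) for n in range(1, len(P) + 1)]
# Each added link can only increase the uncoded outage: the list is strictly increasing.
```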
Fig. 4.1-4 shows the average transmit power as a function of the density of obstacles,
when the outage probability is constrained below a maximum value (𝑃𝑜𝑢𝑡 ≤ 0.05).
The results are obtained in a scenario with 15 APs deployed in a square region of size
200 m around the UE, where the obstacles' average dimensions are 𝔼[𝑊] = 1 m and
𝔼[𝑋] = 2 m. First of all, notice that if we rely on the uncoded transmission strategy,
the upper bound on the outage probability can be guaranteed only for obstacle densities
𝜇 not much bigger than 175/km2. For higher densities, there always exist deployments
in the considered region such that 𝑃𝑜𝑢𝑡𝑢𝑛𝑐(𝑁) ≥ 𝑃𝑜𝑢𝑡
𝑢𝑛𝑐(1) = 𝑃1 > 0.05. This is the
reason why the red curve in Fig. 4.1- 4 is plotted exclusively for 𝜇 ≤ 175. The figure
depicts the comparison between the power cost of the uncoded and coded transmission
strategies as a function of 𝜇 and averaged over random deployments of the 15 APs.
Recalling the definition of 𝑁∗ given above, the number of links 𝑁𝑢𝑛𝑐 used for uncoded
multi-link offloading is computed for each instance of the AP deployment as:
𝑁𝑢𝑛𝑐 = max{𝑁 ∈ {1, … , 𝑁∗} ∶ 𝑃ₒᵤₜᵘⁿᶜ(𝑁) ≤ 0.05} ≤ 𝑁∗.
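Since the uncoded outage is increasing in the number of links, this selection rule reduces to finding the longest prefix of links that stays within the outage budget. A possible sketch, assuming the links are sorted with the most reliable first; the inputs are illustrative:

```python
def n_unc(blocking_probs, n_star, p_max=0.05):
    """N_unc = max{N in {1,...,N*} : P_out^unc(N) <= p_max}; returns 0 when
    even a single link violates the outage budget (uncoded multi-link
    offloading is then infeasible for this deployment)."""
    best = 0
    survival = 1.0
    for n, p in enumerate(blocking_probs[:n_star], start=1):
        survival *= (1.0 - p)
        if 1.0 - survival <= p_max:
            best = n   # outage is increasing in N, so keep the largest feasible N
    return best
```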
Fig. 4.1-4 - Average transmit power in the uncoded and coded case under the
constraint 𝑷𝒐𝒖𝒕 ≤ 𝟎.𝟎𝟓.
For the coded scheme, instead, the coding rate 𝑅𝒞 was chosen as the maximum that
guarantees 𝑃𝑜𝑢𝑡 ≤ 0.05, obtained by exhaustive search. Then, the corresponding
number of channels for multi-link offloading was computed according to the criterion
of transmit power minimization, with fixed total transmission rate 𝑅𝑚𝑖𝑛𝑅𝒞−1 and
𝑅𝑚𝑖𝑛 = 8 or 16. The picture clearly shows that well-designed error-correcting codes
may enable offloading in scenarios where the obstacle density makes the outage
probability uncontrollable for the uncoded communication strategy. Moreover, for
“medium” obstacle densities (75 ≤ 𝜇 ≤ 175), resorting to error-correcting codes
yields considerable gains in transmit power for 𝑅𝑚𝑖𝑛 = 16. Finally, the figure
confirms that in contexts with “few” obstacles (low 𝜇 ), a coded communication
scheme may not be needed, because the outage probability remains bounded and the
power gain provided by the coded transmission scheme is reduced.
Using the same main simulation parameters as in Fig. 4.1-4, Fig. 4.1-5(a) shows that
error-correcting codes may also be exploited to outperform the best possible
outage probability achievable with uncoded transmissions: the latter is obtained by
exclusively transmitting over the best available link and is represented by the constant
blue lines in the figure (averaged over different random AP deployments and for a few
different obstacle densities in an area of 300 m × 300 m). Choosing a small enough
coding rate 𝑅𝒞 makes it possible both to obtain better average outage probabilities
and to reduce the average transmit power, as shown by the combination of Fig.
4.1-5(a) and Fig. 4.1-5(b). For instance, an optimal code with 𝑅𝒞 = 0.5 would
achieve better
outage probabilities than any uncoded transmission for each of the proposed obstacle
densities and, at the same time, reduce the average transmit power by 5 dB with
respect to the uncoded strategy that minimizes the outage probability.
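The exhaustive search over 𝑅𝒞 can be sketched as follows, under an idealized block-erasure model that is our assumption for illustration (not necessarily the code construction of the deliverable): an MDS-like code over 𝑁 links decodes whenever at least ⌈𝑅𝒞𝑁⌉ of the 𝑁 transmitted blocks survive.

```python
from itertools import product
from math import ceil

def outage_coded(blocking_probs, rate_code):
    """Outage of an idealized (MDS-like) block-erasure code over N links:
    decoding succeeds iff at least ceil(R_C * N) of the N blocks survive.
    Exhaustive enumeration over all blocking patterns (fine for small N)."""
    n = len(blocking_probs)
    need = ceil(rate_code * n)
    p_out = 0.0
    for pattern in product([0, 1], repeat=n):   # 1 = link blocked
        prob = 1.0
        for p, blocked in zip(blocking_probs, pattern):
            prob *= p if blocked else (1.0 - p)
        if n - sum(pattern) < need:             # too few surviving blocks
            p_out += prob
    return p_out

def best_rate(blocking_probs, p_max=0.05, grid=(0.25, 0.5, 0.75, 1.0)):
    """Highest coding rate on the grid that keeps the outage below p_max."""
    feasible = [r for r in grid if outage_coded(blocking_probs, r) <= p_max]
    return max(feasible) if feasible else None
```

Under this model, lowering 𝑅𝒞 trades transmitted redundancy for tolerance to blocked links, which is exactly the tradeoff discussed above.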
Fig. 4.1-5 Average outage probability (a) and average transmit power (b) as
functions of the coding rate 𝑹𝓒 for different densities of obstacles.
4.2 Multi-route multiplexing on mmWave mesh backhauling against
overloaded edge cloud
4.2.1 System architecture
In a dynamic crowd scenario, network densification with a large number of mmWave
small cells overlaid on the current LTE cells is effective to accommodate traffic
in peak hours. However, a large number of small cells leads to the problem of high
CAPEX and OPEX. One solution to relax the problem is to use
mmWave meshed network for the backhauling of small cells since CAPEX can be
reduced by avoiding deployment cost of wired backhaul. Furthermore, OPEX can also
be reduced by introducing dynamic ON/OFF and flexible path creation in the backhaul
network in accordance with the time-variant and spatially non-uniform traffic. Such
flexible control of the backhaul network is enabled by Software Defined Network
(SDN) technology using out-band control interface over the LTE. Here, mmWave
meshed backhaul with SDN comes into place as one suitable candidate for dense urban
scenarios owing to its ultra-wide bandwidth and deployment flexibility with low cost.
This section presents our proposed method to control mmWave meshed backhaul for
efficient operation of mmWave small cell overlay HetNet. One main feature of our
algorithm is backhauling route multiplexing for overloaded mmWave small cell base
stations (SC-BSs). The other feature is the ON/OFF switching control of wireless
interfaces in less loaded spot. Considering practical user distribution modelled from
realistic measurement data, radio backhaul resources should be concentrated on
overloaded mmWave SC-BSs. Inversely, less loaded mmWave SC-BSs should be
deactivated for saving power.
We employ mmWave overlay HetNet shown in Fig. 4.2-1 as a network topology. In
the mmWave overlay HetNet, LTE is assumed to manage the C-plane information, i.e.
user’s location, movement, traffic demand and also dynamic configuration of wireless
backhaul. The LTE macro BS plays a role of mmWave gateway (GW), which is the
only information source of the whole network.
Fig. 4.2-1 - mmWave meshed network overlaid on a macro cell.
An example of mmWave meshed network to be used in the dense urban scenario is
also depicted in the above figure. Here, mmWave SC-BSs are overlaid on a LTE macro
cell. Each SC-BS integrates access and backhaul functions, with multiple sectors on
both interfaces. A set of one LTE macro BS and several mmWave SC-BSs forms a μ-RAN,
or micro operator, for a target environment, e.g. a stadium. In our mmWave meshed
network, GW can be connected to the mmWave SC-BS either directly or indirectly
through a relay scheme. The latter enables backhauling with adaptive topology, as
well as backhauling route multiplexing toward a dense traffic spot. Another
advantage is that path loss attenuation can be compensated by amplification at each
relay. For stable communications and ease of analysis, relay paths are formed only
over links that can achieve the maximum data rate of the IEEE 802.11ad standard, so
as to guarantee the highest homogeneous backhauling rate.
4.2.2 Optimization problem
The main objective of the traffic & energy management algorithm is to reduce the
energy consumption of the mmWave meshed network by switching off as many mmWave
SC-BSs as possible in an area while satisfying users’ traffic demands. As it is hard
optimize ON/OFF status of mmWave SC-BSs and backhaul paths all at once, the
algorithm involves three steps.
The first step (i):
The initial ON/OFF status of each SC-BS is determined based on the traffic demand
per SC-BS, considering the multi-RAT selectivity between the microwave LTE macro
BS and the mmWave SC-BSs; the goal is to reduce the total power consumption of the
mmWave network as much as possible. In order to
minimize the total power consumption, LTE should serve as many users as possible
within its available bandwidth and less loaded SC-BSs should be set OFF. As it
is complicated to consider each user individually, all SC-BSs are activated at first and
all users are served by their nearest SC-BS. Then the i-th SC-BS has an aggregated
traffic demand 𝑇ᵢ. If 𝑇ᵢ can instead be served by the LTE macro BS, LTE needs to
allocate, out of its available bandwidth 𝐵𝐿𝑇𝐸, a bandwidth 𝑏ᵢ given by Shannon's
capacity as follows:

𝑏ᵢ = 𝑇ᵢ / log₂(1 + 𝛾ᵢ),
where 𝛾ᵢ is the approximated SINR (Signal to Interference plus Noise power Ratio)
of the signal from the LTE macro BS to the i-th SC-BS, considering only path loss
attenuation. Therefore, in order to determine the tentative ON/OFF status of the
SC-BSs, we only have to determine which system, LTE or the mmWave network, should
accommodate 𝑇ᵢ. When we define Γₖ as the set of SC-BSs whose surrounding users are
accommodated by the k-th sector of the LTE macro BS, the problem to be solved is as
follows,

where |Γₖ| expresses the number of SC-BSs included in Γₖ. As a result, if 𝑇ᵢ is
accommodated by LTE, the corresponding users around the i-th SC-BS will be
accommodated by LTE, and the i-th SC-BS can be set OFF to reduce power
consumption. If the i-th SC-BS is set ON, all three sectors of the i-th SC-BS's
access structure will be activated, regardless of the number of users in the
coverage of the i-th SC-BS and of the user locations.
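The tentative ON/OFF decision of step (i) can be sketched as follows. The greedy rule below (offloading first the SC-BSs whose bandwidth cost 𝑏ᵢ = 𝑇ᵢ/log₂(1+𝛾ᵢ) is lowest, until 𝐵𝐿𝑇𝐸 is exhausted) is a simplified stand-in for the set-selection problem solved in the deliverable, and all numbers are illustrative.

```python
import math

def tentative_on_off(traffic, sinr, b_lte):
    """Step (i) sketch: offload the traffic T_i of an SC-BS to LTE at bandwidth
    cost b_i = T_i / log2(1 + gamma_i), cheapest first, while the total
    allocated bandwidth fits in B_LTE. SC-BSs not offloaded stay ON.
    Returns the sorted indices of the SC-BSs left ON."""
    cost = {i: t / math.log2(1.0 + g) for i, (t, g) in enumerate(zip(traffic, sinr))}
    on = set(cost)
    used = 0.0
    for i in sorted(cost, key=cost.get):   # cheapest offloads first
        if used + cost[i] <= b_lte:
            used += cost[i]
            on.discard(i)                  # served by LTE -> SC-BS can be set OFF
    return sorted(on)
```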
The second step (ii):
Initial paths of backhaul network are created to minimize power consumption.
mmWave backhaul links are formed among SC-BSs that are set ON in step (i) to satisfy
user’s traffic demand. In other words, appropriate backhauling routes from any sector
of GW to SC-BS are determined. Using load balancing approach, we determine such
routes by solving the following linear programming problem:
When the number of SC-BSs and the number of sectors of the GW are denoted by 𝑁𝐴𝑃
and 𝑁𝑆 respectively, the total number of flows 𝑁𝑉 equals the product of the two.
Here, 𝒙 ∈ ℝ^𝑁𝑉 is the amount of data to be transmitted from each sector of the GW
to each SC-BS, 𝒇 ∈ ℝ^𝑁𝑉 weights the number of relay hops against 𝒙, 𝒕𝑆 ∈ ℝ^𝑁𝑆 is
the total traffic load accommodated by each sector of the GW, 𝒕𝐴𝑃 ∈ ℝ^𝑁𝐴𝑃 is the
total traffic supplied to each SC-BS, 𝒂 ∈ ℝ^𝑁𝐴𝑃 expresses the ON/OFF states,
𝑻𝐷 ∈ ℝ^𝑁𝐴𝑃 is the aggregated traffic demand of each SC-BS, and 𝑾𝐴𝑃 ∈ ℝ^(𝑁𝑉×𝑁𝐴𝑃)
is a mapping matrix
between 𝒕𝐴𝑃 and 𝒙, while 𝑾𝑆 ∈ ℝ^(𝑁𝑉×𝑁𝑆) is a mapping matrix between 𝒕𝑆 and 𝒙.
Constraint [A] represents the capacity of each sector of the GW, [B] ensures the
satisfaction of the users' requests, and [C] ensures that the traffic values are
non-negative. We then obtain the optimal combination of transmitter sector of the
GW and receiver SC-BS by solving for 𝒙.
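The structure of the route-multiplexing problem can be sketched as follows. This is a greedy fill over a small instance, not the LP of step (ii): each SC-BS's demand (playing the role of 𝑻𝐷) is routed through GW sectors in order of increasing hop weight (playing the role of 𝒇), respecting each sector's capacity ([A]), meeting the demand ([B]), and keeping flows non-negative by construction ([C]). The variable names and instance sizes are ours, for illustration only.

```python
def assign_routes(hop_weight, sector_cap, demand):
    """Greedy sketch of the step-(ii) load-balancing route assignment:
    hop_weight[s][a] weights the flow from GW sector s to SC-BS a,
    sector_cap[s] is the capacity of sector s ([A]), demand[a] is the
    aggregated traffic of SC-BS a ([B]). Returns the flow matrix x."""
    cap = list(sector_cap)
    flow = [[0.0] * len(demand) for _ in sector_cap]
    for a, t in enumerate(demand):
        remaining = t
        # fill from the cheapest (fewest relay hops) sector first
        for s in sorted(range(len(cap)), key=lambda s: hop_weight[s][a]):
            take = min(remaining, cap[s])
            flow[s][a] += take
            cap[s] -= take
            remaining -= take
            if remaining <= 0:
                break
        if remaining > 1e-9:
            raise ValueError("demand of SC-BS %d cannot be satisfied" % a)
    return flow
```

A greedy fill is not guaranteed to reach the LP optimum in general; it only illustrates how the capacity and demand constraints shape the multi-route solution.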
The third step (iii):
This step re-activates remaining SC-BSs in an energy efficient manner so as to transfer
the traffic for the isolated SC-BSs [GIA18-2]. Control signaling to manage ON/OFF
status of the SC-BSs and to create physical paths between them is transmitted over
LTE as an out-band control plane. As such, a dynamic and energy efficient mmWave
meshed network is formed.
4.2.3 Numerical analysis
This section shows several examples of simulation analysis for mmWave meshed
networks controlled by the aforementioned algorithm. In this numerical analysis,
several macro cells with an ISD of 500 m are assumed to be deployed within the
2000 m square area of Sect. 3.2, and one macro cell is selected as the evaluation
cell. Other
simulation parameters are shown in Table 4.2.1.
Table 4.2.1 - Simulation parameters

Parameter        LTE            mmWave edge cloud
Bandwidth        10 MHz         2 × 2.16 GHz
Carrier freq.    2.0 GHz        60 GHz
Antenna gain     17 dBi         26 dBi
Antenna height   25 m           4 m / 25 m (SC-BS/GW)
Tx power         46 dBm         10 dBm
Beam pattern     3GPP           IEEE 802.11ad
Path loss        3GPP           [REF]
# of BSs         1              90
Noise density    -174 dBm/Hz
Examples of the formed mmWave meshed networks are shown in Fig. 4.2-2. As there
are few users in the evaluation area at 3AM, only a few mmWave SC-BSs are activated.
In this case, since there are enough resource blocks in the LTE, most of the users are
connected to the macro BS, while users with very high traffic demand at the right-
bottom activate SC-BSs. On the other hand, at 3PM, a hotspot appears in the upper-
left zone. We can see some backhaul links formed from gateway to the hotspot,
showing the effectiveness of the traffic and energy management algorithm against the
locally intensive traffic. In other words, the proposed approach can alleviate an
overloaded edge cloud via the multi-route multiplexing mechanism over the mmWave
meshed backhaul.
(a) Formed mmWave meshed network at 3:00 AM.
(b) Formed mmWave meshed network at 3:00 PM.
Fig. 4.2-2 - Dynamic formation of mmWave meshed network.
5 Relevance of the proposed algorithms with the project use cases
The previous sections describe in detail the following novel algorithms, devised and
implemented by the 5G-MiEdge consortium under the WP3 activities:
a. Resource allocation for computation offloading;
b. Data prefetching;
c. Computational load distribution among MEHs;
d. Dynamic ON/OFF strategies;
e. Robust design analysis of mobile edge computing over mmWave links;
f. Multi-route multiplexing on mmWave mesh backhauling against overloaded
edge cloud.
Most of those algorithms have a general validity and are scalable enough to provide
advantages to several scenarios and different use cases, dealing with the synergy of
mmWave and MEC technologies. Nevertheless, some of the use cases defined by 5G-
MiEdge in previous deliverables, e.g. in a preliminary version in D1.1 [D1.1] and in
a final refined version in D1.3 [D1.3], could benefit more than others. In the
following, a brief analysis of the impact of the proposed algorithms on the project
use cases is provided.
5.1.1 Omotenashi services
In Omotenashi services, data prefetching allows foreign passengers arriving at the
airport to download the information they need for their business or travel trips
right after they reach the destination airport, as well as entertainment contents.
Moreover, when there are no passengers at night, the base stations (e.g. signage)
can be turned off for energy saving; thus, the dynamic ON/OFF algorithms described
in section 3.2 also apply to this use case and reduce the OPEX due to energy
consumption.
To summarize, the algorithms that can bring benefits to this use case are:
b. Data prefetching;
d. Dynamic ON/OFF strategies.
5.1.2 Moving hotspot
Prefetching contents to be retrieved by users within low latency is important to
maintain a good quality of service. For this reason, the prefetching algorithm described
in section 2.2 fits well the moving hotspot use case. Moreover, multi-route
multiplexing over mmWave mesh backhauling can also be applied to the moving hotspot
scenario, in case it employs mmWave mesh backhauling: contents converging toward
the moving MEH can be split over several routes and distributed to the MEH from
multiple transmission points (BSs). At the same time, dynamic
ON/OFF strategies can help in reducing energy consumption by turning off BS when
unused.
To summarize, the algorithms that can bring benefits to this use case are:
b. Data prefetching;
d. Dynamic ON/OFF strategies;
f. Multi-route multiplexing on mmWave mesh backhauling against overloaded edge
cloud.
5.1.3 2020 Tokyo Olympic
The stadium is a typical ultra-dense scenario where thousands of spectators want to
share at the same time contents and personal experiences. The challenging idea of
creating a unique user experience through AR/VR videos requires a very efficient
management of the computation resources of edge servers, in order to avoid large end-
to-end delays. For this reason, distributing computation among edge servers can enhance
the performance of the system and the quality of service perceived by the end users.
Other applicable algorithms are resource allocation strategies for computation
offloading, data prefetching to reduce download delay, and robust design over
mmWave links to guarantee service continuity.
To summarize, the algorithms that can bring benefits to this use case are:
a. Resource allocation for computation offloading;
b. Data prefetching;
c. Computational load distribution among MEHs;
e. Robust design analysis of mobile edge computing over mmWave links.
5.1.4 Outdoor dynamic crowd
The dynamic crowd use case is the typical scenario where resource allocation
strategies for computation offloading defined in section 2 are important to efficiently
manage radio and computation resources in order to provide good quality of service to
the end users, since the limited computation resources of the edge cloud have to be
shared among users with different application requirements and channel conditions.
Computational load distribution is also important to provide low-latency services, and
robust design over mmWave links to guarantee service continuity. Moreover, the
dynamicity of the system (radio channels, computation task arrivals, mobility) has to
be handled via proper dynamic optimization, described in section 2 as well. Finally,
for energy efficiency purposes, it is paramount to be able to switch the available
resources on and off at a certain frequency, so as to follow the dynamic variations
of the resource requests and to reduce the OPEX via energy saving.
To summarize, the algorithms that can bring benefits to this use case are:
a. Resource allocation for computation offloading;
b. Data prefetching;
c. Computational load distribution among MEHs;
d. Dynamic ON/OFF strategies;
e. Robust design analysis of mobile edge computing over mmWave links.
5.1.5 Automated driving
Providing reliable communications to mmWave APs in the presence of blocking events
is a fundamental challenge and an important issue. For instance, a seamless service
is important for the automated driving scenario, due to the critical aspects of
some applications, e.g. safety applications. Indeed, since this use case requires
high data rates (provided by mmWave communications) and low end-to-end latency
(10 ms), blocking events due to obstacles can be detrimental and have negative
effects on the system performance and on the capability of maintaining the service.
For this reason, countermeasures such as the ones described in section 4.1 can be
necessary for this purpose. Section 4.1 presents a very general analysis of the
effect of a possible block-erasure channel coding strategy, enabled by multi-link
communications: the reliability of the service is enhanced with respect to a
single-link case prone to blocking events. Computation offloading is another
service that the automated driving scenario can benefit from, as well as the multi-
route multiplexing on mmWave mesh backhauling against overloaded edge cloud.
To summarize, the algorithms that can bring benefits to this use case are:
a. Resource allocation for computation offloading;
e. Robust design analysis of mobile edge computing over mmWave links;
In Table 5-1, we summarize the mapping between the proposed algorithms and the 5G-
MiEdge use cases.
Use case →                      Omotenashi  Moving   2020 Tokyo  Outdoor        Automated
Algorithm ↓                     service     hotspot  Olympic     dynamic crowd  driving
Resource allocation for
computation offloading                               ✓           ✓              ✓
Data prefetching                ✓           ✓        ✓           ✓
Computational load
distribution among MEHs                              ✓           ✓
Dynamic ON/OFF strategies       ✓           ✓                    ✓
Robust design analysis of
mobile edge computing over
mmWave links                                         ✓           ✓              ✓
Multi-route multiplexing on
mmWave mesh backhauling
against overloaded edge cloud               ✓
6 Summary
To summarize, in this deliverable we described the output of the activities of WP3,
in particular regarding Task 3.3: “User/application centric orchestration to realize
5G liquid edge cloud”. We developed novel algorithms for the following
purposes:
Resource allocation for computation offloading;
Data prefetching;
Computational load distribution among MEHs;
Dynamic ON/OFF strategies;
Robust design analysis of mobile edge computing over mmWave links;
Multi-route multiplexing on mmWave mesh backhauling against overloaded
edge cloud.
Overall, the algorithms described in this deliverable aim at realizing an efficient
application-centric resource management, taking into account the two pillars of
5G-MiEdge: mmWave communications for the radio access and Multi-Access Edge
Computing, which provides computation and storage resources at the edge of the
network. Indeed, radio, computation and storage are seen as a holistic system
and are jointly optimized to enable the edge cloud functionalities. Finally, the deliverable
describes a mapping between the proposed algorithms and the 5 use cases of 5G-
MiEdge. Indeed, although the algorithms are general and applicable in different
scenarios, they can be used to enable the use cases proposed by the project.
7 References
[D1.1] 5G-MiEdge deliverable D1.1, “Use Cases and Scenario Definition”,
Available online at: http://5g-miedge.eu.
[D1.3] 5G-MiEdge deliverable D1.3, “System Architecture and Requirements”,
Available online at: http://5g-miedge.eu.
[D2.1] 5G-MiEdge deliverable D2.1, “Requirement and scenario definition for
mmWave access, antenna and area planning for mmWave edge cloud”,
Available online at: http://5g-miedge.eu.
[D4.1] 5G-MiEdge deliverable D4.1, “Performance evaluation of 5G MiEdge
based 5G cellular networks”, Available online at: http://5g-miedge.eu.
[MEC002] ETSI, “Multi-Access Edge Computing (MEC); Phase 2: use cases
and requirements,” October 2018.
[ABK17] J. G. Andrews, T. Bai, M. N. Kulkarni, A. Alkhateeb, A. K. Gupta, and
R. W. Heath, Jr., “Modeling and analyzing millimeter wave cellular
systems,” IEEE Trans. Commun., vol. 65, no. 1, pp. 403-430, Jan. 2017.
[AC13] M. Abouelseoud and G. Charlton, “The effect of human blockage on the
performance of millimeter-wave access link for outdoor coverage,” in
Proc. IEEE VTC Spring, Dresden, Germany, 2013, pp. 1-5.
[Bai14] T. Bai, R. Vaze, and R. W. Heath, Jr., “Analysis of blockage effects on
urban cellular networks,” IEEE Trans. Wireless Commun., vol. 13, no. 9,
pp. 5070-5083, Sep. 2014.
[BCM17] S. Barbarossa, E. Ceci, and M. Merluzzi, “Overbooking radio and
computation resources in mmW-mobile edge computing to reduce
vulnerability to channel intermittency,” in Proc. EuCNC, Oulu, Finland,
2017, pp. 1-5.
[BCMC17] S. Barbarossa, E. Ceci, M. Merluzzi, and E. Calvanese Strinati, “Enabling
effective mobile edge computing using millimeter wave links,” in Proc.
IEEE ICC, Paris, France, 2017, pp. 1-6.
[dMC18] N. di Pietro, M. Merluzzi, E. Calvanese Strinati, and S. Barbarossa,
“Resilient design of 5G mobile-edge computing over intermittent
mmWave links,” submitted to IEEE Trans. Mobile Comput., Dec. 2018.
[Fab06] A. Guillén i Fàbregas, “Coding in the block-erasure channel,” IEEE
Trans. Inf. Theory, vol. 52, no. 11, pp. 5116-5121, Nov. 2006.
[Gap16] M. Gapeyenko et al., “Analysis of human-body blockage in urban
millimeter-wave cellular communications,” in Proc. IEEE ICC, Kuala
Lumpur, Malaysia, 2016, pp. 1938-1883.
[GDC17] G. Ghatak, A. De Domenico, and M. Coupechoux, “Modeling and
analysis of hetnets with mm-wave multi-RAT small cells deployed along
roads,” in Proc. IEEE GLOBECOM, Singapore, 2017, pp. 1-7.
[Hon17] W. Hong et al., “Multibeam antenna technologies for 5G wireless
communications,” IEEE Trans. Antennas Propag., vol. 65, no. 12, pp.
6231-6249, Dec. 2017.
[Mac17] G. R. MacCartney, Jr., T. S. Rappaport, and S. Rangan, “Rapid fading due to
human blockage in pedestrian crowds at 5G millimeter-wave
frequencies,” in Proc. IEEE GLOBECOM, Singapore, 2017, pp. 1-7.
[QHW17] Y. Qi, M. Hunukumbure, and Y. Wang, “Millimeter wave LOS coverage
enhancements with coordinated high-rise access points,” in Proc. IEEE
WCNC, San Francisco, CA, USA, 2017, pp. 1558-2612.
[SMM11] S. Singh, R. Mudumbai, and U. Madhow, “Interference analysis for highly
directional 60-GHz mesh networks: The case for rethinking medium
access control,” IEEE/ACM Trans. Netw., vol. 19, no. 5, pp. 1513-1527,
Oct. 2011.
[TBH16] A. Thornburg, T. Bai, and R. W. Heath, “A tractable approach to coverage
and rate in cellular networks,” IEEE Trans. Signal Process., vol. 64, no.
15, pp. 4065-4079, Aug. 2016.
[Sar18] S. Sardellitti, M. Merluzzi, and S. Barbarossa, “Optimal assignment of
mobile users to Multi-Access Edge Computing resources,” in Proc. of
IEEE International Conference on Communications (ICC 2018), Kansas
City, USA, 2018.
[Sar15] S. Sardellitti, G. Scutari, and S. Barbarossa, “Joint optimization of radio
and computational resources for multicell mobile-edge computing,” IEEE
Trans. Signal Inf. Process. Net., vol. 1, no. 2, pp. 89–103, Jun. 2015.
[You17] C. You, K. Huang, H. Chae, and B. H. Kim, “Energy-efficient resource
allocation for mobile-edge computation offloading,” IEEE Trans. Wir.
Commun., vol. 16, no. 3, pp. 1397–1411, Mar. 2017.
[Zhao17] P. Zhao, H. Tian, C. Qin, and G. Nie, “Energy-saving offloading by
jointly allocating radio and computational resources for mobile edge
computing,” IEEE Access, vol. 5, pp. 11255–11268, 2017.
[Chen16] X. Chen, L. Jiao, W. Li, and X. Fu, “Efficient multi-user computation
offloading for mobile-edge cloud computing,” IEEE/ACM Trans. Net.,
vol. 24, no. 5, pp. 2795–2808, Oct. 2016.
[Sar14] S. Sardellitti, S. Barbarossa, and G. Scutari, “Distributed mobile cloud
computing: Joint optimization of radio and computational resources,” in
Proc. of 2014 IEEE Globecom Work. (GC Wkshps), Dec. 2014, pp.
1505–1510.
[Zhao15] T. Zhao, S. Zhou, X. Guo, Y. Zhao, and Z. Niu, “A cooperative
scheduling scheme of local cloud and Internet cloud for delay-aware
mobile cloud computing,” in Proc. of 2015 IEEE Globecom Work. (GC
Wkshps), Dec. 2015, pp. 1–6.
[Ge12] Y. Ge, Y. Zhang, Q. Qiu, and Y.-H. Lu, “A game theoretic resource
allocation for overall energy minimization in mobile cloud computing
system,” in Proc. of ACM/IEEE Int. Symp. Low Pow. Elec. Des., Jul.-
Aug. 2012, pp. 279–284.
[Li17] T. Li, C. S. Magurawalage, K. Wang, K. Xu, K. Yang, and H. Wang, “On
efficient offloading control in cloud radio access network with mobile
edge computing,” in Proc. of IEEE 37th Int. Conf. Dist. Comput. Syst.
(ICDCS), Jun. 2017, pp. 2258–2263.
[Scu17] G. Scutari, F. Facchinei, and L. Lampariello, “Parallel and distributed
methods for constrained nonconvex optimization - Part I: Theory,” IEEE
Trans. Signal Process., vol. 65, no. 8, pp. 1929–1944, Apr. 2017.
[Zhang17] N. Zhang, Y. F. Liu, H. Farmanbar, T. H. Chang, M. Hong, and Z. Q.
Luo, “Network slicing for service-oriented networks under resource
constraints,” IEEE J. Sel. Areas Commun., vol. 35, no. 11, pp. 2512–
2521, Nov. 2017.
[Roth92] A. E. Roth and M. Sotomayor, “Two-sided matching,” Handbook of
game theory with economic applications, vol. 1, pp. 485–541, 1992.
[Saad14] W. Saad, Z. Han, R. Zheng, M. Debbah, and H. V. Poor, “A college
admissions game for uplink user association in wireless small cell
networks,” in Proc. of IEEE Conf. Comput. Commun. (INFOCOM
2014), Apr. 2014, pp. 1096–1104.
[Gale62] D. Gale and L. S. Shapley, “College admissions and the stability of
marriage,” The Amer. Math. Month., vol. 69, no. 1, pp. 9–15, 1962.
[Han17] Zhu Han, Yunan Gu, and Walid Saad, Matching Theory for Wireless
Networks, Springer Publish. Comp., Inc., 1st edition, 2017.
[Mao16] Y. Mao, J. Zhang, and K. B. Letaief, “Dynamic computation offloading
for mobile-edge computing with energy harvesting devices,” IEEE J. Sel.
Areas Commun., vol. 34, no. 12, pp. 3590–3605, Dec 2016.
[Mao17] Y. Mao, J. Zhang, S. H. Song, and K. B. Letaief, “Stochastic joint radio
and computational resource management for multi-user mobile edge
computing systems,” IEEE Trans. Wireless Commun., vol. 16, no. 9, pp.
5994–6009, Sept 2017.
[Mer19] M. Merluzzi, P. Di Lorenzo, S. Barbarossa, and V. Frascolla, “Joint
resource allocation for latency-constrained dynamic computation
offloading with MEC,” submitted to IEEE WCNC 2019, Marrakech,
Morocco, April 2019.
[Yang18] Y. Yang, S. Zhao, W. Zhang, Y. Chen, X. Luo, and J. Wang, “DEBTS:
Delay energy balanced task scheduling in homogeneous fog networks,”
IEEE Internet of Things Journal, vol. 5, no. 3, pp. 2094–2106, 2018.
[Sun17] Y. Sun, S. Zhou, and J. Xu, “EMM: Energy-aware mobility management
for mobile edge computing in ultra dense networks,” IEEE Journal on Sel.
Areas in Comm., vol. 35, no. 11, pp. 2637–2646, Nov 2017.
[Chen17] L. Chen-Feng, M. Bennis, and H.V. Poor, “Latency and reliability-aware
task offloading and resource allocation for mobile edge computing,” in
Proc. of 2017 IEEE Globecom Workshops (GC Wkshps), Singapore
2017, pp. 1–7.
[Chen18] L. Chen-Feng, M. Bennis, M. Debbah, and H.V. Poor, “Dynamic task
offloading and resource allocation for ultra-reliable low-latency edge
computing,” [Online]. Available: arXiv:1812.08076.
[Boyd04] S. Boyd and L. Vandenberghe, Convex optimization, Cambridge
university press, 2004.
[Lit11] John DC Little, “Little’s law as viewed on its 50th anniversary,”
Operations research, vol. 59, no. 3, pp. 536–549, 2011.
[Nee10] Michael J. Neely, Stochastic Network Optimization with Application to
Communication and Queueing Systems, Morgan and Claypool
Publishers, 2010.
[Mer19-2] M. Merluzzi, P. Di Lorenzo, S. Barbarossa, and V. Frascolla, “Joint
resource allocation for latency-constrained dynamic mobile edge
computing,” Submitted to IEEE Transactions on Mobile Computing,
2019.
[Mer19-3] M. Merluzzi, P. Di Lorenzo, and S. Barbarossa, “Dynamic joint resource
allocation and user assignment in multi-access edge computing,” in Proc. of
IEEE ICASSP 2019, Brighton, UK, May 2019.
[Sim18] Osvaldo Simeone, “A brief introduction to machine learning for
engineers,” Foundations and Trends in Signal Processing, vol. 12, no. 3-
4, pp. 200–431, 2018.
[OUEIS14] J. Oueis, E. Calvanese-Strinati, A. De Domenico, and S. Barbarossa, “On
the impact of backhaul network on distributed cloud computing,” in 2014
IEEE Wireless Commun. Netw. Conf. Workshops (WCNCW), Apr.
2014, pp. 12–17.
[SMIT12] M. Smit, M. Shtern, B. Simmons, and M. Litoiu, “Partitioning
applications for hybrid and federated clouds,” in Proc. of the 2012 Conf.
Center Advan. Stud. Collab. Reser. IBM Corp., 2012, pp. 27–41.
[WANG04] C. Wang and Z. Li, “Parametric analysis for adaptive computation
offloading,” in Proc. of the ACM SIGPLAN Conf. Program. Lang.
Design Implem., 2004, pp. 119–130.
[VERB13] T. Verbelen, T. Stevens, F. D. Turck, and B. Dhoedt, “Graph partitioning
algorithms for optimizing software deployment in mobile cloud
computing,” Fut. Gener. Comput. Syst., vol. 29, no. 2, pp. 451– 459,
2013.
[KHAL] S. Khalili and O. Simeone, “Inter-layer per-mobile optimization of cloud
mobile computing: a message-passing approach,” Trans. Emerg.
Telecom. Tech., vol. 27, no. 6, pp. 814–827.
[AZIM16] S. M. Azimi, O. Simeone, O. Sahin, and P. Popovski, “Ultrareliable cloud
mobile computing with service composition and superposition coding,”
in Proc. of Ann. Conf. Inf. Sci. Syst. (CISS), Mar. 2016, pp. 442–447.
[GARG11] S. K. Garg, S. K. Gopalaiyengar, and R. Buyya, “SLA-based resource
provisioning for heterogeneous workloads in a virtualized cloud
datacenter,” in Proc. of the 11th Int. Conf. Algor. Archit. Parallel Process.
- Volume Part I, ser. ICA3PP’11. Berlin, Heidelberg: Springer-Verlag,
2011, pp. 371–384.
[NATH10] R. Nathuji, A. Kansal, and A. Ghaffarkhah, “Q-clouds: Managing
performance interference effects for QoS-aware clouds,” in Proc. of the
5th Eur. Conf. Comput. Syst. New York, NY, USA: ACM, 2010, pp. 237–
250.
[VERMA08] A. Verma, P. Ahuja, and A. Neogi, “pMapper: Power and migration cost
aware application placement in virtualized systems,” in Proc. of the 9th
ACM/IFIP/USENIX Int. Conf. Middleware. Springer-Verlag New York,
Inc., 2008, pp. 243–264.
[LIANG12] H. Liang, D. Huang, and D. Peng, “On economic mobile cloud computing
model,” in Mobile Comput. Applic. Serv. Springer Berlin Heidelberg,
2012, vol. 76, pp. 329–341.
[LIANG12-2] H. Liang, L. Cai, D. Huang, X. Shen, and D. Peng, “An SMDP-based
service model for interdomain resource allocation in mobile cloud
networks,” IEEE Trans. Veh. Technol., vol. 61, no. 5, pp. 2222–2232, Jun.
2012.
[DIVA14] V. Di Valerio and F. Lo Presti, “Optimal virtual machines allocation in
mobile femto-cloud computing: An MDP approach,” in Proc. of IEEE
Wireless Commun. Netw. Conf. Work. (WCNCW), Apr. 2014, pp. 7–11.
[YANG_CAO15] L. Yang, J. Cao, H. Cheng, and Y. Ji, “Multi-user computation
partitioning for latency sensitive mobile cloud applications,” IEEE Trans.
Comput., vol. 64, no. 8, pp. 2253–2266, Aug. 2015.
[YANG12] L. Yang, J. Cao, and H. Cheng, “Resource constrained multi-user
computation partitioning for interactive mobile cloud applications,”
Technical Report, 2012.
[YANG12-2] L. Yang, J. Cao, S. Tang, T. Li, and A. Chan, “A framework for
partitioning and execution of data stream applications in mobile cloud
computing,” in Proc. of 2012 IEEE 5th Int. Conf. Cloud Comput.
(CLOUD), Jun. 2012, pp. 794–802.
[JIA16] M. Jia, W. Liang, Z. Xu, and M. Huang, “Cloudlet load balancing in
wireless metropolitan area networks,” in Proc. of 35th Ann. IEEE Int.
Conf. Comp. Commun. (INFOCOM), Apr. 2016, pp. 1-9.
[YAO17] D. Yao, L. Gui, F. Hou, F. Sun, D. Mo, and H. Shan, “Load balancing
oriented computation offloading in mobile cloudlet,” in 2017 IEEE 86th
Vehicular Technology Conference (VTC-Fall), Sept. 2017, pp. 1–6.
[MUD09] R. Mudumbai, S. Singh, and U. Madhow, “Medium access control for 60
GHz outdoor mesh networks with highly directional links,” in Proc. of
2009 IEEE INFOCOM, Apr. 2009, pp. 2871–2875.
[WEI14] R. J. Weiler, M. Peter, W. Keusgen, H. Shimodaira, K. T. Gia, and
K. Sakaguchi, “Outdoor millimeter-wave access for heterogeneous
networks — path loss and system performance,” in 2014 IEEE 25th Ann.
Int. Symp. Pers., Indoor, and Mobile Radio Commun. (PIMRC), Sep.
2014, pp. 2189–2193.
[SAK15] K. Sakaguchi, G. Tran, H. Shimodaira, S. Nanba, T. Sakurai, K.
Takinami, I. Siaud, E. Calvanese Strinati, A. Capone, I. Karls, R.
Arefi, and T. Haustein, “Millimeter-wave evolution for 5G cellular
networks,” IEICE Trans. Commun., vol. E98.B, no. 3, pp. 388–402, 2015.
[D2.4] 5G-MiEdge deliverable D2.4, “Method of site specific deployment of
mmWave edge cloud”, Available online at: http://5g-miedge.eu.
[TS23.203] 3GPP TS 23.203 V13.6.0, “Policy and charging control architecture
(Release 13),” 2012.
[GIA18] G. K. Tran, H. Shimodaira, and K. Sakaguchi, “User
Satisfaction Constraint Adaptive Sleeping in 5G mmWave
Heterogeneous Cellular Network,” IEICE Trans. Commun., Vol. E101-
B, No. 10, Oct. 2018.
[GIA18-2] G. K. Tran, R. Santos, H. Ogawa, M. Nakamura, K. Sakaguchi, and A.
Kassler, “Context-Based Dynamic Meshed Backhaul Construction for 5G
Heterogeneous Networks,” Special Issue on Trends, Issues and
Challenges toward 5G, Journal of Sensor and Actuator Networks, MDPI,
Sep. 2018.