
IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 21, NO. 1, FEBRUARY 2013

Back-Pressure-Based Packet-by-Packet Adaptive Routing in Communication Networks

Eleftheria Athanasopoulou, Member, IEEE, Loc X. Bui, Associate Member, IEEE, Tianxiong Ji, Member, IEEE, R. Srikant, Fellow, IEEE, and Alexander Stolyar

Abstract—Back-pressure-based adaptive routing algorithms where each packet is routed along a possibly different path have been extensively studied in the literature. However, such algorithms typically result in poor delay performance and involve high implementation complexity. In this paper, we develop a new adaptive routing algorithm built upon the widely studied back-pressure algorithm. We decouple the routing and scheduling components of the algorithm by designing a probabilistic routing table that is used to route packets to per-destination queues. The scheduling decisions in the case of wireless networks are made using counters called shadow queues. The results are also extended to the case of networks that employ simple forms of network coding. In that case, our algorithm provides a low-complexity solution to optimally exploit the routing–coding tradeoff.

Index Terms—Back-pressure algorithm, network coding, routing, scheduling.

I. INTRODUCTION

THE BACK-PRESSURE algorithm introduced in [1] has been widely studied in the literature. While the ideas behind scheduling using the weights suggested in that paper have been successful in practice in base stations and routers, the adaptive routing algorithm is rarely used. The main reason for this is that the routing algorithm can lead to poor delay performance due to routing loops. Additionally, the implementation of the back-pressure algorithm requires each node to maintain per-destination queues that can be burdensome for a wireline or wireless router. Motivated by these considerations, we reexamine the back-pressure routing algorithm in this paper and design a new algorithm that has much superior performance and low implementation complexity.

Manuscript received January 19, 2010; revised April 25, 2011; accepted March 27, 2012; approved by IEEE/ACM TRANSACTIONS ON NETWORKING Editor K. Kar. Date of publication May 02, 2012; date of current version February 12, 2013. This work was supported by the ARO under MURI 490405-239012-191100 and MURI W911NF-08-1-0233, the DTRA under Grant HDTRA1-08-1-0016, and the NSF under Grant CNS 07-21286.

E. Athanasopoulou and R. Srikant are with the Coordinated Science Laboratory and Department of Electrical and Computer Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61801 USA (e-mail: [email protected]; [email protected]).

L. X. Bui is with the School of Engineering, Tan Tao University, Long An, Vietnam (e-mail: [email protected]).

T. Ji was with the Coordinated Science Laboratory and Department of Electrical and Computer Engineering, University of Illinois at Urbana–Champaign, Urbana, IL 61801 USA. He is now with Google, Inc., Mountain View, CA 94043 USA (e-mail: [email protected]).

A. Stolyar is with Bell Labs, Alcatel-Lucent, Murray Hill, NJ 07974 USA (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TNET.2012.2195503

Prior work in this area [2] has recognized the importance of doing shortest-path routing to improve delay performance and modified the back-pressure algorithm to bias it toward taking shortest-hop routes. A part of our algorithm has similar motivating ideas. In addition to provably throughput-optimal routing that minimizes the number of hops taken by packets in the network, we decouple (to a certain degree) routing and scheduling in the network through the use of probabilistic routing tables and the so-called shadow queues. The min-hop routing idea was studied first in a conference paper [3], and shadow queues were introduced in [4] and [5], but the key step of partially decoupling the routing and scheduling, which leads to both significant delay reduction and the use of per-next-hop queueing, is original here. In [4], the authors introduced the shadow queue to solve a fixed routing problem. The min-hop routing idea is also studied in [6], but their solution requires even more queues than the original back-pressure algorithm. Compared to [4], the main purpose of this paper is to study whether the shadow queue approach extends to the case of joint scheduling and routing. The first contribution is to come up with a formulation in which the number of hops is minimized. It is interesting to contrast this contribution with [6]. The formulation in [6] has the same objective as ours, but their solution involves per-hop queues, which dramatically increases the number of queues, even compared to the back-pressure algorithm. Our solution is significantly different: We use the same number of shadow queues as the back-pressure algorithm, but the number of real queues is very small (per neighbor). The new idea here is to perform routing via probabilistic splitting, which allows the dramatic reduction in the number of real queues. Finally, an important observation in this paper, not found in [4], is that the partial "decoupling" of shadow back-pressure and real packet transmission allows us to activate more links than a regular back-pressure algorithm would. This idea appears to be essential to reduce delays in the routing case, as shown in the simulations.

We also consider networks where simple forms of network coding are allowed [7]. In such networks, a relay between two other nodes XORs packets and broadcasts them to decrease the number of transmissions. There is a tradeoff between choosing long routes to possibly increase network coding opportunities (see the notion of reverse carpooling in [8]) and choosing short routes to reduce resource usage. Our adaptive routing algorithm can be modified to automatically realize this tradeoff with good delay performance. In addition, network coding requires each node to maintain more queues [9], and our routing solution at least reduces the number of queues to be maintained for routing purposes, thus partially mitigating the problem. An offline


algorithm for optimally computing the routing–coding tradeoff was proposed in [10]. Our optimization formulation bears similarities to this work, but our main focus is on designing low-delay online algorithms. Back-pressure solutions to network coding problems have also been studied in [11]–[13], but the adaptive routing–coding tradeoff solution that we propose here has not been studied previously.

We summarize our main results as follows.

• Using the concept of shadow queues, we partially decouple routing and scheduling. A shadow network is used to update a probabilistic routing table that packets use upon arrival at a node. The same shadow network, with a back-pressure algorithm, is used to activate transmissions between nodes. However, first, actual transmissions send packets from first-in–first-out (FIFO) per-link queues, and second, potentially more links are activated, in addition to those activated by the shadow algorithm.

• The routing algorithm is designed to minimize the average number of hops used by packets in the network. This idea, along with the scheduling/routing decoupling, leads to delay reduction compared with the traditional back-pressure algorithm.

• Each node has to maintain counters, called shadow queues, per destination. This is very similar to the idea of maintaining a routing table per destination. However, the real queues at each node are per-next-hop queues in the case of networks that do not employ network coding. When network coding is employed, per-previous-hop queues may also be necessary, but this is a requirement imposed by network coding, not by our algorithm.

• The algorithm can be applied to wireline and wireless networks. Extensive simulations show dramatic improvement in delay performance compared to the back-pressure algorithm.

The rest of this paper is organized as follows. We present the network model in Section II. In Sections III and IV, the traditional back-pressure algorithm and its modified version are introduced. We develop our adaptive routing and scheduling algorithm for wireline and wireless networks with and without network coding in Sections V–VII. In Section VIII, the simulation results are presented. We conclude our paper in Section IX.

II. NETWORK MODEL

We consider a multihop wireline or wireless network represented by a directed graph $\mathcal{G} = (\mathcal{N}, \mathcal{L})$, where $\mathcal{N}$ is the set of nodes and $\mathcal{L}$ is the set of directed links. A directed link that can transmit packets from node $n$ to node $j$ is denoted by $(n,j) \in \mathcal{L}$. We assume that time is slotted and define the link capacity $c_{nj}$ to be the maximum number of packets that link $(n,j)$ can transmit in one time-slot.

Let $\mathcal{F}$ be the set of flows that share the network. Each flow is associated with a source node and a destination node, but no route is specified between these nodes. This means that the route can be quite different for packets of the same flow. Let $s(f)$ and $d(f)$ be the source and destination nodes, respectively, of flow $f$. Let $x_f$ be the rate (packets/slot) at which packets are generated by flow $f$. If the demand on the network, i.e., the set of flow rates, can be satisfied by the available capacity, there must exist a routing algorithm and a scheduling algorithm such that the link rates lie in the capacity region. To precisely state this condition, we define $\mu_{nj}^d$ to be the rate allocated on link $(n,j)$ to packets destined for node $d$. Thus, the total rate allocated to all flows at link $(n,j)$ is given by $\mu_{nj} = \sum_d \mu_{nj}^d$. Clearly, for the network to be able to meet the traffic demand, we should have

$$\{\mu_{nj}\} \in \Lambda$$

where $\Lambda$ is the capacity region of the network for 1-hop traffic. The capacity region for 1-hop traffic contains all sets of rates that are stabilizable by some scheduling policy assuming all traffic is 1-hop traffic. As a special case, in a wireline network, the constraints are simply $\mu_{nj} \le c_{nj}$ for all $(n,j) \in \mathcal{L}$.

As opposed to $\Lambda$, let $\mathcal{C}$ denote the capacity region of the multihop network, i.e., for any set of flow rates $\{x_f\} \in \mathcal{C}$, there exists some routing and scheduling algorithm that stabilizes the network. In addition, a flow conservation constraint must be satisfied at each node, i.e., the total rate at which traffic can possibly arrive at each node destined to $d$ must be less than or equal to the total rate at which traffic can depart from the node destined to $d$:

$$\sum_{f \in \mathcal{F}} x_f\, 1_{\{s(f)=n,\ d(f)=d\}} + \sum_{i:(i,n)\in\mathcal{L}} \mu_{in}^d \;\le\; \sum_{j:(n,j)\in\mathcal{L}} \mu_{nj}^d, \qquad \forall\, n \ne d \qquad (1)$$

where $1_{\{\cdot\}}$ denotes the indicator function. Given a set of arrival rates $\{x_f\}$ that can be accommodated by the network, one version of the multicommodity flow problem is to find traffic splits $\{\mu_{nj}^d\}$ such that (1) is satisfied. However, finding the appropriate traffic split is computationally prohibitive and requires knowledge of the arrival rates. The back-pressure algorithm to be described next is an adaptive solution to the multicommodity flow problem.

III. THROUGHPUT-OPTIMAL BACK-PRESSURE ALGORITHM AND ITS LIMITATIONS

The back-pressure algorithm was first described in [1] in the context of wireless networks and independently discovered later in [14] as a low-complexity solution to certain multicommodity flow problems. This algorithm combines the scheduling and routing functions. While many variations of this basic algorithm have been studied, they primarily focus on maximizing throughput and do not consider QoS performance. Our algorithm uses some of these ideas as building blocks, and therefore, we first describe the basic algorithm, its drawbacks, and some prior solutions.

The algorithm maintains a queue for each destination at each node. Since the number of destinations can be as large as the number of nodes, this per-destination queueing requirement can be quite large for practical implementation in a network. At each link, the algorithm assigns a weight to each possible destination, called the back-pressure. Define the back-pressure at link $(n,j)$ for destination $d$ at slot $t$ to be

$$P_{nj}^d[t] = Q_n^d[t] - Q_j^d[t]$$


where $Q_n^d[t]$ denotes the number of packets at node $n$ destined for node $d$ at the beginning of time-slot $t$. Under this notation, $Q_d^d[t] = 0$ for all $t$. Assign a weight $w_{nj}[t]$ to each link $(n,j)$, where $w_{nj}[t]$ is defined to be the maximum back-pressure over all possible destinations, i.e.,

$$w_{nj}[t] = \max_{d} P_{nj}^d[t].$$

Let $d_{nj}^*[t]$ be the destination which has the maximum weight on link $(n,j)$:

$$d_{nj}^*[t] \in \arg\max_{d} P_{nj}^d[t]. \qquad (2)$$

If there are ties in the weights, they can be broken arbitrarily. Packets belonging to destination $d_{nj}^*[t]$ are scheduled for transmission over the activated link $(n,j)$. A schedule is a set of links that can be activated simultaneously without interfering with each other. Let $\Gamma$ denote the set of all schedules. The back-pressure algorithm finds an optimal schedule $\pi^*[t]$ that is derived from the optimization problem

$$\pi^*[t] \in \arg\max_{\pi \in \Gamma} \sum_{(n,j)\in\mathcal{L}} \pi_{nj}\, c_{nj}\, w_{nj}[t]. \qquad (3)$$

Specifically, if the capacity of every link has the same value, the chosen schedule maximizes the sum of weights over all schedules. At time $t$, for each activated link $(n,j)$, we remove $c_{nj}$ packets from $Q_n^{d_{nj}^*[t]}$ if possible, and transmit those packets to node $j$. We assume that departures occur first in a time-slot, and that external arrivals and packets transmitted over a link in a particular time-slot are available to the receiving node at the next time-slot. Thus, the evolution of the queue $Q_n^d$ is as follows:

$$Q_n^d[t+1] = \Big( Q_n^d[t] - \sum_{j:(n,j)\in\mathcal{L}} \mu_{nj}^d[t] \Big)^{+} + \sum_{i:(i,n)\in\mathcal{L}} \mu_{in}^d[t] + \sum_{f\in\mathcal{F}} a_f[t]\, 1_{\{s(f)=n,\ d(f)=d\}} \qquad (4)$$

where $\mu_{nj}^d[t]$ is the number of packets destined for $d$ transmitted over link $(n,j)$ in time-slot $t$ and $a_f[t]$ is the number of packets generated by flow $f$ at time $t$. It has been shown in [1] that the back-pressure algorithm maximizes the throughput of the network.

A key feature of the back-pressure algorithm is that packets may not be transferred over a link unless the back-pressure over the link is nonnegative and the link is included in the chosen schedule. This feature prevents further congesting nodes that are already congested, thus providing the adaptivity of the algorithm. Notice that, because all links can be activated without interfering with each other in a wireline network, $\Gamma$ is the set of all links, so the back-pressure algorithm can be localized at each node and operated in a distributed manner in the wireline network.
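The following minimal sketch (in Python, with illustrative data structures that are not from the paper) shows one time-slot of the basic back-pressure algorithm described above: compute per-link weights as the maximum differential backlog, pick the max-weight schedule as in (3), and serve the winning destination on each activated link.

```python
# A minimal back-pressure step, assuming queues[node][dest] -> backlog,
# capacity[(n, j)] -> c_nj, and schedules = list of feasible link sets (Gamma).

def backpressure_step(queues, links, capacity, schedules):
    weight, best_dest = {}, {}
    for (n, j) in links:
        if not queues[n]:
            continue
        # Back-pressure P_nj^d = Q_n^d - Q_j^d, maximized over destinations d.
        pressures = {d: queues[n][d] - queues[j].get(d, 0) for d in queues[n]}
        best_dest[(n, j)] = max(pressures, key=pressures.get)
        weight[(n, j)] = pressures[best_dest[(n, j)]]

    # Choose the schedule maximizing sum of c_nj * w_nj, as in (3).
    chosen = max(schedules, key=lambda s: sum(capacity[l] * weight.get(l, 0) for l in s))

    # Serve up to c_nj packets of the winning destination on each activated link
    # whose back-pressure is positive.
    for (n, j) in chosen:
        if weight.get((n, j), 0) <= 0:
            continue
        d = best_dest[(n, j)]
        moved = min(capacity[(n, j)], queues[n][d])
        queues[n][d] -= moved
        queues[j][d] = queues[j].get(d, 0) + moved
    return chosen
```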

The back-pressure algorithm has several disadvantages that prohibit practical implementation.

• The back-pressure algorithm requires maintaining queues for each potential destination at each node. This queue management requirement could be a prohibitive overhead for a large network.

• The back-pressure algorithm is an adaptive routing algorithm that explores the network resources and adapts to different levels of traffic intensity. However, it might also lead to high delays because it may choose long paths unnecessarily. High delays are also a result of maintaining a large number of queues at each node, each of which can be large. The queues can be large because, under the back-pressure algorithm, the average size of a per-destination queue at a node can grow with the "distance" from the node to the destination. Furthermore, a large number of queues takes away the statistical multiplexing advantage: Since only one queue can be scheduled at a time, some of the allocated transmission capacity can be left unused if the scheduled queue is too short; this can contribute to high latency as well.

In this paper, we address the high-delay and queueing-complexity issues. The computational complexity issue for wireless networks is not addressed here. We simply use the recently studied greedy maximal scheduling (GMS) algorithm, which here we call the largest-weight-first (LWF) algorithm. The LWF algorithm requires the same queue structure that the back-pressure algorithm uses, and it calculates the back-pressure at each link in the same way. The difference between the two algorithms lies only in how a schedule is picked. Let $\mathcal{A}$ denote the set of all links initially, and let $I(l)$ be the set of links within the interference range of link $l$, including $l$ itself. At each time-slot, the LWF algorithm picks the link $l^*$ with the maximum weight first and removes the links within the interference range of $l^*$ from $\mathcal{A}$, i.e., $\mathcal{A} \leftarrow \mathcal{A} \setminus I(l^*)$. Then, it picks the link with the maximum weight in the updated set $\mathcal{A}$, and so forth. It should be noted that the LWF algorithm reduces the computational complexity at the price of a reduction of the network capacity region. The LWF algorithm where the weights are queue lengths (not back-pressures) has been extensively studied in [15]–[19]. While these studies indicate that there may be a reduction in throughput due to LWF in certain special network topologies, it seems to perform well in practice, and so we adopt it here for simulations.
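A compact sketch of this greedy maximal (LWF) selection step is given below; the dictionary-based interference map is an assumption made for illustration.

```python
# Greedy maximal (LWF) scheduling: repeatedly pick the heaviest remaining link
# and silence everything within its interference range.

def lwf_schedule(weights, interferes):
    """weights[link] -> link weight (max back-pressure);
    interferes[link] -> set of conflicting links, including the link itself."""
    remaining = set(weights)
    schedule = []
    while remaining:
        l = max(remaining, key=lambda x: weights[x])
        schedule.append(l)
        remaining -= interferes[l]
    return schedule
```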

In the rest of this paper, we present our main results, which eliminate many of the problems associated with the back-pressure algorithm.

IV. MIN-RESOURCE ROUTING USING BACK-PRESSURE ALGORITHM

As mentioned in Section III, the back-pressure algorithm explores all paths in the network and, as a result, may choose paths that are unnecessarily long and may even contain loops, thus leading to poor performance. We address this problem by introducing a cost function that measures the total amount of resources used by all flows in the network. Specifically, we add up the traffic loads on all links in the network and use this as our cost function. The goal then is to minimize this cost subject to network capacity constraints.


Given a set of packet arrival rates that lie within the capacity region, our goal is to find the routes for flows so that we use as few resources as possible in the network. Thus, we formulate the following optimization problem:

$$\begin{aligned} \min \quad & \sum_{(n,j)\in\mathcal{L}} \sum_{d} \mu_{nj}^d \\ \text{s.t.} \quad & \sum_{f\in\mathcal{F}} x_f\, 1_{\{s(f)=n,\ d(f)=d\}} + \sum_{i:(i,n)\in\mathcal{L}} \mu_{in}^d \le \sum_{j:(n,j)\in\mathcal{L}} \mu_{nj}^d, \quad \forall\, n \ne d \\ & \{\mu_{nj}\} \in \Lambda, \quad \text{where } \mu_{nj} = \sum_d \mu_{nj}^d. \end{aligned} \qquad (5)$$

We now show how a modification of the back-pressure algorithm can be used to solve this min-resource routing problem. (Note that similar approaches have been used in [20]–[24] to solve related resource allocation problems.)

Let $\{q_n^d\}$ be the Lagrange multipliers corresponding to the flow conservation constraints in problem (5). Appending these constraints to the objective, we get

$$\min_{\{\mu_{nj}^d \ge 0\},\ \{\mu_{nj}\}\in\Lambda} \; \sum_{(n,j)\in\mathcal{L}} \sum_d \mu_{nj}^d - \sum_{n,d} q_n^d \Big( \sum_{j:(n,j)\in\mathcal{L}} \mu_{nj}^d - \sum_{i:(i,n)\in\mathcal{L}} \mu_{in}^d - \sum_{f\in\mathcal{F}} x_f\, 1_{\{s(f)=n,\ d(f)=d\}} \Big). \qquad (6)$$

If the Lagrange multipliers are known, then the optimal $\{\mu_{nj}^d\}$ can be found by solving

$$\max_{\{\mu_{nj}\}\in\Lambda} \sum_{(n,j)\in\mathcal{L}} \mu_{nj}\, w_{nj}, \qquad \text{where } w_{nj} = \max_d \big( q_n^d - q_j^d - 1 \big).$$

The form of the constraints in (5) suggests the following update algorithm to compute the $q_n^d$:

$$q_n^d[t+1] = \Big( q_n^d[t] + \beta \Big( \sum_{i:(i,n)\in\mathcal{L}} \mu_{in}^d[t] + \sum_{f\in\mathcal{F}} x_f\, 1_{\{s(f)=n,\ d(f)=d\}} - \sum_{j:(n,j)\in\mathcal{L}} \mu_{nj}^d[t] \Big) \Big)^{+} \qquad (7)$$

where $\beta > 0$ is a step-size parameter; see [25] for details. Notice that (7) looks very much like a queue update equation, except for the fact that the arrivals into $q_n^d$ from other links may be smaller than $\sum_{i} \mu_{in}^d[t]$ when the corresponding nodes do not have enough packets. This suggests the following algorithm.
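Before stating the algorithm, the dual update (7) can be made concrete with a short sketch; it is a plain projected subgradient step in which each multiplier behaves like a virtual queue (the names below are illustrative, not from the paper).

```python
# Projected subgradient update of the multipliers q_n^d in (7).
# q[(n, d)]: current multiplier; inflow/outflow[(n, d)]: traffic into and out of
# node n for destination d observed in the current slot (exogenous + relayed).

def dual_update(q, inflow, outflow, beta=0.1):
    return {key: max(0.0, q[key] + beta * (inflow.get(key, 0.0) - outflow.get(key, 0.0)))
            for key in q}
```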

A. Min-Resource Routing by Back-Pressure

At time-slot $t$, the following applies.

Fig. 1. Link weights under the $M$-back-pressure algorithm.

• Each node $n$ maintains a separate queue of packets for each destination $d$; its length is denoted $Q_n^d$. Each link $(n,j)$ is assigned a weight

$$w_{nj}[t] = \max_d \big( \beta\, Q_n^d[t] - \beta\, Q_j^d[t] - 1 \big) \qquad (8)$$

where $\beta > 0$ is a parameter.

• Scheduling/routing rule:

$$\pi^*[t] \in \arg\max_{\pi \in \Gamma} \sum_{(n,j)\in\mathcal{L}} \pi_{nj}\, c_{nj}\, w_{nj}[t]. \qquad (9)$$

• For each activated link $(n,j)$, we remove $c_{nj}$ packets from $Q_n^{d_{nj}^*[t]}$ if possible and transmit those packets to node $j$, where $d_{nj}^*[t]$ achieves the maximum in (8) (see the sketch below).
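As a quick illustration of the only change relative to the basic algorithm, the sketch below computes the biased weight in its rescaled form (10), i.e., maximum differential backlog minus $M$; the numeric check mirrors the Fig. 1 discussion, with an assumed $M = 5$ sitting between the two backlog differences of 6 and 4.

```python
def m_biased_weight(q_src, q_dst, M):
    """q_src[d], q_dst[d]: per-destination backlogs at the endpoints of a link.
    Returns (weight, best destination) under the rescaled rule (10)."""
    pressures = {d: q_src[d] - q_dst.get(d, 0) for d in q_src}
    best = max(pressures, key=pressures.get)
    return pressures[best] - M, best

# Fig. 1-style check with assumed M = 5: a backlog difference of 6 keeps the
# link eligible (weight 1 > 0), while a difference of 4 blocks it (weight -1).
w1, _ = m_biased_weight({"d": 6}, {"d": 0}, M=5)   # -> 1
w2, _ = m_biased_weight({"d": 4}, {"d": 0}, M=5)   # -> -1
```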

Note that the above algorithm does not change if we replace the weights in (8) by the following, rescaled ones:

$$\tilde{w}_{nj}[t] = \max_d \big( Q_n^d[t] - Q_j^d[t] \big) - M \qquad (10)$$

where $M = 1/\beta$; therefore, compared to the traditional back-pressure scheduling/routing, the only difference is that each link weight is equal to the maximum differential backlog minus the parameter $M$. ($M = 0$ reverts the algorithm to the traditional one.) For simplicity, we call this algorithm the $M$-back-pressure algorithm.

The performance of the stationary process "produced" by the algorithm with fixed parameter $\beta$ is within $o(1)$ of the optimal as $\beta$ goes to $0$ (analogous to the proofs in [21] and [22]; see also the related proofs in [23] and [24]):

$$\sum_{(n,j)\in\mathcal{L}} \sum_d \bar{\mu}_{nj}^d \;\le\; \sum_{(n,j)\in\mathcal{L}} \sum_d \mu_{nj}^{d,*} + o(1)$$

where $\{\mu_{nj}^{d,*}\}$ is an optimal solution to (5) and $\bar{\mu}_{nj}^d$ denotes the stationary average rate under the algorithm. Fig. 1 illustrates how the $M$-back-pressure algorithm works

in a simple wireline network. All links can be activated simultaneously without interfering with each other. Notice that the backlog difference of route 1 is 6 and the backlog difference of route 2 is 4. Because the backlog difference of route 2 is smaller than $M$, route 2 is blocked at the current traffic load. The $M$-back-pressure algorithm will automatically choose route 1, which is shorter. Therefore, a proper choice of $M$ can avoid long routes when the traffic is not close to capacity.

Although the $M$-back-pressure algorithm can reduce delay by forcing flows to go through shorter routes, simulations indicate a significant problem with the basic algorithm presented above. A link can be scheduled only if the back-pressure of at least one destination is greater than or equal to $M$. Thus, at light to moderate traffic loads, the delays could be high since the back-pressure may not build up sufficiently fast. In order to


overcome all these adverse issues, we develop a new routing algorithm in the following section. The solution also simplifies the queueing data structure to be maintained at each node.

V. PARN: PACKET-BY-PACKET ADAPTIVE ROUTING AND SCHEDULING ALGORITHM FOR NETWORKS

In this section, we present our adaptive routing and scheduling algorithm. We will call it Packet-by-Packet Adaptive Routing for Networks (PARN) for ease of repeated reference later. First, we introduce the queue structure that is used in PARN.

In the traditional back-pressure algorithm, each node has to

maintain a queue for each destination $d$. Let $N$ and $D$ denote the number of nodes and the number of destinations in the network, respectively. Each node maintains $D$ queues. Generally, each pair of nodes can communicate along a path connecting them. Thus, the number of queues maintained at each node can be as high as one less than the number of nodes in the network, i.e., $D = N - 1$.

Instead of keeping a queue for every destination, each node $n$ maintains a real queue $Q_{nj}$ for every neighbor $j$. Notice that real queues are per-neighbor queues. Let $J_n$ denote the number of neighbors of node $n$, and let $J_{\max} = \max_n J_n$. The number of queues at each node is no greater than $J_{\max}$. Generally, $J_{\max}$ is much smaller than $N$. Thus, the number of queues at each node is much smaller compared to the case of the traditional back-pressure algorithm.

In addition to real queues, each node $n$ also maintains a counter $\tilde{Q}_n^d$, which is called a shadow queue, for each destination $d$. Unlike real queues, counters are much easier to maintain even if the number of counters at each node grows linearly with the size of the network. A back-pressure algorithm run on the shadow queues is used to decide which links to activate. The statistics of the link activations are further used to route packets to the per-next-hop neighbor queues mentioned earlier. The details are explained next.

A. Shadow Queue Algorithm: $M$-Back-Pressure Algorithm

The shadow queues are updated based on the movement of fictitious entities called shadow packets in the network. The movement of the fictitious packets can be thought of as an exchange of control messages for the purposes of routing and scheduling. Just like real packets, shadow packets arrive from outside the network and eventually exit the network. The external shadow packet arrivals are generated as follows: When an exogenous packet arrives at node $n$ for destination $d$, the shadow queue $\tilde{Q}_n^d$ is incremented by 1 and is further incremented by 1 with probability $\varepsilon$. Thus, if the arrival rate of a flow is $x_f$, then the flow generates "shadow traffic" at a rate $x_f(1+\varepsilon)$. In other words, the incoming shadow traffic in the network is $(1+\varepsilon)$ times the incoming real traffic.

The back-pressure for destination $d$ on link $(n,j)$ is taken to be

$$\tilde{P}_{nj}^d[t] = \tilde{Q}_n^d[t] - \tilde{Q}_j^d[t] - M$$

where $M$ is a properly chosen parameter. The choice of $M$ will be discussed in Section VIII.

The evolution of the shadow queue $\tilde{Q}_n^d$ is

$$\tilde{Q}_n^d[t+1] = \Big( \tilde{Q}_n^d[t] - \sum_{j:(n,j)\in\mathcal{L}} \tilde{\mu}_{nj}[t]\, 1_{\{d = d_{nj}^*[t]\}} \Big)^{+} + \sum_{i:(i,n)\in\mathcal{L}} \tilde{\mu}_{in}[t]\, 1_{\{d = d_{in}^*[t]\}} + \sum_{f\in\mathcal{F}} \tilde{a}_f[t]\, 1_{\{s(f)=n,\ d(f)=d\}} \qquad (11)$$

where $\tilde{\mu}_{nj}[t]$ is the number of shadow packets transmitted over link $(n,j)$ in time-slot $t$, $d_{nj}^*[t]$ is the destination that has the maximum weight on link $(n,j)$, and $\tilde{a}_f[t]$ is the number of shadow packets generated by flow $f$ at time $t$. The number of shadow packets scheduled over the links at each time instant is determined by the back-pressure algorithm in (9).

From the above description, it should be clear that the shadow algorithm is the same as the traditional back-pressure algorithm, except that it operates on the shadow queueing system with an arrival rate slightly larger than the real external arrival rate of packets. Note that the shadow queues do not involve any queueing data structure at each node; there are no packets to maintain in FIFO order in each queue. A shadow queue is simply a counter that is incremented by 1 upon a shadow packet arrival and decremented by 1 upon a departure.

The back-pressure algorithm run on the shadow queues is used to activate the links. In other words, if $\pi^*_{nj}[t] = 1$ in (9), then link $(n,j)$ is activated and packets are served from the real queue $Q_{nj}$ at the link in a first-in–first-out fashion. This is, of course, very different from the traditional back-pressure algorithm, where a link is activated to serve packets to a particular destination. Thus, we have to develop a routing scheme that assigns packets arriving at a node to a particular next-hop neighbor so that the system remains stable. We design such an algorithm next.
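The shadow bookkeeping is simple enough to sketch directly; the snippet below illustrates the $(1+\varepsilon)$ arrival inflation and the counter-only nature of the shadow queues (data layout and names are assumptions for illustration).

```python
import random

def shadow_external_arrival(shadow, node, dest, eps=0.1):
    """On each exogenous packet arrival at `node` for `dest`, increment the shadow
    counter by 1, plus an extra 1 with probability eps, so shadow traffic runs at
    (1 + eps) times the real arrival rate."""
    shadow[(node, dest)] = shadow.get((node, dest), 0) + 1
    if random.random() < eps:
        shadow[(node, dest)] += 1

def shadow_transfer(shadow, n, j, dest, num_pkts):
    """Move up to num_pkts shadow packets for `dest` across link (n, j); returns
    the number actually moved, which later feeds the routing-table estimates."""
    moved = min(num_pkts, shadow.get((n, dest), 0))
    shadow[(n, dest)] = shadow.get((n, dest), 0) - moved
    shadow[(j, dest)] = shadow.get((j, dest), 0) + moved
    return moved
```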

B. Adaptive Routing Algorithms

Now we discuss how a packet is routed once it arrives at a node. Let us define a variable $\sigma_{nj}^d[t]$ to be the number of shadow packets "transferred" from node $n$ to node $j$ for destination $d$ during time-slot $t$ by the shadow queue algorithm. Let us denote by $\sigma_{nj}^d$ the expected value of $\sigma_{nj}^d[t]$ when the shadow queueing process is in a stationary regime, and let $\hat{\sigma}_{nj}^d[t]$ denote an estimate of $\sigma_{nj}^d$ calculated at time $t$. (In the simulations, we use exponential averaging, as specified in the next section.)

At each time-slot, the following sequence of operations occurs at each node $n$. A packet arriving at node $n$ for destination $d$ is inserted in the real queue $Q_{nj}$ for next-hop neighbor $j$ with probability

$$p_{nj}^d[t] = \frac{\hat{\sigma}_{nj}^d[t]}{\sum_{k:(n,k)\in\mathcal{L}} \hat{\sigma}_{nk}^d[t]}. \qquad (12)$$

Thus, the estimates $\hat{\sigma}_{nj}^d[t]$ are used to perform routing operations: In today's routers, a packet is routed to its next hop based on routing table entries keyed by its destination. Instead, here the $\hat{\sigma}$'s are used to probabilistically choose the next hop for a packet. Packets waiting at link $(n,j)$ are transmitted over the link when that link is scheduled (see Fig. 2).
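A minimal sketch of this splitting rule, together with the exponential-averaging estimate of Section VI-A that it relies on, is shown below (dictionary layout, key names, and the default averaging weight are assumptions).

```python
import random

def update_estimate(sigma_hat, key, observed, beta=0.02):
    """Exponential averaging as in (13): key = (n, j, d), observed = sigma_nj^d[t]."""
    sigma_hat[key] = (1 - beta) * sigma_hat.get(key, 0.0) + beta * observed

def pick_next_hop(sigma_hat, node, dest, neighbors):
    """Choose neighbor j with probability proportional to sigma_hat[(node, j, dest)],
    implementing the probabilistic splitting rule (12)."""
    weights = [sigma_hat.get((node, j, dest), 0.0) for j in neighbors]
    total = sum(weights)
    if total == 0.0:
        return random.choice(neighbors)   # no shadow traffic observed yet
    r = random.uniform(0.0, total)
    for j, w in zip(neighbors, weights):
        r -= w
        if r <= 0.0:
            return j
    return neighbors[-1]
```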

The first question that one must ask about the above algorithm is whether it is stable if the packet arrival rates from flows are


Fig. 2. Probabilistic splitting algorithm at node $n$.

within the capacity region of the multihop network. This is a difficult question in general. Since the shadow queues are positive recurrent, "good" estimates $\hat{\sigma}_{nj}^d$ can be maintained by simple averaging (e.g., as specified in the next section), and therefore the probabilities in (12) will stay close to their "ideal" values

$$p_{nj}^{d} = \frac{\sigma_{nj}^d}{\sum_{k:(n,k)\in\mathcal{L}} \sigma_{nk}^d}.$$

The following theorem asserts that the real queues are stable under the additional assumption that the routing probabilities are fixed at their ideal values $p_{nj}^d$ (as opposed to being updated via (12), which is what the actual algorithm does).

Theorem 1: Suppose the routing probabilities are fixed at $p_{nj}^d$. Assume that there exists a $\delta > 0$ such that

the set of rates $\{(1+\delta)(1+\varepsilon)x_f\}$ lies in $\mathcal{C}$. Let $a_f[t]$ be the number of packets arriving from flow $f$ at time-slot $t$, with $E[a_f[t]] = x_f$ and $a_f[t]$ bounded. Assume that the arrival process is independent across time-slots and flows (this assumption can be considerably relaxed). Then, the Markov chain jointly describing the evolution of the shadow queues and the real FIFO queues (whose state includes the destination of the real packet in each position of each FIFO queue) is positive recurrent.

Proof: The key ideas behind the proof are outlined here; the details are similar to the proof in [4] and are omitted.

• The average rate at which packets arrive to link $(n,j)$ is strictly smaller than the capacity allocated to the link by the shadow process if $\varepsilon > 0$. (This fact is verified in Appendix A.)

• It follows that the fluid limit of the real-queue process is the same as that of the networks in [26]. Such a fluid limit is stable [26], which implies the stability of our process as well.

VI. IMPLEMENTATION DETAILS

The algorithm presented in Section V ensures that the queue lengths are stable. In this section, we discuss a number of enhancements to the basic algorithm to improve performance.

A. Exponential Averaging

To compute $\hat{\sigma}_{nj}^d[t]$, we use the following iterative exponential averaging algorithm:

$$\hat{\sigma}_{nj}^d[t] = (1-\beta)\,\hat{\sigma}_{nj}^d[t-1] + \beta\,\sigma_{nj}^d[t] \qquad (13)$$

where $0 < \beta < 1$.

B. Token Bucket Algorithm

Computing the average shadow rates and generating random numbers for routing packets may impose a computational overhead on routers, which should be avoided if possible. Thus, as an alternative, we suggest the following simple algorithm. At each node $n$, for each next-hop neighbor $j$ and each destination $d$, maintain a token bucket $b_{nj}^d$. Consider the shadow traffic as guidance for the real traffic, with tokens removed as shadow packets traverse the link. In detail, the token bucket is decremented by $\sigma_{nj}^d[t]$ in each time-slot, but cannot go below the lower bound 0, i.e., $b_{nj}^d \leftarrow \big(b_{nj}^d - \sigma_{nj}^d[t]\big)^{+}$. When $b_{nj}^d[t] < \sigma_{nj}^d[t]$, we say that $\sigma_{nj}^d[t] - b_{nj}^d[t]$ tokens (associated with bucket $b_{nj}^d$) are "wasted" in slot $t$.

Upon a packet arrival at node $n$ for destination $d$, find the token bucket $b_{nj^*}^d$ that has the smallest number of tokens (the minimization is over next-hop neighbors $j$), breaking ties arbitrarily, add the packet to the corresponding real queue $Q_{nj^*}$, and add one token to the corresponding bucket:

$$j^* \in \arg\min_{j:(n,j)\in\mathcal{L}} b_{nj}^d, \qquad b_{nj^*}^d \leftarrow b_{nj^*}^d + 1. \qquad (14)$$

To explain how this algorithm works, denote by $\sigma_{nj}^d$ the average value of $\sigma_{nj}^d[t]$ (in the stationary regime), and by $\lambda_n^d$ the average rate at which real packets for destination $d$ arrive at node $n$. Due to the fact that real traffic is injected by each source at a rate strictly less than the shadow traffic, we have

$$\lambda_n^d < \sum_{j:(n,j)\in\mathcal{L}} \sigma_{nj}^d. \qquad (15)$$

For a single-node network, (15) just means that the arrival rate is less than the available capacity. More generally, it is an assumption that needs to be proved. However, here our goal is to provide intuition behind the token bucket algorithm, so we simply assume (15). Condition (15) guarantees that the token processes are stable (that is, roughly, they cannot run away to infinity) since the total arrival rate to the token buckets at a node is less than the total service rate and the arrivals employ a join-the-shortest-queue discipline. Moreover, since the $\sigma_{nj}^d[t]$ are random processes, the token buckets will "hit 0" in a nonzero fraction of time-slots, except in some degenerate cases; this in turn means that the arrival rate of packets at each token bucket must be less than the token generation rate:

$$\lambda_{nj}^d < \sigma_{nj}^d \qquad (16)$$

where $\lambda_{nj}^d$ is the actual rate at which packets arriving at $n$ and destined for $d$ are routed along link $(n,j)$. Inequality (16) thus describes the idea of the algorithm.

Ideally, in addition to (16), we would like the ratios $\lambda_{nj}^d / \sigma_{nj}^d$ to be equal across all $j$, i.e., the real packet arrival rates at the outgoing links of a node should be proportional to the shadow service rates. It is not difficult to see that if $\varepsilon$ is very small, the proportion will be close to ideal. In general, the token-based algorithm does not guarantee this, which is why it is an approximation.

Page 7: Back Pressure Based Packet by Packet Adaptive

250 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 21, NO. 1, FEBRUARY 2013

Also, to ensure implementation correctness, instead of (14), we use

$$b_{nj^*}^d \leftarrow \min\big( b_{nj^*}^d + 1,\ B_{\max} \big) \qquad (17)$$

i.e., the value of $b_{nj}^d$ is not allowed to go above some relatively large value $B_{\max}$, which is a parameter. Under "normal circumstances," "hitting" the ceiling $B_{\max}$ is a rare event, occurring due to process randomness. The main purpose of having the upper bound is to detect serious anomalies when, for whatever reason, condition (15) "breaks" for prolonged periods of time; such a situation is detected when any $b_{nj}^d$ hits the upper bound frequently.
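The token-bucket alternative is easy to state in code; the sketch below covers the per-slot drain, the join-the-shortest-bucket routing decision, and the cap from (17). The bucket layout and the default cap value are illustrative assumptions.

```python
def token_bucket_shadow_pass(buckets, shadow_moved):
    """Per slot: drain each bucket by the shadow packets that crossed its link,
    flooring at 0 ("wasted" tokens are simply lost). key = (n, j, d)."""
    for key, moved in shadow_moved.items():
        buckets[key] = max(0, buckets.get(key, 0) - moved)

def token_bucket_route(buckets, node, dest, neighbors, b_max=100):
    """On a real packet arrival at `node` for `dest`: send it toward the neighbor
    whose bucket holds the fewest tokens, then credit that bucket by one token,
    capped at b_max as in (17). Returns the chosen next hop."""
    j_star = min(neighbors, key=lambda j: buckets.get((node, j, dest), 0))
    buckets[(node, j_star, dest)] = min(b_max, buckets.get((node, j_star, dest), 0) + 1)
    return j_star
```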

C. Extra Link Activation

Under the shadow back-pressure algorithm, only links with back-pressure greater than or equal to $M$ can be activated. The stability theory ensures that this is sufficient to keep the real queues stable. On the other hand, the delay performance can still be unacceptable. Recall that the parameter $M$ was introduced to discourage the use of unnecessarily long paths. However, under light and moderate traffic loads, the shadow back-pressure at a link may be frequently less than $M$, and thus, packets at such links may have to wait a long time before they are processed. One way to remedy the situation is to activate additional links beyond those activated by the shadow back-pressure algorithm.

The basic idea is as follows. In each time-slot, first run the shadow back-pressure algorithm. Then, add additional links to make the schedule maximal. If the extra activation procedure depends only on the state of the shadow queues (but beyond that, can be random and/or arbitrarily complex), then the stability result of Theorem 1 still holds (with essentially the same proof). Informally, stability prevails because the shadow algorithm alone provides sufficient average throughput on each link, and adding extra capacity "does not hurt"; thus, with such extra activation, a certain degree of "decoupling" between routing (totally controlled by shadow queues) and scheduling (also controlled by shadow queues, but not completely) is achieved.

For example, in the case of wireline networks, by the above arguments, all links can be activated all the time. The shadow routing algorithm ensures that the arrival rate at each link is less than its capacity. In this case, the complete decoupling of routing and scheduling occurs.

In practice, activating extra links that have large queue backlogs leads to better performance than activating an arbitrary set of extra links. However, in this case, the extra activation procedure depends on the state of the real queues, which makes the validity of an analog of Theorem 1 much more subtle. We believe that the argument in this section provides a good motivation for our algorithm, which is confirmed by simulations.
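One possible implementation of this augmentation step is sketched below: starting from the shadow schedule, non-conflicting links are added greedily in order of decreasing real-queue backlog. The conflict-map representation (and its assumed symmetry) is an illustration, not the paper's specification.

```python
def augment_schedule(shadow_schedule, all_links, real_backlog, interferes):
    """shadow_schedule: links activated by the shadow algorithm;
    real_backlog[link] -> backlog of the per-link FIFO queue;
    interferes[link] -> set of conflicting links (including the link itself),
    assumed symmetric."""
    active = set(shadow_schedule)
    blocked = set()
    for l in active:
        blocked |= interferes[l]
    # Consider remaining links by decreasing real backlog, keeping feasibility.
    for l in sorted(set(all_links) - active,
                    key=lambda x: real_backlog.get(x, 0), reverse=True):
        if l not in blocked:
            active.add(l)
            blocked |= interferes[l]
    return active
```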

D. Choice of the Parameter $\varepsilon$

From basic queueing theory, we expect the delay at each link to be inversely proportional to the mean capacity minus the arrival rate at the link. In a wireless network, the capacity at a link is determined by the shadow scheduling algorithm. This capacity is guaranteed to be at least equal to the shadow arrival rate. The arrival rate of real packets is of course smaller. Thus, the difference between the link capacity and the arrival rate could be proportional to $\varepsilon$. Thus, $\varepsilon$ should be sufficiently large to ensure small delays, while it should be sufficiently small to ensure that the capacity region is not diminished significantly. In our simulations, we found that a small, fixed choice of $\varepsilon$ provides a good tradeoff between delay and network throughput.

Fig. 3. Network coding opportunity.

In the case of wireline networks, recall from Section VI-C that all links are activated. Therefore, the parameter $\varepsilon$ plays no role here.

VII. EXTENSION TO THE NETWORK CODING CASE

In this section, we extend our approach to networks where network coding is used to improve throughput. We consider a simple form of network coding illustrated in Fig. 3. When nodes $a$ and $b$ each have a packet to send to the other through an intermediate relay $r$, traditional transmission requires the following set of transmissions: send a packet from $a$ to $r$, then from $r$ to $b$, followed by $b$ to $r$ and $r$ to $a$. Instead, using network coding, one can first send from $a$ to $r$, then from $b$ to $r$, XOR the two packets, and broadcast the XORed packet from $r$ to both $a$ and $b$. This form of network coding reduces the number of transmissions from four to three. However, network coding can improve throughput only if such coding opportunities are available in the network. Routing plays an important role in determining whether such opportunities exist. In this section, we design an algorithm to automatically find the right tradeoff between using possibly long routes to provide network coding opportunities and the delay incurred by using long routes.
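The XOR exchange of Fig. 3 is easy to verify concretely; the toy snippet below models packets as equal-length byte strings (an assumption) and checks that each endpoint recovers the other's packet from the single broadcast.

```python
def xor_bytes(p1, p2):
    return bytes(x ^ y for x, y in zip(p1, p2))

pkt_from_a = b"hello-from-a"
pkt_from_b = b"hello-from-b"

coded = xor_bytes(pkt_from_a, pkt_from_b)          # one broadcast from the relay
assert xor_bytes(coded, pkt_from_a) == pkt_from_b  # node a recovers b's packet
assert xor_bytes(coded, pkt_from_b) == pkt_from_a  # node b recovers a's packet
```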

A. System Model

We still consider a wireless network represented by the graph $\mathcal{G} = (\mathcal{N}, \mathcal{L})$. Let $x_f$ be the rate (packets/slot) at which packets are generated by flow $f$. To facilitate network coding, each node must not only keep track of the destination of a packet, but also remember the node from which the packet was received. Let $\mu_{k,nj}^d$ be the rate at which packets received from either node $k$ or flow $k$, destined for node $d$, are scheduled over link $(n,j)$. Note that, for compactness of notation, we allow $k$ in the definition of $\mu_{k,nj}^d$ to denote either a flow or a node. We assume $\mu_{k,nj}^d$ is zero when such a transmission is not feasible, i.e., when $k$ is a flow but $n$ is not its source node or $d$ is not its destination node, or if the required links are not in $\mathcal{L}$. At node $n$, the network coding scheme may generate a coded packet by "XORing" two packets received from previous-hop nodes $a$ and $b$ and destined for the destination nodes $d_1$ and $d_2$, respectively, and broadcast the coded packet to nodes $a$ and $b$. Let $\nu_{n\to ab}^{d_1 d_2}$ denote the rate at which coded packets can be transferred from node $n$ to nodes $a$ and $b$, destined for nodes $d_1$ and $d_2$, respectively. Notice that, due to symmetry, the following equality holds: $\nu_{n\to ab}^{d_1 d_2} = \nu_{n\to ba}^{d_2 d_1}$. We assume $\nu_{n\to ab}^{d_1 d_2}$ to be zero if the corresponding broadcast is infeasible, e.g., if at least one of the required links does not belong to $\mathcal{L}$ or the previous-hop/destination combination is invalid. There are two kinds of transmissions in our network model:

point-to-point transmissions and broadcast transmissions. The total point-to-point rate at which packets received externally or from a previous-hop node are scheduled on link $(n,j)$ and destined to $d$ is denoted by $\mu_{nj}^d = \sum_k \mu_{k,nj}^d$, and the total broadcast rate at which packets scheduled on link $(n,j)$ are destined to $d$ is denoted by $\nu_{nj}^d$. The total point-to-point rate on link $(n,j)$ is denoted by $\mu_{nj} = \sum_d \mu_{nj}^d$, and the total broadcast rate at which packets are broadcast from node $n$ to nodes $a$ and $b$ is denoted by $\nu_{n\to ab}$. Let $\boldsymbol{\mu}$ be the set of rates including all point-to-point transmissions and broadcast transmissions, i.e., the collection of all of the rate variables above.

The multihop traffic should also satisfy the flow conservation constraints.

Flow Conservation Constraints: For each node $n$, each neighbor $j$, and each destination $d$, we have

(18)

where the left-hand side denotes the total incoming traffic rate at link $(n,j)$ destined to $d$, and the right-hand side denotes the total outgoing traffic rate from link $(n,j)$ destined to $d$. For each node $n$ and each destination $d$, we have

(19)

where $1_{\{\cdot\}}$ denotes the indicator function.

B. Links and Schedules

We allow broadcast transmissions in our network model. In order to define a schedule, we first define two kinds of "links": the point-to-point link and the broadcast link. A point-to-point link $(n,j)$ is a link that supports point-to-point transmission, where $(n,j)\in\mathcal{L}$. A broadcast link $(n,\{a,b\})$ is a "link" that contains links $(n,a)$ and $(n,b)$ and supports broadcast transmission. Let $\mathcal{B}$ denote the set of all broadcast links. Let $\mathcal{E}$ be the union of the set of point-to-point links and the set of broadcast links, i.e., $\mathcal{E} = \mathcal{L} \cup \mathcal{B}$.

We let $\Gamma$ denote the collection of sets of links in $\mathcal{E}$ that can be activated simultaneously. By abusing notation, $\Gamma$ can be thought of as a set of vectors, where each vector is a list of 1's and 0's, with a 1 corresponding to an active link and a 0 corresponding to an inactive link. Then, the capacity region of the network for 1-hop traffic is the convex hull of all schedules, i.e., $\Lambda = \mathrm{co}(\Gamma)$.

C. Queue Structure and Shadow Queue Algorithm

Each node $n$ maintains a set of counters, called shadow queues $\tilde{Q}_{kn}^d$, for each previous hop $k$ and each destination $d$, and for external flows destined for $d$ at node $n$. Each node also maintains a real queue, denoted by $Q_{knj}$, for each previous hop $k$ and each next-hop neighbor $j$, and for external flows with next hop $j$.

By solving the optimization problem with flow conservation constraints, we can work out the back-pressure algorithm for the network coding case (see the brief description in Appendix B). More specifically, for each link $(n,j)$ in the network and for each destination $d$, define the back-pressure at every slot to be

where

and

(20)

For each broadcast at node $n$ to nodes $a$ and $b$, destined for $d_1$ and $d_2$, respectively, define the back-pressure at every slot to be

(21)

The weights associated with each point-to-point link and each broadcast link are defined as follows:

(22)

The rate vector at each time-slot is chosen to satisfy

By running the shadow queue algorithm in the network coding case, we get a set of activated links in $\mathcal{E}$ at each slot.

Next, we describe the evolution of the shadow queue lengths in the network. Notice that the shadow queues at each node are distinguished by their previous hop $k$ and their destination $d$,


so $\tilde{Q}_{kn}^d$ only accepts packets from previous hop $k$ with destination $d$. A similar rule is followed when packets are drained from the shadow queue. We assume that departures occur before arrivals in each slot, and the evolution of the queues is given by

(23)

where the first quantity is the actual number of shadow packets scheduled over link $(n,j)$ and destined for $d$ from the shadow queue $\tilde{Q}_{kn}^d$ at slot $t$, the second is the actual number of coded shadow packets transferred from node $n$ to nodes $a$ and $b$ destined for nodes $d_1$ and $d_2$ at slot $t$, and the last denotes the actual number of shadow packets from external flows received at node $n$ destined for $d$.

D. Implementation Details

The implementation details of the joint adaptive routing and coding algorithm are similar to the case with adaptive routing only, but the notation is more cumbersome. We briefly describe them here.

1) Probabilistic Splitting Algorithm: The probabilistic splitting algorithm chooses the next hop of a packet based on the probabilistic routing table. Let $p_{knj}^d[t]$ be the probability of choosing node $j$ as the next hop once a packet destined for $d$ is received at node $n$ from previous hop $k$ or from an external flow at slot $t$. Assume that $p_{knj}^d[t] = 0$ if $(n,j) \notin \mathcal{L}$; obviously, $\sum_j p_{knj}^d[t] = 1$. Let $\sigma_{knj}^d[t]$ denote the number of potential shadow packets "transferred" from node $n$ to node $j$, destined for $d$, whose previous hop is $k$, during time-slot $t$. Notice that the packet comes from an external flow if $k$ corresponds to a flow rather than a node. Also notice that $\sigma_{knj}^d[t]$ is contributed by shadow-traffic point-to-point transmissions as well as shadow-traffic broadcast transmissions.

We keep track of the average value of $\sigma_{knj}^d[t]$ across time by using the following updating process:

$$\hat{\sigma}_{knj}^d[t] = (1-\beta)\,\hat{\sigma}_{knj}^d[t-1] + \beta\,\sigma_{knj}^d[t] \qquad (24)$$

where $0 < \beta < 1$. The splitting probability is expressed as follows:

$$p_{knj}^d[t] = \frac{\hat{\sigma}_{knj}^d[t]}{\sum_{m:(n,m)\in\mathcal{L}} \hat{\sigma}_{knm}^d[t]}. \qquad (25)$$

Fig. 4. Sprint GMPLS network topology of North America with 31 nodes [27].

2) Token Bucket Algorithm: At each node $n$, for each previous-hop neighbor $k$, next-hop neighbor $j$, and each destination $d$, we maintain a token bucket $b_{knj}^d$. At each time-slot $t$, the token bucket is decremented by $\sigma_{knj}^d[t]$, but cannot go below the lower bound 0. When $b_{knj}^d[t] < \sigma_{knj}^d[t]$, we say that tokens (associated with bucket $b_{knj}^d$) are "wasted" in slot $t$. Upon a packet arrival from previous hop $k$ at node $n$ for destination $d$ at slot $t$, we find the token bucket $b_{knj^*}^d$ that has the smallest number of tokens (the minimization is over next-hop neighbors $j$), breaking ties arbitrarily, add the packet to the corresponding real queue $Q_{knj^*}$, and add one token to the corresponding bucket.

E. Extra Link Activation

Like the case without network coding, extra link activation can reduce delays significantly. As in the case without network coding, we add additional links to the schedule based on the queue lengths at each link. For extra link activation purposes, we only consider point-to-point links and not broadcast links. Thus, we schedule additional point-to-point links by giving priority to those links with larger queue backlogs.

VIII. SIMULATIONS

We consider two types of networks in our simulations: wireline and wireless. Next, we describe the topologies and simulation parameters used in our simulations, and then present our simulation results.

A. Simulation Settings

1) Wireline Setting: The network shown in Fig. 4 has 31 nodes and represents the GMPLS network topology of North America [27]. Each link is assumed to be able to transmit one packet in each slot. We assume that the arrival process is a Poisson process with parameter $\lambda$, and arrivals that come within a slot are considered for service at the beginning of the next slot. Once a packet arrives from an external flow at a node $n$, its destination is decided by


Fig. 5. Wireless network topology with 30 nodes.

a probability mass function $\{p_{nd}\}$, where $p_{nd}$ is the probability that a packet received externally at node $n$ is destined for $d$. Obviously, $p_{nn} = 0$ and $\sum_{d} p_{nd} = 1$. The probability $p_{nd}$ is calculated from the numbers of neighbors of the nodes, where $J_d$ denotes the number of neighbors of node $d$. Thus, we use $p_{nd}$ to split the incoming traffic to each destination based on the degrees of the source and the destination.

2) Wireless Setting: We generated a random network with

30 nodes, which resulted in the topology in Fig. 5. We used the following procedure to generate the random network: 30 nodes are placed uniformly at random in a unit square; then, starting with a zero transmission range, the transmission range was increased until the network was connected. We assume that each link can transmit one packet per time-slot. We assume a 2-hop interference model in our simulations. By a $k$-hop interference model, we mean a wireless network where a link activation silences all other links that are within $k$ hops of the activated link. The packet arrival processes are generated using the same method as in the wireline case. We simulate two cases given the network topology: the no-coding case and the network coding case. In both the wireline and wireless simulations, we chose $\beta$ in (13) to be a small constant, and we use the probabilistic splitting algorithm for all simulations except in Fig. 12.

B. Simulation Results

1) Wireline Networks: First, we compare the performance of three algorithms: the traditional back-pressure algorithm, the basic shadow-queue routing/scheduling algorithm without the extra link activation enhancement, and PARN. Without extra link activation, to ensure that the real arrival rate at each link is less than the link capacity provided by the shadow algorithm, we choose $\varepsilon > 0$. Fig. 6 shows delay as a function of the arrival rate $\lambda$ for the three algorithms. As can be seen from the figure, simply using a value of $M > 0$ does not help to reduce delays without extra link activation. The reason is that, while $M > 0$ encourages the use of shortest paths, links with back-pressure less than $M$ will not be scheduled and thus

Fig. 6. Impact of the parameter $M$ and extra link activation in the Sprint GMPLS network topology.

Fig. 7. Delay performance of PARN and shortest path routing.

can contribute to additional delays. Because we exaggerate the shadow traffic by a factor of $(1+\varepsilon)$, the throughput region of the algorithm without extra link activation is smaller than the throughput region of the traditional back-pressure algorithm.

We also compare the delay performance of PARN with that of shortest-path routing in Fig. 7. For each pair of source and destination, we find a shortest path between them by using Dijkstra's algorithm. When the arrival rate $\lambda$ is small, the difference between the average packet delays of PARN and shortest-path routing is very small. This implies that PARN can obtain delay performance similar to shortest-path routing at light traffic. However, shortest-path routing can only achieve about 60% of the capacity region of the network.

Next, we study the impact of $M$ on the performance of PARN. Fig. 8 shows the delay performance for various $M$ with extra link activation in the wireline network. The delays for different values of $M$ (except $M = 0$) are almost the same in the light-traffic region. Once $M$ is sufficiently larger than zero, extra link activation seems to play a bigger role than the choice of the value of $M$ in reducing the average delays.

The wireline simulations show the usefulness of the PARN algorithm for adaptive routing. However, a wireline network does not capture the scheduling aspects inherent to wireless networks, which are studied next.

2) Wireless Networks: In the case of wireless networks, even with extra link activation, to ensure stability when the arrival rates are within the capacity region, we need $\varepsilon > 0$. We


Fig. 8. Packet delay as a function of $\lambda$ under PARN in the Sprint GMPLS network topology.

Fig. 9. Packet delay as a function of $\lambda$ under PARN in the wireless network under the 2-hop interference model without network coding.

chose a small positive $\varepsilon$ in our simulations due to the reasons mentioned in Section VI.

In Fig. 9, we study wireless networks without network coding. From the figure, we see that the delay performance is relatively insensitive to the choice of $M$ as long as it is sufficiently greater than zero. However, $M$ does play an important role because it suppresses the search for long paths when the traffic load is not high. Extra link activation can be used to decrease delays significantly, especially in the light-traffic region.

In Figs. 10 and 11, we show the corresponding results for the case where both adaptive routing and network coding are used. Comparing Figs. 9 and 10, we see that, when used in conjunction with adaptive routing, network coding can increase the capacity region. We make the following observation regarding the case $M = 0$ in Fig. 11: In this case, no attempt is made to optimize routing in the network. As a result, the delay performance is very poor compared to the cases with $M > 0$ (Fig. 10). In other words, network coding alone does not increase capacity sufficiently to overcome the effects of back-pressure routing. On the other hand, PARN with $M > 0$ harnesses the power of network coding by selecting routes appropriately.

Next, we make the following observation about network coding. Comparing Figs. 10 and 11, we noticed that at moderate to high loads (but when the load is within the capacity region of

Fig. 10. Packet delay as a function of $\lambda$ under PARN for $M > 0$ in the wireless network under the 2-hop interference model with network coding.

Fig. 11. Packet delay as a function of $\lambda$ under PARN for $M = 0$ in the wireless network under the 2-hop interference model with network coding.

the no-coding case), network coding increases delays slightly. We believe that this is due to the fact that packets are stored in multiple queues under network coding at each node: for each next-hop neighbor, a queue for each previous-hop neighbor must be maintained. This seems to result in slower convergence of the routing table.

Finally, we study the performance of the probabilistic splitting algorithm versus the token bucket algorithm. In our simulations, the token bucket algorithm runs significantly faster, by a factor of 2. The reason is that many more calculations are needed for the probabilistic splitting algorithm as compared to the token bucket algorithm. This may have some implications for practice. Thus, in Fig. 12, we compare the delay performance of the two algorithms. As can be seen from the figure, the token bucket and probabilistic splitting algorithms result in similar performance. Therefore, in practice, the token bucket algorithm may be preferable.

IX. CONCLUSION

The back-pressure algorithm, while being throughput-optimal, is not useful in practice for adaptive routing since the delay performance can be very poor. In this paper, we have presented an algorithm that routes packets on shortest-hop paths when possible and decouples routing and scheduling using a probabilistic splitting algorithm built on the concept of shadow queues introduced in [3] and [5].


Fig. 12. Comparison of the probabilistic splitting and token bucket algorithms under PARN in the wireless network under the 2-hop interference model without network coding.

By maintaining a probabilistic routing table that changes slowly over time, real packets do not have to explore long paths to improve throughput; this functionality is performed by the shadow “packets.” Our algorithm also allows extra link activation to reduce delays. The algorithm has also been shown to reduce the queueing complexity at each node, and it can be extended to optimally trade off between routing and network coding.

APPENDIX A
STABILITY OF THE NETWORK UNDER PARN

Our stability result uses the result in [26] and relies on the fact that the arrival rate on each link is less than the available capacity of the link. We will now focus on the case of wireless networks without network coding.

All variables in this appendix are assumed to be the average values, in the stationary regime, of the corresponding variables in the shadow process. Let $f_l^d$ denote the mean shadow traffic rate at link $l$ destined to node $d$. Let $\mu_l$ and $a_n^d$ denote the mean service rate of link $l$ and the exogenous shadow traffic arrival rate destined to $d$ at node $n$, respectively. Notice that $a_n^d$ is determined by our strategy for injecting shadow traffic. The flow conservation equation is as follows:

$$a_n^d + \sum_{l \in \mathcal{I}(n)} f_l^d = \sum_{l \in \mathcal{O}(n)} f_l^d, \qquad \forall\, n \neq d \qquad (26)$$
where $\mathcal{I}(n)$ and $\mathcal{O}(n)$ denote the sets of incoming and outgoing links at node $n$, respectively.

The necessary condition for the stability of the shadow queues is as follows:

$$\sum_{d} f_l^d \le \mu_l, \qquad \forall\, l. \qquad (27)$$

Since we know that the shadow queues are stable under the shadow queue algorithm, expression (27) must be satisfied.

Now we focus on the real traffic. Suppose the system has an equilibrium distribution, and let $\hat{\lambda}_l^d$ be the mean arrival rate of real traffic at link $l$ destined to $d$. The splitting probabilities are expressed as follows:

$$P_l^d = \frac{f_l^d}{F_n^d}, \qquad l \in \mathcal{O}(n), \quad \text{where } F_n^d = \sum_{k \in \mathcal{O}(n)} f_k^d. \qquad (28)$$

Thus, the mean arrival rates at a link satisfy the traffic equation

$$\hat{\lambda}_l^d = P_l^d \Big( \lambda_n^d + \sum_{k \in \mathcal{I}(n)} \hat{\lambda}_k^d \Big), \qquad l \in \mathcal{O}(n) \qquad (29)$$

where $\lambda_n^d = a_n^d/(1+\varepsilon)$ is the mean exogenous arrival rate of real traffic destined to $d$ at node $n$, and $\varepsilon > 0$ is the factor by which the exogenous shadow traffic is inflated relative to the real traffic. The traffic intensity at link $l$ is expressed as

$$\rho_l = \frac{\sum_d \hat{\lambda}_l^d}{\mu_l}. \qquad (30)$$

Now we will show that $\rho_l < 1$ for any link $l$. Let $\hat{\lambda}_l^d = f_l^d/(1+\varepsilon)$ for every link $l$ and destination $d$, and substitute it into expression (29). It is easy to check, using expression (26), that this candidate solution is valid. From (27), the traffic intensity at link $l$ is strictly less than 1 for any link $l$:

$$\rho_l = \frac{\sum_d \hat{\lambda}_l^d}{\mu_l} = \frac{\sum_d f_l^d}{(1+\varepsilon)\,\mu_l} \le \frac{1}{1+\varepsilon} < 1. \qquad (31)$$

Thus, we have shown that the traffic intensity at each link is strictly less than 1.
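As a numerical sanity check of the argument above, the following sketch instantiates relations (26)–(31) on a small hypothetical two-path network (the topology, the rate values, and the use of a $(1+\varepsilon)$ shadow inflation factor are illustrative assumptions, not data from the paper): it derives the splitting probabilities from the shadow rates, solves the traffic equation (29) by fixed-point iteration, and checks that every link has traffic intensity strictly below 1.

# Toy network: source A sends to destination D over the two paths A-B-D and A-C-D.
EPS = 0.1                                   # shadow-traffic inflation factor (assumed)
f = {("A", "B"): 0.66, ("A", "C"): 0.44,    # shadow rates f_l^d, chosen to satisfy (26)
     ("B", "D"): 0.66, ("C", "D"): 0.44}    # with shadow arrival (1 + EPS) * 1.0 at A
mu = {l: 1.0 for l in f}                    # mean link service rates, sum_d f_l^d <= mu_l
lam = {"A": 1.0, "B": 0.0, "C": 0.0}        # real exogenous arrival rates toward D

nodes = ("A", "B", "C")
out_links = {n: [l for l in f if l[0] == n] for n in nodes}
in_links = {n: [l for l in f if l[1] == n] for n in nodes}

# Splitting probabilities (28): proportional to the shadow rates on outgoing links.
P = {l: f[l] / sum(f[k] for k in out_links[l[0]]) for l in f}

# Solve the traffic equation (29) for the real rates by fixed-point iteration.
rate = {l: 0.0 for l in f}
for _ in range(50):
    rate = {l: P[l] * (lam[l[0]] + sum(rate[k] for k in in_links[l[0]])) for l in f}

for l in f:
    rho = rate[l] / mu[l]                          # traffic intensity (30)
    assert abs(rate[l] - f[l] / (1 + EPS)) < 1e-9  # the candidate solution of (29)
    assert rho < 1                                 # conclusion (31)
    print(l, round(rho, 3))

The equality of the computed rates with $f_l^d/(1+\varepsilon)$ is exactly the candidate-solution step in the proof, and the resulting intensities stay below $1/(1+\varepsilon)$ as in (31).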

The wireline network is a special case of a wireless network: substitute the link capacity for $\mu_l$, set $\varepsilon$ to zero, and stability follows directly. The stability of wireless networks with network coding is

similar to that of wireless networks without coding.

APPENDIX B
BACK-PRESSURE ALGORITHM IN THE NETWORK CODING CASE

Given a set of packet arrival rates that lie in the capacity region, our goal is to find routes for flows that use as few resources as possible. Thus, we formulate the following optimization problem for the network coding case:

s.t.

(32)


Consider the Lagrange multipliers corresponding to the flow conservation constraints in problem (32). Appending the constraints to the objective, we get

(33)

If the Lagrange multipliers are known, then the optimal solution can be found by solving

(34)

where

Similarly to the update algorithm in (7), we can derive the update algorithm to compute the Lagrange multipliers:

(35)

With an appropriate choice of the step-size parameter, the update (35) looks very much like a queue update equation. Replacing the averaged quantities by their stochastic counterparts, we get (20)–(23). It can be shown, using the results in [21] and [22], that the stochastic versions of the above equations are stable and that the average rates can approximate the solution to (32) arbitrarily closely.
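The queue-like behavior of the multiplier update can be seen from the following generic dual-subgradient sketch (written with our own placeholder symbols, since the notation of (33)–(35) is not reproduced here): if $\nu_n^d[t]$ denotes the multiplier of the flow conservation constraint for destination $d$ at node $n$ and $\beta > 0$ is the step size, a standard projected update has the form

$$\nu_n^d[t+1] = \Big[\, \nu_n^d[t] + \beta\big(\mathrm{inflow}_n^d[t] - \mathrm{outflow}_n^d[t]\big) \Big]^+ .$$

Dividing through by $\beta$ shows that $\nu_n^d[t]/\beta$ evolves like the length of a queue fed by the same inflow and drained by the same outflow, which is the sense in which (35) looks very much like a queue update equation.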

REFERENCES

[1] L. Tassiulas and A. Ephremides, “Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks,” IEEE Trans. Autom. Control, vol. 37, no. 12, pp. 1936–1948, Dec. 1992.
[2] M. J. Neely, E. Modiano, and C. E. Rohrs, “Dynamic power allocation and routing for time varying wireless networks,” IEEE J. Sel. Areas Commun., vol. 23, no. 1, pp. 89–103, Jan. 2005.
[3] L. Bui, R. Srikant, and A. L. Stolyar, “Novel architectures and algorithms for delay reduction in back-pressure scheduling and routing,” in Proc. IEEE INFOCOM Mini-Conf., Apr. 2009, pp. 2936–2940.
[4] L. Bui, R. Srikant, and A. L. Stolyar, “A novel architecture for delay reduction in the back-pressure scheduling algorithm,” IEEE/ACM Trans. Netw., vol. 19, no. 6, pp. 1597–1609, Dec. 2011.
[5] L. Bui, R. Srikant, and A. L. Stolyar, “Optimal resource allocation for multicast flows in multihop wireless networks,” Phil. Trans. Roy. Soc., Ser. A, vol. 366, pp. 2059–2074, 2008.
[6] L. Ying, S. Shakkottai, and A. Reddy, “On combining shortest-path and back-pressure routing over multihop wireless networks,” in Proc. IEEE INFOCOM, Apr. 2009, pp. 1674–1682.
[7] S. Katti, H. Rahul, W. Hu, D. Katabi, M. Medard, and J. Crowcroft, “XORs in the air: Practical wireless network coding,” Comput. Commun. Rev., vol. 36, pp. 243–254, 2006.
[8] M. Effros, T. Ho, and S. Kim, “A tiling approach to network code design for wireless networks,” in Proc. Inf. Theory Workshop, 2006, pp. 62–66.
[9] H. Seferoglu, A. Markopoulou, and U. Kozat, “Network coding-aware rate control and scheduling in wireless networks,” in Proc. ICME Special Session Netw. Coding Multimedia Streaming, Cancun, Mexico, Jun. 2009, pp. 1496–1499.
[10] S. Sengupta, S. Rayanchu, and S. Banerjee, “An analysis of wireless network coding for unicast sessions: The case for coding-aware routing,” in Proc. IEEE INFOCOM, Anchorage, AK, May 2007, pp. 1028–1036.
[11] T. Ho and H. Viswanathan, “Dynamic algorithms for multicast with intra-session network coding,” IEEE Trans. Inf. Theory, vol. 55, no. 2, pp. 797–815, Feb. 2009.
[12] A. Eryilmaz, D. S. Lun, and B. T. Swapna, “Control of multi-hop communication networks for inter-session network coding,” IEEE Trans. Inf. Theory, vol. 57, no. 2, pp. 1092–1110, Feb. 2011.
[13] L. Chen, T. Ho, S. H. Low, M. Chiang, and J. C. Doyle, “Optimization based rate control for multicast with network coding,” in Proc. IEEE INFOCOM, Anchorage, AK, May 2007, pp. 1163–1171.
[14] B. Awerbuch and T. Leighton, “A simple local-control approximation algorithm for multicommodity flow,” in Proc. 34th Annu. Symp. Found. Comput. Sci., 1993, pp. 459–468.
[15] A. Dimakis and J. Walrand, “Sufficient conditions for stability of longest-queue-first scheduling: Second-order properties using fluid limits,” Adv. Appl. Probab., vol. 38, no. 2, Jun. 2006.
[16] C. Joo, X. Lin, and N. B. Shroff, “Understanding the capacity region of the greedy maximal scheduling algorithm in multi-hop wireless networks,” in Proc. IEEE INFOCOM, 2008, pp. 1103–1111.
[17] A. Brzezinski, G. Zussman, and E. Modiano, “Enabling distributed throughput maximization in wireless mesh networks: A partitioning approach,” in Proc. ACM MobiCom, Sep. 2006, pp. 26–37.
[18] M. Leconte, J. Ni, and R. Srikant, “Improved bounds on the throughput efficiency of greedy maximal scheduling in wireless networks,” in Proc. ACM MobiHoc, 2009, pp. 165–174.
[19] B. Li, C. Boyaci, and Y. Xia, “A refined performance characterization of longest-queue-first policy in wireless networks,” in Proc. ACM MobiHoc, 2009, pp. 65–74.
[20] X. Lin and N. B. Shroff, “Joint rate control and scheduling in multihop wireless networks,” in Proc. 43rd IEEE Conf. Decision Control, Dec. 14–17, 2004, vol. 2, pp. 1484–1489.
[21] M. J. Neely, E. Modiano, and C. Li, “Fairness and optimal stochastic control for heterogeneous networks,” in Proc. IEEE INFOCOM, 2005, vol. 3, pp. 1723–1734.


[22] A. L. Stolyar, “Maximizing queueing network utility subject to stability: Greedy primal-dual algorithm,” Queueing Syst., vol. 50, no. 4, pp. 401–457, 2005.
[23] A. Eryilmaz and R. Srikant, “Fair resource allocation in wireless networks using queue-length-based scheduling and congestion control,” in Proc. IEEE INFOCOM, 2005, vol. 3, pp. 1794–1803.
[24] A. Eryilmaz and R. Srikant, “Joint congestion control, routing and MAC for stability and fairness in wireless networks,” IEEE J. Sel. Areas Commun., vol. 24, no. 8, pp. 1514–1524, Aug. 2006.
[25] X. Lin, N. Shroff, and R. Srikant, “A tutorial on cross-layer optimization in wireless networks,” IEEE J. Sel. Areas Commun., vol. 24, no. 8, pp. 1452–1463, Aug. 2006.
[26] M. Bramson, “Convergence to equilibria for fluid models of FIFO queueing networks,” Queueing Syst. Theory Appl., vol. 22, pp. 5–45, 1996.
[27] Sprint, Overland Park, KS, “Sprint IP network performance,” 2011 [Online]. Available: https://www.sprint.net/performance/

Eleftheria Athanasopoulou (M’02) received the Diploma degree from the University of Patras, Patras, Greece, in 2000, and the M.S. and Ph.D. degrees from the University of Illinois at Urbana–Champaign in 2002 and 2007, respectively, all in electrical and computer engineering.

She was then a Post-Doctoral Research Associate with the University of Illinois at Urbana–Champaign and the Coordinated Science Laboratory. Her research interests include wireline and wireless communication networks, stochastic models, discrete event systems, and failure diagnosis of dynamic systems and networks.

Loc X. Bui (S’07–A’09) received the B.Eng. degree in electronics and telecommunications from the Posts and Telecommunications Institute of Technology, Ho Chi Minh City, Vietnam, in 2003, and the M.S. and Ph.D. degrees in electrical and computer engineering from the University of Illinois at Urbana–Champaign in 2006 and 2008, respectively.

From October 2008 to March 2010, he was with Airvana, Inc., Chelmsford, MA, where he was a Senior Software Engineer and then a Senior Sustaining Engineer. From April 2010 to September 2011, he was a Postdoctoral Scholar with the Department of Management Science and Engineering, Stanford University, Stanford, CA. From October 2011 to January 2012, he was a Visiting Fellow with the Department of Electrical Engineering, Technion—Israel Institute of Technology, Haifa, Israel. He is currently a Lecturer of electrical engineering with the School of Engineering, Tan Tao University, Long An, Vietnam. His research interests include communication networks, wireless communications, game theory, and machine learning.

Tianxiong Ji (M’07) received the B.Eng. and M.S. degrees in electrical engineering from Tsinghua University, Beijing, China, in 2005 and 2007, respectively, and the Ph.D. degree in electrical and computer engineering from the University of Illinois at Urbana–Champaign in 2012.

He has been with Google, Inc., Mountain View, CA, since December 2011. His research interests include wireless networks, queueing theory, data center networks, and wireless communication.

R. Srikant (S’90–M’91–SM’01–F’06) received the B.Tech. degree from the Indian Institute of Technology, Madras, India, in 1985, and the M.S. and Ph.D. degrees from the University of Illinois at Urbana–Champaign in 1988 and 1991, respectively, all in electrical engineering.

He was a Member of Technical Staff with AT&T Bell Laboratories, Murray Hill, NJ, from 1991 to 1995. He is currently with the University of Illinois at Urbana–Champaign, where he is the Fredric G. and Elizabeth H. Nearing Professor with the Department of Electrical and Computer Engineering and a Research Professor with the Coordinated Science Laboratory. His research interests include communication networks, stochastic processes, queueing theory, information theory, and game theory.

Prof. Srikant was an Associate Editor of Automatica, the IEEE TRANSACTIONS ON AUTOMATIC CONTROL, and the IEEE/ACM TRANSACTIONS ON NETWORKING. He has also served on the Editorial Boards of special issues of the IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS and the IEEE TRANSACTIONS ON INFORMATION THEORY. He was the Chair of the 2002 IEEE Computer Communications Workshop in Santa Fe, NM, and a Program Co-Chair of IEEE INFOCOM 2007.

Alexander Stolyar received the Ph.D. degree in mathematics from the Institute of Control Sciences, USSR Academy of Science, Moscow, Russia, in 1989.

He is a Distinguished Member of Technical Staff with the Industrial Mathematics and Operations Research Department, Bell Labs, Alcatel-Lucent, Murray Hill, NJ. Before joining Bell Labs in 1998, he was with the Institute of Control Sciences, Moscow, Russia; Motorola, Arlington Heights, IL; and AT&T Labs–Research, Murray Hill, NJ. He is an Associate Editor of Operations Research; Queueing Systems: Theory and Applications; and Advances in Applied Probability. His research interests are in stochastic processes, queueing theory, and stochastic modeling of communication and service systems.