
Distributed Scheduling and Active Queue

Management in Wireless Networks


Peter Marbach
Department of Computer Science
University of Toronto
Email: marbach@cs.toronto.edu

Abstract—We propose a distributed scheduling and active queue management mechanism for wireless ad hoc networks. The approach is based on a random access scheduler in which the transmission attempt probabilities depend on the local backlog. The resulting mechanism is simple and can be implemented in a distributed fashion. The performance of the resulting protocol can be modelled as a utility maximization problem to establish that it indeed leads to a high throughput and a fair bandwidth allocation.

I. INTRODUCTION

Currently, there is considerable interest in the design of distributed scheduling algorithms for wireless networks that maximize network throughput subject to given fairness constraints. In this paper, we propose and analyze a scheduling mechanism based on a random access protocol with active queue management, where the probability that a node makes a transmission attempt depends on the local backlog. The resulting mechanism is simple and can easily be implemented in a distributed manner. The performance of the proposed mechanism can be modelled by a utility maximization problem to establish that it indeed leads to high throughput and a fair bandwidth allocation.

Related to our approach, in [1] the IEEE 802.11 MAC protocol was combined with an active queue management scheme called Neighborhood RED (NRED) to improve the fairness among TCP flows. NRED uses the channel utilization to estimate the total backlog in an interference region and to determine the packet-drop probabilities. As pointed out in [1], NRED is not guaranteed to accurately track the actual backlog in an interference region, and no performance guarantees can be given. The approach presented here overcomes this problem by replacing the IEEE 802.11 protocol with a CSMA mechanism with backlog-dependent packet transmission probabilities. More recently, scheduling and bandwidth allocation mechanisms have been obtained by means of solving a utility maximization problem (see for example [3], [4], [5]). Active queue management arises in this context naturally in the form of Lagrange multipliers that enforce the rate and scheduling constraints. This approach is elegant and lucid, but has the drawback that it leads to solutions that tend to be too complex to be implemented in practice.

Due to space constraints, we state our results without proofs. A preliminary version of the paper has been presented in [2].

II. PROPERTIES OF A FAIR SCHEDULER

Consider a single-cell ad hoc network where a set of N nodes are within transmission range of each other and thus share a common communication channel. If a node transmits a packet, then this transmission is successful if it does not overlap with a transmission by another node in the network. If a packet transmission collides (overlaps) with a transmission of another node, then the packet is lost and needs to be retransmitted.

Suppose that each node has a single buffer. Let λn be the packet arrival rate to the buffer at node n; let Dn be the expected delay of a packet at node n, and let Pn be the probability that a packet is dropped at node n due to a buffer overflow. For

λ = Σ_{n=1}^{N} λn,

let X(λ) be the network throughput under the network arrival rate λ. We are interested in schedulers with the following property.

Property 1: For a single-cell wireless network consisting of nodes n = 1, ..., N, we say that a scheduler implements a distributed buffer with service rate µ if the following is true.
(a) The expected delay Dn is identical at all nodes, i.e. we have Dn = D, n = 1, ..., N.
(b) The packet-drop probability Pn is identical at all nodes, i.e. we have Pn = P, n = 1, ..., N.
(c) The throughput X(λ) is a non-decreasing function of λ with lim_{λ→∞} X(λ) = µ.

The above property states that a fair scheduler should serve packets as if the network traffic shared a common buffer that is served at rate µ, i.e. all packets entering the network should experience the same expected delay and the same probability of being dropped.

A. Centralized Scheduler

We first consider a centralized scheduler that satisfies Property 1. We assume that the scheduler has perfect information about the backlog at each node, but does not have any knowledge about the packet arrival rates.

Algorithm 1: Consider a single-cell wireless network with N nodes. If at least one buffer has a packet ready to be transmitted and there is currently no packet being transmitted,
then initiate a new transmission by scheduling node n with probability

qn = bn / B,  n = 1, ..., N,

where bn, n = 1, ..., N, is the current backlog at node n and

B = Σ_{n=1}^{N} bn

is the total backlog over all nodes. If node n is scheduled, then it will transmit the packet at the head of its local queue.

The above algorithm schedules nodes proportionally to their current backlog; hence nodes with a high arrival rate (and a higher backlog) tend to be scheduled more often. We have the following result.

Lemma 1: Consider a single-cell wireless network where each node has an infinite buffer, and suppose that packets arrive to node n according to an independent Poisson process with rate λn, and that packet service times are independently and exponentially distributed with mean 1/µ. Then Algorithm 1 implements a distributed buffer with rate µ, i.e. the expected delay D is equal to the expected delay at an M/M/1 queue with arrival rate λ = Σ_{n=1}^{N} λn and service rate µ.

Lemma 1 states that when the packet arrival processes are Poisson and the packet service times are exponentially distributed, the above scheduler satisfies Property 1.

B. Distributed Scheduler

The above centralized algorithm suggests that the probability that a node is scheduled should depend on the current backlog at this node. Using this insight, we consider a distributed algorithm which implements a scheduler with backlog-dependent transmission probabilities.

Algorithm 2: Let q, 0 < q < 1, be a constant which is assumed to be small. Each node n, n = 1, ..., N, uses the following algorithm to schedule its packet transmissions.

1) Channel Sensing: Before each transmission attempt, node n senses whether the channel is idle (no other node is currently transmitting). If the channel has been sensed to be idle for a duration of Li time units, then the node makes a transmission attempt with probability qn = min{1 − ε, q·bn}, where bn is the current backlog at node n and ε > 0 is a small constant to ensure that the attempt probability is strictly smaller than 1.

2) Transmission: After finishing its transmission, node n waits for an ACK from the receiver. If no such ACK is obtained within a fixed period of time, then the node assumes that a collision happened and the packet needs to be retransmitted. If an ACK is obtained, the packet has been transmitted successfully and is removed from the buffer.

We will refer to q as the transmission attempt constant. The above algorithm implements a CSMA protocol [6] with backlog-dependent transmission probabilities: the larger the backlog at a node, the more aggressive the node will be in making a transmission attempt. Below we characterize the throughput under this algorithm.

Suppose that the current backlog at node n is equal to bn, so that node n makes transmission attempts with probability q·bn. The expected number of transmission attempts after an idle slot is then given by

G = qB,  (1)

where B = Σ_{n=1}^{N} bn is the total backlog over all nodes. We will also refer to G as the offered load.

Assuming that q is small, it is well known that the instantaneous throughput (in packets per unit time) is a function of the expected number of transmission attempts G and is given by (see for example [6])

X(G) = G e^{−G} / (Li + (1 − e^{−G}) Lp),  G ≥ 0,

where Lp is the average duration of a packet transmission. In the following, we assume that Lp = 1, so that the instantaneous throughput X(G) is given by

X(G) = G e^{−G} / (Li + (1 − e^{−G})),  G ≥ 0.  (2)

One can show that there exists an optimal offered load G+ which maximizes the throughput X(G), and that the throughput X(G) becomes small as G becomes large. It is well known that the optimal value G+ is given by

G+ = √(2 Li),

and that

lim_{Li→0} X(G+) = 1,

i.e. if the duration of an idle slot Li becomes small, the throughput X(G+) approaches the optimal throughput of 1 (see for example [6]).

III. ACTIVE QUEUE MANAGEMENT

The performance (in terms of throughput) of Algorithm 2 depends on the offered load G. In order to achieve a high throughput, we use an active queue management mechanism that randomly drops incoming packets in order to keep the expected number of transmission attempts G at a desired level G*. We let the probability that a new packet is dropped (called the packet-drop probability p(u)) depend on a congestion signal u.

A. The Basic Mechanisms

Consider the packet-drop probability p(u) given by

p(u) = κu for 0 ≤ u ≤ 1/κ,  and  p(u) = 1 for u > 1/κ,

where κ > 0 characterizes the slope of the function p(u). The congestion signal u is computed as follows: after each idle period the signal u is additively decreased by a constant α > 0, and after each busy period the signal u is additively increased by a constant β > 0.
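The throughput expression of Eq. (2) and the optimal offered load G+ = √(2·Li) can be checked numerically. The following sketch is illustrative only; the function name `throughput`, the grid search, and the value Li = 0.01 are choices made for this example, not part of the paper.

```python
import math

def throughput(G, Li, Lp=1.0):
    """Instantaneous throughput of Eq. (2): X(G) = G e^-G / (Li + (1 - e^-G) Lp)."""
    return G * math.exp(-G) / (Li + (1.0 - math.exp(-G)) * Lp)

Li = 0.01
G_plus = math.sqrt(2.0 * Li)  # well-known approximate maximizer for small Li

# A coarse grid search over G should peak near G+, and the peak throughput
# should approach 1 as the idle-slot duration Li shrinks toward 0.
grid = [i / 1000.0 for i in range(1, 2001)]
G_best = max(grid, key=lambda G: throughput(G, Li))
```

For Li = 0.01 the grid maximizer lies close to G+ ≈ 0.141, and shrinking Li drives X(G+) toward the optimal throughput of 1, matching the stated limit.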
This rule follows the intuition that the congestion signal u should be increased when the channel is busy, and decreased when the channel is idle.

One can show that the probability Pb that at least one node makes a transmission attempt after an idle slot is given by Pb = 1 − e^{−G} (see for example [6]). The expected change ∆u in the signal u between two idle periods of length Li is then equal to

∆u = −α(1 − Pb) + (−α + β)Pb = −α + βPb = −α + β(1 − e^{−G}).

Let G* be given by

G* = ln( β / (β − α) ).  (3)

Note that for G = G*, the expected change in the congestion signal is equal to 0, i.e. we have

−α + β(1 − e^{−G*}) = 0.

Furthermore, it can be shown that if the offered load G is smaller than G*, then the congestion signal u tends to decrease (and hence the packet-drop probability tends to decrease), whereas for G > G* the congestion signal u tends to increase (and hence the packet-drop probability tends to increase). It follows that G* is the unique operating point, and the above active queue management mechanism will stabilize the offered load at G*.

Eq. (3) provides a simple way of setting the system throughput. Suppose that we wish to set the rate of the virtual buffer equal to X(G*), 0 < G* ≤ G+, and the system backlog equal to B*. This can be achieved by choosing β > 0 and setting α equal to

α = β(1 − e^{−G*}).  (4)

In addition, using the relation G = qB given by Eq. (1), we can also set the system backlog at a desired level B* by setting q equal to

q = G* / B*.  (5)

We have the following result, which states that the above distributed scheduling and active queue management algorithms satisfy Property 1.

Lemma 2: Consider a single-cell wireless network and suppose that packets arrive to node n according to an independent Poisson process with rate λn. Then the above active queue management mechanism, together with the scheduling Algorithm 2, implements a distributed buffer with rate X(G*).

IV. TCP PERFORMANCE IN A SINGLE-CELL NETWORK

In this section, we study the interaction of the above distributed active queue management and scheduling mechanism with TCP Reno rate control in a single-cell wireless network. For our analysis, we model the network using the same approach that was used by Kelly in [7] to model TCP Reno in wireline networks. Suppose that a set M = {1, ..., M} of connections shares a single-cell wireless network. For connection m ∈ M, let wm(t) be the window size (in terms of packets) of connection m during time slot t, t ≥ 0, and let Dm be the equilibrium round-trip delay. Furthermore, let

xm(t) = wm(t) / Dm  (6)

be the transmission rate (in terms of packets per time slot) of connection m. TCP Reno uses packet loss as a congestion indicator, where the window size wm is increased by 1/wm for each acknowledged packet and halved for each packet that is not acknowledged. Ignoring delay in the exchange of congestion signals between nodes, the expected change in the window size wm is then given by

xm(t) (1 − p(U(t))) / wm(t) − xm(t) p(U(t)) wm(t) / 2,  (7)

where U(t) = Σ_{n=1}^{N} un(t) is the aggregated congestion signal as defined in Section III.

As shown in [7], the resulting expected change in the transmission rate xm is given by

xm(t+1) = xm(t) + (1 − p(U(t))) / Dm² − (1/2) p(U(t)) xm(t)².

To characterize the performance, we consider the operating point of the above system, i.e. the values x*m, m = 1, ..., M, and U* such that

(1 − p(U*)) / Dm² − (1/2) p(U*) (x*m)² = 0,

and

Σ_{m=1}^{M} x*m = X(G*) / 2,

where X(G*) is the throughput under the offered load G* as characterized in Section II-B. The factor 2 in the above constraint on the total transmission rate accounts for the fact that each TCP connection consists of two flows: the flow of data packets from the source to the destination and the flow of ACKs from the destination to the source. As a result, the total bandwidth used by the TCP connections is twice the sum of the transmission rates.

For a single-cell network, one can show that under the above active queue management and scheduling scheme all TCP sessions have the same round-trip time D, and the above operating point is the solution of the optimization problem (see also [7])

max Σ_{m=1}^{M} (√2 / D) arctan( xm D / √2 )
s.t. Σ_{m=1}^{M} xm ≤ X(G*) / 2,
xm ≥ 0,  m = 1, ..., M.

The optimal solution to this optimization problem is given by

x*m = X(G*) / (2M),  m = 1, ..., M,

indeed resulting in a fair bandwidth allocation among the TCP connections.
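The claim that the equal split x*m = X(G*)/(2M) solves the arctan utility maximization can be illustrated with a small numeric check: since the utility is strictly concave, shifting rate between two connections while keeping the sum fixed can only lower the total utility. The values of M, D, and the capacity below are hypothetical, chosen only for this example.

```python
import math

def utility(x, D):
    """Kelly-style TCP Reno utility: U(x) = (sqrt(2)/D) * arctan(x * D / sqrt(2))."""
    return (math.sqrt(2.0) / D) * math.atan(x * D / math.sqrt(2.0))

def total_utility(rates, D):
    """Sum of per-connection utilities for a common round-trip time D."""
    return sum(utility(x, D) for x in rates)

M, D = 4, 10.0                 # illustrative: 4 connections, common RTT
cap = 0.8                      # stands in for the capacity X(G*)/2
equal_split = [cap / M] * M    # the claimed optimum x*_m = X(G*)/(2M)
best = total_utility(equal_split, D)
```

Any feasible allocation obtained by transferring rate between two connections (keeping the total at the capacity) yields a strictly smaller objective than `best`, consistent with the equal split being the optimum.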
V. SCHEDULING AND ACTIVE QUEUE MANAGEMENT IN A MULTIHOP NETWORK

In this section, we extend the above mechanism to multihop networks. To do so, we have to extend the notion of a "distributed buffer" to the context of a multihop network, as well as adapt the active queue management mechanism of Section III to account for interference between nodes that are not within transmission range of each other.

A. Interference Region

For a multihop network, we associate a distributed buffer with the interference region of each node: the packet arrival rate to this buffer is equal to the sum of the packet arrival rates over all nodes in the interference region, and the queue size of the buffer is equal to the sum of the backlogs over all nodes in the interference region. The interference region includes all 1-hop neighbors of the node, i.e. all nodes within transmission range of the given node. In addition, the interference region includes the 2-hop neighbors, which indirectly interfere with the node: when a 2-hop neighbor is transmitting, it will prevent the node from making a transmission attempt, as this would be detected as a collision at their common neighbor.

Definition 1: The interference region Hn of node n consists of the node itself plus all its 1-hop and 2-hop neighbors.

In the following, we assume that each node can sense whether a node in its interference region is currently transmitting. This can be achieved through the use of a busy tone. Whenever a node senses a transmission by a node in its 1-hop neighborhood (transmission range), it starts sending a busy-tone signal in a frequency band that is separate from the one used for packet transmissions. When a node senses the channel to be busy (due to a transmission in its 1-hop neighborhood) or detects a busy tone (triggered by a transmission in its 2-hop neighborhood), it will not make a transmission attempt, thus avoiding a potential collision.

B. Scheduling

Using the busy tone, a node will sense an idle channel only if all nodes in its interference region are idle. If the channel is idle, the following algorithm is used to schedule a packet transmission.

Algorithm 3: Let q, 0 < q < 1, be a constant which is assumed to be small. Each node n, n = 1, ..., N, uses the following algorithm to schedule its transmissions.

1) Channel Sensing: Before each transmission attempt, node n senses whether the channel is idle (no other node is transmitting). If the channel has been idle for a duration of Li time units, then the node makes a transmission attempt with probability qn = min{1 − ε, q·bn}, where bn is the current backlog at node n and ε > 0 is a small constant to ensure that the attempt probability is strictly smaller than 1.

2) Transmission: After finishing its transmission, node n waits for an ACK from the receiver. If no such ACK is obtained within a fixed period of time, then the node assumes that a collision happened and the packet needs to be retransmitted. If an ACK is obtained, the packet has been transmitted successfully and is removed from the buffer.

C. Distributed Active Queue Management

To set the packet-drop probabilities, each node uses the following algorithm.

Algorithm 4: Each node n, n = 1, ..., N, keeps a list of the congestion signals of the nodes in its interference region Hn.

• Computation of Congestion Signals: After each idle period of length Li, node n decreases its congestion signal un by a constant α > 0. After each busy period of length Lp, node n increases its congestion signal un by a constant β > 0.

• Exchange of Congestion Signals: Whenever a node transmits a packet, it piggybacks its own congestion signal un, as well as the congestion signals of all its 1-hop neighbors, on the packet transmission.

• Collection of Congestion Signals: Whenever a node overhears a successful transmission, it uses the obtained congestion signals to update the congestion signals of the nodes in its interference region.

• Packet-Drop Probability: Each node forms an aggregated congestion signal

Un = Σ_{n′ ∈ Hn} un′

and drops incoming packets with probability p(Un), where the function p(·) for the packet-drop probability is as given in Section III.

Let Gn be the offered load in the interference region of node n (the expected number of transmission attempts after an idle slot in the interference region of node n) and let

G* = ln( β / (β − α) ).

The same analysis as given in Section III shows that when Gn < G*, node n will decrease its congestion signal un, and vice versa. Thus, each node tries to stabilize the expected number of transmission attempts Gn in its interference region at G*. In order to achieve this, all nodes in the interference region of node n should react to the congestion signal un, i.e. the packet-drop probability of a node n′ in the interference region of node n needs to include un. This is the reason why nodes need to know all congestion signals in their interference neighborhood.

D. Asymptotic Throughput Analysis

When node n stabilizes the offered load Gn in its interference region at G*, the fraction of time Tn during which exactly one node n′ ∈ Hn transmits a packet is given by (see also Section II-B)

Tn = G* e^{−G*} Lp / ( Li + (1 − e^{−G*}) Lp ).
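The congestion-signal bookkeeping of Algorithm 4 and the drift analysis of Section III can be sketched in a few lines. The chain topology, the function names, and the values of κ, α, and β below are hypothetical, chosen only to illustrate the mechanism.

```python
import math

def drop_probability(u, kappa):
    """Piecewise-linear drop probability of Section III: p(u) = min(kappa*u, 1) for u >= 0."""
    return min(kappa * u, 1.0) if u > 0.0 else 0.0

def aggregated_signal(signals, region):
    """Aggregated congestion signal Un = sum of un' over the interference region Hn."""
    return sum(signals[m] for m in region)

def expected_drift(G, alpha, beta):
    """Expected change of the congestion signal between idle periods: -alpha + beta*(1 - e^-G)."""
    return -alpha + beta * (1.0 - math.exp(-G))

# Hypothetical 4-node chain 0-1-2-3: under the 2-hop rule, H1 covers all four nodes.
signals = {0: 0.02, 1: 0.05, 2: 0.01, 3: 0.04}
H1 = {0, 1, 2, 3}
U1 = aggregated_signal(signals, H1)
p1 = drop_probability(U1, kappa=5.0)

# The drift vanishes exactly at the operating point G* = ln(beta / (beta - alpha)),
# is negative below it, and positive above it.
alpha, beta = 0.2, 1.0
G_star = math.log(beta / (beta - alpha))
```

The sign pattern of `expected_drift` around G* is what makes G* the unique stable operating point of the offered load in each interference region.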
Note however that Tn is not equal to the fraction of time during which a node n′ ∈ Hn successfully transmits a packet, as (a) a transmission by a node n′ ∈ Hn can collide with a packet transmission by a node outside the interference region of node n, and (b) it is possible for two (or more) nodes in the 2-hop neighborhood of node n to simultaneously transmit packets without causing a collision, if the transmissions do not result in a collision at the destination nodes. We have the following result for the throughput Xn(G*) at node n under the offered load G*.

Lemma 3: Let Nn be the number of nodes in the interference region of node n. Then we have

X(G*) e^{−G*} ≤ Xn(G*) ≤ X(G*) + Nn (1 − G* e^{−G*}) / (Li + ...).

The lower bound in the above lemma accounts for the fact that a transmission by a node n′ ∈ Hn can potentially collide with a transmission by a node outside the interference region of node n (and e^{−G*} is the probability that this is not the case). The upper bound assumes that all simultaneous transmissions in Hn result in successful transmissions. Note that the upper bound is not tight, as it is based on a very optimistic assumption.

In general, the throughput Xn(G*) depends on the actual network topology; however, we have the following asymptotic result.

Lemma 4: For G* = G+ = √(2 Li), we have

lim_{Li→0} Xn(G*) = 1.

The above lemma states that in the limit, when the duration Li of an idle slot is negligibly small compared with the duration of a packet transmission, the throughput Xn(G+) in the interference region of node n is equal to 1. This result is quite remarkable, as it implies that the above distributed scheduling and active queue management mechanism (asymptotically, as Li becomes small) achieves the theoretical optimal throughput of 1.

VI. TCP PERFORMANCE IN A MULTIHOP NETWORK

Consider a multihop network consisting of the set N = {1, ..., N} of nodes and the set M = {1, ..., M} of TCP connections. Let A(n) be the set of TCP connections m ∈ M that pass through node n ∈ N.

Using Lemma 4, the model of Section IV can be extended to the multihop case, i.e. in the limit as Li becomes very small, the throughput of the individual TCP connections under the above active queue management and scheduling mechanism is modelled by the following optimization problem:

max Σ_{m=1}^{M} (√2 / Dm) arctan( xm Dm / √2 )  (8)
s.t. Σ_{m ∈ A(n)} xm ≤ 1/2,  n = 1, ..., N,  (9)
xm ≥ 0,  m = 1, ..., M.  (10)

Note that the capacity constraint accounts for the fact that each TCP connection consists of two flows: the flow of data packets from the source to the destination and the flow of ACKs from the destination to the source. If we assume that ACKs are piggybacked on data packets, then the performance is given by the following optimization problem:

max Σ_{m=1}^{M} (√2 / Dm) arctan( xm Dm / √2 )
s.t. Σ_{m ∈ A(n)} xm ≤ 1,  n = 1, ..., N,
xm ≥ 0,  m = 1, ..., M.

This result states that the above scheduling and active queue management mechanism can (asymptotically) be modelled as a utility maximization problem. Moreover, the mechanism is asymptotically optimal under the interference model given by Definition 1, in the sense that the capacity constraint for each interference region is equal to the theoretical optimal throughput of 1.

VII. CONCLUSIONS

We presented a distributed scheduling and active queue management scheme and provided analytical and experimental results to show that it leads to an efficient and fair bandwidth allocation. Compared with the IEEE 802.11 protocol, the proposed scheduling mechanism only requires a redefinition of the transmission probabilities at individual nodes. This could be done by redefining the contention window size (CW) of the current 802.11 protocol, which only requires changes in the software but not in the hardware.

For our analysis, the interference region (and the capacity constraint in the optimization problem given by Eq. (8)) is given by the two-hop neighborhood of a node. This definition allows a simple implementation, as collisions can be detected using a busy tone. However, the definition is not efficient from a performance point of view, as it suffers from the exposed-terminal problem. Future work will investigate approaches to avoid this problem by improving the channel feedback.

REFERENCES

[1] K. Xu, M. Gerla, L. Qi, and Y. Shu, "Enhancing TCP Fairness in Ad Hoc Wireless Networks Using Neighborhood RED," in Proceedings of ACM MOBICOM, San Diego, September 2003.
[2] P. Marbach and Y. Lu, "Active Queue Management and Scheduling for Wireless Networks: The Single-Cell Case," in Proceedings of the Conference on Information Sciences and Systems (CISS), Princeton, March 2006.
[3] L. Chen, S. Low, M. Chiang, and J. Doyle, "Jointly Optimal Congestion Control, Routing, and Scheduling for Wireless Ad Hoc Networks," in Proceedings of IEEE INFOCOM, Barcelona, April 2006.
[4] Y. Xue, B. Li, and K. Nahrstedt, "Optimal Resource Allocation in Wireless Ad Hoc Networks: A Price-based Approach," IEEE Transactions on Mobile Computing, vol. 6, no. 2, pp. 961-970, December 2005.
[5] X. Lin and N. B. Shroff, "The Impact of Imperfect Scheduling on Cross-Layer Congestion Control in Wireless Networks," IEEE/ACM Transactions on Networking, vol. 14, no. 2, pp. 302-315, April 2006.
[6] D. Bertsekas and R. Gallager, Data Networks, 2nd ed. Prentice-Hall, 1992.
[7] F. Kelly, "Mathematical modelling of the Internet," Fourth International Congress on Industrial and Applied Mathematics, pp. 105-116, July 1999.