
BAYESIAN NODE LOCALISATION IN WIRELESS SENSOR NETWORKS

Mark R. Morelande, Bill Moran and Marcus Brazil


Department of Electrical and Electronic Engineering
The University of Melbourne, Australia
email: [m.morelande,b.moran]@ee.unimelb.edu.au, brazil@unimelb.edu.au
ABSTRACT
Node localisation in wireless sensor networks is a difficult problem due to the large number of parameters to be estimated and the nonlinear relationship between the measurements and the parameters. Assuming the presence of a number of anchor nodes with known positions and a centralised architecture, a Bayesian algorithm for node localisation in wireless sensor networks is proposed. The algorithm is a refinement of an existing importance sampling method referred to as progressive correction. A simulation analysis shows that, with only a few anchor nodes, the proposed method is capable of accurately localising a large number of nodes.
Keywords: Sensor networks, Localisation, Bayes procedures.
1. INTRODUCTION
Networks of large numbers of sensors are finding increased use due to the availability of low cost, low power miniature devices capable of sensing, computation and communication [9]. A prerequisite for use of a sensor network is knowledge of the sensor, or node, positions. In a randomly deployed ad hoc network this knowledge is not generally available since fitting each node with a positioning device, such as a global positioning system (GPS) receiver, is too costly. Instead, it is customary to fit a small number of sensors with a GPS receiver and then use these anchor nodes as the basis of a procedure for determining the positions of the remaining nodes. This is the approach taken here.
The information required to locate the nodes is obtained by generating measurements between neighbouring nodes. These measurements could be the strength or time of arrival of a signal transmitted from one node to another [1, 9]. Alternatively, the radio interferometric positioning system (RIPS) uses the phase difference between signals simultaneously transmitted at a pair of nodes and received at another pair of nodes to compute a sum of range differences between the four nodes [5]. In our examples we use RIPS measurements because, in practice, they combine accuracy with a large range.
It is assumed here that all measurements are sent to a central node for processing. Node localisation from these measurements is a difficult parameter estimation problem. Since the measurements are typically nonlinear functions of the node locations, maximum likelihood estimates can be found only by numerical optimisation [6]. Reasonably accurate initialisation is required to prevent convergence to a local maximum.
In this paper the node localisation problem is approached in a
Bayesian framework. In this framework the optimal estimator, in
the mean square error sense, is the posterior mean. Since the posterior distribution, and hence the posterior mean, cannot be found
exactly, an approximation is necessary. This is a common problem in Bayesian estimation and many numerical methods of approximation have been proposed [4, 10]. Importance sampling,

1-4244-1484-9/08/$25.00 2008 IEEE


in which random samples are drawn from an importance density


and then weighted, will be used here. The difficulty of applying importance sampling to this problem is that the dimension of the sampling space is typically very large and the prior PDF for the parameters is much more diffuse than the likelihood. These difficulties are addressed by the use of progressive correction. Progressive correction is a multi-stage procedure in which samples are obtained from a series of intermediate distributions which become progressively closer to the posterior distribution [7]. The idea is that the intermediate distributions used in the early stages should be simpler to sample from than the true posterior distribution. Annealed importance sampling, in which samples from the importance density are drawn via a specially constructed Markov chain, is a similar idea [8].
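To illustrate the basic importance sampling mechanism referred to above, the following toy sketch (not taken from the paper) estimates a posterior mean by weighting draws from a deliberately broad proposal; the scalar Gaussian model and all numerical values are illustrative only.

```python
import numpy as np

def importance_posterior_mean(log_lik, log_prior, log_q, sample_q, n):
    """Estimate the posterior mean with self-normalised importance sampling."""
    x = sample_q(n)                              # draws from the importance density q
    logw = log_lik(x) + log_prior(x) - log_q(x)  # unnormalised log-weights
    w = np.exp(logw - logw.max())
    w /= w.sum()                                 # weights sum to one
    return np.sum(w * x)

# Toy check: prior N(0, 1), likelihood N(d; x, 1) with d = 2.
# The exact posterior is N(1, 1/2), so the posterior mean is 1.
rng = np.random.default_rng(0)
d = 2.0
est = importance_posterior_mean(
    log_lik=lambda x: -0.5 * (d - x) ** 2,
    log_prior=lambda x: -0.5 * x ** 2,
    log_q=lambda x: -0.5 * (x / 3.0) ** 2,       # broad proposal N(0, 9)
    sample_q=lambda n: rng.normal(0.0, 3.0, n),
    n=200_000,
)
print(est)  # close to the exact posterior mean of 1.0
```

With a diffuse proposal and a concentrated likelihood, most weights are near zero, which is exactly the degeneracy progressive correction is designed to mitigate.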
The progressive correction procedure as originally proposed in [7] in the context of particle filtering is not suitable for estimating the large number of parameters which typically arise in the node localisation problem. A generalisation of the procedure is proposed which enables significantly more accurate node localisation. Numerical simulations will be presented which demonstrate that the proposed algorithm is able to accurately locate large numbers of nodes with moderate computational expense.
The paper is organised as follows. A mathematical model of
the problem is given in Section 2. The exact Bayesian solution is
discussed in Section 3 and approximation via progressive correction is developed in Section 4. A performance analysis is given in
Section 5.
2. MODELLING
Assume the network has K anchor nodes with the kth node having position y_k ∈ R². The number of nodes with unknown position is denoted M, with the mth such node having position x_m ∈ R². Let y = [y_1', ..., y_K']' and x = [x_1', ..., x_M']' denote the collections of known and unknown node positions, respectively. These nodes are to be localised from a vector of T measurements d = [d_1, ..., d_T]'. The measurements could be distances between nodes or the distance differences given by RIPS. Assuming that the measurements are subject to additive Gaussian noise which is independent for each measurement, the likelihood of the parameter x given the measurements d can be written as

l(d|x) = ∏_{t=1}^{T} N(d_t; h_t(x, y), σ²),   (1)

where N(z; μ, Σ) is the Gaussian probability density function (PDF) with mean μ and covariance matrix Σ evaluated at z. The function h_t is determined by the quantity being measured and the nodes used in the generation of the tth measurement. For example, the performance analysis of Section 5 uses RIPS measurements.

ICASSP 2008

In this case T is even and we define the measurement function φ_t : {1, ..., 4} → {1, ..., K + M}, t = 1, ..., T/2, which determines the nodes used to produce the (2t − 1)th and 2tth measurements. Let

z_{t,j} = y_{φ_t(j)} if φ_t(j) ≤ K, and z_{t,j} = x_{φ_t(j)−K} otherwise,   (2)

and let δ_t(i, j) = ||z_{t,i} − z_{t,j}||, where ||·|| is the Euclidean norm. According to this notation, for t = 1, ..., T/2 [5],

h_{2t−1}(x, y) = δ_t(1, 4) − δ_t(2, 4) + δ_t(2, 3) − δ_t(1, 3),   (3)
h_{2t}(x, y) = δ_t(1, 4) − δ_t(3, 4) + δ_t(2, 3) − δ_t(1, 2).   (4)
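As a concrete reading of the RIPS model, the following sketch evaluates the two measurement functions for one quartet and the Gaussian log-likelihood of (1). The node positions and quartet below are hypothetical, and the sign pattern is the standard RIPS range-difference combination of [5], so it should be checked against (3)-(4) before reuse.

```python
import numpy as np

def rips_pair(nodes, quartet):
    """The pair of RIPS measurement functions for one quartet.

    nodes   : (K+M, 2) array of node positions (anchors first, per the text)
    quartet : tuple of four 0-based row indices into `nodes`
    """
    z = nodes[list(quartet)]
    d = lambda i, j: np.linalg.norm(z[i - 1] - z[j - 1])  # delta_t(i, j), 1-based within quartet
    h_odd = d(1, 4) - d(2, 4) + d(2, 3) - d(1, 3)
    h_even = d(1, 4) - d(3, 4) + d(2, 3) - d(1, 2)
    return h_odd, h_even

def log_likelihood(meas, nodes, quartets, sigma2):
    """Log of the independent-Gaussian likelihood (1), up to an additive constant."""
    pred = np.concatenate([rips_pair(nodes, q) for q in quartets])
    return -0.5 * np.sum((meas - pred) ** 2) / sigma2

# Hypothetical example: three anchors plus one unknown node.
nodes = np.array([[0.0, 0.0], [250.0, 0.0], [0.0, 250.0], [100.0, 120.0]])
h1, h2 = rips_pair(nodes, (0, 1, 2, 3))
```

For noiseless measurements the log-likelihood attains its maximum (zero, up to the dropped constant) at the true positions, which is the property the estimators below exploit.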

3. BAYESIAN ESTIMATION OF THE NODE LOCATIONS

The Bayesian approach assumes that the vector parameter of interest is a random variable for which a prior distribution is available. Thus, assume that x ~ π_0. The key quantity in Bayesian estimation is the distribution of the random parameter conditional on the measurements, i.e., the posterior distribution. Applying Bayes rule gives the posterior PDF as

π(x) ∝ l(d|x) π_0(x).   (5)

The minimum mean square error (MMSE) estimator of x is the posterior mean,

x̂ = E(x|d) = ∫ x π(x) dx.   (6)

The posterior PDF π, and hence the posterior mean, cannot be found exactly, so an approximation is required. Approximation of (6) via importance sampling involves drawing samples of the parameter vector x from an importance density q and approximating the integral (6) by a weighted sum of the samples [10],

x̂ ≈ ∑_{i=1}^{n} w_i x_i,   (7)

where, for i = 1, ..., n, x_i ~ q and w_i = C l(d|x_i) π_0(x_i) / q(x_i), with C such that the weights sum to one.

Sampling from any importance density which satisfies certain regularity conditions will enable exact computation of the posterior mean as the sample size n → ∞. However, obtaining an accurate approximation with as few samples as possible requires careful selection of the importance density. The simplest candidate is the prior, i.e., q = π_0. The weights are then w_i = C l(d|x_i). Accurate approximation of the posterior mean requires several samples to attract a significant weighting. Thus several samples drawn from the prior need to be in regions of high likelihood. The probability of this happening is small if there is a large number of parameters and/or the prior distribution of the parameters is diffuse. Both situations are likely in the node localisation problem. This makes the prior an unsuitable choice of importance density.

4. IMPORTANCE SAMPLING VIA PROGRESSIVE CORRECTION

Progressive correction is implemented in S stages using likelihoods at each stage which become progressively closer to the true likelihood. Let l_s, s = 1, ..., S denote the likelihood used for the sth stage with l_S = l. The likelihood at the sth stage takes the form l_s(d|x) = l(d|x)^{γ_s}, where γ_s = ∑_{j=1}^{s} λ_j, λ_j ∈ (0, 1] and γ_S = 1. The likelihood used for s < S is broader than the true likelihood, particularly in the earlier stages, making it more probable that samples drawn from the diffuse prior will have a high likelihood. In the later stages the likelihood approximation sharpens so that the samples gradually concentrate in the area of the parameter space suggested by the true likelihood.

Let π_s(x) ∝ l_s(d|x) π_0(x), s = 1, ..., S be the posterior PDF of the node locations according to the sth likelihood. Note that π_S = π is the posterior PDF under the true likelihood. Progressive correction works by successively drawing samples from π_1, π_2 and so on up to π_S. Consider the sth stage of the procedure and assume that the approximation

π_{s−1}(x) ≈ ∑_{i=1}^{n} w_{s−1,i} g_{s−1}(x − x_{s−1,i})   (8)

is available. In (8), g_{s−1} is a kernel density [11]. It is desired to draw samples from the sth intermediate distribution with PDF

π_s(x) ∝ l_s(d|x) π_0(x) = l(d|x)^{λ_s} l_{s−1}(d|x) π_0(x) ∝ l(d|x)^{λ_s} π_{s−1}(x).   (9)

Substituting (8) into (9) gives the approximation

π_s(x) ≈ C l(d|x)^{λ_s} ∑_{i=1}^{n} w_{s−1,i} g_{s−1}(x − x_{s−1,i}),   (10)

where C is a normalising constant. The mixture density (10) can be re-written as, for i = 1, ..., n,

π_s(x, i) ∝ C w_{s−1,i} l(d|x)^{λ_s} g_{s−1}(x − x_{s−1,i}),   (11)

where i is an index on the mixture. Sampling from (11) involves drawing a parameter vector and a mixture index. It is proposed to draw mixture indices and parameter samples from an importance density q_s of the form

q_s(x, i) = α_{s,i} f_{s,i}(x),   (12)

where α_{s,1}, ..., α_{s,n} sum to one and f_{s,i} is a PDF. Sampling is performed by drawing mixture indices from the distribution q_s(i) = α_{s,i} and then drawing parameters conditional on the mixture indices using q_s(x|i) = f_{s,i}(x). This procedure is performed sequentially for s = 1, ..., S to obtain samples from the posterior PDF which are used to approximate the posterior mean. The complete procedure is summarised in Table 1. In this paper the kernels g_0, ..., g_{S−1} are Gaussian PDFs with zero mean and covariance matrix selected as suggested in [11, Chapter 4]. Two possibilities for q_s will be considered in the following subsections.

4.1. Blind progressive correction (B-PC)

The progressive correction of [7] involves setting q_s = π_{s−1}, so that α_{s,i} = w_{s−1,i} and f_{s,i}(x) = g_{s−1}(x − x_{s−1,i}). The weights in Step 3(d) can then be found as w_{s,i} = C l(d|x_{s,i})^{λ_s}. This procedure will be referred to as blind progressive correction (B-PC) since samples from the (s − 1)th stage are selected without considering the refined likelihood.

4.2. Measurement-directed progressive correction (MD-PC)
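To make the staged tempering concrete, here is a minimal sketch of a B-PC-style loop on a scalar toy problem. The likelihood, prior, and kernel bandwidth rule are simple stand-ins (the paper uses the bandwidth selection of [11] and the RIPS likelihood), so this illustrates the control flow rather than the authors' implementation.

```python
import numpy as np

def blind_pc(log_lik, prior_sample, lambdas, n, rng):
    """Blind progressive correction: at stage s, weight by the likelihood
    tempered to the power lambda_s, resample proportionally to the weights,
    then jitter with a Gaussian kernel (a crude bandwidth stand-in)."""
    x = prior_sample(n)                       # stage 0: samples from the prior
    for lam in lambdas:                       # lambdas sum to one, so the product
        logw = lam * log_lik(x)               # of stage factors is the true likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)      # select mixture indices
        bw = 0.5 * x.std() * n ** (-0.2)      # rough kernel bandwidth
        x = x[idx] + rng.normal(0.0, bw, n)   # draw from the jitter kernel
    return x.mean()                           # posterior-mean estimate

rng = np.random.default_rng(1)
d = 2.0  # toy measurement: likelihood N(d; x, 0.1), prior N(0, 100)
est = blind_pc(lambda x: -0.5 * (d - x) ** 2 / 0.1,
               lambda n: rng.normal(0.0, 10.0, n),
               lambdas=[0.1, 0.2, 0.3, 0.4], n=5000, rng=rng)
```

Because each stage only raises the likelihood to a small power, the early weights are far less degenerate than a single-shot weighting by the full likelihood would be.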


The second progressive correction variant seeks to use only those samples which are favoured by the refined likelihood. This approach is motivated by the success of measurement-directed sampling techniques in particle filtering [3]. The kernel g_{s−1} is Gaussian with zero mean and covariance matrix Σ_{s−1}, i.e., g_{s−1}(x) = N(x; 0, Σ_{s−1}). Let h(x, y) = [h_1(x, y), ..., h_T(x, y)]'. The linearised likelihood of x about the point x̄ given the measurements d is

l(d|x, x̄) = N(d; h(x̄, y) + H(x̄)(x − x̄), σ² I_T),   (13)

where H(x̄) = ∇_x h(x, y)'|_{x=x̄}. The weights and parameter sampling densities are

α_{s,i} ∝ w_{s−1,i} ∫ l(d|x, x_{s−1,i})^{λ_s} g_{s−1}(x − x_{s−1,i}) dx = D w_{s−1,i} N(d; d̄_{s,i}, S_{s,i}),   (14)

f_{s,i}(x) = l(d|x, x_{s−1,i})^{λ_s} g_{s−1}(x − x_{s−1,i}) / ∫ l(d|ξ, x_{s−1,i})^{λ_s} g_{s−1}(ξ − x_{s−1,i}) dξ = N(x; x̂_{s,i}, P_{s,i}),   (15)

where D is a normalising constant and

d̄_{s,i} = h(x_{s−1,i}, y),   (16)
S_{s,i} = H(x_{s−1,i}) Σ_{s−1} H(x_{s−1,i})' + σ² I_T / λ_s,   (17)
x̂_{s,i} = x_{s−1,i} + K_{s,i} (d − d̄_{s,i}),   (18)
P_{s,i} = Σ_{s−1} − K_{s,i} H(x_{s−1,i}) Σ_{s−1},   (19)

with K_{s,i} = Σ_{s−1} H(x_{s−1,i})' (S_{s,i})^{−1}. The use of (14) and (15) will be referred to as measurement-directed progressive correction (MD-PC). The sample weights in Step 3(d) of Table 1 are

w_{s,i} = C [l(d|x_{s,i}) / l(d|x_{s,i}, x_{s−1,j_{s,i}})]^{λ_s}.   (20)

Table 1: Importance sampling with progressive correction.

1. Select λ_1, ..., λ_S.
2. For i = 1, ..., n, draw x_{0,i} ~ π_0 and set w_{0,i} = 1/n.
3. For s = 1, ..., S:
   (a) Compute the selection weights α_{s,1}, ..., α_{s,n}.
   (b) Select indices j_{s,1}, ..., j_{s,n} such that P(j_{s,i} = k) = α_{s,k}.
   (c) For i = 1, ..., n, draw x_{s,i} ~ f_{s,j_{s,i}}.
   (d) For i = 1, ..., n, compute the weights
       w_{s,i} = C l(d|x_{s,i})^{λ_s} g_{s−1}(x_{s,i} − x_{s−1,j_{s,i}}) w_{s−1,j_{s,i}} / q_s(x_{s,i}, j_{s,i}).
4. Compute the parameter estimate x̂ = ∑_{i=1}^{n} w_{S,i} x_{S,i}.

4.3. Example

The operation of the progressive correction schemes is demonstrated below using a simple example with K = 3 anchor nodes and M = 4 unknown nodes. Node localisation is performed using T = 8 measurements generated via RIPS. B-PC and MD-PC are applied with n = 100 samples and S = 6 stages. Samples from the intermediate posterior distributions π_s, s = 0, 2, 4, 6 are shown in Figures 1 and 2 for B-PC and MD-PC, respectively. The samples generated by B-PC do not provide an accurate characterisation of the posterior distribution and the resulting location estimates are poor. By contrast, the samples generated by MD-PC gradually congregate about the true node positions as the likelihood sharpens. At the expense of additional computation, B-PC could be improved by increasing S and/or n.

Figure 1: Localisation using B-PC. Samples from π_s are shown for (a) s = 0, (b) s = 2, (c) s = 4 and (d) s = 6. (Scatter plots omitted; axes are x-position and y-position, 0 to 250.)

Figure 2: Localisation using MD-PC. Samples from π_s are shown for (a) s = 0, (b) s = 2, (c) s = 4 and (d) s = 6. (Scatter plots omitted; axes are x-position and y-position, 0 to 250.)
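Since the linearised likelihood (13) is Gaussian in x, the per-component moments (16)-(19) amount to a Kalman-style measurement update with the noise covariance inflated to σ²I/λ_s. The sketch below implements that update; the linear measurement function used in the check is illustrative (for a linear h the linearisation is exact).

```python
import numpy as np

def md_pc_moments(x_prev, Sigma_prev, d, h, H, sigma2, lam):
    """Moments of one MD-PC mixture component, following (16)-(19):
    a Kalman-style update of the kernel N(x_prev, Sigma_prev) against the
    linearised likelihood with tempered (inflated) noise sigma2 / lam."""
    d_bar = h(x_prev)                                            # (16) predicted measurement
    S = H @ Sigma_prev @ H.T + (sigma2 / lam) * np.eye(len(d))   # (17) innovation covariance
    K = Sigma_prev @ H.T @ np.linalg.inv(S)                      # gain
    x_hat = x_prev + K @ (d - d_bar)                             # (18) component mean
    P = Sigma_prev - K @ H @ Sigma_prev                          # (19) component covariance
    return x_hat, P, d_bar, S

# Illustrative linear case: h(x) = H x with H observing the first coordinate.
H = np.array([[1.0, 0.0]])
x_hat, P, _, _ = md_pc_moments(np.zeros(2), np.eye(2), np.array([1.0]),
                               lambda x: H @ x, H, sigma2=1.0, lam=1.0)
```

With a small early λ_s the inflated noise keeps the components broad; as λ_s grows the update pulls the component means toward measurement-consistent locations, which is why MD-PC concentrates faster than B-PC.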

4.4. Discussion
The performance of the progressive correction schemes depends somewhat on the number S of stages and on the selection of the expansion factors λ_1, ..., λ_S. The adaptive scheme proposed in [7] can be used to select both the number and size of the corrections.
A natural prior density for the node localisation problem can be obtained by considering the connectivity of the unknown nodes with the anchor nodes. Let V_k, k = 1, ..., K denote the region of space within communication range of the kth anchor node. For m = 1, ..., M, let C_m = {k ∈ {1, ..., K} | x_m ∈ V_k} denote the set of anchor nodes which can communicate with the mth unknown node, and let C̄_m = {1, ..., K} \ C_m. Let U_A denote the uniform PDF over the region A. Then the prior PDF for the mth unknown node is U_{W_m}, where W_m = ∩_{k∈C_m} V_k \ ∪_{j∈C̄_m} V_j.
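One simple way to draw from such a connectivity prior is rejection sampling: propose uniformly over the deployment region and accept when the proposal reproduces the observed connectivity pattern. The disc-shaped regions V_k, the radius, and the anchor layout below are assumptions of this sketch, not specified by the paper.

```python
import numpy as np

def sample_connectivity_prior(anchors, heard, radius, rng, box=(0.0, 250.0)):
    """Draw one sample uniformly over W_m: within `radius` of every anchor
    in `heard` and outside `radius` of all other anchors (disc-shaped
    communication regions are an assumption of this sketch)."""
    heard = set(heard)
    while True:
        p = rng.uniform(box[0], box[1], size=2)   # propose uniformly in the deployment area
        in_range = {k for k, a in enumerate(anchors)
                    if np.linalg.norm(p - a) <= radius}
        if in_range == heard:                      # accept iff connectivity pattern matches
            return p

rng = np.random.default_rng(2)
anchors = np.array([[0.0, 0.0], [250.0, 0.0], [0.0, 250.0], [250.0, 250.0]])
p = sample_connectivity_prior(anchors, heard={0, 1}, radius=220.0, rng=rng)
```

Rejection is adequate here because the acceptance region is a sizeable fraction of the deployment area; for very constrained patterns a bounding-box proposal around ∩_{k∈C_m} V_k would cut the rejection rate.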

5. PERFORMANCE ANALYSIS

A performance analysis has been conducted using Monte Carlo simulations. The basic scenario consists of four anchor nodes placed at the vertices of the square bounding the region [0, 250]² and unknown nodes randomly distributed in the region [10, 240]². Measurements are generated according to the RIPS, as described in Section 2. The variance of the additive noise in the RIPS measurements is set to σ² = 0.1 m². The Cramér-Rao bound (CRB), derived under the assumption that the node locations are deterministic parameters, is used as a benchmark.

In principle, RIPS measurements can be generated between all quartets of nodes which are within communication range of each other. This is impractical in large networks. The following is a brief summary of the method we use to select the quartets used to generate RIPS measurements. First, the nodes are grouped into cliques such that all nodes in a clique can communicate. Second, three nodes in each clique are selected as pseudo-anchors. Third, measurements are generated between all node quartets which are composed of nodes in the same clique and include the selected pseudo-anchors. The procedure is similar to that of [2] but the method used here forms a larger number of cliques and consequently generates a larger number of measurements.

We first compare B-PC with MD-PC for M = 10 and M = 20 unknown nodes. The communication range of the motes is set to 220 m to ensure identifiability of the relatively small number of unknown nodes. Algorithm performance is measured by the RMS position error averaged over the nodes using 100 Monte Carlo realisations. The results for both progressive correction schemes are given in Table 2. When MD-PC is used, a sample size of 200 is sufficient to give performance close to the CRB for M = 10 and M = 20. B-PC does not provide accuracy close to the CRB for either value of M even when 2000 samples are used.

Table 2: RMS position error (m) of the Bayesian estimator computed with B-PC and MD-PC.

        B-PC (sample size)            MD-PC
M     200     500    1000    2000     200     CRB
10    5.91    3.63   2.71    1.62     0.52    0.54
20    3.91    3.05   2.69    2.31     0.35    0.30

In the second set of experiments we consider larger numbers of unknown nodes and a smaller communication range. Results are given only for MD-PC as B-PC does not provide useful estimates for any reasonable sample size in these scenarios. RMS position errors averaged over 100 realisations are shown in Table 3 for M = 50 and 100 and a communication range of 150 m. Even with relatively small sample sizes, the algorithm performance is close to the CRB for both values of M.

Table 3: RMS position error (m) of the Bayesian estimator computed with MD-PC.

      Sample size
M     200     500    1000    CRB
50    0.73    0.60   0.55    0.49
100   0.55    0.51   0.45    0.40

6. CONCLUSIONS

A Bayesian algorithm for localisation of sensors in a sensor network has been proposed. Assuming a centralised architecture, an importance sampling technique was used to approximate the optimal Bayesian estimator. The proposed algorithm is a refinement of an existing method referred to as progressive correction. Numerical simulations showed that the refinement offers considerable improvement in performance and enables accurate localisation of a large number of nodes. Ongoing work includes adapting the algorithm for distributed estimation and examining identifiability issues.
7. REFERENCES

[1] P. Biswas, T.-C. Liang, K.-C. Toh, Y. Ye, and T.-C. Wang, "Semidefinite programming approaches for sensor network localization with noisy distance measurements," IEEE Transactions on Automation Science and Engineering, vol. 3, no. 4, pp. 360–371, 2006.

[2] M. Brazil, M. Morelande, B. Moran, and D. Thomas, "Distributed self-localisation in sensor networks using RIPS measurements," in Proceedings of the International Conference on Wireless Networks, London, England, 2007.

[3] A. Doucet, S. Godsill, and C. Andrieu, "On sequential Monte Carlo sampling methods for Bayesian filtering," Statistics and Computing, vol. 10, pp. 197–208, 2000.

[4] J. Gentle, Elements of Computational Statistics. Springer, 2002.

[5] B. Kusy, A. Ledeczi, M. Maroti, and L. Meertens, "Node-density independent localization," in Proceedings of the International Conference on Information Processing in Sensor Networks, 2006, pp. 441–448.

[6] R. Moses, D. Krishnamurthy, and R. Patterson, "A self-localization method for wireless sensor networks," EURASIP Journal on Applied Signal Processing, vol. 4, pp. 348–358, 2003.

[7] C. Musso, N. Oudjane, and F. Le Gland, "Improving regularised particle filters," in Sequential Monte Carlo Methods in Practice, A. Doucet, N. de Freitas, and N. Gordon, Eds. New York: Springer-Verlag, 2001.

[8] R. Neal, "Annealed importance sampling," Statistics and Computing, vol. 11, pp. 125–139, 2001.

[9] N. Patwari, J. Ash, S. Kyperountas, A. Hero, R. Moses, and N. Correal, "Locating the nodes," IEEE Signal Processing Magazine, vol. 22, no. 4, pp. 54–69, 2005.

[10] C. Robert and G. Casella, Monte Carlo Statistical Methods. New York: Springer-Verlag, 1999.

[11] B. Silverman, Density Estimation for Statistics and Data Analysis. Chapman and Hall, 1986.
