1 \documentclass{svmult}%
2 %\documentclass[]{llncs}
9 \usepackage{algorithmic}
11 %\setcounter{MaxMatrixCols}{30}
12 %\newtheorem{theorem}{Theorem}
13 %\newtheorem{acknowledgement}[theorem]{Acknowledgement}
14 %\newtheorem{algorithm}[theorem]{Algorithm}
15 %\newtheorem{axiom}[theorem]{Axiom}
16 %\newtheorem{case}[theorem]{Case}
17 %\newtheorem{claim}[theorem]{Claim}
18 \newtheorem{assumption}{Assumption}
19 %\newtheorem{conclusion}[theorem]{Conclusion}
20 %\newtheorem{condition}[theorem]{Condition}
21 %\newtheorem{conjecture}[theorem]{Conjecture}
22 %\newtheorem{corollary}[theorem]{Corollary}
23 %\newtheorem{criterion}[theorem]{Criterion}
24 %\newtheorem{definition}[theorem]{Definition}
25 %\newtheorem{example}[theorem]{Example}
26 %\newtheorem{exercise}[theorem]{Exercise}
27 %\newtheorem{lemma}[theorem]{Lemma}
28 %\newtheorem{notation}[theorem]{Notation}
29 %\newtheorem{problem}[theorem]{Problem}
30 %\newtheorem{proposition}[theorem]{Proposition}
31 %\newtheorem{remark}[theorem]{Remark}
32 %\newtheorem{solution}[theorem]{Solution}
33 %\newtheorem{summary}[theorem]{Summary}
34 %\newenvironment{proof}[1][Proof]{\noindent\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\usepackage{algorithm}
52 \title{Distributed Average Consensus in Large Asynchronous Sensor Networks}
\author{Jacques M. Bahi, Arnaud Giersch, Abdallah Makhoul and Ahmed Mostefaoui}
55 \institute{Computer Science Laboratory, University of Franche-Comt\'e (LIFC) \at
56 Rue Engel-Gros, BP 527\\
57 90016 Belfort Cedex, France\\
58 \email{\{firstname.lastname\}@univ-fcomte.fr}
\abstract { An important issue in sensor networks is the estimation
of parameters based on node measurements. Several approaches have
been proposed in the literature (centralized and distributed
ones). Because of the noisy environments usually observed
in sensor networks, centralized approaches are not efficient and
present several drawbacks (high energy consumption, maintenance of
routing information, etc.). In distributed approaches, however,
nodes exchange data with their neighbours and update their own data
accordingly until convergence to the right parameter
estimate. Although these approaches provide some robustness against
node failures, they do not address important issues such as
communication delay tolerance and asynchronism (i.e., they require
that nodes remain synchronous in communication and processing). In
this chapter, we tackle these issues by proposing a totally
asynchronous scheme that is communication delay tolerant. The
extensive series of simulations we conducted shows the effectiveness
of our approach.}
82 %------------------------------------------------------------------------------------------------------------------------
83 \section{Introduction}
Recent years have witnessed significant advances in wireless sensor
networks, which have emerged as one of the most promising technologies
for the 21$^{st}$ century~\cite{12}. In fact, they present huge potential
87 in several domains ranging from health care applications to military
88 applications. In general, the primary objective of a wireless sensor
89 network is to collect data from the monitored area and to transmit it
90 to a base station (sink) for processing. Many applications envisioned
for sensor networks consist of low-power and low-cost nodes. For
92 instance, applications such as data fusion and distributed
93 coordination require distributed function computation/parameter
94 estimation under topology changes and power constraints. Distributed
95 average consensus, in ad hoc networks, is an important issue in
96 distributed agreement and synchronization problems~\cite{22} and is
97 also a central topic for load balancing (with divisible tasks) in
98 parallel computers~\cite{23}. More recently, it has also found
99 applications in distributed coordination of mobile autonomous
100 agents~\cite{24,20} and distributed data fusion in sensor
101 networks~\cite{21,25}. In this chapter, we focus on a particular class
102 of iterative algorithms based on information diffusion for average
103 consensus, widely used in the applications cited above. Each node
104 broadcasts its data to its neighbours and updates its estimation
according to a weighted sum of the received data until the algorithm
converges.
To illustrate the average consensus problem, let us consider the
example of petrol tanks. Suppose that an oil station contains a
large number of tanks connected to one another through both a
mechanical network and a sensor network. The role of the sensors is
to detect the level of each oil tank. The objective of this
application is to keep the oil level the same in all tanks. When a
sensor node detects a change in its level, it launches an average
consensus process to calculate the average level of all tanks; then,
thanks to the mechanical network, an oil transfer operation is
performed to regulate the level. The average consensus process is
used to compute the average level: each sensor node exchanges its
information with its neighbours in an iterative manner until
convergence to the average consensus.
121 %\subsection {Motivations} The above data processing scheme is usually
122 %known as {\it centralized data fusion}. In this scheme, each sensor
123 %sends its data either directly, if it is located in the immediate
124 %neighbourhood of the sink, or by multi hops relays to the data fusion
125 %center via wireless communications. Besides the important cost in term
126 %of energy resources consumption due mainly to wireless communications
127 %(i.e., sensors that are located very far away from the base station,
128 %requires an important amount of energy to send/receive data to/from
129 %the sink), this scheme does not hold good robustness against
130 %communication loss neither against nodes failures. Furthermore, it
131 %requires that each node maintains rooting information to reach the
132 %sink. This is particularly challenging and resources consuming in case
133 %where network topology is constantly changing due either to nodes
134 %failures or communications unreliability or nodes mobility.
136 %Distributed approaches were proposed as interesting alternates based
137 %on {\it in-network} processing which may, in many cases, significantly
138 %decrease the energy consumed. In fact, in such approaches, nodes do
139 %not need to hold global knowledge about the current network topology
140 %since each node communicates only with its immediate neighbours. The
141 %unknown parameter estimate is then successively carried out through
142 %local computation from the exchanged data. The advantages of such
143 %approaches are numerous: (a) no central data fusion base station is
144 %required as every node holds the estimate of the unknown parameter;
145 %(b) multi-hop communications are avoided (only direct communications
146 %between neighbours are needed) and consequently maintaining rooting
147 %data is not needed any more; (c) better behaviour is observed in front
148 %of communication unreliability; (d) Network scalability is better
149 %supported than in centralized approach due mainly to direct
150 %communications between neighbours; etc.
To calculate the average consensus, many distributed approaches have
been proposed. However, these existing approaches present some
shortcomings (see next section). For instance, the flooding
approach requires that each node holds a relatively large storage
space. Other approaches make the impractical assumption of
communication synchronization between sensors~\cite{2,20} and
tolerate neither communication delays nor node failures. These
weaknesses are very restrictive in sensor network environments where,
on the one hand, nodes are prone to frequent failures as they are
battery driven and, on the other hand, communications are often
unreliable and prone to delays. Moreover, these two limiting
features, in addition to node mobility, lead to dynamically changing
network topologies.
166 %\subsection {Contributions}
In order to overcome the above mentioned weaknesses, we propose and
investigate in this chapter a novel approach for data fusion in sensor
networks. The key idea is to develop a consensus algorithm that
allows all nodes of the sensor network to track the average of their
previous measurements~\cite{21,2,20,5,9,10,16,17,18}. More
specifically, our proposition is based on an {\it in-network
  asynchronous iterative algorithm}, run by each node, in which
nodes communicate only with their immediate neighbours.
177 In this context, let us discuss the primary contributions of this chapter:
\item Our approach does not require any synchronization between nodes
  as it is basically asynchronous. In other words, each node
  communicates its data to its instantaneous neighbours at its own
  ``rhythm'', i.e., nodes never wait for one another in our
  approach. This is particularly important because in synchronous
  schemes, such as the one reported in~\cite{2}, any delay between two
  nodes in the network results in a global delay over the whole
  network, since all the nodes are synchronous. This is particularly
  limiting in heterogeneous sensor networks where nodes have
  different processing speeds.
\item As a consequence of its asynchronism, our proposed approach
  fully tolerates communication delays. This feature is of great
  importance because sensor networks, as is commonly known, are prone
  to environmental perturbations~\cite{1}, under which communication
  delays occur more frequently.
\item The proposed distributed algorithm, as proven theoretically and
  validated experimentally, supports dynamic topologies and guarantees
  that each sensor node converges to the average consensus.
However, as for any iterative approach, our approach could, under
certain environmental conditions, consume more network resources,
mainly communications, than centralized approaches, specifically
in a ``perfect environment'' where nodes and communications are totally
reliable and the network topology is fixed. Nevertheless, we note
here that our concern is focused on {\it ``noisy environments''} in
which communication unreliability and node failures are usual.
213 %-----------------------------------------------------------------------------------------------------------------------
\section{Overview of the Averaging Problem in Sensor Networks}
The first and simplest approach for distributed average estimation
in sensor networks is the {\it flooding} approach~\cite{2}. In this
approach, each sensor node broadcasts all its stored and received data
to its neighbours. After a while, each node holds all the data of
the network and acts as a fusion center to compute the estimate of the
unknown parameter. This technique has, however, several
disadvantages~\cite{2}. First, it results in a huge amount of exchanged
duplicate messages, which represents a real limitation in environments
like sensor networks. Second, flooding requires that each node stores
at least one message per node (in order to compute the average). This
could lead to a significant memory requirement, with the associated
operations (reads and writes), in the case of a large sensor
network. Finally, it is obvious that those requirements consume
many resources, leading to a significant decrease of the whole network
lifetime.
Alternatively, in~\cite{3} the authors proposed a scalable sensor
fusion scenario that performs fusion of sensor measurements combined
with local Kalman filtering. They developed a distributed algorithm
that allows the sensor nodes to compute the average of all of their
measurements. It is worth noting that many other sensor data fusion
approaches are based on Kalman filters and mobile
agents~\cite{4,5,11,16,17}.
An iterative method for distributed data fusion in sensor networks
based on the calculation of an average consensus\footnote{In the rest
  of the chapter, the terms ``average consensus'' and ``parameter
  estimation'' are used to denote the same mechanism of finding an
  estimate of the unknown parameter average.} has been proposed
in~\cite{2}. The authors consider that every node takes a noisy
measurement of the unknown parameter. Each node broadcasts its data to
its neighbours and updates its estimation according to a weighted sum
of the received data. In this scheme all the communications are
direct.
Although the above mentioned works and other existing data fusion
scenarios guarantee some level of robustness to node failures and
dynamic topology changes~\cite{2,3,4,10,20}, they either make
impractical assumptions, such as node synchronization, or do not
address practical issues such as communication delays.
To the best of our knowledge, the above issues, which are extremely
important, especially in noisy environments, are not taken into
account in previous data fusion approaches. In this chapter, we
present an asynchronous data fusion scheme, particularly tailored to
perturbed sensor networks. It focuses on a distributed iterative
algorithm for calculating averages over asynchronous sensor
networks. The sensor nodes exchange and update their data by means
of a weighted sum in order to achieve the average consensus. The
suggested algorithm does not rely on synchronization between the nodes
nor does it require any knowledge of the global topology. Finally,
the convergence of the proposed algorithm is proved in a general
asynchronous environment.
271 %-----------------------------------------------------------------------------------------------------------------
\section{Asynchronous Distributed Consensus with Message Loss}
275 \subsection{Problem Formulation}
A sensor network is modelled as a connected undirected graph $G = (V,
E)$. The set of nodes is denoted by $V$ (the set of vertices), and the
links between nodes by $E$ (the set of edges). The nodes are labelled
$i = 1,2, \ldots, n$, and a link between two nodes $i$ and $j$ is
denoted by $(i, j)$. The dynamic topology changes are represented by
the time-varying graph $G(t) = (V, E(t))$, where $E(t)$ is the set of
active edges at time $t$. The set of neighbours of node $i$ at time
$t$ is denoted by $N_{i}(t) = \{j\in V \mid (i,j) \in E(t)\}$, and the
degree (number of neighbours) of node $i$ at time $t$ by $\eta_{i}(t)
= |N_{i}(t)|$.
Each node $i$ takes an initial measurement $z_{i}$. For the sake of
simplicity, let us suppose that $z_{i}\in\mathbb{R}$; $z$ will refer
to the vector whose $i$th component is $z_{i}$ in case we are
concerned with several parameters. Each node of the network also
maintains a dynamic state $x_{i}(t)\in\mathbb{R}$, which is initially
set to $x_{i}(0) = z_{i}$.
Intuitively, each node's state $x_{i}(t)$ is its current estimate of
the average value $\sum_{i=1}^{n} z_{i} / n$. The goal of the
averaging algorithm is to let all the states $x_{i}(t)$ go to the
average $\sum_{i=1}^{n} z_{i} / n$ as $t \rightarrow \infty$. This
is done through data exchange between neighbouring nodes, where at
every time iteration $t$ each node performs a weighted sum of the
received data as follows~\cite{20,2}:
x_{i}(t+1) = x_{i}(t) - \sum_{j\in N_{i}(t)}\alpha_{ij}(t)(x_{i}(t) - x_{j}(t)), \quad i = 1, \ldots, n,
where $\alpha_{ij}(t)$ is the weight on $x_{j}(t)$ at node $i$, and
$\alpha_{ij}(t) = 0$ for $j \not\in N_{i}(t)$.
In order to handle communication delays, we consider that at time $t$
a node $i$ gets the state of its neighbour $j$ at time $d_{j}^{i}(t)$,
where $0\leq d_{j}^{i}(t)\leq t$;
$d_{j}^{i}(t)$ accounts for the transmission delay between nodes $i$
and $j$. Therefore, let us denote by
$x_{j}^{i}(t)=x_{j}(d_{j}^{i}(t))\in\mathbb{R}$ the state of node $j$
at time $d_{j}^{i}(t)$, received at time $t$ by node $i$. Then, we
define the extended neighbourhood of node $i$ at time $t$ as the set:
\overline{N}_{i}(t)=\left\{ j \mid \exists\, d_{j}^{i}(t)\in\left\{
t-B+1,\ldots,t\right\} \text{ such that } j\in N_{i}(d_{j}^{i}(t))\right\};
326 note that $ N_{i}(t) \subset \overline{N}_{i}(t)$.
The problem, as for any distributed algorithmic approach, is how and
under which conditions we can ensure convergence of the proposed
algorithm. In other terms, are we sure that all the node states $x_{i}$
will converge to the right estimate of the unknown parameter average
value? Also, how can we choose the parameters $\alpha_{ij}(t)$ so as to
improve the convergence speed and the quality of the derived estimate?
Hereafter we present and analyse our proposal, using the notations
reported in Table~\ref{notations}.
342 \begin{tabular}{|c|c|}
\textbf{Notation}&\textbf{Description}\\
\hline
$G(t)$&the time-varying graph\\
$N_{i}(t)$&the set of neighbours of node $i$ at time $t$\\
$z_{i}$&the initial measurement of node $i$\\
$x_{i}(t)$&the dynamic state of node $i$\\
$d_{j}^{i}(t)$&the transmission delay between nodes $i$ and $j$\\
$x_{j}^{i}(t)=x_{j}(d_{j}^{i}(t))$&the state of node $j$ at time $d_{j}^{i}(t)$, received by $i$ at time $t$\\
$\overline{N}_{i}(t)$&the extended neighbourhood of node $i$ at time $t$\\
$s_{ij}(t)$&the data sent by $i$ to $j$ at time $t$\\
$r_{ji}(t)$&the data received by $i$ from $j$ at time $t$\\
371 %--------------------------------------------------------------------------------------------------------------------------------
\subsection{Asynchronous Scheme}
Our algorithm to compute the average consensus over the network is
based on information diffusion, i.e., each node takes a measurement and
then cooperates with its neighbours in a diffusion manner to estimate
the average of all the collected information. It is inspired by the
work of Bertsekas and Tsitsiklis~\cite[section~7.4]{6} on load
balancing and extends it to cope with dynamic topologies and message
loss and delays. Algorithm~\ref{general} presents the main steps of
our proposed algorithm.
385 \caption{The General Algorithm.}
386 \begin{algorithmic}[1]
\STATE Each node maintains an instantaneous state $x_{i}(t) \in
\mathbb{R}$, and at $t=0$ (after all nodes have taken
the measurement), each node initializes its state as $x_{i}(0) = z_{i}$.
\STATE At every step $t$ each node $i$:
393 \item compares its state to the states of its neighbours;
\item chooses and computes the amounts $s_{ij}(t)$, which have to be
  chosen carefully in order to ensure the convergence of the algorithm;
396 \item diffuses its information;
397 \item receives the information sent by its neighbours $r_{ji}(t)$;
\item updates its state with a combination of its own state and the states of its
instantaneous and extended neighbours ($\overline{N}_{i}(t)$) as follows:
401 x_{i}(t+1)=x_{i}(t)-\sum_{j\in N_{i}(t)}s_{ij}(t)+\sum_{j\in\overline
402 {N}_{i}(t)}r_{ji}(t).
410 \subsection{Theoretical Analysis (Convergence)}
412 We now introduce three assumptions that ensure the convergence of our algorithm.
\label{assump:ConnectedGraph}
There exists $B\in\mathbb{N}$ such that $\forall t\geqslant0$,
$t-B<d_{j}^{i}(t)\leq t$, and the union of communication graphs
$\bigcup_{\tau = t}^{t+B-1}G(\tau)$ is a connected graph.
This assumption, known as the jointly connected condition~\cite{2,7},
implies that each node $i$ is connected to the other nodes within any
time interval of length $B$ and that the delay between two nodes cannot
exceed $B$. Recall that a graph is connected if for any two vertices
$i$ and $j$ there exists a sequence of edges $(i,\ k_{1}), (k_{1},
k_{2}), \ldots, (k_{l-1},\ k_{l}), (k_{l},\ j)$.
In Figure~\ref{fig:jointly} we show an example of jointly connected
graphs: at $t = 1$ the graph $G_{1}$ is not connected, and the same
holds for $G_{2}$ at $t = 2$, while the union $G$ of $G_{1}$
and $G_{2}$ is a connected graph.
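This condition is easy to check in simulation. The helper below is a
small illustrative sketch (nodes are assumed to be labelled
$0,\ldots,n-1$) that tests whether the union of a window of
communication graphs is connected:
\begin{verbatim}
from collections import deque

def jointly_connected(edge_sets, n):
    # Union of the graphs G(t), ..., G(t+B-1) over the window.
    union = {frozenset(e) for edges in edge_sets for e in edges}
    adj = {i: [] for i in range(n)}
    for e in union:
        a, b = tuple(e)
        adj[a].append(b)
        adj[b].append(a)
    seen, queue = {0}, deque([0])
    while queue:                    # BFS over the union graph
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == n

# G1 and G2 may each be disconnected while their union is connected:
assert jointly_connected([{(0, 1), (2, 3)}, {(1, 2)}], 4)
\end{verbatim}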
435 \includegraphics[scale=.55]{jointly.eps}
437 \caption{Example of jointly connected graphs}
\label{Inf Assumption} There exists $\alpha>0$ such that $\forall
t\geqslant0$, $\forall i\in V$, $\forall j\in N_{i}(t)$:
$\alpha(x_{i}(t)-x_{j}^{i}(t)) \leq s_{ij}(t)$, and $s_{ij}(t)=0$
if $x_{i}(t) \leq x_{j}^{i}(t)$, for all $j \in N_{i}(t)$.
The second assumption postulates that when a node $i$ detects a
difference between its state and the states of its neighbours, it
computes a non-negligible amount $s_{ij}$ for every node $j$ such that
$x_{i}(t) > x_{j}^{i}(t)$.
458 x_{i}(t)-\sum_{k\in N_{i}(t)}s_{ik}(t)\geq x_{j}^{i}(t)+s_{ij}(t)
The third assumption prevents node $i$ from computing very large $s_{ij}$,
which would create a ping-pong state. Recall that a ping-pong state is
established when two nodes keep sending data to each other back and
forth, without ever reaching equilibrium. Note that these two
assumptions are similar to Assumption 4.2 introduced
in~\cite[section~7.4]{6}.
If Assumptions \ref{assump:ConnectedGraph}, \ref{Inf Assumption}
and \ref{PingPong} are satisfied, then Algorithm~\ref{general} guarantees that
\lim_{t\rightarrow\infty}x_{i}(t)=\frac{1}{n}\sum_{i=1}^{n} x_{i}(0),
478 i.e., all node states converge to the average of the initial measurements of the network.
481 %------------------------------------------------------------preuve
Let $m(t)=\min_{i}\min_{t-B<\tau\leq t}x_{i}(\tau)$. Note that
$x_{j}^{i}(t)\geq m(t)$, $\forall i,j,t$. Lemmas \ref{lemma:1} and
\ref{lemma:2} below can be proven similarly to the lemmas of pages 521
and 522 of~\cite{6}.
Denote by $v_{ij}(t)=\sum_{s=0}^{t-1}\left( s_{ij}(s)-r_{ij}(s)\right)$
the data sent by $i$ and not yet received by
$j$ at time $t$, and suppose that $v_{ij}(0)=0$. Then, by data
conservation, we obtain%
495 \sum_{i=1}^{n}\left( x_{i}(t)+\sum_{j\in N_{i}(t)}v_{ij}(t)\right)
496 =\sum_{i=1}^{n}x_{i}(0),\qquad\forall t\geqslant0\label{conservation}%
From Assumption~\ref{assump:ConnectedGraph} we can conclude that the
data $v_{ij}(t)$ still in transit at time $t$ consist of
data sent during the time interval $\left\{ t-B+1,\ldots,t-1\right\}$, so
$v_{ij}(t)\leq\sum_{\tau=t-B+1}^{t-1}s_{ij}(\tau)$, $\forall i\in V,
\forall j\in N_{i}(t)$.
\label{lemma:1}The sequence $m(t)$ is monotone, nondecreasing and
convergent, and $\forall i,\forall s\geq0,$%
x_{i}(t+s)\geq m(t)+\left( \frac{1}{n}\right)^{s}(x_{i}(t)-m(t)).
Let $i\in V$, $t_{0}\in\mathbb{N}$, $t\geq t_{0}$, and $j\in V$. We say that the
event $E_{j}(t)$ occurs if there exists $j\in\overline{N}_{i}(t)$ such that
518 x_{j}^{i}(t)<m(t_{0})+\frac{\alpha}{2n^{t-t_{0}}}\left( x_{i}(t_{0}%
519 )-m(t_{0})\right) \label{Event E_1}%
523 s_{ij}(t)\geq\alpha\left( x_{i}(t)-x_{j}^{i}(t)\right) ,\label{Event E_2}%
where $\alpha$ is defined in Assumption \ref{Inf Assumption}, and $V$ is the set of all nodes.
\label{lemma:2}Let $t_{1}\geq t_{0}$. If $E_{j}(t_{1})$ occurs, then
$E_{j}(\tau)$ does not occur for any $\tau\geq t_{1}+2B$.
533 \label{lemma:3}$\forall i\in V,\forall t_{0}\in\mathbb{N},\forall
534 j\in\overline{N}_{i}(t),$%
536 t\geq t_{0}+3nB\Rightarrow x_{j}(t)\geq m(t_{0})+\eta\left( \frac{1}%
537 {n}\right) ^{t-t_{0}}(x_{i}(t_{0})-m(t_{0})).
539 where $\eta=\frac{\alpha}{2}\left( \frac{1}{n}\right) ^{B}.$
Let us fix $i$ and $t_{0}$, and consider $t_{1},\ldots,t_{n}$ such that
$t_{k-1}+2B\leq t_{k}\leq t_{k-1}+3B$. Lemma \ref{lemma:2} implies that if
$k\neq l$, then $E_{j}(t_{k})$ and $E_{j}(t_{l})$ cannot occur together.
Hence, there exists $t_{k}$ for which (\ref{Event E_1}) is not satisfied for
all $d_{j}^{i}(t_{k})\in\left\{ t_{k}-B+1,\ldots,t_{k}\right\}$ and $j\in
N_{i}(d_{j}^{i}(t_{k}))$.
Let $j^{\ast}\in N_{i}(d_{j}^{i}(t_{k}))$ be such that
$x_{j^{\ast}}^{i}(t_{k})\leq x_{j}^{i}(t_{k})$, $\forall
j\in N_{i}(d_{j}^{i}(t_{k}))$. Since (\ref{Event E_1}) is not satisfied for
$j=j^{\ast}$, we have%
x_{j}^{i}(t_{k})\geq & \ x_{j^{\ast}}^{i}(t_{k}) \\
\geq & \ m(t_{0})+\frac{\alpha}{2}\left( \frac{1}{n}\right)^{t_{k}-t_{0}}
\left( x_{i}(t_{0})-m(t_{0})\right), \quad \forall j\in N_{i}(d_{j}^{i}(t_{k})).
566 For $t\geq t_{0}+3nB,$ we have $t\geq t_{k}\geq d_{j}^{i}(t_{k}).$ Lemma
567 \ref{lemma:1} gives, $\forall j\in N_{i}(d_{j}^{i}(t_{k}))$%
x_{j}(t)\geq & \ m(d_{j}^{i}(t_{k}))+\left(\frac{1}{n}\right)^{t-d_{j}^{i}(t_{k})}
\left(x_{j}(d_{j}^{i}(t_{k}))-m(d_{j}^{i}(t_{k}))\right)\\
\geq & \ m(t_{0})+\frac{\alpha}{2}\left( \frac{1}{n}\right)^{B}\left(
\frac{1}{n}\right)^{t-t_{0}}\left( x_{i}(t_{0})-m(t_{0})\right).
We say that a sensor $j$ is $l$-connected to a sensor $i$ if it is logically
connected to $i$ by $l$ communication graphs, i.e., if there exist
$r_{k}^{i}(t_{k})\in\left\{ t_{k}-B+1,\ldots,t_{k}\right\}$, where $k\in\left\{
i_{1},\ldots,i_{l}\right\}$, such that $i=i_{1}\in N_{i_{2}}(r_{i_{1}}
^{i_{2}}(t_{1})),\ i_{2}\in N_{i_{3}}(r_{i_{2}}^{i_{3}}(t_{2})),\ \ldots,\ i_{l}\in
N_{j}(r_{i_{l}}^{j}(t_{l})).$
590 \label{lemma:4}If sensor $j$ is $l$-connected to sensor $i$ then%
\forall t\geq t_{0}+3nlB,\quad x_{j}(t)\geq m(t_{0})+\eta^{l}
\left(\left(\tfrac{1}{n}\right)^{t-t_{0}}\right)^{l}\left( x_{i}(t_{0})-m(t_{0})\right),
By induction. Suppose that the lemma is true for $t_{0}+3nlB$; then if $j$ is
$l$-connected to $i$, we have%
x_{j}(t_{0}+3nlB)\geq m(t_{0})+\eta^{l}\left( \left(\tfrac{1}{n}
\right)^{3nlB}\right)^{l}\left( x_{i}(t_{0})-m(t_{0})\right).
Consider a sensor $k$ connected to $j$ (so $k$ is
$\left(l+1\right)$-connected to $i$); Lemma \ref{lemma:3} and the
above inequality give (replacing $t_{0}$ by $t_{0}+3nlB$)
x_{k}(t)\geq & \ m(t_{0}+3nlB)+\eta\left(\tfrac{1}{n}\right)^{t-t_{0}-3nlB}\left(
\eta^{l}\left( \left(\tfrac{1}{n}\right)^{3nlB}\right)^{l}\left(
x_{i}(t_{0})-m(t_{0})\right) \right) \\
\geq & \ m(t_{0})+\eta^{l+1}\left(\left(\tfrac{1}{n}\right)^{t-t_{0}}\right)^{l+1}\left(x_{i}(t_{0})-m(t_{0})\right).
625 [Proof of Theorem 1] Consider a sensor $i$ and a time $t_{0}.$ Assumption
626 \ref{assump:ConnectedGraph} implies that sensor $i$ is $B$-connected to any
627 sensor $j.$ Lemma \ref{lemma:4} gives: $\forall t\in\left[ t_{0}%
628 +3nMB,t_{0}+3nMB+B\right] ,$ $\forall j\in V,$%
630 x_{j}(t_{0}+3nMB+B)\geq m(t_{0})+\delta\left( x_{i}(t_{0})-m(t_{0})\right) ,
632 where $\delta>0.$ Thus,%
m(t_{0}+3nMB+B)\geq m(t_{0})+\delta\left( \max_{i}x_{i}(t_{0})-m(t_{0})\right).
Note that $\lim_{t_{0}\rightarrow\infty}\left(\max_{i}x_{i}(t_{0})-m(t_{0})\right)=0$
(otherwise \\$\lim_{t_{0}\rightarrow\infty}m(t_{0})=+\infty$). On the other
hand, as $\lim_{t\rightarrow\infty}m(t)=c$ and as $m(t)\leq x_{j}(t)\leq
\max_{i}x_{i}(t)$, we deduce that $\forall j\in V$, $\lim_{t\rightarrow\infty
}x_{j}(t)=c$, which implies that $\lim_{t\rightarrow\infty}s_{ij}(t)=0$.
Thanks to Assumption 1, we deduce that $\lim_{t\rightarrow\infty}v_{ij}(t)=0$,
and thanks to (\ref{conservation}), we deduce that $nc=\lim_{t\rightarrow
\infty}\sum_{i=1}^{n}x_{i}(t)=\sum_{i=1}^{n}x_{i}(0)$, i.e.
$c=\sum_{i=1}^{n}x_{i}(0)/n$, which yields
$\lim_{t\rightarrow\infty}x_{i}(t)=\frac{1}{n}\sum_{i=1}^{n}x_{i}(0)$,
proving Theorem 1.
653 %----------------------------------------------------------preuve
656 \subsection{Practical Issues}
We now discuss some practical aspects related to the implementation of
Algorithm~\ref{general}. The two main points are how to choose
$s_{ij}(t)$ and how to overcome the loss of messages.
Each node updates its state following equation (\ref{x_(t+1)}). This
is achieved by each sensor updating its $s_{ij}(t)$ through time. For
the sake of simplicity, the value of $s_{ij}(t)$ is chosen as
the weighted difference between the states of nodes $i$ and $j$:
671 \alpha_{ij}(t) (x_{i}(t) - x_{j}^{i}(t)) & \qquad \text{if} \quad x_{i}(t) > x_{j}^{i}(t) \ ,\\
672 0 & \qquad \text{otherwise}.\\
677 The choice of $s_{ij}(t)$ is then deduced from the proper choice of
678 the weights $\alpha_{ij}(t)$. Hence, $\alpha_{ij}(t)$ must be chosen
679 such that the states of all the nodes converge to the average
680 $\sum_{i=1}^{n} z_{i} / n$, i.e., assumptions \ref{Inf Assumption} and
681 \ref{PingPong} must be satisfied.
Denote by $j^{\ast}$ the sensor node satisfying
$x_{j^{\ast}}^{i}(t)=\min_{k\in N_{i}(t)} x_{k}^{i}(t)$ (note that
$j^{\ast}$ depends on $i$ and on the time $t$). The values of
$\alpha_{ij}(t)$ must be selected so as to avoid the ping-pong
condition presented in Assumption~\ref{PingPong}.
This is equivalent to choosing $\alpha_{ij}(t)$ so that, $\forall
t\geqslant 0$, $\forall i\in V$, and $\forall j\neq
j^{\ast}\in\overline{N}_{i}(t)$ satisfying $x_{i}(t)>x_{j}^{i}(t)$,
0\leq\alpha_{ij}(t)\leq\frac{1}{2}\left( 1-\frac{\sum_{k\neq j}\alpha
_{ik}(t)(x_{i}(t)-x_{k}^{i}(t))}{x_{i}(t)-x_{j}^{i}(t)}\right)
699 The weights $\alpha_{ij}(t)$ must also be chosen in order to respect
Assumption~\ref{Inf Assumption}. This can be achieved by
fixing a constant $\beta\in\left[ 0,1\right]$ and choosing
\sum_{k\in N_{i}(t),\,k\neq j^{\ast}}\alpha_{ik}(t)(x_{i}(t)-x_{k}^{i}
(t))\leq\beta(x_{i}(t)-x_{j^{\ast}}^{i}(t)),\\
\alpha_{ij^{\ast}}(t)=\frac{1}{2}\left( 1-\frac{\sum_{k\neq j^{\ast}}
\alpha_{ik}(t)(x_{i}(t)-x_{k}^{i}(t))}{x_{i}(t)-x_{j^{\ast}}^{i}(t)}\right)
714 Indeed, from (\ref{h6}) we deduce%
716 \alpha_{ij^{\ast}}(t)\geq\frac{(x_{i}(t)-x_{j^{\ast}}^{i}(t))-\beta
(x_{i}(t)-x_{j^{\ast}}^{i}(t))}{2(x_{i}(t)-x_{j^{\ast}}^{i}(t))}=\frac{1-\beta}{2}.
720 Hence, $\forall i,j^{\ast},t$ such that $j^{\ast}\in\overline{N}_{i}(t)$
721 and $x_{j^{\ast}}^{i}(t)=\min_{k\in N_{i}(t)}x_{k}^{i}(t),$%
723 s_{ij^{\ast}}(t)=\alpha_{ij^{\ast}}(t)\left( x_{i}(t)-x_{j^{\ast}}%
724 ^{i}(t)\right) \geq\alpha\left( x_{i}(t)-x_{j^{\ast}}^{i}(t)\right).
The first inequality of (\ref{h6}) can be written as $\sum_{k\in
N_{i}(t),\,k\neq j^{\ast}}s_{ik}(t)\leq\beta(x_{i}(t)-x_{j^{\ast}}^{i}(t))$;
this means that the totality of the data sent to the neighbours of $i$
(except $j^{\ast}$) does not exceed a fraction $\beta$ of
$(x_{i}(t)-x_{j^{\ast}}^{i}(t))$.
Equations (\ref{h5_2bis}) and (\ref{h6}) are derived from
Assumptions \ref{Inf Assumption} and \ref{PingPong}. Therefore the
choice of the weights $\alpha_{ij}$ must take these two constraints
into consideration.
First, let us define the deviation $\Delta_{i}^{j}(t)$ of node $i$ with respect to node $j$ as:
742 \Delta_{i}^{j}(t) = \left\{
x_{i}(t) - x_{j}^{i}(t) & \qquad \text{if } j\in N_{i}(t) \ \text{and} \ x_{i}(t) > x_{j}^{i}(t) \ ,\\
745 0 & \qquad \text{otherwise.}\\
Algorithm~\ref{WeightUpdate} presents our method for updating the
averaging weights over time. Node $i$ computes the difference
between its current state and the current states of its neighbours. The
positive deviations ($\Delta_{i}^{j} > 0$) are then stored in the
array $Delta_{i}$ in decreasing order. Then, it sets the weight
$\alpha_{ij}$ to $1/(\eta_{i}(t) + 1)$, where $\eta_{i}(t)$ is the
current number of its neighbours, starting with the neighbour nodes $j$
that have the largest deviations, while respecting
Assumption~\ref{PingPong}.
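A possible Python reading of this procedure is sketched below; the
greedy per-neighbour test of the ping-pong condition is our
interpretation of the repeat-until loop of Algorithm~\ref{WeightUpdate}:
\begin{verbatim}
def update_weights(i, x, xhat, neighbors):
    # x[i]: own state x_i(t); xhat[j]: last state x_j^i(t) received
    # from neighbour j; neighbors: current neighbour list N_i(t).
    eta = len(neighbors)
    alpha = {j: 0.0 for j in neighbors}
    s = {j: 0.0 for j in neighbors}
    # positive deviations Delta_i^j, largest first
    deltas = sorted(((x[i] - xhat[j], j) for j in neighbors
                     if x[i] > xhat[j]), reverse=True)
    total = 0.0
    for delta, j in deltas:
        cand = delta / (eta + 1)    # s_ij with alpha_ij = 1/(eta_i+1)
        # ping-pong condition: x_i - sum_k s_ik >= x_j^i + s_ij
        if x[i] - (total + cand) >= xhat[j] + cand:
            alpha[j] = 1.0 / (eta + 1)
            s[j] = cand
            total += cand
        # otherwise alpha_ij stays 0
    return alpha, s
\end{verbatim}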
\caption{Updating the weights of node $i$ over time.}
763 \begin{algorithmic}[1]
765 \FOR {$j \gets 1$ to $n$}
767 \STATE $s_{ij} \gets 0$
768 \STATE $\alpha_{ij} \gets 0$
773 \STATE find $\ell$ such that $\Delta_{i}^{\ell} = Delta_{i}[k]$
\STATE $\alpha_{i\ell} \gets 1/(\eta_{i} + 1)$
\STATE $s_{i\ell} \gets \alpha_{i\ell} \times \Delta_{i}^{\ell}$
\STATE $Sum \gets Sum + s_{i\ell}$
779 \STATE find $\ell$ such that $\Delta_{i}^{\ell} = Delta_{i}[k]$
780 \STATE $\alpha_{i\ell} \gets 1/(\eta_{i} + 1)$
781 \STATE $s_{i\ell} \gets \alpha_{i\ell} \times \Delta_{i}^{\ell}$
\UNTIL{\textbf{not} ($(x_{i} - Sum \geq x_{\ell}^{i} + s_{i\ell})$ \textbf{and} $(k < n)$)}
In order to cope with the problem of message loss, we adopted the
following strategy: instead of sending $s_{ij}(t)$ from node $i$ to
node $j$, it is the sum $\Sigma_{s_{ij}}(t) = \sum_{0\leq \tau\leq t}
s_{ij}(\tau)$ that is sent. Symmetrically, the receiver maintains the
sum of the received data $\Sigma_{r_{ij}}(t) = \sum_{0\leq \tau\leq t}
r_{ij}(\tau)$. Upon receiving, at a time $t$, a message from node $i$,
a node $j$ can now recover all the data that was sent before time
$d_i^j(t)$: it only has to calculate the difference between the
received $\Sigma_{s_{ij}}(d_i^j(t))$ and the locally stored
$\Sigma_{r_{ij}}(t)$.
To conclude, the state messages exchanged during the execution of the
algorithm are composed of two scalar values: the current state of the
node, $x_i(t)$, and the sum of the sent data, $\Sigma_{s_{ij}}(t)$.
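The receiver-side bookkeeping can be sketched as follows, under the
assumption that every message carries the sender's cumulative sum
$\Sigma_{s_{ij}}$:
\begin{verbatim}
class LinkReceiver:
    # Bookkeeping kept by node j for the link from node i.
    def __init__(self):
        self.credited = 0.0  # sum already incorporated into x_j

    def on_message(self, cumulative_sent):
        # cumulative_sent is the received value of Sigma_s_ij.
        if cumulative_sent <= self.credited:
            return 0.0       # stale or duplicate message: ignore
        r = cumulative_sent - self.credited
        self.credited = cumulative_sent
        return r             # amount to add to x_j; data from lost
                             # messages is recovered automatically

link = LinkReceiver()
r1 = link.on_message(0.10)   # -> 0.10
r2 = link.on_message(0.275)  # -> 0.175, even if an intermediate
                             #    message was lost
\end{verbatim}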
803 \subsection{Illustrative Example}
To illustrate the behaviour of our proposed approach, let us consider
the example presented in Figure~\ref{fig:example}. It consists of a
network of four nodes. The initial measurement of each node $i$ is
denoted by $z_{i}$, and the initial state is $x_{i}(0) = z_{i}$.
811 \includegraphics[scale=.55]{exampleFusion}
813 \caption{An example of a sensor network composed of four nodes with their initial measurements. }
817 Following the second step of Algorithm~\ref{general}, each node computes the weights $\alpha_{ij}$ for its neighbours. This is done by using Algorithm~\ref{WeightUpdate}.
Let us focus on the case of $node_{4}$ for instance. We notice that it
has three neighbours and that the largest deviation $\Delta$ corresponds to
$node_{3}$. Therefore, it first computes $ \alpha_{43}(0) =
\frac{1}{\eta_{4} + 1} = \frac{1}{4} $, where $\eta_{4}$ is
the number of its neighbours. Then, $s_{43}(0) = \frac{1}{4} (x_{4}(0)
- x_{3}(0)) = 0.175$. For the two remaining neighbours, $node_{1}$ and
$node_{2}$, $node_{4}$ computes $\alpha_{42}(0)$ first, because
$\Delta_{4}^{2}$ is higher than $\Delta_{4}^{1}$. We note that
for $node_{2}$ Assumption~\ref{PingPong} (the ping-pong condition) is
satisfied, while it is not the case for $node_{1}$, which leads to $
\alpha_{41}(0) = 0 $.
All the nodes compute their weights and then diffuse their information
to their neighbours to update their states following
Equation~(\ref{x_(t+1)}). For the above example, after the first
iteration we obtain:
837 \item[] $x_{1}(1) = 0.7$
838 \item[] $x_{2}(1) = 0.5 + 0.1 - 0.1 = 0.5$
839 \item[] $x_{3}(1) = 0.2 + 0.1 + 0.175 = 0.475$
840 \item[] $x_{4}(1) = 0.9 - 0.1 - 0.175 = 0.625$
This process is repeated for several iterations until all the states
of the nodes converge to the average of the initial measurements. We
note that our scheme is robust to topology changes and to the loss of
messages, as discussed in detail in the next section.
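The first iteration above can be checked numerically by reusing the
\texttt{update\_weights} sketch given earlier; the neighbour lists are
our reading of Figure~\ref{fig:example} (edges $1$--$4$, $2$--$4$,
$3$--$4$ and $2$--$3$):
\begin{verbatim}
x = {1: 0.7, 2: 0.5, 3: 0.2, 4: 0.9}
nbrs = {1: [4], 2: [3, 4], 3: [2, 4], 4: [1, 2, 3]}
# No delays at t = 0, so the received states x_j^i equal the
# true states, and x is passed for both arguments.
sent = {i: update_weights(i, x, x, nbrs[i])[1] for i in x}
x1 = {i: x[i] - sum(sent[i].values())
           + sum(s[i] for s in sent.values() if i in s)
      for i in x}
# x1 is approximately {1: 0.7, 2: 0.5, 3: 0.475, 4: 0.625},
# matching the states listed above.
\end{verbatim}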
848 %-----------------------------------------------------------------------------------------------------
849 \section{Experimental Results}
In order to evaluate the performance of our approach, we have
implemented a simulation package using the discrete event simulator
OMNeT++~\cite{8}. This package includes our asynchronous algorithm as
well as a synchronous one. As confirmed in previous related
works~\cite{2,20}, distributed approaches outperform centralised
approaches, in particular in noisy networks. For this reason, we have
not included centralised approaches in our comparison, and focus
instead on synchronous distributed approaches, which are closer to
our approach.
We performed several runs of the algorithms (an average of 100
runs). In each experimental run, the network graph is randomly
generated, with the nodes distributed over a $[0, 100] \times [0,
100]$ field. The node communication range was set to 30. The initial
node measurements $z_{i}$ were also randomly generated. Each node is
aware of its immediate neighbours through a ``hello'' message. Once the
neighbourhood is identified, each node runs the algorithm, i.e., begins
exchanging data until convergence.
871 We studied the performance of our algorithm with regard to the following parameters:
\item Robustness against communication failures: we mainly varied
  the probability of communication failure, noted $p$. This parameter
  allows us to highlight the behaviour of our scheme in noisy
  environments and in dynamic topologies.
\item Scalability: we varied the number of sensor nodes deployed in the same area to see how our proposed approach scales.
The main metrics we measured in this chapter are: (a) the mean error
between the current estimate $x_{i}$ and the average of the initial
data, (b) the mean number of iterations necessary to reach convergence
and (c) the overall time before reaching global convergence. We
note here that in asynchronous algorithms there is no direct
correlation between the number of iterations and the total time to
convergence, contrary to synchronous approaches. In fact, as nodes
do not wait for one another, the number of iterations could be relatively
high. This does not mean that the total time to convergence is
long too. For this reason, we have made the distinction between the
number of iterations and the time taken to reach convergence. As we
run a discrete event simulation package, this time is the one given by
the discrete simulator OMNeT++~\cite{8}; we named it {\it simulated
  time}. For all the experiments, the global convergence state is said
to be reached when $\varepsilon_{i} =| x_{i} - \sum_{j=1}^{n} z_{j} /
n |$ becomes less than some fixed constant $\varepsilon$.
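For reference, the global convergence test used by the simulation can
be written as the following sketch (in the simulator this check is
performed by an omniscient observer, not by the nodes themselves):
\begin{verbatim}
def globally_converged(x, z, eps=1e-4):
    # eps_i = |x_i - (sum_j z_j)/n| must be below eps for all i.
    avg = sum(z) / len(z)
    return all(abs(xi - avg) < eps for xi in x)
\end{verbatim}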
Note that, in the figures of the following sections, the points represent the obtained results and the curves are an extrapolation of these points.
905 \subsection{Basic Behaviour}
First, we show simulation results for the case where we have a fixed
topology with a fixed number of nodes (50 nodes) and $\varepsilon =
10^{-4}$. The mean error of the nodes, $\varepsilon' = \sum_{i=1}^{n}
\varepsilon_{i} / n$, is plotted in Figure~\ref{Error}. As expected,
it can be seen that the convergence in the synchronous mode is faster
than in the asynchronous one. It is also noticeable that
the two curves exhibit the same trend.
916 However, in many scenarios an exact average is not required, and one
917 may be willing to trade precision for simplicity. For instance,
minimizing the number of iterations to reduce the energy consumption
can be favoured in sensor network applications where exact
averaging is not essential.
924 \includegraphics[scale=.55]{ErrorNbIteration.ps}
\caption{The mean error $\varepsilon'$}
\subsection{Dynamic Topology}
As a next step, we simulated the proposed sensor fusion scheme with
dynamically changing communication graphs. We generated the sequence
of communication graphs as follows: at each time step, each edge in
the graph fails with a selected probability $p$, independently of the
other edges and of all previous steps. To ensure the jointly connected
condition on the generated graphs, we selected a period of time $\tau$
beyond which an edge cannot stay disconnected.
We fixed the number of sensor nodes to 50 and $\varepsilon =
10^{-4}$. In these preliminary results, the period $\tau$ was set
equal to three times the duration of a communication. We show
in Figure~\ref{Dynamic} and Figure~\ref{DynamicTime} the variation of
the number of iterations and of the simulated time with the probability
of link failure $p$. We notice that the number of iterations and the
overall time increase with this probability, but not drastically.
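The graph-generation process can be sketched as follows; forcing an
edge back up once it has been down for $\tau$ consecutive steps is our
reading of the jointly connected constraint:
\begin{verbatim}
import random

def dynamic_graphs(base_edges, p, tau, steps, seed=42):
    # Yields the set of active edges at each time step: each edge of
    # the base graph fails with probability p, independently, but is
    # reactivated once it has been down for tau consecutive steps.
    rng = random.Random(seed)
    down = {e: 0 for e in base_edges}
    for _ in range(steps):
        active = set()
        for e in base_edges:
            if down[e] >= tau or rng.random() > p:
                active.add(e)
                down[e] = 0
            else:
                down[e] += 1
        yield active
\end{verbatim}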
952 % \begin{minipage}[t]{.5\textwidth}
955 \includegraphics[scale=.55]{DynamicTopology.ps}
956 \caption{Number of Iterations}
964 %\begin{minipage}[t]{.5\textwidth}
966 \includegraphics[scale=.55]{DynamicTopologyTime.ps}
968 \caption{Simulated Time}
Note that we also tried to run the synchronous algorithm with dynamic
topology changes, but the execution times were so prohibitive that we
abandoned those experiments. These results confirm that synchronous
algorithms are infeasible for real sensor networks.
979 \subsection{Larger Sensor Network}
Our scheme can be applied to sensor networks where a large number of
sensor nodes is deployed, since it is fully distributed and there is
no centralized control. In our simulations we varied the number of
sensor nodes from 20 to 200 nodes, deployed in the region $[0, 100]
\times [0, 100]$, and we selected for all nodes $i$, $\varepsilon =
10^{-4}$.
As shown in Figure~\ref{Density} and Figure~\ref{DensityTime},
as the number of sensor nodes increases,
the average number of iterations as well as the time needed to
reach global convergence decreases, in both the synchronous and the
asynchronous cases. We notice that the synchronous mode requires
fewer iterations, but takes more time to reach
global convergence than the asynchronous one.
998 %\begin{minipage}[b]{.5\textwidth}
1001 \includegraphics[scale=.55]{Density.ps}
1002 \caption{Number of iterations}
1008 % \begin{minipage}[b]{.50\textwidth}
1011 \includegraphics[scale=.55]{TimeDensity.ps}
1013 \caption{Simulated Time}
1020 \section{Further Discussions and Comparison to Other Existing Works}
In this section, we give further consideration to our data fusion
scheme from the viewpoints of robustness to message delays and
losses, and of energy efficiency, in comparison to other existing works.
1026 Sensor nodes are small-scale devices. Such small devices are very
1027 limited in the amount of energy they can store or harvest from the
1028 environment. Thus, energy efficiency is a major concern in a sensor
1029 network. In addition, many thousands of sensors may have to be
1030 deployed for a given task. An individual sensor's small effective
1031 range relative to a large area of interest makes this a requirement.
1033 Therefore, scalability is another critical factor in the network
1034 design. Sensor networks are subject to frequent partial failures such
1035 as exhausted batteries, nodes destroyed due to environmental factors,
1036 or communication failures due to obstacles in the environment. Message
1037 delays can be rather high in sensor networks due to their typically
1038 limited communication capacity which is shared by nodes within
1039 communication range of each other. The overall operation of the sensor
1040 network should be robust despite such partial failures.
In our scheme, we presented a scalable asynchronous method for
averaging data fusion in sensor networks. The simulations we
conducted show that the higher the density of the deployed nodes,
the more precise the estimation. Moreover,
our algorithm is totally asynchronous: we consider transmission
delays and loss of messages in the proposed model. These
highly important aspects are not taken into account in
previous sensor fusion works~\cite{2,18}.
Another important practical issue in sensor networks is power
efficiency. Optimizing the energy consumption in sensor networks
amounts to minimizing the number of network communications, as the
radio is the main energy consumer in a sensor
node~\cite{12}. Considering the distributed iterative procedure for
calculating averages, the only way to minimize the energy consumption
is to reduce the number of iterations before reaching
convergence. To show how well our algorithm saves energy, we compared
our obtained results to those reported for another diffusive scheme
for average computation in sensor networks~\cite{2}. For instance, in
a static topology our algorithm converges after $69$ iterations with
a mean error of $10^{-4}$, while the best results of the second
approach reached $85$ iterations for the same mean error. For the
dynamic topology mode, we obtained $105$ iterations with a mean error
of $10^{-4}$ and a probability of link failure of $0.25$, while the
number of iterations is very high ($\approx 300$ iterations)
in~\cite{2}. In Figure~\ref{} and Figure~\ref{} we present a
comparison between our approach and the iterative solution presented
in~\cite{2}. For~\cite{2} we used the Metropolis weights, which give
the best results in terms of convergence and number of iterations.
1077 \section{Conclusion and Future Work}
In this chapter, we introduced a distributed asynchronous average
consensus algorithm for sensor networks. Our approach is based on data
diffusion: the nodes cooperate and exchange their information only with
their direct instantaneous neighbours. In contrast to existing works, our
algorithm relies neither on synchronization nor on knowledge of the
global topology. We proved that, under suitable assumptions, our
algorithm achieves global convergence in the sense that, after
some iterations, each node has an estimate of the average consensus
over the whole network. To show the effectiveness of our algorithm,
we conducted series of simulations and studied our algorithm under
various conditions.
In our scenario, we have focused on developing a reliable and robust
algorithm from the viewpoints of asynchronism and fault tolerance in
a dynamically changing topology. We have taken into account two points
which have not been previously addressed by other authors, namely
the delays between nodes and the loss of messages. Since in
real sensor networks the nodes are prone to failures, one of our near
future goals is to allow nodes to be dynamically added and removed
during the execution of the algorithm. We also plan to test our
algorithm in a real-world sensor network.
1100 \bibliographystyle{unsrt}
1101 \bibliography{references}