From 9458c6e7496a265a0e594312416aa8dd608e4296 Mon Sep 17 00:00:00 2001 From: Michel Salomon Date: Mon, 7 Jul 2014 15:38:08 +0200 Subject: [PATCH] Relecture apres ajouts Ali --- article.tex | 401 ++++++++++++++++++++++++++-------------------------- 1 file changed, 204 insertions(+), 197 deletions(-) diff --git a/article.tex b/article.tex index 85045d3..36c5dcf 100644 --- a/article.tex +++ b/article.tex @@ -152,9 +152,9 @@ the network lifetime by using an optimized multirounds scheduling. The remainder of the paper is organized as follows. The next section % Section~\ref{rw} -reviews the related works in the field. Section~\ref{pd} is devoted to the +reviews the related works in the field. Section~\ref{pd} is devoted to the description of MuDiLCO protocol. Section~\ref{exp} shows the simulation results -obtained using the discrete event simulator OMNeT++ \cite{varga}. They fully +obtained using the discrete event simulator OMNeT++ \cite{varga}. They fully demonstrate the usefulness of the proposed approach. Finally, we give concluding remarks and some suggestions for future works in Section~\ref{sec:conclusion}. @@ -199,20 +199,20 @@ size increases. The first algorithms proposed in the literature consider that the cover sets are disjoint: a sensor node appears in exactly one of the generated cover sets. For -instance, Slijepcevic and Potkonjak \cite{Slijepcevic01powerefficient} proposed +instance, Slijepcevic and Potkonjak \cite{Slijepcevic01powerefficient} proposed an algorithm, which allocates sensor nodes in mutually independent sets to monitor an area divided into several fields. Their algorithm builds a cover set by including in priority the sensor nodes which cover critical fields, that is -to say fields that are covered by the smallest number of sensors. The time -complexity of their heuristic is $O(n^2)$ where $n$ is the number of -sensors. 
Abrams et al.~\cite{abrams2004set} designed three approximation -algorithms for a variation of the set k-cover problem, where the objective is to -partition the sensors into covers such that the number of covers that include an -area, summed over all areas, is maximized. Their work builds upon previous work +to say fields that are covered by the smallest number of sensors. The time +complexity of their heuristic is $O(n^2)$ where $n$ is the number of sensors. +Abrams et al.~\cite{abrams2004set} designed three approximation algorithms for a +variation of the set k-cover problem, where the objective is to partition the +sensors into covers such that the number of covers that include an area, summed +over all areas, is maximized. Their work builds upon previous work in~\cite{Slijepcevic01powerefficient} and the generated cover sets do not provide complete coverage of the monitoring zone. -\cite{cardei2005improving} proposed a method to efficiently compute the maximum +\cite{cardei2005improving} proposed a method to efficiently compute the maximum number of disjoint set covers such that each set can monitor all targets. They first transform the problem into a maximum flow problem, which is formulated as a mixed integer programming (MIP). Then their heuristic uses the output of the @@ -222,15 +222,15 @@ number of set covers slightly larger compared to complexity of the mixed integer programming resolution. Zorbas et al. \cite{zorbas2010solving} presented a centralized greedy algorithm -for the efficient production of both node disjoint and non-disjoint cover -sets. Compared to algorithm's results of Slijepcevic and Potkonjak +for the efficient production of both node disjoint and non-disjoint cover sets. +Compared to algorithm's results of Slijepcevic and Potkonjak \cite{Slijepcevic01powerefficient}, their heuristic produces more disjoint cover -sets with a slight growth rate in execution time. 
When producing non-disjoint +sets with a slight growth rate in execution time. When producing non-disjoint cover sets, both Static-CCF and Dynamic-CCF algorithms, where CCF means that they use a cost function called Critical Control Factor, provide cover sets -offering longer network lifetime than those produced by -\cite{cardei2005energy}. Also, they require a smaller number of node -participations in order to achieve these results. +offering longer network lifetime than those produced by \cite{cardei2005energy}. +Also, they require a smaller number of node participations in order to achieve +these results. In the case of non-disjoint algorithms \cite{pujari2011high}, sensors may participate in more than one cover set. In some cases, this may prolong the @@ -241,26 +241,34 @@ scheduling policies are less resilient and less reliable because a sensor may be involved in more than one cover sets. For instance, Cardei et al.~\cite{cardei2005energy} present a linear programming (LP) solution and a greedy approach to extend the sensor network lifetime by organizing the sensors -into a maximal number of non-disjoint cover sets. Simulation results show that +into a maximal number of non-disjoint cover sets. Simulation results show that by allowing sensors to participate in multiple sets, the network lifetime increases compared with related work~\cite{cardei2005improving}. In~\cite{berman04}, the authors have formulated the lifetime problem and -suggested another (LP) technique to solve this problem. A centralized solution +suggested another (LP) technique to solve this problem. A centralized solution based on the Garg-K\"{o}nemann algorithm~\cite{garg98}, provably near the optimal solution, is also proposed. -In~\cite{yang2014maximum}, The authors are proposed a linear programming approach for selecting the minimum number of sensor nodes in working station so as to preserve a maximum coverage and extend lifetime of the network. 
Cheng et al.~\cite{cheng2014energy} are proposed a heuristic algorithm called Cover Sets Balance (CSB) algorithm to choose a set of active nodes using the tuple (data coverage range, residual energy). Then, they are introduced a new Correlated Node Set Computing (CNSC) algorithm to find the correlated node set for a given node. After that, they are proposed a High Residual Energy First (HREF) node selection algorithm to minimize the number of active nodes so as to prolong the network lifetime.
-In~\cite{castano2013column,rossi2012exact,deschinkel2012column}, The authors are proposed a centralized methods based on column generation approach to extend lifetime in wireless sensor networks while coverage preservation.
-
+In~\cite{yang2014maximum}, the authors have proposed a linear programming
+approach for selecting the minimum number of working sensor nodes, in order
+to preserve a maximum coverage and extend the lifetime of the network. Cheng et
+al.~\cite{cheng2014energy} have defined a heuristic algorithm called Cover Sets
+Balance (CSB), which chooses a set of active nodes using the tuple (data coverage
+range, residual energy). Then, they have introduced a new Correlated Node Set
+Computing (CNSC) algorithm to find the correlated node set for a given node.
+After that, they proposed a High Residual Energy First (HREF) node selection
+algorithm to minimize the number of active nodes so as to prolong the network
+lifetime. Various centralized methods based on column generation approaches have
+also been proposed~\cite{castano2013column,rossi2012exact,deschinkel2012column}.

\subsection{Distributed approaches}
%{\bf Distributed approaches}

In distributed and localized coverage algorithms, the required computation to
schedule the activity of sensor nodes will be done by the cooperation among
neighboring nodes. These algorithms may require more computation power for the
-processing by the cooperating sensor nodes, but they are more scalable for
-large WSNs. 
Localized and distributed algorithms generally result in -non-disjoint set covers. +processing by the cooperating sensor nodes, but they are more scalable for large +WSNs. Localized and distributed algorithms generally result in non-disjoint set +covers. Some distributed algorithms have been developed in~\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02, yardibi2010distributed} @@ -273,7 +281,7 @@ area \cite{Berman05efficientenergy} or maximum uncovered targets \cite{lu2003coverage}. In \cite{Tian02}, the scheduling scheme is divided into rounds, where each round has a self-scheduling phase followed by a sensing phase. Each sensor broadcasts a message containing the node~ID and the node -location to its neighbors at the beginning of each round. A sensor determines +location to its neighbors at the beginning of each round. A sensor determines its status by a rule named off-duty eligible rule, which tells him to turn off if its sensing area is covered by its neighbors. A back-off scheme is introduced to let each sensor delay the decision process with a random period of time, in @@ -291,7 +299,7 @@ not require location information of sensors while maintaining connectivity and satisfying a user defined coverage target. In DASSA, nodes use the residual energy levels and feedback from the sink for scheduling the activity of their neighbors. This feedback mechanism reduces the randomness in scheduling that -would otherwise occur due to the absence of location information. In +would otherwise occur due to the absence of location information. In \cite{ChinhVu}, the author proposed a novel distributed heuristic, called Distributed Energy-efficient Scheduling for k-coverage (DESK), which ensures that the energy consumption among the sensors is balanced and the lifetime @@ -315,7 +323,7 @@ More recently, Shibo et al. 
\cite{Shibo} expressed the coverage problem as a minimum weight submodular set cover problem and proposed a Distributed Truncated Greedy Algorithm (DTGA) to solve it. They take advantage from both temporal and spatial correlations between data sensed by different sensors, and leverage -prediction, to improve the lifetime. In \cite{xu2001geography}, Xu et +prediction, to improve the lifetime. In \cite{xu2001geography}, Xu et al. proposed an algorithm, called Geographical Adaptive Fidelity (GAF), which uses geographic location information to divide the area of interest into fixed square grids. Within each grid, it keeps only one node staying awake to take the @@ -328,7 +336,7 @@ randomized \cite{Ye03} or regulated \cite{cardei2005maximum} over time. The MuDiLCO protocol (for Multiperiod Distributed Lifetime Coverage Optimization protocol) presented in this paper is an extension of the approach introduced -in~\cite{idrees2014coverage}. In~\cite{idrees2014coverage}, the protocol is +in~\cite{idrees2014coverage}. In~\cite{idrees2014coverage}, the protocol is deployed over only two subregions. Simulation results have shown that it was more interesting to divide the area into several subregions, given the computation complexity. Compared to our previous paper, in this one we study the @@ -418,18 +426,17 @@ is the subject of another study not presented here. \subsection{Background idea} -The area of interest can be divided using the divide-and-conquer -strategy into smaller areas, called subregions, and then our MuDiLCO -protocol will be implemented in each subregion in a distributed way. +The area of interest can be divided using the divide-and-conquer strategy into +smaller areas, called subregions, and then our MuDiLCO protocol will be +implemented in each subregion in a distributed way. -As can be seen in Figure~\ref{fig2}, our protocol works in periods -fashion, where each is divided into 4 phases: Information~Exchange, -Leader~Election, Decision, and Sensing. 
Each sensing phase may be
-itself divided into $T$ rounds and for each round a set of sensors
-(said a cover set) is responsible for the sensing task.
+As can be seen in Figure~\ref{fig2}, our protocol works in a periodic
+fashion, where each period is divided into four phases: Information~Exchange,
+Leader~Election, Decision, and Sensing. Each sensing phase may be itself
+divided into $T$ rounds and for each round a set of sensors (called a cover
+set) is responsible for the sensing task.

\begin{figure}[ht!]
-\centering
-\includegraphics[width=95mm]{Modelgeneral.pdf} % 70mm
+\centering \includegraphics[width=100mm]{Modelgeneral.pdf} % 70mm
\caption{The MuDiLCO protocol scheme executed on each node}
\label{fig2}
\end{figure}
@@ -439,38 +446,36 @@ itself divided into $T$ rounds and for each round a set of sensors
% set cover responsible for the sensing task.
%For each round a set of sensors (said a cover set) is responsible for the sensing task.

-This protocol is reliable against an unexpected node failure, because
-it works in periods. On the one hand, if a node failure is detected
-before making the decision, the node will not participate to this
-phase, and, on the other hand, if the node failure occurs after the
-decision, the sensing task of the network will be temporarily
-affected: only during the period of sensing until a new period starts.
+This protocol is reliable against an unexpected node failure, because it works
+in periods. On the one hand, if a node failure is detected before making the
+decision, the node will not participate in this phase, and, on the other hand,
+if the node failure occurs after the decision, the sensing task of the network
+will be temporarily affected: only during the period of sensing until a new
+period starts.

-The energy consumption and some other constraints can easily be taken
-into account, since the sensors can update and then exchange their
-information (including their residual energy) at the beginning of each
-period. 
However, the pre-sensing phases (Information Exchange, Leader
-Election, and Decision) are energy consuming for some nodes, even when
-they do not join the network to monitor the area.
+The energy consumption and some other constraints can easily be taken into
+account, since the sensors can update and then exchange their information
+(including their residual energy) at the beginning of each period. However, the
+pre-sensing phases (Information Exchange, Leader Election, and Decision) are
+energy consuming for some nodes, even when they do not join the network to
+monitor the area.

%%%%%%%%%%%%%%%%%parler optimisation%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

-We define two types of packets that will be used by the proposed
-protocol:
+We define two types of packets that will be used by the proposed protocol:
\begin{enumerate}[(a)]
-\item INFO packet: a such packet will be sent by each sensor node to
-  all the nodes inside a subregion for information exchange.
-\item Active-Sleep packet: sent by the leader to all the nodes inside a
-  subregion to inform them to remain Active or to go Sleep during the
-  sensing phase.
+\item INFO packet: such a packet will be sent by each sensor node to all the
+  nodes inside a subregion for information exchange.
+\item Active-Sleep packet: sent by the leader to all the nodes inside a
+  subregion to inform them to remain Active or to go Sleep during the sensing
+  phase. 
\end{enumerate}

There are five status for each sensor node in the network:
\begin{enumerate}[(a)]
-\item LISTENING: sensor node is waiting for a decision (to be active
-  or not);
-\item COMPUTATION: sensor node has been elected as leader and applies
-  the optimization process;
+\item LISTENING: sensor node is waiting for a decision (to be active or not);
+\item COMPUTATION: sensor node has been elected as leader and applies the
+  optimization process;
\item ACTIVE: sensor node participate to the monitoring of the area;
\item SLEEP: sensor node is turned off to save energy;
\item COMMUNICATION: sensor node is transmitting or receiving packet.
@@ -494,17 +499,16 @@ corresponds to the time that a sensor can live in the active mode.

\subsection{Leader Election phase}

-This step consists in choosing the Wireless Sensor Node Leader (WSNL),
-which will be responsible for executing the coverage algorithm. Each
-subregion in the area of interest will select its own WSNL
-independently for each period. All the sensor nodes cooperate to
-elect a WSNL. The nodes in the same subregion will select the leader
-based on the received informations from all other nodes in the same
-subregion. The selection criteria are, in order of importance: larger
-number of neighbors, larger remaining energy, and then in case of
-equality, larger index. Observations on previous simulations suggest
-to use the number of one-hop neighbors as the primary criterion to
-reduce energy consumption due to the communications.
+This step consists in choosing the Wireless Sensor Node Leader (WSNL), which
+will be responsible for executing the coverage algorithm. Each subregion in the
+area of interest will select its own WSNL independently for each period. All
+the sensor nodes cooperate to elect a WSNL. The nodes in the same subregion
+will select the leader based on the received information from all other nodes
+in the same subregion. 
The selection criteria are, in order of importance: +larger number of neighbors, larger remaining energy, and then in case of +equality, larger index. Observations on previous simulations suggest to use the +number of one-hop neighbors as the primary criterion to reduce energy +consumption due to the communications. %the more priority selection factor is the number of $1-hop$ neighbors, $NBR j$, which can minimize the energy consumption during the communication Significantly. %The pseudo-code for leader election phase is provided in Algorithm~1. @@ -513,29 +517,27 @@ reduce energy consumption due to the communications. \subsection{Decision phase} -Each WSNL will solve an integer program to select which cover sets -will be activated in the following sensing phase to cover the -subregion to which it belongs. The integer program will produce $T$ -cover sets, one for each round. The WSNL will send an Active-Sleep -packet to each sensor in the subregion based on the algorithm's -results, indicating if the sensor should be active or not in each -round of the sensing phase. The integer program is based on the model -proposed by \cite{pedraza2006} with some modification, where the -objective is to find a maximum number of disjoint cover sets. To -fulfill this goal, the authors proposed an integer program which -forces undercoverage and overcoverage of targets to become minimal at -the same time. They use binary variables $x_{jl}$ to indicate if -sensor $j$ belongs to cover set $l$. In our model, we consider binary -variables $X_{t,j}$ to determine the possibility of activation of -sensor $j$ during the round $t$ of a given sensing phase. We also -consider primary points as targets. The set of primary points is -denoted by $P$ and the set of sensors by $J$. Only sensors able to be -alive during at least one round are involved in the integer program. 
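To make the role of the overcoverage and undercoverage quantities concrete, the following minimal Python sketch evaluates them for one round, given a candidate set of active sensors. It is only an illustration with toy data, not the integer program itself; the function name and the `alpha` encoding are assumptions made for the example.

```python
def coverage_vars(active_sensors, alpha, points):
    """active_sensors: sensors active in round t (the X_{t,j} = 1 sensors).
    alpha[(j, p)] == 1 iff sensor j covers primary point p.
    Returns {p: (overcoverage, undercoverage)} for the round."""
    out = {}
    for p in points:
        # Number of active sensors covering primary point p.
        c = sum(alpha.get((j, p), 0) for j in active_sensors)
        theta = max(c - 1, 0)    # Theta_{t,p}: covering sensors minus one
        u = 1 if c == 0 else 0   # U_{t,p}: 1 iff p is left uncovered
        out[p] = (theta, u)
    return out

# Toy instance: sensor 1 covers p1; sensor 2 covers p1 and p2; p3 is uncovered.
alpha = {(1, 'p1'): 1, (2, 'p1'): 1, (2, 'p2'): 1}
vars_t = coverage_vars({1, 2}, alpha, ['p1', 'p2', 'p3'])
```

With both sensors active, `p1` is doubly covered (overcoverage 1), `p2` is covered exactly once, and `p3` is undercovered, which is precisely the situation the weighted objective described below penalizes.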
+Each WSNL will solve an integer program to select which cover sets will be +activated in the following sensing phase to cover the subregion to which it +belongs. The integer program will produce $T$ cover sets, one for each round. +The WSNL will send an Active-Sleep packet to each sensor in the subregion based +on the algorithm's results, indicating if the sensor should be active or not in +each round of the sensing phase. The integer program is based on the model +proposed by \cite{pedraza2006} with some modification, where the objective is to +find a maximum number of disjoint cover sets. To fulfill this goal, the authors +proposed an integer program which forces undercoverage and overcoverage of +targets to become minimal at the same time. They use binary variables $x_{jl}$ +to indicate if sensor $j$ belongs to cover set $l$. In our model, we consider +binary variables $X_{t,j}$ to determine the possibility of activation of sensor +$j$ during the round $t$ of a given sensing phase. We also consider primary +points as targets. The set of primary points is denoted by $P$ and the set of +sensors by $J$. Only sensors able to be alive during at least one round are +involved in the integer program. %parler de la limite en energie Et pour un round -For a primary point $p$, let $\alpha_{j,p}$ denote the indicator -function of whether the point $p$ is covered, that is: +For a primary point $p$, let $\alpha_{j,p}$ denote the indicator function of +whether the point $p$ is covered, that is: \begin{equation} \alpha_{j,p} = \left \{ \begin{array}{l l} @@ -565,10 +567,10 @@ We define the Overcoverage variable $\Theta_{t,p}$ as: \end{array} \right. \label{eq13} \end{equation} -More precisely, $\Theta_{t,p}$ represents the number of active sensor -nodes minus one that cover the primary point $p$ during the round -$t$. 
The Undercoverage variable $U_{t,p}$ of the primary point $p$ -during round $t$ is defined by: +More precisely, $\Theta_{t,p}$ represents the number of active sensor nodes +minus one that cover the primary point $p$ during the round $t$. The +Undercoverage variable $U_{t,p}$ of the primary point $p$ during round $t$ is +defined by: \begin{equation} U_{t,p} = \left \{ \begin{array}{l l} @@ -609,41 +611,38 @@ U_{t,p} \in \lbrace0,1\rbrace, \hspace{10 mm}\forall p \in P, t = 1,\dots,T \la %(W_{\theta}+W_{\psi} = P) \label{eq19} %\end{equation} - \begin{itemize} -\item $X_{t,j}$: indicates whether or not the sensor $j$ is actively - sensing during the round $t$ (1 if yes and 0 if not); -\item $\Theta_{t,p}$ - {\it overcoverage}: the number of sensors minus - one that are covering the primary point $p$ during the round $t$; -\item $U_{t,p}$ - {\it undercoverage}: indicates whether or not the - primary point $p$ is being covered during the round $t$ (1 if not - covered and 0 if covered). +\item $X_{t,j}$: indicates whether or not the sensor $j$ is actively sensing + during the round $t$ (1 if yes and 0 if not); +\item $\Theta_{t,p}$ - {\it overcoverage}: the number of sensors minus one that + are covering the primary point $p$ during the round $t$; +\item $U_{t,p}$ - {\it undercoverage}: indicates whether or not the primary + point $p$ is being covered during the round $t$ (1 if not covered and 0 if + covered). \end{itemize} -The first group of constraints indicates that some primary point $p$ -should be covered by at least one sensor and, if it is not always the -case, overcoverage and undercoverage variables help balancing the -restriction equations by taking positive values. The constraint given -by equation~(\ref{eq144}) guarantees that the sensor has enough energy -($RE_j$ corresponds to its remaining energy) to be alive during the -selected rounds knowing that $E_{R}$ is the amount of energy required -to be alive during one round. - -There are two main objectives. 
First, we limit the overcoverage of
-primary points in order to activate a minimum number of sensors.
-Second we prevent the absence of monitoring on some parts of the
-subregion by minimizing the undercoverage. The weights $W_\theta$ and
-$W_U$ must be properly chosen so as to guarantee that the maximum
-number of points are covered during each round. In our simulations
-priority is given to the coverage by choosing $W_{\theta}$ very large
-compared to $W_U$.
+The first group of constraints indicates that some primary point $p$ should be
+covered by at least one sensor and, if it is not always the case, overcoverage
+and undercoverage variables help balance the restriction equations by taking
+positive values. The constraint given by equation~(\ref{eq144}) guarantees that
+the sensor has enough energy ($RE_j$ corresponds to its remaining energy) to be
+alive during the selected rounds knowing that $E_{R}$ is the amount of energy
+required to be alive during one round.
+
+There are two main objectives. First, we limit the overcoverage of primary
+points in order to activate a minimum number of sensors. Second, we prevent the
+absence of monitoring on some parts of the subregion by minimizing the
+undercoverage. The weights $W_\theta$ and $W_U$ must be properly chosen so as
+to guarantee that the maximum number of points are covered during each round. In
+our simulations, priority is given to the coverage by choosing $W_{\theta}$ very
+large compared to $W_U$.

%The Active-Sleep packet includes the schedule vector with the number of rounds that should be applied by the receiving sensor node during the sensing phase.

\subsection{Sensing phase}

The sensing phase consists of $T$ rounds. Each sensor node in the subregion will
receive an Active-Sleep packet from WSNL, informing it to stay awake or to go to
-sleep for each round of the sensing phase. 
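As an illustration of how a node might decode its part of the $T$-round schedule carried by the Active-Sleep packet, here is a hedged sketch. The 0/1 vector encoding and the function name are assumptions made for the example; the protocol itself does not fix a packet layout here.

```python
def status_for_round(schedule, t):
    """schedule: the node's activity flags from the Active-Sleep packet,
    one assumed 0/1 entry per round of the sensing phase."""
    return "ACTIVE" if schedule[t] else "SLEEP"

# A node told to sense in rounds 0 and 2 of a T = 3 sensing phase:
states = [status_for_round([1, 0, 1], t) for t in range(3)]
```

Each node thus switches between ACTIVE and SLEEP per round without any further communication until the next period begins.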
Algorithm~\ref{alg:MuDiLCO}, which will be executed by each node at the beginning of a period, explains how the Active-Sleep packet is obtained. @@ -744,17 +743,25 @@ $w_{U}$ & $|P^2|$ % is used to refer this table in the text \end{table} -Our protocol is declined into four versions: MuDiLCO-1, MuDiLCO-3, MuDiLCO-5, +Our protocol is declined into four versions: MuDiLCO-1, MuDiLCO-3, MuDiLCO-5, and MuDiLCO-7, corresponding respectively to $T=1,3,5,7$ ($T$ the number of -rounds in one sensing period). In the following, the general case will be -denoted by MuDiLCO-T. We are studied the impact of dividing the sensing feild (using Divide and Conquer method) on the performance of our MuDiLCO-T protocol with different network sizes, and we are found that as the number of subregions increase, the network lifetime increase and the MuDiLCO-T protocol become more powerful against the network disconnection. -This subdivision should be stopped when there is no benefit from the optimization, therefore Our MuDiLCO-T protocol is distributed over 16 rather than 32 subregions because there is a balance between the benefit from the optimization and the execution time is needed to sove it. We compare MuDiLCO-T with two other methods. The first -method, called DESK and proposed by \cite{ChinhVu} is a full distributed +rounds in one sensing period). In the following, the general case will be +denoted by MuDiLCO-T and we will make comparisons with two other methods. The +first method, called DESK and proposed by \cite{ChinhVu}, is a full distributed coverage algorithm. The second method, called GAF~\cite{xu2001geography}, consists in dividing the region into fixed squares. During the decision phase, in each square, one sensor is then chosen to remain active during the sensing phase time. +Some preliminary experiments were performed to study the choice of the number of +subregions which subdivide the sensing field, considering different network +sizes. 
They show that as the number of subregions increases, so does the network
+lifetime. Moreover, it makes the MuDiLCO-T protocol more robust against random
+network disconnection due to node failures. However, too many subdivisions
+reduce the advantage of the optimization. In fact, there is a balance between
+the benefit from the optimization and the execution time needed to solve
+it. Therefore, we have set the number of subregions to 16 rather than 32.
+

\subsection{Energy Model}

We use an energy consumption model proposed by~\cite{ChinhVu} and based on
@@ -826,7 +833,6 @@ energy consumed in active state (9.72 mW) by the time in second for one round
(3600 seconds). According to the interval of initial energy, a sensor may be
alive during at most 20 rounds.

-
\subsection{Metrics}

To evaluate our approach we consider the following performance metrics:
@@ -843,7 +849,8 @@ To evaluate our approach we consider the following performance metrics:
\end{equation*}
where $n^t$ is the number of covered grid points by the active sensors of all
subregions during round $t$ in the current sensing phase and $N$ is total number
-of grid points in the sensing field of the network. In our simulation $N = 51 \times 26 = 1326$ grid points.
+of grid points in the sensing field of the network. In our simulations $N = 51
+\times 26 = 1326$ grid points.
%The accuracy of this method depends on the distance between grids. In our
%simulations, the sensing field has been divided into 50 by 25 grid points, which means
%there are $51 \times 26~ = ~ 1326$ points in total.
@@ -924,16 +931,16 @@ can notice that for the first thirty rounds both DESK and GAF provide a coverage
which is a little bit better than the one of MuDiLCO-T. This is due to the fact
that in comparison with MuDiLCO that uses optimization to put in SLEEP status
redundant sensors, more sensor nodes remain active with DESK and GAF. 
As a
-consequence, when the number of rounds increases, a larger number of nodes
+consequence, when the number of rounds increases, a larger number of node
failures can be observed in DESK and GAF, resulting in a faster decrease of the
coverage ratio. Furthermore, our protocol allows to maintain a coverage ratio
-greater than 50\% for far more rounds.  Overall, the proposed sensor activity
+greater than 50\% for far more rounds. Overall, the proposed sensor activity
scheduling based on optimization in MuDiLCO maintains higher coverage ratios of
the area of interest for a larger number of rounds. It also means that MuDiLCO-T
-save more energy, with less dead nodes, at most for several rounds, and thus
+saves more energy, with fewer dead nodes, at most for several rounds, and thus
should extend the network lifetime.

-\begin{figure}[h!]
+\begin{figure}[t!]
\centering
\includegraphics[scale=0.5] {R1/CR.pdf}
\caption{Average coverage ratio for 150 deployed nodes}
\label{fig3}
\end{figure}

@@ -954,7 +961,7 @@ Obviously, in that case DESK and GAF have less active nodes, since they have
activated many nodes at the beginning. Anyway, MuDiLCO-T activates the available
nodes in a more efficient manner.

-\begin{figure}[h!]
+\begin{figure}[t!]
\centering
\includegraphics[scale=0.5]{R1/ASR.pdf}
\caption{Active sensors ratio for 150 deployed nodes}
\label{fig4}
\end{figure}

@@ -977,7 +984,7 @@ still connected.
%%% The optimization effectively continues as long as a network in a subregion is still connected. A VOIR
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

-\begin{figure}[h!]
+\begin{figure}[t!]
\centering
\includegraphics[scale=0.5]{R1/SR.pdf}
\caption{Cumulative percentage of stopped simulation runs for 150 deployed nodes }
\label{fig6}
\end{figure}

@@ -1022,55 +1029,53 @@ sensors to consider in the integer program.

We observe the impact of the network size and of the number of rounds on the
computation time. Figure~\ref{fig77} gives the average execution times in
-seconds (times needed to solve optimization problem) for different values of
-$T$. 
The original execution time is computed on a laptop DELL with Intel
-Core~i3~2370~M (2.4 GHz) processor (2 cores) and the MIPS (Million Instructions
-Per Second) rate equal to 35330. To be consistent with the use of a sensor node
-with Atmels AVR ATmega103L microcontroller (6 MHz) and a MIPS rate equal to 6 to
-run the optimization resolution, this time is multiplied by 2944.2 $\left(
+seconds (needed to solve optimization problem) for different values of $T$. The
+original execution time is computed on a laptop DELL with Intel Core~i3~2370~M
+(2.4 GHz) processor (2 cores) and the MIPS (Million Instructions Per Second)
+rate equal to 35330. To be consistent with the use of a sensor node with an
+Atmel AVR ATmega103L microcontroller (6 MHz) and a MIPS rate equal to 6 to run
+the optimization resolution, this time is multiplied by 2944.2 $\left(
\frac{35330}{2} \times \frac{1}{6} \right)$ and reported on Figure~\ref{fig77}
-for different network sizes.
+for different network sizes.

-\begin{figure}[h!]
+\begin{figure}[t!]
\centering
\includegraphics[scale=0.5]{R1/T.pdf}
\caption{Execution Time (in seconds)}
\label{fig77}
\end{figure}

-As expected, the execution time increases with the number of rounds
-$T$ taken into account for scheduling of the sensing phase. The times
-obtained for $T=1,3$ or $5$ seems bearable, but for $T=7$ they become
-quickly unsuitable for a sensor node, especially when the sensor
-network size increases. Again, we can notice that if we want to
-schedule the nodes activities for a large number of rounds, we need to
-choose a relevant number of subregion in order to avoid a complicated
-and cumbersome optimization. On the one hand, a large value for $T$
-permits to reduce the energy-overhead due to the three pre-sensing
-phases, on the other hand a leader node may waste a considerable
-amount of energy to solve the optimization problem. 
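The laptop-to-node scaling factor of 2944.2 quoted above follows directly from the two MIPS ratings. A back-of-the-envelope check (the helper function is purely illustrative):

```python
# Scaling of the laptop execution time to the sensor node, as described
# in the text: laptop MIPS (35330) halved for a single core of the two,
# divided by the node's MIPS rate (6, for the ATmega103L).
LAPTOP_MIPS = 35330
LAPTOP_CORES = 2
NODE_MIPS = 6

scale = (LAPTOP_MIPS / LAPTOP_CORES) / NODE_MIPS  # about 2944.2

def node_seconds(laptop_seconds):
    """Estimated optimization time on the node (hypothetical helper)."""
    return laptop_seconds * scale
```

So one second of laptop resolution time corresponds to roughly 49 minutes on the microcontroller, which explains why large values of $T$ quickly become impractical on a node.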
+As expected, the execution time increases with the number of rounds $T$ taken
+into account for scheduling of the sensing phase. The times obtained for
+$T=1,3$ or $5$ seem bearable, but for $T=7$ they quickly become unsuitable for
+a sensor node, especially when the sensor network size increases. Again, we can
+notice that if we want to schedule the nodes' activities for a large number of
+rounds, we need to choose a relevant number of subregions in order to avoid a
+complicated and cumbersome optimization. On the one hand, a large value for $T$
+makes it possible to reduce the energy overhead due to the three pre-sensing
+phases; on the other hand, a leader node may waste a considerable amount of
+energy to solve the optimization problem.
%While MuDiLCO-1, 3, and 5 solves the optimization process with suitable execution times to be used on wireless sensor network because it distributed on larger number of small subregions as well as it is used acceptable number of round(s) T.
We think that in distributed fashion the solving of the optimization problem to
produce T rounds in a subregion can be tackled by sensor nodes. Overall, to be
able to deal with very large networks, a distributed method is clearly required.

\subsection{Network Lifetime}

-The next two figures, Figures~\ref{fig8}(a) and \ref{fig8}(b),
-illustrate the network lifetime for different network sizes,
-respectively for $Lifetime_{95}$ and $Lifetime_{50}$. Both figures
-show that the network lifetime increases together with the number of
-sensor nodes, whatever the protocol, thanks to the node density which
-result in more and more redundant nodes that can be deactivated and
-thus save energy. Compared to the other approaches, our MuDiLCO-T
-protocol maximizes the lifetime of the network. In particular the
-gain in lifetime for a coverage over 95\% is greater than 38\% when
-switching from GAF to MuDiLCO-3.
-The slight decrease that can bee
-observed for MuDiLCO-7 in case of $Lifetime_{95}$ with large wireless
-sensor networks result from the difficulty of the optimization problem
-to be solved by the integer program. This point was already noticed
-in subsection \ref{subsec:EC} devoted to the energy consumption, since
-network lifetime and energy consumption are directly linked.
-
-\begin{figure}[h!]
+The next two figures, Figures~\ref{fig8}(a) and \ref{fig8}(b), illustrate the
+network lifetime for different network sizes, respectively for $Lifetime_{95}$
+and $Lifetime_{50}$. Both figures show that the network lifetime increases
+together with the number of sensor nodes, whatever the protocol, thanks to the
+node density which results in more and more redundant nodes that can be
+deactivated and thus save energy. Compared to the other approaches, our
+MuDiLCO-T protocol maximizes the lifetime of the network. In particular, the
+gain in lifetime for a coverage over 95\% is greater than 38\% when switching
+from GAF to MuDiLCO-3. The slight decrease that can be observed for MuDiLCO-7
+in case of $Lifetime_{95}$ with large wireless sensor networks results from the
+difficulty of the optimization problem to be solved by the integer program.
+This point was already noticed in subsection \ref{subsec:EC} devoted to the
+energy consumption, since network lifetime and energy consumption are directly
+linked.
+
+\begin{figure}[t!]
\centering
\begin{tabular}{cl}
\parbox{9.5cm}{\includegraphics[scale=0.5]{R1/LT95.pdf}} & (a) \\
@@ -1093,43 +1098,45 @@
network lifetime and energy consumption are directly linked.

\section{Conclusion and Future Works}
\label{sec:conclusion}

-In this paper, we have addressed the problem of the coverage and the
-lifetime optimization in wireless sensor networks. This is a key issue
-as sensor nodes have limited resources in terms of memory, energy, and
-computational power.
-To cope with this problem, the field of sensing
-is divided into smaller subregions using the concept of
-divide-and-conquer method, and then we propose a protocol which
-optimizes coverage and lifetime performances in each subregion. Our
-protocol, called MuDiLCO (Multiperiod Distributed Lifetime Coverage
-Optimization) combines two efficient techniques: network leader
-election and sensor activity scheduling.
+In this paper, we have addressed the problem of the coverage and the lifetime
+optimization in wireless sensor networks. This is a key issue as sensor nodes
+have limited resources in terms of memory, energy, and computational power. To
+cope with this problem, the field of sensing is divided into smaller subregions
+using a divide-and-conquer method, and then we propose a protocol which
+optimizes coverage and lifetime performances in each subregion. Our protocol,
+called MuDiLCO (Multiperiod Distributed Lifetime Coverage Optimization),
+combines two efficient techniques: network leader election and sensor activity
+scheduling.
%, where the challenges
%include how to select the most efficient leader in each subregion and
%the best cover sets
%of active nodes that will optimize the network lifetime
%while taking the responsibility of covering the corresponding
%subregion using more than one cover set during the sensing phase.

-The activity scheduling in each subregion works in periods, where each
-period consists of four phases: (i) Information Exchange, (ii) Leader
-Election, (iii) Decision Phase to plan the activity of the sensors
-over $T$ rounds (iv) Sensing Phase itself divided into T rounds.
-
-Simulations results show the relevance of the proposed protocol in
-terms of lifetime, coverage ratio, active sensors ratio, energy
-consumption, execution time.
-Indeed, when dealing with large wireless
-sensor networks, a distributed approach like the one we propose allows
-to reduce the difficulty of a single global optimization problem by
-partitioning it in many smaller problems, one per subregion, that can
-be solved more easily. Nevertheless, results also show that it is not
-possible to plan the activity of sensors over too many rounds, because
-the resulting optimization problem leads to too high resolution time
-and thus to an excessive energy consumption.
+The activity scheduling in each subregion works in periods, where each period
+consists of four phases: (i) Information Exchange, (ii) Leader Election, (iii)
+Decision Phase to plan the activity of the sensors over $T$ rounds, and (iv)
+Sensing Phase itself divided into $T$ rounds.
+
+Simulation results show the relevance of the proposed protocol in terms of
+lifetime, coverage ratio, active sensors ratio, energy consumption, and
+execution time. Indeed, when dealing with large wireless sensor networks, a
+distributed approach like the one we propose allows us to reduce the difficulty
+of a single global optimization problem by partitioning it into many smaller
+problems, one per subregion, that can be solved more easily. Nevertheless,
+results also show that it is not possible to plan the activity of sensors over
+too many rounds, because the resulting optimization problem leads to a
+resolution time that is too high and thus to an excessive energy consumption.
%In future work, we plan to study and propose adjustable sensing range coverage optimization protocol, which computes all active sensor schedules in one time, by using
%optimization methods. This protocol can prolong the network lifetime by minimizing the number of the active sensor nodes near the borders by optimizing the sensing range of sensor nodes.

% use section* for acknowledgement
\section*{Acknowledgment}

-As a Ph.D.
-student, Ali Kadhum IDREES would like to gratefully acknowledge the University
-of Babylon - IRAQ for the financial support and in the same time would like to
-acknowledge Campus France (The French national agency for the promotion of
-higher education, international student services, and international mobility)
-and University of Franche-Comt\'e - FRANCE for all the support in FRANCE.
+As a Ph.D. student, Ali Kadhum IDREES would like to gratefully acknowledge the
+University of Babylon - Iraq for the financial support, Campus France (The
+French national agency for the promotion of higher education, international
+student services, and international mobility), and the University of
+Franche-Comt\'e - France for all the support in France.

%% \linenumbers
-- 
2.39.5