X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/Sensornets15.git/blobdiff_plain/8abf40a6144c2ae97b77ec718fef360e652d01b9..9465a19d403578b9a707d8eb3caf67fb44899dd2:/Example.tex?ds=inline diff --git a/Example.tex b/Example.tex index 38cc14f..a314c77 100644 --- a/Example.tex +++ b/Example.tex @@ -26,7 +26,8 @@ %\title{Authors' Instructions \subtitle{Preparation of Camera-Ready Contributions to SCITEPRESS Proceedings} } -\title{Distributed Lifetime Coverage Optimization Protocol in Wireless Sensor Networks} +\title{Distributed Lifetime Coverage Optimization Protocol \\ + in Wireless Sensor Networks} \author{Ali Kadhum Idrees$^{a,b}$, Karine Deschinkel$^{a}$,\\ Michel Salomon$^{a}$, and Rapha\"el Couturier$^{a}$\\ $^{a}$FEMTO-ST Institute, UMR 6174 CNRS, \\ University Bourgogne Franche-Comt\'e, Belfort, France\\ @@ -55,19 +56,8 @@ email: ali.idness@edu.univ-fcomte.fr,\\ $\lbrace$karine.deschinkel, michel.salom scheduling performed by each elected leader. This two-step process takes place periodically, in order to choose a small set of nodes remaining active for sensing during a time slot. Each set is built to ensure coverage at a low - energy cost, allowing to optimize the network lifetime. -%More precisely, a - %period consists of four phases: (i)~Information Exchange, (ii)~Leader - %Election, (iii)~Decision, and (iv)~Sensing. The decision process, which -% results in an activity scheduling vector, is carried out by a leader node -% through the solving of an integer program. -% MODIF - BEGIN - Simulations are conducted using the discret event simulator - OMNET++. We refer to the characterictics of a Medusa II sensor for - the energy consumption and the computation time. In comparison with - two other existing methods, our approach is able to increase the WSN - lifetime and provides improved coverage performance. } -% MODIF - END + energy cost, allowing the network lifetime to be optimized. Simulations are conducted using the discrete event simulator OMNeT++. 
We refer to the characteristics of a Medusa II sensor for the energy consumption and the computation time. In comparison with two other existing methods, our approach is able to increase the WSN lifetime and provides improved coverage performance. } + %\onecolumn @@ -78,37 +68,47 @@ email: ali.idness@edu.univ-fcomte.fr,\\ $\lbrace$karine.deschinkel, michel.salom \label{sec:introduction} \noindent -Energy efficiency is a crucial issue in wireless sensor networks since sensory +Energy efficiency is a crucial issue in wireless sensor networks since sensory consumption, in order to maximize the network lifetime, represents the major difficulty when designing WSNs. As a consequence, one of the scientific research challenges in WSNs, which has been addressed by a large amount of literature during the last few years, is the design of energy efficient approaches for -coverage and connectivity~\cite{conti2014mobile}. Coverage reflects how well a +coverage and connectivity~\cite{conti2014mobile}. Coverage reflects how well a sensor field is monitored. On the one hand we want to monitor the area of -interest in the most efficient way~\cite{Nayak04}. On the other hand we want to -use as little energy as possible. Sensor nodes are battery-powered with no -means of recharging or replacing, usually due to environmental (hostile or -unpractical environments) or cost reasons. Therefore, it is desired that the -WSNs are deployed with high densities so as to exploit the overlapping sensing -regions of some sensor nodes to save energy by turning off some of them during -the sensing phase to prolong the network lifetime. \textcolor{blue}{A WSN can use various types of sensors such as \cite{ref17,ref19}: thermal, seismic, magnetic, visual, infrared, acoustic, and radar. 
These sensors are capable of observing different physical conditions such as: temperature, humidity, pressure, speed, direction, movement, light, soil makeup, noise levels, presence or absence of certain kinds of objects, and mechanical stress levels on attached objects. Consequently, there is a wide range of WSN applications such as~\cite{ref22}: health-care, environment, agriculture, public safety, military, transportation systems, and industry applications.} - -In this paper we design a protocol that focuses on the area coverage problem +interest in the most efficient way~\cite{Nayak04}, which means + that we want to maintain the best coverage as long as possible. On the other +hand we want to use as little energy as possible. Sensor nodes are +battery-powered with no means of recharging or replacing, usually due to +environmental (hostile or impractical environments) or cost reasons. Therefore, +it is desired that the WSNs are deployed with high densities so as to exploit +the overlapping sensing regions of some sensor nodes to save energy by turning +off some of them during the sensing phase to prolong the network +lifetime. A WSN can use various types of sensors such as + \cite{ref17,ref19}: thermal, seismic, magnetic, visual, infrared, acoustic, + and radar. These sensors are capable of observing different physical + conditions such as: temperature, humidity, pressure, speed, direction, + movement, light, soil makeup, noise levels, presence or absence of certain + kinds of objects, and mechanical stress levels on attached objects. + Consequently, there is a wide range of WSN applications such as~\cite{ref22}: + health-care, environment, agriculture, public safety, military, transportation + systems, and industry applications. + +In this paper we design a protocol that focuses on the area coverage problem with the objective of maximizing the network lifetime. 
Our proposition, the -Distributed Lifetime Coverage Optimization (DiLCO) protocol, maintains the +Distributed Lifetime Coverage Optimization (DiLCO) protocol, maintains the coverage and improves the lifetime in WSNs. The area of interest is first -divided into subregions using a divide-and-conquer algorithm and an activity +divided into subregions using a divide-and-conquer algorithm and an activity scheduling for sensor nodes is then planned by the elected leader in each subregion. In fact, the nodes in a subregion can be seen as a cluster where each -node sends sensing data to the cluster head or the sink node. Furthermore, the +node sends sensing data to the cluster head or the sink node. Furthermore, the activities in a subregion/cluster can continue even if another cluster stops due to too many node failures. Our DiLCO protocol considers periods, where a period -starts with a discovery phase to exchange information between sensors of the -same subregion, in order to choose in a suitable manner a sensor node (the +starts with a discovery phase to exchange information between sensors of the +same subregion, in order to choose in a suitable manner a sensor node (the leader) to carry out the coverage strategy. In each subregion the activation of -the sensors for the sensing phase of the current period is obtained by solving -an integer program. The resulting activation vector is broadcast by a leader -to every node of its subregion. +the sensors for the sensing phase of the current period is obtained by solving +an integer program. The resulting activation vector is broadcast by a leader to +every node of its subregion. % MODIF - BEGIN Our previous paper ~\cite{idrees2014coverage} relies almost exclusively on the @@ -116,16 +116,21 @@ framework of the DiLCO approach and the coverage problem formulation. 
In this paper we made more realistic simulations by taking into account the characteristics of a Medusa II sensor ~\cite{raghunathan2002energy} to measure the energy consumption and the computation time. We have implemented two other -existing \textcolor{blue}{and distributed approaches}(DESK ~\cite{ChinhVu}, and GAF ~\cite{xu2001geography}) in order to compare their performances -with our approach. We also focus on performance analysis based on the number of -subregions. +existing and distributed approaches (DESK~\cite{ChinhVu}, and +GAF~\cite{xu2001geography}) in order to compare their performances with our +approach. We focus on the DESK and GAF protocols for two reasons. + First, our protocol is inspired by both of them: DiLCO uses a regular division + of the area of interest as in GAF and a temporal division in rounds as in + DESK. Second, DESK and GAF are well-known protocols, easy to implement, and + often used as references for comparison. We also focus on performance +analysis based on the number of subregions. % MODIF - END The remainder of the paper continues with Section~\ref{sec:Literature Review} where a review of some related works is presented. The next section describes -the DiLCO protocol, followed in Section~\ref{cp} by the coverage model +the DiLCO protocol, followed in Section~\ref{cp} by the coverage model formulation which is used to schedule the activation of -sensors. Section~\ref{sec:Simulation Results and Analysis} shows the simulation +sensors. Section~\ref{sec:Simulation Results and Analysis} shows the simulation results. The paper ends with a conclusion and some suggestions for further work in Section~\ref{sec:Conclusion and Future Works}. @@ -158,7 +163,7 @@ and the set of active sensor nodes is decided at the beginning of each period requirements (e.g. area monitoring, connectivity, power efficiency). For instance, Jaggi et al. 
\cite{jaggi2006} address the problem of maximizing network lifetime by dividing sensors into the maximum number of disjoint subsets -such that each subset can ensure both coverage and connectivity. A greedy +so that each subset can ensure both coverage and connectivity. A greedy algorithm is applied once to solve this problem and the computed sets are activated in succession to achieve the desired network lifetime. Vu \cite{chin2007}, Padmatvathy et al. \cite{pc10}, propose algorithms working in a @@ -184,6 +189,25 @@ of information can be huge. {\it In order to be suitable for large-scale smaller subregions, and in each one, a node called the leader is in charge for selecting the active sensors for the current period.} +% MODIF - BEGIN + Our approach to select the leader node in a subregion is quite + different from cluster head selection methods used in LEACH + \cite{DBLP:conf/hicss/HeinzelmanCB00} or its variants + \cite{ijcses11}. Contrary to LEACH, the division of the area of interest is + supposed to be performed before the leader election. Moreover, we assume that + the sensors are deployed almost uniformly and with high density over the area + of interest, so that the division is fixed and regular. As in LEACH, our + protocol works in a round-based fashion. In each round, during the pre-sensing phase, + nodes make autonomous decisions. In LEACH, each sensor elects itself to be a + cluster head, and each non-cluster head will determine its cluster for the + round. In our protocol, nodes in the same subregion select their leader. In + both protocols, the amount of remaining energy in each node is taken into + account to promote the nodes that have the most energy to become leader. + Contrary to the LEACH protocol where all sensors will be active during the + sensing phase, our protocol allows a subset of sensors to be deactivated through + an optimization process which significantly reduces the energy consumption. 
+% MODIF - END + A large variety of coverage scheduling algorithms has been developed. Many of the existing algorithms, dealing with the maximization of the number of cover sets, are heuristics. These heuristics involve the construction of a cover set @@ -238,13 +262,13 @@ corresponding to a sensor node is covered by its neighboring nodes if all its primary points are covered. Obviously, the approximation of coverage is more or less accurate according to the number of primary points. - \subsection{Main idea} \label{main_idea} \noindent We start by applying a divide-and-conquer algorithm to partition the area of interest into smaller areas called subregions and then our protocol is -executed simultaneously in each subregion. \textcolor{blue}{Sensor nodes are assumed to -be deployed almost uniformly over the region and the subdivision of the area of interest is regular.} +executed simultaneously in each subregion. Sensor nodes are + assumed to be deployed almost uniformly over the region and the subdivision of + the area of interest is regular. \begin{figure}[ht!] \centering @@ -290,23 +314,37 @@ and each sensor node will have five possible status in the network: \end{itemize} %\end{enumerate} -An outline of the protocol implementation is given by Algorithm~\ref{alg:DiLCO} +An outline of the protocol implementation is given by Algorithm~\ref{alg:DiLCO} which describes the execution of a period by a node (denoted by $s_j$ for a -sensor node indexed by $j$). At the beginning a node checks whether it has -enough energy \textcolor{blue}{(its energy should be greater than a fixed treshold $E_{th}$)} to stay active during the next sensing phase. If yes, it exchanges -information with all the other nodes belonging to the same subregion: it -collects from each node its position coordinates, remaining energy ($RE_j$), ID, -and the number of one-hop neighbors still alive. \textcolor{blue}{INFO packet contains two parts: header and data payload. 
The sensor ID is included in the header, where the header size is 8 bits. The data part includes position coordinates (64 bits), remaining energy (32 bits), and the number of one-hop live neighbors (8 bits). Therefore the size of the INFO packet is 112 bits.} Once the first phase is -completed, the nodes of a subregion choose a leader to take the decision based -on the following criteria with decreasing importance: larger number of -neighbors, larger remaining energy, and then in case of equality, larger index. -After that, if the sensor node is leader, it will execute the integer program -algorithm (see Section~\ref{cp}) which provides a set of sensors planned to be -active in the next sensing phase. As leader, it will send an Active-Sleep packet -to each sensor in the same subregion to indicate it if it has to be active or -not. Alternately, if the sensor is not the leader, it will wait for the -Active-Sleep packet to know its state for the coming sensing phase. - +sensor node indexed by $j$). At the beginning a node checks whether it has +enough energy (its energy should be greater than a fixed + threshold $E_{th}$) to stay active during the next sensing phase. If yes, it +exchanges information with all the other nodes belonging to the same subregion: +it collects from each node its position coordinates, remaining energy ($RE_j$), +ID, and the number of one-hop neighbors still alive. An INFO + packet contains two parts: a header and a data payload. The sensor ID is included + in the header, where the header size is 8 bits. The data part includes + position coordinates (64 bits), remaining energy (32 bits), and the number of + one-hop live neighbors (8 bits). Therefore the size of the INFO packet is 112 + bits. Once the first phase is completed, the nodes of a subregion choose a +leader to take the decision based on the following criteria with decreasing +importance: larger number of neighbors, larger remaining energy, and then in +case of equality, larger index. 
After that, if the sensor node is leader, it +will solve an integer program (see Section~\ref{cp}). This + integer program contains boolean variables $X_j$ where $X_j=1$ means that + sensor $j$ will be active in the next sensing phase. Only sensors with enough + remaining energy are involved in the integer program ($J$ is the set of all + sensors involved). As the leader consumes energy (computation energy is + denoted by $E^{comp}$) to solve the optimization problem, it will be included + in the integer program only if it has enough energy to achieve the computation + and to stay alive during the next sensing phase, that is to say if $RE_j > + E^{comp}+E_{th}$. Once the optimization problem is solved, each leader will + send an ActiveSleep packet to each sensor in the same subregion to indicate + whether it has to be active or not. Otherwise, if the sensor is not the leader, it + will wait for the ActiveSleep packet to know its state for the coming sensing + phase. +%which provides a set of sensors planned to be +%active in the next sensing phase. \begin{algorithm}[h!] @@ -354,7 +392,7 @@ We formulate the coverage optimization problem with an integer program. The objective function consists in minimizing the undercoverage and the overcoverage of the area as suggested in \cite{pedraza2006}. The area coverage problem is expressed as the coverage of a fraction of points called primary points. Details on the choice and the number of primary points can be found in \cite{idrees2014coverage}. The set of primary points is denoted by $P$ -and the set of sensors by $J$. As we consider a boolean disk coverage model, we use the boolean indicator $\alpha_{jp}$ which is equal to 1 if the primary point $p$ is in the sensing range of the sensor $j$. The binary variable $X_j$ represents the activation or not of the sensor $j$. So we can express the number of active sensors that cover the primary point $p$ by $\sum_{j \in J} \alpha_{jp} * X_{j}$. 
We deduce the overcoverage denoted by $\Theta_p$ of the primary point $p$ : +and the set of alive sensors by $J$. As we consider a boolean disk coverage model, we use the boolean indicator $\alpha_{jp}$ which is equal to 1 if the primary point $p$ is in the sensing range of the sensor $j$. The binary variable $X_j$ represents the activation or not of the sensor $j$. So we can express the number of active sensors that cover the primary point $p$ by $\sum_{j \in J} \alpha_{jp} X_{j}$. We deduce the overcoverage denoted by $\Theta_p$ of the primary point $p$: \begin{equation} \Theta_{p} = \left \{ \begin{array}{l l} @@ -398,17 +436,16 @@ X_{j} \in \{0,1\}, &\forall j \in J \end{array} \right. \end{equation} -The objective function is a weighted sum of overcoverage and undercoverage. The goal is to limit the overcoverage in order to activate a minimal number of sensors while simultaneously preventing undercoverage. Both weights $w_\theta$ and $w_U$ must be carefully chosen in -order to guarantee that the maximum number of points are covered during each -period. +The objective function is a weighted sum of overcoverage and undercoverage. The goal is to limit the overcoverage in order to activate a minimal number of sensors while simultaneously preventing undercoverage. By + choosing $w_{U}$ much larger than $w_{\theta}$, the coverage of a + maximum of primary points is ensured. Then for the same number of covered + primary points, the solution with a minimal number of active sensors is + preferred. +%Both weights $w_\theta$ and $w_U$ must be carefully chosen in +%order to guarantee that the maximum number of points are covered during each +%period. % MODIF - END - - - - - - \iffalse \indent Our model is based on the model proposed by \cite{pedraza2006} where the @@ -714,7 +751,7 @@ nodes, and thus enables the extension of the network lifetime. \parskip 0pt \begin{figure}[t!] 
\centering - \includegraphics[scale=0.45] {CR.pdf} + \includegraphics[scale=0.475] {CR.pdf} \caption{Coverage ratio} \label{fig3} \end{figure} @@ -735,7 +772,7 @@ used for the different performance metrics. \begin{figure}[h!] \centering -\includegraphics[scale=0.45]{EC.pdf} +\includegraphics[scale=0.475]{EC.pdf} \caption{Energy consumption per period} \label{fig95} \end{figure} @@ -771,20 +808,20 @@ Figure~\ref{fig8}. \begin{figure}[h!] \centering -\includegraphics[scale=0.45]{T.pdf} +\includegraphics[scale=0.475]{T.pdf} \caption{Execution time in seconds} \label{fig8} \end{figure} Figure~\ref{fig8} shows that DiLCO-32 has very low execution times in comparison -with other DiLCO versions, because the activity scheduling is tackled by a +with other DiLCO versions, because the activity scheduling is tackled by a larger number of leaders and each leader solves an integer problem with a limited number of variables and constraints. Conversely, DiLCO-2 requires to solve an optimization problem with half of the network nodes and thus presents a high execution time. Nevertheless if we refer to Figure~\ref{fig3}, we observe that DiLCO-32 is slightly less efficient than DilCO-16 to maintain as long as -possible high coverage. In fact an excessive subdivision of the area of interest -prevents it to ensure a good coverage especially on the borders of the +possible high coverage. In fact an excessive subdivision of the area of interest +prevents it from ensuring good coverage, especially on the borders of the subregions. Thus, the optimal number of subregions can be seen as a trade-off between execution time and coverage performance. @@ -798,7 +835,7 @@ network lifetime. \begin{figure}[h!] \centering -\includegraphics[scale=0.45]{LT.pdf} +\includegraphics[scale=0.475]{LT.pdf} \caption{Network lifetime} \label{figLT95} \end{figure} @@ -806,37 +843,38 @@ network lifetime. 
As highlighted by Figure~\ref{figLT95}, when the coverage level is relaxed ($50\%$) the network lifetime also improves. This observation reflects the fact that the higher the coverage performance, the more nodes must be active to -ensure the wider monitoring. For a similar level of coverage, DiLCO outperforms +ensure the wider monitoring. For a similar level of coverage, DiLCO outperforms DESK and GAF for the lifetime of the network. More specifically, if we focus on -the larger level of coverage ($95\%$) in the case of our protocol, the subdivision -in $16$~subregions seems to be the most appropriate. +the larger level of coverage ($95\%$) in the case of our protocol, the +subdivision in $16$~subregions seems to be the most appropriate. \section{\uppercase{Conclusion and future work}} \label{sec:Conclusion and Future Works} -A crucial problem in WSN is to schedule the sensing activities of the different -nodes in order to ensure both coverage of the area of interest and longer +A crucial problem in WSN is to schedule the sensing activities of the different +nodes in order to ensure both coverage of the area of interest and longer network lifetime. The inherent limitations of sensor nodes, in energy provision, -communication and computing capacities, require protocols that optimize the use -of the available resources to fulfill the sensing task. To address this -problem, this paper proposes a two-step approach. Firstly, the field of sensing +communication and computing capacities, require protocols that optimize the use +of the available resources to fulfill the sensing task. To address this +problem, this paper proposes a two-step approach. Firstly, the field of sensing is divided into smaller subregions using the concept of divide-and-conquer method. Secondly, a distributed protocol called Distributed Lifetime Coverage -Optimization is applied in each subregion to optimize the coverage and lifetime -performances. 
In a subregion, our protocol consists in electing a leader node which will then perform a sensor activity scheduling. The challenges include how -to select the most efficient leader in each subregion and the best +to select the most efficient leader in each subregion and the best representative set of active nodes to ensure a high level of coverage. To assess the performance of our approach, we compared it with two other approaches using -many performance metrics like coverage ratio or network lifetime. We have also -studied the impact of the number of subregions chosen to subdivide the area of +many performance metrics like coverage ratio or network lifetime. We have also +studied the impact of the number of subregions chosen to subdivide the area of interest, considering different network sizes. The experiments show that -increasing the number of subregions improves the lifetime. The more subregions there are, the more robust the network is against random disconnection -resulting from dead nodes. However, for a given sensing field and network size -there is an optimal number of subregions. Therefore, in case of our simulation -context a subdivision in $16$~subregions seems to be the most relevant. The -optimal number of subregions will be investigated in the future. +increasing the number of subregions improves the lifetime. The more subregions +there are, the more robust the network is against random disconnection resulting +from dead nodes. However, for a given sensing field and network size there is +an optimal number of subregions. Therefore, in the case of our simulation context, a +subdivision into $16$~subregions seems to be the most relevant. The optimal number +of subregions will be investigated in the future. 
\section*{\uppercase{Acknowledgements}} @@ -848,7 +886,7 @@ the Labex ACTION program (contract ANR-11-LABX-01-01). %\vfill \bibliographystyle{plain} {\small -\bibliography{Example}} +\bibliography{biblio}} %\vfill \end{document}
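As an aside to the patch above, the leader-election rule it adds to the protocol description (criteria with decreasing importance: larger number of alive neighbors, larger remaining energy, and on ties, larger index) can be sketched in a few lines. This is an illustrative sketch only: the function name and the tuple layout are hypothetical, not part of the DiLCO implementation.

```python
# Hypothetical sketch of the leader-election rule described in the patch:
# nodes in a subregion elect the node with, in decreasing importance, the
# larger number of alive neighbors, the larger remaining energy, and, in
# case of equality, the larger index.

def elect_leader(nodes):
    """nodes: list of (node_id, alive_neighbors, remaining_energy) tuples,
    as gathered during the information-exchange phase.
    Returns the id of the elected leader."""
    # max() with a tuple key compares criteria lexicographically, which
    # mirrors the "decreasing importance" ordering of the protocol.
    best = max(nodes, key=lambda n: (n[1], n[2], n[0]))
    return best[0]
```

For instance, a node with six alive neighbors beats any node with five, regardless of energy; among nodes tied on neighbors and energy, the larger index wins.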
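Similarly, the weighted over/undercoverage objective rewritten in the patch (minimize the sum over primary points of w_theta * Theta_p + w_U * U_p, with w_U much larger than w_theta) can be checked on toy instances by brute-force enumeration. The sketch below is illustrative only: all names are hypothetical, and the protocol itself solves the same model with an exact integer-programming solver rather than enumerating all 2^|J| activation vectors.

```python
# Illustrative brute-force check of the weighted over/undercoverage model.
from itertools import product

def solve_coverage(alpha, w_theta=1.0, w_u=100.0):
    """alpha[j][p] == 1 iff primary point p lies in the sensing range of
    sensor j. Returns (X, cost) where X minimizes
        sum_p (w_theta * Theta_p + w_u * U_p)
    with Theta_p the overcoverage of point p (extra covering sensors) and
    U_p = 1 iff point p is left uncovered, consistently with the constraint
        sum_j alpha[j][p] * X_j - Theta_p + U_p = 1  for every p."""
    n_sensors = len(alpha)
    n_points = len(alpha[0])
    best_x, best_cost = None, float("inf")
    for x in product((0, 1), repeat=n_sensors):  # all activation vectors
        cost = 0.0
        for p in range(n_points):
            covered = sum(alpha[j][p] * x[j] for j in range(n_sensors))
            theta = max(covered - 1, 0)   # overcoverage of point p
            u = 1 if covered == 0 else 0  # undercoverage indicator
            cost += w_theta * theta + w_u * u
        if cost < best_cost:
            best_x, best_cost = list(x), cost
    return best_x, best_cost
```

With w_u much larger than w_theta, any solution leaving a point uncovered is dominated by one covering it, so full coverage with the fewest active sensors is selected, as the patch argues.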