X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/JournalMultiPeriods.git/blobdiff_plain/8029b7a5810c94a0bf1f2e0b4b37432a875a5015..851c4bad39e11ad78c3f12eb0b7324fcaa29f60b:/article.tex?ds=sidebyside diff --git a/article.tex b/article.tex index 99b5e0e..329426f 100644 --- a/article.tex +++ b/article.tex @@ -41,7 +41,7 @@ %% for the whole article with \linenumbers. %% \usepackage{lineno} -\journal{Ad Hoc Networks} +\journal{Journal of Supercomputing} \begin{document} @@ -84,46 +84,36 @@ %e-mail: ali.idness@edu.univ-fcomte.fr, \\ %$\lbrace$karine.deschinkel, michel.salomon, raphael.couturier$\rbrace$@univ-fcomte.fr.} - \author{Ali Kadhum Idrees$^{a,b}$, Karine Deschinkel$^{a}$, \\ -Michel Salomon$^{a}$ and Rapha\"el Couturier $^{a}$ \\ - $^{a}${\em{FEMTO-ST Institute, UMR 6174 CNRS, \\ - University Bourgogne Franche-Comt\'e, Belfort, France}} \\ - $^{b}${\em{Department of Computer Science, University of Babylon, Babylon, Iraq}} -} - + Michel Salomon$^{a}$, and Rapha\"el Couturier $^{a}$ \\ + $^{a}${\em{FEMTO-ST Institute, DISC department, UMR 6174 CNRS, \\ + Univ. Bourgogne Franche-Comt\'e (UBFC), Belfort, France}} \\ + $^{b}${\em{Department of Computer Science, University of Babylon, Babylon, Iraq}}} \begin{abstract} -%One of the fundamental challenges in Wireless Sensor Networks (WSNs) -%is the coverage preservation and the extension of the network lifetime -%continuously and effectively when monitoring a certain area (or -%region) of interest. Coverage and lifetime are two paramount problems in Wireless Sensor Networks -(WSNs). In this paper, a method called Multiround Distributed Lifetime Coverage +(WSNs). In this paper, a method called Multiround Distributed Lifetime Coverage Optimization protocol (MuDiLCO) is proposed to maintain the coverage and to improve the lifetime in wireless sensor networks. The area of interest is first -divided into subregions and then the MuDiLCO protocol is distributed on the -sensor nodes in each subregion. The proposed MuDiLCO protocol works in periods -during which sets of sensor nodes are scheduled to remain active for a number of -rounds during the sensing phase, to ensure coverage so as to maximize the -lifetime of WSN. \textcolor{green}{The decision process is carried out by a leader node, which -solves an optimization problem to produce the best representative sets to be used -during the rounds of the sensing phase. The optimization problem formulated as an integer program is solved to optimality through a branch-and-Bound method for small instances. For larger instances, the best feasible solution found by the solver after a given time limit threshold is considered. } -%The decision process is carried out by a leader node, which -%solves an integer program to produce the best representative sets to be used -%during the rounds of the sensing phase. -%\textcolor{red}{The integer program is solved by either GLPK solver or Genetic Algorithm (GA)}. -Compared with some existing protocols, -simulation results based on multiple criteria (energy consumption, coverage -ratio, and so on) show that the proposed protocol can prolong efficiently the -network lifetime and improve the coverage performance. - +divided into subregions and then the MuDiLCO protocol is distributed on the +sensor nodes in each subregion. The proposed MuDiLCO protocol works in periods +during which sets of sensor nodes are scheduled, with one set for each round of +a period, to remain active during the sensing phase and thus ensure coverage so +as to maximize the WSN lifetime. 
The decision process is carried out by a +leader node, which solves an optimization problem to produce the best +representative sets to be used during the rounds of the sensing phase. The +optimization problem formulated as an integer program is solved to optimality +through a Branch-and-Bound method for small instances. For larger instances, +the best feasible solution found by the solver after a given time limit +threshold is considered. Compared with some existing protocols, simulation +results based on multiple criteria (energy consumption, coverage ratio, and so +on) show that the proposed protocol can prolong efficiently the network lifetime +and improve the coverage performance. \end{abstract} \begin{keyword} Wireless Sensor Networks, Area Coverage, Network Lifetime, Optimization, Scheduling, Distributed Computation. - \end{keyword} \end{frontmatter} @@ -151,32 +141,36 @@ regions to turn-off redundant sensor nodes and thus save energy. In this paper, we concentrate on the area coverage problem, with the objective of maximizing the network lifetime by using an optimized multiround scheduling. -% One of the major scientific research challenges in WSNs, which are addressed by a large number of literature during the last few years is to design energy efficient approaches for coverage and connectivity in WSNs~\cite{conti2014mobile}. The coverage problem is one of the -%fundamental challenges in WSNs~\cite{Nayak04} that consists in monitoring efficiently and continuously -%the area of interest. The limited energy of sensors represents the main challenge in the WSNs -%design~\cite{Sudip03}, where it is difficult to replace and/or recharge their batteries because the the area of interest nature (such as hostile environments) and the cost. So, it is necessary that a WSN -%deployed with high density because spatial redundancy can then be exploited to increase the lifetime of the network. However, turn on all the sensor nodes, which monitor the same region at the same time -%leads to decrease the lifetime of the network. To extend the lifetime of the network, the main idea is to take advantage of the overlapping sensing regions of some sensor nodes to save energy by turning off -%some of them during the sensing phase~\cite{Misra05}. WSNs require energy-efficient solutions to improve the network lifetime that is constrained by the limited power of each sensor node ~\cite{Akyildiz02}. - -%In this paper, we concentrate on the area coverage problem, with the objective -%of maximizing the network lifetime by using an optimized multirounds scheduling. -%The area of interest is divided into subregions. - -% Each period includes four phases starts with a discovery phase to exchange information among the sensors of the subregion, in order to choose in a suitable manner a sensor node as leader to carry out a coverage strategy. This coverage strategy involves the solving of an integer program by the leader, to optimize the coverage and the lifetime in the subregion by producing a sets of sensor nodes in order to take the mission of coverage preservation during several rounds in the sensing phase. In fact, the nodes in a subregion can be seen as a cluster where each node sends sensing data to the cluster head or the sink node. Furthermore, the activities in a subregion/cluster can continue even if another cluster stops due to too many node failures. - -The remainder of the paper is organized as follows. The next section -% Section~\ref{rw} -reviews the related works in the field. 
Section~\ref{pd} is devoted to the -description of MuDiLCO protocol. Section~\ref{exp} shows the simulation results -obtained using the discrete event simulator OMNeT++ \cite{varga}. They fully -demonstrate the usefulness of the proposed approach. Finally, we give -concluding remarks and some suggestions for future works in -Section~\ref{sec:conclusion}. - - -%%RC : Related works good for a phd thesis but too long for a paper. Ali you need to learn to .... summarize :-) -\section{Related works} % Trop proche de l'etat de l'art de l'article de Zorbas ? +The MuDiLCO protocol (for Multiround Distributed Lifetime Coverage Optimization +protocol) presented in this paper is an extension of the approach introduced +in~\cite{idrees2015distributed}. +% In~\cite{idrees2015distributed}, the protocol is +%deployed over only two subregions. Simulation results have shown that it was +%more interesting to divide the area into several subregions, given the +%computation complexity. + +\textcolor{blue}{ Compared to our previous work~\cite{idrees2015distributed}, + in this paper we study the possibility of dividing the sensing phase into + multiple rounds. We make a multiround optimization, + while previously it was a single round optimization. The idea is to + take advantage of the pre-sensing phase to plan the sensor's activity for + several rounds instead of one, thus saving energy. In addition, when the + optimization problem becomes more complex, its resolution is stopped after a + given time threshold. In this paper we also analyze the performance of our + protocol according to the number of primary points used (the area coverage is + replaced by the coverage of a set of particular points called primary points, + see Section~\ref{pp}).} + +The remainder of the paper is organized as follows. The next section reviews the +related works in the field. Section~\ref{pd} is devoted to the description of +MuDiLCO protocol. Section~\ref{exp} introduces the experimental framework, it +describes the simulation setup and the different metrics used to assess the +performances. Section~\ref{analysis} shows the simulation results obtained +using the discrete event simulator OMNeT++ \cite{varga}. They fully demonstrate +the usefulness of the proposed approach. Finally, we give concluding remarks +and some suggestions for future works in Section~\ref{sec:conclusion}. + +\section{Related works} \label{rw} \indent This section is dedicated to the various approaches proposed in the @@ -197,53 +191,52 @@ algorithms in WSNs according to several design choices: The choice of non-disjoint or disjoint cover sets (sensors participate or not in many cover sets) can be added to the above list. -% The independency in the cover set (i.e. whether the cover sets are disjoint or non-disjoint) \cite{zorbas2010solving} is another design choice that can be added to the above list. \subsection{Centralized approaches} The major approach is to divide/organize the sensors into a suitable number of cover sets where each set completely covers an interest region and to activate these cover sets successively. The centralized algorithms always provide nearly -or close to optimal solution since the algorithm has global view of the whole +or close to optimal solution since the algorithm has global view of the whole network. Note that centralized algorithms have the advantage of requiring very low processing power from the sensor nodes, which usually have limited -processing capabilities. 
The main drawback of this kind of approach is its +processing capabilities. The main drawback of this kind of approach is its higher cost in communications, since the node that will make the decision needs -information from all the sensor nodes. \textcolor{green} {Exact or heuristics approaches are designed to provide cover sets. - %(Moreover, centralized approaches usually -%suffer from the scalability problem, making them less competitive as the network -%size increases.) -Contrary to exact methods, heuristic methods can handle very large and centralized problems. They are proposed to reduce computational overhead such as energy consumption, delay and generally increase in -the network lifetime. } +information from all the sensor nodes. Exact or heuristic + approaches are designed to provide cover sets. Contrary to exact methods, + heuristic ones can handle very large and centralized problems. They are + proposed to reduce computational overhead such as energy consumption, delay, + and generally allow to increase the network lifetime. The first algorithms proposed in the literature consider that the cover sets are disjoint: a sensor node appears in exactly one of the generated cover -sets~\cite{abrams2004set,cardei2005improving,Slijepcevic01powerefficient}. In -the case of non-disjoint algorithms \cite{pujari2011high}, sensors may -participate in more than one cover set. In some cases, this may prolong the +sets~\cite{abrams2004set,cardei2005improving,Slijepcevic01powerefficient}. In +the case of non-disjoint algorithms \cite{pujari2011high}, sensors may +participate in more than one cover set. In some cases, this may prolong the lifetime of the network in comparison to the disjoint cover set algorithms, but -designing algorithms for non-disjoint cover sets generally induces a higher +designing algorithms for non-disjoint cover sets generally induces a higher order of complexity. Moreover, in case of a sensor's failure, non-disjoint -scheduling policies are less resilient and reliable because a sensor may be +scheduling policies are less resilient and reliable because a sensor may be involved in more than one cover sets. -%For instance, the proposed work in ~\cite{cardei2005energy, berman04} -In~\cite{yang2014maximum}, the authors have considered a linear programming +In~\cite{yang2014maximum}, the authors have considered a linear programming approach to select the minimum number of working sensor nodes, in order to -preserve a maximum coverage and to extend lifetime of the network. Cheng et +preserve a maximum coverage and to extend lifetime of the network. Cheng et al.~\cite{cheng2014energy} have defined a heuristic algorithm called Cover Sets Balance (CSB), which chooses a set of active nodes using the tuple (data coverage range, residual energy). Then, they have introduced a new Correlated -Node Set Computing (CNSC) algorithm to find the correlated node set for a given -node. After that, they proposed a High Residual Energy First (HREF) node -selection algorithm to minimize the number of active nodes so as to prolong the -network lifetime. Various centralized methods based on column generation -approaches have also been -proposed~\cite{gentili2013,castano2013column,rossi2012exact,deschinkel2012column}. -\textcolor{green}{In~\cite{gentili2013}, authors highlight the trade-off between the network lifetime and the coverage percentage. They show that network lifetime can be hugely improved by decreasing the coverage ratio. 
} +Node Set Computing (CNSC) algorithm to find the correlated node set for a given +node. After that, they proposed a High Residual Energy First (HREF) node +selection algorithm to minimize the number of active nodes so as to prolong the +network lifetime. Various centralized methods based on column generation +approaches have also been +proposed~\cite{gentili2013,castano2013column,rossi2012exact,deschinkel2012column}. +In~\cite{gentili2013}, authors highlight the trade-off between + the network lifetime and the coverage percentage. They show that network + lifetime can be hugely improved by decreasing the coverage ratio. \subsection{Distributed approaches} -%{\bf Distributed approaches} + In distributed and localized coverage algorithms, the required computation to schedule the activity of sensor nodes will be done by the cooperation among neighboring nodes. These algorithms may require more computation power for the @@ -251,44 +244,40 @@ processing by the cooperating sensor nodes, but they are more scalable for large WSNs. Localized and distributed algorithms generally result in non-disjoint set covers. -Many distributed algorithms have been developed to perform the scheduling so as -to preserve coverage, see for example -\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02, yardibi2010distributed, - prasad2007distributed,Misra}. Distributed algorithms typically operate in +Many distributed algorithms have been developed to perform the scheduling so as +to preserve coverage, see for example +\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02, yardibi2010distributed, + prasad2007distributed,Misra}. Distributed algorithms typically operate in rounds for a predetermined duration. At the beginning of each round, a sensor -exchanges information with its neighbors and makes a decision to either remain +exchanges information with its neighbors and makes a decision to either remain turned on or to go to sleep for the round. This decision is basically made on -simple greedy criteria like the largest uncovered area +simple greedy criteria like the largest uncovered area \cite{Berman05efficientenergy} or maximum uncovered targets \cite{lu2003coverage}. The Distributed Adaptive Sleep Scheduling Algorithm -(DASSA) \cite{yardibi2010distributed} does not require location information of +(DASSA) \cite{yardibi2010distributed} does not require location information of sensors while maintaining connectivity and satisfying a user defined coverage target. In DASSA, nodes use the residual energy levels and feedback from the sink for scheduling the activity of their neighbors. This feedback mechanism -reduces the randomness in scheduling that would otherwise occur due to the -absence of location information. In \cite{ChinhVu}, the author have designed a -novel distributed heuristic, called Distributed Energy-efficient Scheduling for +reduces the randomness in scheduling that would otherwise occur due to the +absence of location information. In \cite{ChinhVu}, the authors have designed a +novel distributed heuristic, called Distributed Energy-efficient Scheduling for k-coverage (DESK), which ensures that the energy consumption among the sensors is balanced and the lifetime maximized while the coverage requirement is -maintained. This heuristic works in rounds, requires only one-hop neighbor +maintained. This heuristic works in rounds, requires only one-hop neighbor information, and each sensor decides its status (active or sleep) based on the perimeter coverage model from~\cite{Huang:2003:CPW:941350.941367}. 
-%Our Work, which is presented in~\cite{idrees2014coverage} proposed a coverage optimization protocol to improve the lifetime in -%heterogeneous energy wireless sensor networks. -%In this work, the coverage protocol distributed in each sensor node in the subregion but the optimization take place over the the whole subregion. We consider only distributing the coverage protocol over two subregions. - The works presented in \cite{Bang, Zhixin, Zhang} focus on coverage-aware, distributed energy-efficient, and distributed clustering methods respectively, -which aim at extending the network lifetime, while the coverage is ensured. +which aim at extending the network lifetime, while the coverage is ensured. More recently, Shibo et al. \cite{Shibo} have expressed the coverage problem as a minimum weight submodular set cover problem and proposed a Distributed -Truncated Greedy Algorithm (DTGA) to solve it. They take advantage from both +Truncated Greedy Algorithm (DTGA) to solve it. They take advantage from both temporal and spatial correlations between data sensed by different sensors, and -leverage prediction, to improve the lifetime. In \cite{xu2001geography}, Xu et -al. have described an algorithm, called Geographical Adaptive Fidelity (GAF), -which uses geographic location information to divide the area of interest into -fixed square grids. Within each grid, it keeps only one node staying awake to +leverage prediction, to improve the lifetime. In \cite{xu2001geography}, Xu et +al. have described an algorithm, called Geographical Adaptive Fidelity (GAF), +which uses geographic location information to divide the area of interest into +fixed square grids. Within each grid, it keeps only one node staying awake to take the responsibility of sensing and communication. Some other approaches (outside the scope of our work) do not consider a @@ -296,264 +285,43 @@ synchronized and predetermined time-slot where the sensors are active or not. Indeed, each sensor maintains its own timer and its wake-up time is randomized \cite{Ye03} or regulated \cite{cardei2005maximum} over time. -The MuDiLCO protocol (for Multiround Distributed Lifetime Coverage Optimization -protocol) presented in this paper is an extension of the approach introduced -in~\cite{idrees2014coverage}. In~\cite{idrees2014coverage}, the protocol is -deployed over only two subregions. Simulation results have shown that it was -more interesting to divide the area into several subregions, given the -computation complexity. Compared to our previous paper, in this one we study the -possibility of dividing the sensing phase into multiple rounds and we also add -an improved model of energy consumption to assess the efficiency of our -approach. In fact, in this paper we make a multiround optimization, while it was -a single round optimization in our previous work. \textcolor{green}{The idea is to take advantage of the pre-sensing phase - to plan the sensor's activity for several rounds instead of one, thus saving energy. In addition, when the optimization problem becomes more complex, its resolution is stopped after a given time threshold}. - -\iffalse - -\subsection{Centralized Approaches} -%{\bf Centralized approaches} -The major approach is to divide/organize the sensors into a suitable number of -set covers where each set completely covers an interest region and to activate -these set covers successively. The centralized algorithms always provide nearly -or close to optimal solution since the algorithm has global view of the whole -network. 
Note that centralized algorithms have the advantage of requiring very -low processing power from the sensor nodes, which usually have limited -processing capabilities. The main drawback of this kind of approach is its -higher cost in communications, since the node that will take the decision needs -information from all the sensor nodes. Moreover, centralized approaches usually -suffer from the scalability problem, making them less competitive as the network -size increases. - -The first algorithms proposed in the literature consider that the cover sets are -disjoint: a sensor node appears in exactly one of the generated cover sets. For -instance, Slijepcevic and Potkonjak \cite{Slijepcevic01powerefficient} have -proposed an algorithm, which allocates sensor nodes in mutually independent sets -to monitor an area divided into several fields. Their algorithm builds a cover -set by including in priority the sensor nodes which cover critical fields, that -is to say fields that are covered by the smallest number of sensors. The time -complexity of their heuristic is $O(n^2)$ where $n$ is the number of sensors. -Abrams et al.~\cite{abrams2004set} have designed three approximation algorithms -for a variation of the set k-cover problem, where the objective is to partition -the sensors into covers such that the number of covers that include an area, -summed over all areas, is maximized. Their work builds upon previous work -in~\cite{Slijepcevic01powerefficient} and the generated cover sets do not -provide complete coverage of the monitoring zone. - -In \cite{cardei2005improving}, the authors have proposed a method to efficiently -compute the maximum number of disjoint set covers such that each set can monitor -all targets. They first transform the problem into a maximum flow problem, which -is formulated as a mixed integer programming (MIP). Then their heuristic uses -the output of the MIP to compute disjoint set covers. Results show that this -heuristic provides a number of set covers slightly larger compared to -\cite{Slijepcevic01powerefficient}, but with a larger execution time due to the -complexity of the mixed integer programming resolution. - -Zorbas et al. \cite{zorbas2010solving} presented a centralized greedy algorithm -for the efficient production of both node disjoint and non-disjoint cover sets. -Compared to algorithm's results of Slijepcevic and Potkonjak -\cite{Slijepcevic01powerefficient}, their heuristic produces more disjoint cover -sets with a slight growth rate in execution time. When producing non-disjoint -cover sets, both Static-CCF and Dynamic-CCF algorithms, where CCF means that -they use a cost function called Critical Control Factor, provide cover sets -offering longer network lifetime than those produced by \cite{cardei2005energy}. -Also, they require a smaller number of participating nodes in order to achieve -these results. - -In the case of non-disjoint algorithms \cite{pujari2011high}, sensors may -participate in more than one cover set. In some cases, this may prolong the -lifetime of the network in comparison to the disjoint cover set algorithms, but -designing algorithms for non-disjoint cover sets generally induces a higher -order of complexity. Moreover, in case of a sensor's failure, non-disjoint -scheduling policies are less resilient and less reliable because a sensor may be -involved in more than one cover sets. 
For instance, Cardei et -al.~\cite{cardei2005energy} present a linear programming (LP) solution and a -greedy approach to extend the sensor network lifetime by organizing the sensors -into a maximal number of non-disjoint cover sets. Simulation results show that -by allowing sensors to participate in multiple sets, the network lifetime -increases compared with related work~\cite{cardei2005improving}. -In~\cite{berman04}, the authors have formulated the lifetime problem and -suggested another (LP) technique to solve this problem. A centralized solution -based on the Garg-K\"{o}nemann algorithm~\cite{garg98}, provably near the -optimal solution, is also proposed. - -In~\cite{yang2014maximum}, the authors have proposed a linear programming -approach for selecting the minimum number of working sensor nodes, in order to -as to preserve a maximum coverage and extend lifetime of the network. Cheng et -al.~\cite{cheng2014energy} have defined a heuristic algorithm called Cover Sets -Balance (CSB), which choose a set of active nodes using the tuple (data coverage -range, residual energy). Then, they have introduced a new Correlated Node Set -Computing (CNSC) algorithm to find the correlated node set for a given node. -After that, they proposed a High Residual Energy First (HREF) node selection -algorithm to minimize the number of active nodes so as to prolong the network -lifetime. Various centralized methods based on column generation approaches have -also been proposed~\cite{castano2013column,rossi2012exact,deschinkel2012column}. - -\subsection{Distributed approaches} -%{\bf Distributed approaches} -In distributed and localized coverage algorithms, the required computation to -schedule the activity of sensor nodes will be done by the cooperation among -neighboring nodes. These algorithms may require more computation power for the -processing by the cooperating sensor nodes, but they are more scalable for large -WSNs. Localized and distributed algorithms generally result in non-disjoint set -covers. - -Many distributed algorithms have been developed to perform the scheduling so as -to preserve coverage, see for example -\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02,yardibi2010distributed}. -Distributed algorithms typically operate in rounds for a predetermined -duration. At the beginning of each round, a sensor exchanges information with -its neighbors and makes a decision to either remain turned on or to go to sleep -for the round. This decision is basically made on simple greedy criteria like -the largest uncovered area \cite{Berman05efficientenergy} or maximum uncovered -targets \cite{lu2003coverage}. In \cite{Tian02}, the scheduling scheme is -divided into rounds, where each round has a self-scheduling phase followed by a -sensing phase. Each sensor broadcasts a message containing the node~ID and the -node location to its neighbors at the beginning of each round. A sensor -determines its status by a rule named off-duty eligible rule, which tells him to -turn off if its sensing area is covered by its neighbors. A back-off scheme is -introduced to let each sensor delay the decision process with a random period of -time, in order to avoid simultaneous conflicting decisions between nodes and -lack of coverage on any area. In \cite{prasad2007distributed} a model for -capturing the dependencies between different cover sets is defined and it -proposes localized heuristic based on this dependency. 
The algorithm consists of -two phases, an initial setup phase during which each sensor computes and -prioritizes the covers and a sensing phase during which each sensor first -decides its on/off status, and then remains on or off for the rest of the -duration. - -The authors in \cite{yardibi2010distributed} have developed a Distributed -Adaptive Sleep Scheduling Algorithm (DASSA) for WSNs with partial coverage. -DASSA does not require location information of sensors while maintaining -connectivity and satisfying a user defined coverage target. In DASSA, nodes use -the residual energy levels and feedback from the sink for scheduling the -activity of their neighbors. This feedback mechanism reduces the randomness in -scheduling that would otherwise occur due to the absence of location -information. In \cite{ChinhVu}, the author have proposed a novel distributed -heuristic, called Distributed Energy-efficient Scheduling for k-coverage (DESK), -which ensures that the energy consumption among the sensors is balanced and the -lifetime maximized while the coverage requirement is maintained. This heuristic -works in rounds, requires only one-hop neighbor information, and each sensor -decides its status (active or sleep) based on the perimeter coverage model -proposed in \cite{Huang:2003:CPW:941350.941367}. - -%Our Work, which is presented in~\cite{idrees2014coverage} proposed a coverage optimization protocol to improve the lifetime in -%heterogeneous energy wireless sensor networks. -%In this work, the coverage protocol distributed in each sensor node in the subregion but the optimization take place over the the whole subregion. We consider only distributing the coverage protocol over two subregions. - -The works presented in \cite{Bang, Zhixin, Zhang} focus on coverage-aware, -distributed energy-efficient, and distributed clustering methods respectively, -which aim to extend the network lifetime, while the coverage is ensured. S. -Misra et al. \cite{Misra} have proposed a localized algorithm for coverage in -sensor networks. The algorithm conserve the energy while ensuring the network -coverage by activating the subset of sensors with the minimum overlap area. The -proposed method preserves the network connectivity by formation of the network -backbone. More recently, Shibo et al. \cite{Shibo} have expressed the coverage -problem as a minimum weight submodular set cover problem and proposed a -Distributed Truncated Greedy Algorithm (DTGA) to solve it. They take advantage -from both temporal and spatial correlations between data sensed by different -sensors, and leverage prediction, to improve the lifetime. In -\cite{xu2001geography}, Xu et al. have proposed an algorithm, called -Geographical Adaptive Fidelity (GAF), which uses geographic location information -to divide the area of interest into fixed square grids. Within each grid, it -keeps only one node staying awake to take the responsibility of sensing and -communication. - -Some other approaches (outside the scope of our work) do not consider a -synchronized and predetermined period of time where the sensors are active or -not. Indeed, each sensor maintains its own timer and its wake-up time is -randomized \cite{Ye03} or regulated \cite{cardei2005maximum} over time. - -The MuDiLCO protocol (for Multiround Distributed Lifetime Coverage Optimization -protocol) presented in this paper is an extension of the approach introduced -in~\cite{idrees2014coverage}. In~\cite{idrees2014coverage}, the protocol is -deployed over only two subregions. 
Simulation results have shown that it was -more interesting to divide the area into several subregions, given the -computation complexity. Compared to our previous paper, in this one we study the -possibility of dividing the sensing phase into multiple rounds and we also add -an improved model of energy consumption to assess the efficiency of our -approach. - - - - -\fi -%The main contributions of our MuDiLCO Protocol can be summarized as follows: -%(1) The high coverage ratio, (2) The reduced number of active nodes, (3) The distributed optimization over the subregions in the area of interest, (4) The distributed dynamic leader election at each round based on some priority factors that led to energy consumption balancing among the nodes in the same subregion, (5) The primary point coverage model to represent each sensor node in the network, (6) The activity scheduling based optimization on the subregion, which are based on the primary point coverage model to activate as less number as possible of sensor nodes for a multirounds to take the mission of the coverage in each subregion, (7) The very low energy consumption, (8) The higher network lifetime. -%\section{Preliminaries} -%\label{Pr} - -%Network Lifetime - -%\subsection{Network Lifetime} -%Various definitions exist for the lifetime of a sensor -%network~\cite{die09}. The main definitions proposed in the literature are -%related to the remaining energy of the nodes or to the coverage percentage. -%The lifetime of the network is mainly defined as the amount -%of time during which the network can satisfy its coverage objective (the -%amount of time that the network can cover a given percentage of its -%area or targets of interest). In this work, we assume that the network -%is alive until all nodes have been drained of their energy or the -%sensor network becomes disconnected, and we measure the coverage ratio -%during the WSN lifetime. Network connectivity is important because an -%active sensor node without connectivity towards a base station cannot -%transmit information on an event in the area that it monitors. - \section{MuDiLCO protocol description} \label{pd} -%Our work will concentrate on the area coverage by design -%and implementation of a strategy, which efficiently selects the active -%nodes that must maintain both sensing coverage and network -%connectivity and at the same time improve the lifetime of the wireless -%sensor network. But, requiring that all physical points of the -%considered region are covered may be too strict, especially where the -%sensor network is not dense. Our approach represents an area covered -%by a sensor as a set of primary points and tries to maximize the total -%number of primary points that are covered in each round, while -%minimizing overcoverage (points covered by multiple active sensors -%simultaneously). - -%In this section, we introduce a Multiround Distributed Lifetime Coverage Optimization protocol, which is called MuDiLCO. It is distributed on each subregion in the area of interest. It is based on two efficient techniques: network -%leader election and sensor activity scheduling for coverage preservation and energy conservation continuously and efficiently to maximize the lifetime in the network. 
-%The main features of our MuDiLCO protocol: -%i)It divides the area of interest into subregions by using divide-and-conquer concept, ii)It requires only the information of the nodes within the subregion, iii) it divides the network lifetime into periods, which consists in round(s), iv)It based on the autonomous distributed decision by the nodes in the subregion to elect the Leader, v)It apply the activity scheduling based optimization on the subregion, vi) it achieves an energy consumption balancing among the nodes in the subregion by selecting different nodes as a leader during the network lifetime, vii) It uses the optimization to select the best representative non-disjoint sets of sensors in the subregion by optimize the coverage and the lifetime over the area of interest, viii)It uses our proposed primary point coverage model, which represent the sensing range of the sensor as a set of points, which are used by the our optimization algorithm, ix) It uses a simple energy model that takes communication, sensing and computation energy consumptions into account to evaluate the performance of our Protocol. - -\subsection{Assumptions} - -We consider a randomly and uniformly deployed network consisting of static -wireless sensors. The sensors are deployed in high density to ensure initially -a high coverage ratio of the interested area. We assume that all nodes are -homogeneous in terms of communication and processing capabilities, and -heterogeneous from the point of view of energy provision. Each sensor is -supposed to get information on its location either through hardware such as -embedded GPS or through location discovery algorithms. - -To model a sensor node's coverage area, we consider the boolean disk coverage -model which is the most widely used sensor coverage model in the -literature. Thus, each sensor has a constant sensing range $R_s$ and all space -points within the disk centered at the sensor with the radius of the sensing -range is said to be covered by this sensor. We also assume that the -communication range satisfies $R_c \geq 2R_s$. In fact, Zhang and -Zhou~\cite{Zhang05} proved that if the transmission range fulfills the previous -hypothesis, a complete coverage of a convex area implies connectivity among the -active nodes. - -%Instead of working with a continuous coverage area, we make it discrete by considering for each sensor a set of points called primary points. Consequently, we assume that the sensing disk defined by a sensor is covered if all of its primary points are covered. The choice of number and locations of primary points is the subject of another study not presented here. - - -\indent Instead of working with the coverage area, we consider for each sensor a set of points called primary points~\cite{idrees2014coverage}. We assume that the sensing disk defined by a sensor is covered if all the primary points of this sensor are covered. By knowing the position (point center: ($p_x,p_y$)) of a wireless sensor node and it's sensing range $R_s$, we define up to 25 primary points $X_1$ to $X_{25}$ as decribed on Figure~\ref{fig1}. The coordinates of the primary points are the following :\\ +\subsection{Assumptions and primary points} +\label{pp} + +\textcolor{blue}{The assumptions and the coverage model are identical to those presented + in~\cite{idrees2015distributed}. We consider a scenario in which sensors are deployed in high + density to initially ensure a high coverage ratio of the interested area. 
Each + sensor has a predefined sensing range $R_s$, an initial energy supply + (eventually different from each other) and is supposed to be equipped with + a module to locate its geographical positions. All space points within the + disk centered at the sensor with the radius of the sensing range are said to be + covered by this sensor.} + +\indent Instead of working with the coverage area, we consider for each sensor a +set of points called primary points~\cite{idrees2014coverage}. We assume that +the sensing disk defined by a sensor is covered if all the primary points of +this sensor are covered. By knowing the position of wireless sensor node +(centered at the the position $\left(p_x,p_y\right)$) and its sensing range +$R_s$, we define up to 25 primary points $X_1$ to $X_{25}$ as described on +Figure~\ref{fig1}. The optimal number of primary points is investigated in +section~\ref{ch4:sec:04:06}. + +The coordinates of the primary points are defined as follows:\\ %$(p_x,p_y)$ = point center of wireless sensor node\\ $X_1=(p_x,p_y)$ \\ $X_2=( p_x + R_s * (1), p_y + R_s * (0) )$\\ $X_3=( p_x + R_s * (-1), p_y + R_s * (0)) $\\ $X_4=( p_x + R_s * (0), p_y + R_s * (1) )$\\ $X_5=( p_x + R_s * (0), p_y + R_s * (-1 )) $\\ -$X_6= ( p_x + R_s * (\frac{-\sqrt{2}}{2}), p_y + R_s * (0)) $\\ -$X_7=( p_x + R_s * (\frac{\sqrt{2}}{2}), p_y + R_s * (0))$\\ +$X_6=( p_x + R_s * (\frac{-\sqrt{2}}{2}), p_y + R_s * (\frac{\sqrt{2}}{2})) $\\ +$X_7=( p_x + R_s * (\frac{\sqrt{2}}{2}), p_y + R_s * (\frac{\sqrt{2}}{2})) $\\ $X_8=( p_x + R_s * (\frac{-\sqrt{2}}{2}), p_y + R_s * (\frac{-\sqrt{2}}{2})) $\\ $X_9=( p_x + R_s * (\frac{\sqrt{2}}{2}), p_y + R_s * (\frac{-\sqrt{2}}{2})) $\\ -$X_{10}=( p_x + R_s * (\frac{-\sqrt{2}}{2}), p_y + R_s * (\frac{\sqrt{2}}{2})) $\\ -$X_{11}=( p_x + R_s * (\frac{\sqrt{2}}{2}), p_y + R_s * (\frac{\sqrt{2}}{2})) $\\ +$X_{10}= ( p_x + R_s * (\frac{-\sqrt{2}}{2}), p_y + R_s * (0)) $\\ +$X_{11}=( p_x + R_s * (\frac{\sqrt{2}}{2}), p_y + R_s * (0))$\\ $X_{12}=( p_x + R_s * (0), p_y + R_s * (\frac{\sqrt{2}}{2})) $\\ $X_{13}=( p_x + R_s * (0), p_y + R_s * (\frac{-\sqrt{2}}{2})) $\\ $X_{14}=( p_x + R_s * (\frac{\sqrt{3}}{2}), p_y + R_s * (\frac{1}{2})) $\\ @@ -569,186 +337,154 @@ $X_{23}=( p_x + R_s * (\frac{- 1}{2}), p_y + R_s * (\frac{\sqrt{3}}{2})) $\\ $X_{24}=( p_x + R_s * (\frac{- 1}{2}), p_y + R_s * (\frac{-\sqrt{3}}{2})) $\\ $X_{25}=( p_x + R_s * (\frac{1}{2}), p_y + R_s * (\frac{-\sqrt{3}}{2})) $. - - -\begin{figure} %[h!] -\centering - \begin{multicols}{2} -\centering -\includegraphics[scale=0.28]{fig21.pdf}\\~ (a) -\includegraphics[scale=0.28]{principles13.pdf}\\~(c) -\hfill \hfill -\includegraphics[scale=0.28]{fig25.pdf}\\~(e) -\includegraphics[scale=0.28]{fig22.pdf}\\~(b) -\hfill \hfill -\includegraphics[scale=0.28]{fig24.pdf}\\~(d) -\includegraphics[scale=0.28]{fig26.pdf}\\~(f) -\end{multicols} -\caption{Wireless Sensor Node represented by (a) 5, (b) 9, (c) 13, (d) 17, (e) 21 and (f) 25 primary points respectively} -\label{fig1} +\begin{figure}[h] + \centering + \includegraphics[scale=0.375]{fig26.pdf} + \label{fig1} + \caption{Wireless sensor node represented by up to 25~primary points} \end{figure} - - - - - - -%By knowing the position (point center: ($p_x,p_y$)) of a wireless -%sensor node and its $R_s$, we calculate the primary points directly -%based on the proposed model. 
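To make the enumeration above easier to check, the short Python sketch below (purely illustrative, not taken from the authors' implementation) generates the first thirteen primary points $X_1$ to $X_{13}$ from a sensor position $(p_x,p_y)$ and sensing range $R_s$, and verifies that every generated point lies inside the sensing disk of the boolean disk coverage model. The function names and the numerical values are assumptions made only for this sketch; the remaining points $X_{14}$ to $X_{25}$ are obtained in the same way from the coordinates listed above.
\begin{verbatim}
# Illustrative sketch only (not the authors' code): build the first 13
# primary points X_1..X_13 of a sensor located at (px, py) with sensing
# range rs, following the coordinates enumerated above.
from math import sqrt, hypot

def primary_points_13(px, py, rs):
    h = sqrt(2) / 2
    offsets = [(0, 0),                               # X_1 (center)
               (1, 0), (-1, 0), (0, 1), (0, -1),     # X_2 .. X_5
               (-h, h), (h, h), (-h, -h), (h, -h),   # X_6 .. X_9
               (-h, 0), (h, 0), (0, h), (0, -h)]     # X_10 .. X_13
    return [(px + rs * dx, py + rs * dy) for dx, dy in offsets]

def covered(point, px, py, rs, eps=1e-9):
    # Boolean disk model (with a small tolerance for floating-point noise):
    # a point is covered if its distance to the sensor does not exceed rs.
    return hypot(point[0] - px, point[1] - py) <= rs + eps

pts = primary_points_13(12.0, 7.5, 5.0)   # arbitrary example values
assert all(covered(p, 12.0, 7.5, 5.0) for p in pts)
# X_14 .. X_25 extend 'offsets' with the (+-1/2, +-sqrt(3)/2)
# combinations given in the text.
\end{verbatim}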
We use these primary points (that can be -%increased or decreased if necessary) as references to ensure that the -%monitored region of interest is covered by the selected set of -%sensors, instead of using all the points in the area. - -%The MuDiLCO protocol works in periods and executed at each sensor node in the network, each sensor node can still sense data while being in -%LISTENING mode. Thus, by entering the LISTENING mode at the beginning of each round, -%sensor nodes still executing sensing task while participating in the leader election and decision phases. More specifically, The MuDiLCO protocol algorithm works as follow: -%Initially, the sensor node check it's remaining energy in order to participate in the current round. Each sensor node determines it's position and it's subregion based Embedded GPS or Location Discovery Algorithm. After that, All the sensors collect position coordinates, current remaining energy, sensor node id, and the number of its one-hop live neighbors during the information exchange. It stores this information into a list $L$. -%The sensor node enter in listening mode waiting to receive ActiveSleep packet from the leader after the decision to apply multi-round activity scheduling during the sensing phase. Each sensor node will execute the Algorithm~1 to know who is the leader. After that, if the sensor node is leader, It will execute the integer program algorithm ( see section~\ref{cp}) to optimize the coverage and the lifetime in it's subregion. After the decision, the optimization approach will produce the cover sets of sensor nodes to take the mission of coverage during the sensing phase for $T$ rounds. The leader will send ActiveSleep packet to each sensor node in the subregion to inform him to it's schedule for $T$ rounds during the period of sensing, either Active or sleep until the starting of next period. Based on the decision, the leader as other nodes in subregion, either go to be active or go to be sleep based on it's schedule for $T$ rounds during current sensing phase. the other nodes in the same subregion will stay in listening mode waiting the ActiveSleep packet from the leader. After finishing the time period for sensing, which are includes $T$ rounds, all the sensor nodes in the same subregion will start new period by executing the MuDiLCO protocol and the lifetime in the subregion will continue until all the sensor nodes are died or the network becomes disconnected in the subregion. \subsection{Background idea} -%%RC : we need to clarify the difference between round and period. Currently it seems to be the same (for me at least). -%The area of interest can be divided using the divide-and-conquer strategy into -%smaller areas, called subregions, and then our MuDiLCO protocol will be -%implemented in each subregion in a distributed way. - -\textcolor{green}{The WSN area of interest is, in a first step, divided into regular homogeneous -subregions using a divide-and-conquer algorithm. In a second step our protocol -will be executed in a distributed way in each subregion simultaneously to -schedule nodes' activities for one sensing period. Sensor nodes are assumed to -be deployed almost uniformly over the region. The regular subdivision is made -such that the number of hops between any pairs of sensors inside a subregion is -less than or equal to 3.} - -As can be seen in Figure~\ref{fig2}, our protocol works in periods fashion, -where each is divided into 4 phases: Information~Exchange, Leader~Election, -Decision, and Sensing. 
Each sensing phase may be itself divided into $T$ rounds -\textcolor{green} {of equal duration} and for each round a set of sensors (a cover set) is responsible for the sensing -task. In this way a multiround optimization process is performed during each -period after Information~Exchange and Leader~Election phases, in order to -produce $T$ cover sets that will take the mission of sensing for $T$ rounds. -\begin{figure}[ht!] -\centering \includegraphics[width=100mm]{Modelgeneral.pdf} % 70mm + +The WSN area of interest is, at first, divided into regular homogeneous +subregions using a divide-and-conquer algorithm. Then, our protocol will be +executed in a distributed way in each subregion simultaneously to schedule +nodes' activities for one sensing period. Sensor nodes are assumed to be +deployed almost uniformly and with high density over the region. The regular +subdivision is made so that the number of hops between any pairs of sensors +inside a subregion is less than or equal to 3. + +As can be seen in Figure~\ref{fig2}, our protocol works in periods fashion, +where each period is divided into 4~phases: Information~Exchange, +Leader~Election, Decision, and Sensing. \textcolor{blue}{Compared to + the DiLCO protocol described in~\cite{idrees2015distributed},} each sensing phase is itself +divided into $T$ rounds of equal duration and for each round a set of sensors (a +cover set) is responsible for the sensing task. In this way a multiround +optimization process is performed during each period after Information~Exchange +and Leader~Election phases, in order to produce $T$ cover sets that will take +the mission of sensing for $T$ +rounds. \textcolor{blue}{Algorithm~\ref{alg:MuDiLCO} is executed by each sensor + node~$s_j$ (with enough remaining energy) at the beginning of a period.} +\begin{figure}[t!] +\centering \includegraphics[width=125mm]{Modelgeneral.pdf} % 70mm \caption{The MuDiLCO protocol scheme executed on each node} \label{fig2} -\end{figure} - -%Each period is divided into 4 phases: Information Exchange, -%Leader Election, Decision, and Sensing. Each sensing phase may be itself divided into $T$ rounds. -% set cover responsible for the sensing task. -%For each round a set of sensors (said a cover set) is responsible for the sensing task. - -This protocol minimizes the impact of unexpected node failure (not due to batteries -running out of energy), because it works in periods. -%This protocol is reliable against an unexpected node failure, because it works in periods. -%%RC : why? I am not convinced - On the one hand, if a node failure is detected before making the -decision, the node will not participate to this phase, and, on the other hand, -if the node failure occurs after the decision, the sensing task of the network -will be temporarily affected: only during the period of sensing until a new -period starts. \textcolor{green}{The duration of the rounds are predefined parameters. Round duration should be long enough to hide the system control overhead and short enough to minimize the negative effects in case of node failure.} - -%%RC so if there are at least one failure per period, the coverage is bad... -%%MS if we want to be reliable against many node failures we need to have an -%% overcoverage... - -The energy consumption and some other constraints can easily be taken into -account, since the sensors can update and then exchange their information -(including their residual energy) at the beginning of each period. 
However, the -pre-sensing phases (Information Exchange, Leader Election, and Decision) are -energy consuming for some nodes, even when they do not join the network to -monitor the area. +\end{figure} -%%%%%%%%%%%%%%%%%parler optimisation%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +\begin{algorithm}[h!] + \BlankLine + \If{ $RE_j \geq E_{R}$ }{ + \emph{$s_j.status$ = COMMUNICATION}\; + \emph{Send $INFO()$ packet to other nodes in the subregion}\; + \emph{Wait $INFO()$ packet from other nodes in the subregion}\; + + \emph{LeaderID = Leader election}\; + \If{$ s_j.ID = LeaderID $}{ + \emph{$s_j.status$ = COMPUTATION}\; + \emph{$\left\{\left(X_{1,k},\dots,X_{T,k}\right)\right\}_{k \in J}$ = + Execute Integer Program Algorithm($T,J$)}\; + \emph{$s_j.status$ = COMMUNICATION}\; + \emph{Send $ActiveSleep()$ packet to each node $k$ in subregion: a packet \\ + with vector of activity scheduling $(X_{1,k},\dots,X_{T,k})$}\; + \emph{Update $RE_j $}\; + } + \Else{ + \emph{$s_j.status$ = LISTENING}\; + \emph{Wait $ActiveSleep()$ packet from the Leader}\; + \emph{Update $RE_j $}\; + } + } + \Else { Exclude $s_j$ from entering in the current sensing phase} + +\caption{MuDiLCO($s_j$)} +\label{alg:MuDiLCO} +\end{algorithm} -We define two types of packets that will be used by the proposed protocol: +\textcolor{blue}{As already described in~\cite{idrees2015distributed}}, two +types of packets are used by the proposed protocol: \begin{enumerate}[(a)] -\item INFO packet: such a packet will be sent by each sensor node to all the +\item INFO packet: such a packet will be sent by each sensor node to all the nodes inside a subregion for information exchange. \item Active-Sleep packet: sent by the leader to all the nodes inside a - subregion to inform them to remain Active or to go Sleep during the sensing + subregion to inform them to remain Active or to go Sleep during the sensing phase. \end{enumerate} There are five status for each sensor node in the network: \begin{enumerate}[(a)] \item LISTENING: sensor node is waiting for a decision (to be active or not); -\item COMPUTATION: sensor node has been elected as leader and applies the +\item COMPUTATION: sensor node has been elected as leader and applies the optimization process; \item ACTIVE: sensor node is taking part in the monitoring of the area; \item SLEEP: sensor node is turned off to save energy; \item COMMUNICATION: sensor node is transmitting or receiving packet. \end{enumerate} -Below, we describe each phase in more details. - -\subsection{Information Exchange Phase} - -Each sensor node $j$ sends its position, remaining energy $RE_j$, and the number -of neighbors $NBR_j$ to all wireless sensor nodes in its subregion by using an -INFO packet (containing information on position coordinates, current remaining -energy, sensor node ID, number of its one-hop live neighbors) and then waits for -packets sent by other nodes. After that, each node will have information about -all the sensor nodes in the subregion. In our model, the remaining energy -corresponds to the time that a sensor can live in the active mode. +This protocol minimizes the impact of unexpected node failure (not due to +batteries running out of energy), because it works in periods. On the one hand, +if a node failure is detected before making the decision, the node will not +participate to this phase, and, on the other hand, if the node failure occurs +after the decision, the sensing task of the network will be temporarily +affected: only during the period of sensing until a new period starts. 
The +duration of the rounds is a predefined parameter. Round duration should be long +enough to hide the system control overhead and short enough to minimize the +negative effects in case of node failures. -%\subsection{\textbf Working Phase:} - -%The working phase works in rounding fashion. Each round include 3 steps described as follow : - -\subsection{Leader Election phase} +The energy consumption and some other constraints can easily be taken into +account, since the sensors can update and then exchange their information +(including their residual energy) at the beginning of each period. However, the +pre-sensing phases (Information Exchange, Leader Election, and Decision) are +energy consuming for some nodes, even when they do not join the network to +monitor the area. -This step consists in choosing the Wireless Sensor Node Leader (WSNL), which -will be responsible for executing the coverage algorithm. Each subregion in the -area of interest will select its own WSNL independently for each period. All -the sensor nodes cooperate to elect a WSNL. The nodes in the same subregion -will select the leader based on the received information from all other nodes -in the same subregion. The selection criteria are, in order of importance: -larger number of neighbors, larger remaining energy, and then in case of +At the beginning of each period, each sensor which has enough remaining energy +($RE_j$) to be alive during at least one round ($E_{R}$ is the amount of energy +required to be alive during one round) sends (line 3 of +Algorithm~\ref{alg:MuDiLCO}) its position, remaining energy $RE_j$, and the +number of neighbors $NBR_j$ to all wireless sensor nodes in its subregion by +using an INFO packet (containing information on position coordinates, current +remaining energy, sensor node ID, number of its one-hop live neighbors) and then +waits for packets sent by other nodes (line 4). + +After that, each node will have information about all the sensor nodes in the +subregion. The nodes in the same subregion will select (line 5) a Wireless +Sensor Node Leader (WSNL) based on the received information from all other nodes +in the same subregion. The selection criteria are, in order of importance: +larger number of neighbors, larger remaining energy, and then in case of equality, larger index. Observations on previous simulations suggest to use the number of one-hop neighbors as the primary criterion to reduce energy consumption due to the communications. -%the more priority selection factor is the number of $1-hop$ neighbors, $NBR j$, which can minimize the energy consumption during the communication Significantly. -%The pseudo-code for leader election phase is provided in Algorithm~1. - -%Where $E_{th}$ is the minimum energy needed to stay active during the sensing phase. As shown in Algorithm~1, the more priority selection factor is the number of $1-hop$ neighbours, $NBR j$, which can minimize the energy consumption during the communication Significantly. - -\subsection{Decision phase} - -Each WSNL will \textcolor{green}{ solve an integer program to select which cover sets will be -activated in the following sensing phase to cover the subregion to which it -belongs. $T$ cover sets will be produced, one for each round. The WSNL will send an Active-Sleep packet to each sensor in the subregion based on the algorithm's results, indicating if the sensor should be active or not in -each round of the sensing phase. 
} -%Each WSNL will \textcolor{red}{ execute an optimization algorithm (see section \ref{oa})} to select which cover sets will be -%activated in the following sensing phase to cover the subregion to which it -%belongs. The \textcolor{red}{optimization algorithm} will produce $T$ cover sets, one for each round. The WSNL will send an Active-Sleep packet to each sensor in the subregion based on the algorithm's results, indicating if the sensor should be active or not in -%each round of the sensing phase. - - -%solve an integer program - - - - - - - -%\section{\textcolor{red}{ Optimization Algorithm for Multiround Lifetime Coverage Optimization}} -%\label{oa} -As shown in Algorithm~\ref{alg:MuDiLCO}, the leader will execute an optimization algorithm based on an integer program. The integer program is based on the model -proposed by \cite{pedraza2006} with some modifications, where the objective is -to find a maximum number of disjoint cover sets. To fulfill this goal, the -authors proposed an integer program which forces undercoverage and overcoverage -of targets to become minimal at the same time. They use binary variables -$x_{jl}$ to indicate if sensor $j$ belongs to cover set $l$. In our model, we -consider binary variables $X_{t,j}$ to determine the possibility of activating -sensor $j$ during round $t$ of a given sensing phase. We also consider primary -points as targets. The set of primary points is denoted by $P$ and the set of -sensors by $J$. Only sensors able to be alive during at least one round are -involved in the integer program. - -%parler de la limite en energie Et pour un round +%Each WSNL will solve an integer program to select which cover +% sets will be activated in the following sensing phase to cover the subregion +% to which it belongs. $T$ cover sets will be produced, one for each round. The +% WSNL will send an Active-Sleep packet to each sensor in the subregion based on +% the algorithm's results, indicating if the sensor should be active or not in +% each round of the sensing phase. +\subsection{Multiround Optimization model} +\label{mom} + +As shown in Algorithm~\ref{alg:MuDiLCO} at line 8, the leader (WNSL) will +execute an optimization algorithm based on an integer program to select the +cover sets to be activated in the following sensing phase to cover the subregion +to which it belongs. $T$ cover sets will be produced, one for each round. The +WSNL will send an Active-Sleep packet to each sensor in the subregion based on +the algorithm's results (line 10), indicating if the sensor should be active or +not in each round of the sensing phase. + +The integer program is based on the model proposed by \cite{pedraza2006} with +some modifications, where the objective is to find a maximum number of disjoint +cover sets. To fulfill this goal, the authors proposed an integer program which +forces undercoverage and overcoverage of targets to become minimal at the same +time. They use binary variables $x_{jl}$ to indicate if sensor $j$ belongs to +cover set $l$. In our model, we consider binary variables $X_{t,j}$ to +determine the possibility of activating sensor $j$ during round $t$ of a given +sensing phase. We also consider primary points as targets. The set of primary +points is denoted by $P$ and the set of sensors by $J$. Only sensors able to be +alive during at least one round are involved in the integer program. 
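As an illustration of how such a multiround program can be assembled and solved in practice, consider the following sketch based on a generic MILP library (PuLP with its default CBC solver is used here only as a stand-in for the Branch-and-Bound solver mentioned in the abstract, and this is not the authors' implementation). The data structures \texttt{alpha}, \texttt{RE} and \texttt{E\_R}, the weight values, and the time limit are illustrative assumptions; the constraints are written in a form consistent with equations~(\ref{eq16}) and~(\ref{eq144}) detailed in the remainder of this subsection.
\begin{verbatim}
# Hedged sketch (not the authors' implementation): assemble and solve the
# multiround model with the PuLP MILP library.  alpha[j][p] is the coverage
# indicator, RE[j] the remaining energy of sensor j, E_R the energy needed
# to stay alive during one round, T the number of rounds.
import pulp

def multiround_schedule(alpha, RE, E_R, T, W_theta=1, W_U=10**5):
    J = range(len(alpha))         # alive sensors of the subregion
    P = range(len(alpha[0]))      # primary points of the subregion
    prob = pulp.LpProblem("MuDiLCO_multiround", pulp.LpMinimize)

    X = pulp.LpVariable.dicts("X", (range(T), J), cat="Binary")
    Theta = pulp.LpVariable.dicts("Theta", (range(T), P), lowBound=0)
    U = pulp.LpVariable.dicts("U", (range(T), P), cat="Binary")

    # Objective: weighted overcoverage plus undercoverage over all rounds.
    prob += pulp.lpSum(W_theta * Theta[t][p] + W_U * U[t][p]
                       for t in range(T) for p in P)

    # Coverage balance for every primary point and round (one formulation
    # consistent with the definitions of Theta and U).
    for t in range(T):
        for p in P:
            prob += (pulp.lpSum(alpha[j][p] * X[t][j] for j in J)
                     - Theta[t][p] + U[t][p] == 1)

    # Energy: a sensor may be active only in as many rounds as its
    # remaining energy allows.
    for j in J:
        prob += pulp.lpSum(X[t][j] for t in range(T)) * E_R <= RE[j]

    # Stop the search after a time limit and keep the best feasible
    # solution, as done for large instances (timeLimit needs PuLP >= 2.x).
    prob.solve(pulp.PULP_CBC_CMD(msg=False, timeLimit=120))
    return [[int(X[t][j].value()) for j in J] for t in range(T)]
\end{verbatim}
The returned nested list plays the role of the activity schedule $(X_{1,k},\dots,X_{T,k})$ that the leader sends in the Active-Sleep packets of Algorithm~\ref{alg:MuDiLCO}.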
+\textcolor{blue}{Note that the proposed integer program is an + extension of the one formulated in~\cite{idrees2015distributed}, variables are now indexed in + addition with the number of round $t$.} For a primary point $p$, let $\alpha_{j,p}$ denote the indicator function of whether the point $p$ is covered, that is: @@ -777,7 +513,7 @@ We define the Overcoverage variable $\Theta_{t,p}$ as: \begin{array}{l l} 0 & \mbox{if the primary point $p$}\\ & \mbox{is not covered during round $t$,}\\ - \left( \sum_{j \in J} \alpha_{jp} * X_{tj} \right)- 1 & \mbox{otherwise.}\\ + \left( \sum_{j \in J} \alpha_{jp} * X_{t,j} \right)- 1 & \mbox{otherwise.}\\ \end{array} \right. \label{eq13} \end{equation} @@ -796,7 +532,7 @@ U_{t,p} = \left \{ Our coverage optimization problem can then be formulated as follows: \begin{equation} - \min \sum_{t=1}^{T} \sum_{p=1}^{P} \left(W_{\theta}* \Theta_{t,p} + W_{U} * U_{t,p} \right) \label{eq15} + \min \sum_{t=1}^{T} \sum_{p=1}^{|P|} \left(W_{\theta}* \Theta_{t,p} + W_{U} * U_{t,p} \right) \label{eq15} \end{equation} Subject to @@ -821,12 +557,6 @@ U_{t,p} \in \lbrace0,1\rbrace, \hspace{10 mm}\forall p \in P, t = 1,\dots,T \la \Theta_{t,p} \geq 0 \hspace{10 mm}\forall p \in P, t = 1,\dots,T \label{eq178} \end{equation} -%\begin{equation} -%(W_{\theta}+W_{\psi} = P) \label{eq19} -%\end{equation} - -%%RC why W_{\theta} is not defined (only one sentence)? How to define in practice Wtheta and Wu? - \begin{itemize} \item $X_{t,j}$: indicates whether or not the sensor $j$ is actively sensing during round $t$ (1 if yes and 0 if not); @@ -839,446 +569,130 @@ U_{t,p} \in \lbrace0,1\rbrace, \hspace{10 mm}\forall p \in P, t = 1,\dots,T \la The first group of constraints indicates that some primary point $p$ should be covered by at least one sensor and, if it is not always the case, overcoverage -and undercoverage variables help balancing the restriction equations by taking +and undercoverage variables help balancing the restriction equations by taking positive values. The constraint given by equation~(\ref{eq144}) guarantees that the sensor has enough energy ($RE_j$ corresponds to its remaining energy) to be alive during the selected rounds knowing that $E_{R}$ is the amount of energy required to be alive during one round. -There are two main objectives. First, we limit the overcoverage of primary -points in order to activate a minimum number of sensors. Second we prevent the -absence of monitoring on some parts of the subregion by minimizing the -undercoverage. The weights $W_\theta$ and $W_U$ must be properly chosen so as -to guarantee that the maximum number of points are covered during each round. -%% MS W_theta is smaller than W_u => problem with the following sentence -In our simulations priority is given to the coverage by choosing $W_{U}$ very +There are two main objectives. First, we limit the overcoverage of primary +points in order to activate a minimum number of sensors. Second we prevent the +absence of monitoring on some parts of the subregion by minimizing the +undercoverage. The weights $W_\theta$ and $W_U$ must be properly chosen so as +to guarantee that the maximum number of points are covered during each round. +In our simulations, priority is given to the coverage by choosing $W_{U}$ very large compared to $W_{\theta}$. -\textcolor{green}{The size of the problem depends on the number of variables and constraints. The number of variables is linked to the number of alive sensors $A \subset J$, the number of rounds $T$, and the number of primary points $P$. 
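To make the formulation concrete, the sketch below builds the multiround model for a toy instance with the PuLP modelling library (an assumption made for illustration; the experiments reported later rely on AMPL and GLPK). Positions, energies, and weights are made-up values, with $W_U = |P|^2$ as in the simulation settings, and the coverage constraint is written in the balanced form implied by the over- and undercoverage variables.

\begin{verbatim}
import pulp

# Toy data (illustrative): alive sensors, primary points, rounds, energies.
sensors = {0: (1.0, 1.0), 1: (4.0, 2.0), 2: (7.0, 1.5)}   # positions (m)
points  = {0: (2.0, 1.0), 1: (6.0, 2.0)}                   # primary points (m)
T       = range(1, 4)                                      # T = 3 rounds
RE      = {0: 120.0, 1: 80.0, 2: 40.0}                     # remaining energy (J)
E_R     = 36.0                                             # energy for one round (J)
R_s     = 5.0                                              # sensing range (m)
W_theta, W_U = 1, len(points) ** 2

# alpha[j, p] = 1 if primary point p lies within sensing range of sensor j.
alpha = {(j, p): int((sensors[j][0] - points[p][0]) ** 2 +
                     (sensors[j][1] - points[p][1]) ** 2 <= R_s ** 2)
         for j in sensors for p in points}

prob  = pulp.LpProblem("multiround_coverage", pulp.LpMinimize)
X     = pulp.LpVariable.dicts("X", (T, sensors), cat="Binary")   # activity
Theta = pulp.LpVariable.dicts("Theta", (T, points), lowBound=0)  # overcoverage
U     = pulp.LpVariable.dicts("U", (T, points), cat="Binary")    # undercoverage

# Objective: weighted over- and undercoverage over all rounds and points.
prob += pulp.lpSum(W_theta * Theta[t][p] + W_U * U[t][p]
                   for t in T for p in points)
for t in T:              # coverage balance for every point and every round
    for p in points:
        prob += (pulp.lpSum(alpha[j, p] * X[t][j] for j in sensors)
                 - Theta[t][p] + U[t][p] == 1)
for j in sensors:        # a sensor is scheduled only as long as its energy allows
    prob += pulp.lpSum(X[t][j] for t in T) * E_R <= RE[j]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
schedule = {(t, j): int(X[t][j].value()) for t in T for j in sensors}
\end{verbatim}

For small instances such a model is solved to optimality by Branch-and-Bound; for larger ones the solver can simply be stopped after a time limit, as done in the experiments.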
Thus the integer program contains $A*T$ variables of type $X_{t,j}$, $P*T$ overcoverage variables and $P*T$ undercoverage variables. The number of constraints is equal to $P*T$ (for constraints (\ref{eq16})) $+$ $A$ (for constraints (\ref{eq144})).} -%The Active-Sleep packet includes the schedule vector with the number of rounds that should be applied by the receiving sensor node during the sensing phase. - - -\subsection{Sensing phase} - -The sensing phase consists of $T$ rounds. Each sensor node in the subregion will -receive an Active-Sleep packet from WSNL, informing it to stay awake or to go to -sleep for each round of the sensing phase. Algorithm~\ref{alg:MuDiLCO}, which -will be executed by each node at the beginning of a period, explains how the -Active-Sleep packet is obtained. - -% In each round during the sensing phase, there is a cover set of sensor nodes, in which the active sensors will execute their sensing task to preserve maximal coverage and lifetime in the subregion and this will continue until finishing the round $T$ and starting new period. - -\begin{algorithm}[h!] - % \KwIn{all the parameters related to information exchange} -% \KwOut{$winer-node$ (: the id of the winner sensor node, which is the leader of current round)} - \BlankLine - %\emph{Initialize the sensor node and determine it's position and subregion} \; - - \If{ $RE_j \geq E_{R}$ }{ - \emph{$s_j.status$ = COMMUNICATION}\; - \emph{Send $INFO()$ packet to other nodes in the subregion}\; - \emph{Wait $INFO()$ packet from other nodes in the subregion}\; - %\emph{UPDATE $RE_j$ for every sent or received INFO Packet}\; - %\emph{ Collect information and construct the list L for all nodes in the subregion}\; - - %\If{ the received INFO Packet = No. of nodes in it's subregion -1 }{ - \emph{LeaderID = Leader election}\; - \If{$ s_j.ID = LeaderID $}{ - \emph{$s_j.status$ = COMPUTATION}\; - \emph{$\left\{\left(X_{1,k},\dots,X_{T,k}\right)\right\}_{k \in J}$ = - Execute \textcolor{red}{Optimization Algorithm}($T,J$)}\; - \emph{$s_j.status$ = COMMUNICATION}\; - \emph{Send $ActiveSleep()$ to each node $k$ in subregion a packet \\ - with vector of activity scheduling $(X_{1,k},\dots,X_{T,k})$}\; - \emph{Update $RE_j $}\; - } - \Else{ - \emph{$s_j.status$ = LISTENING}\; - \emph{Wait $ActiveSleep()$ packet from the Leader}\; - % \emph{After receiving Packet, Retrieve the schedule and the $T$ rounds}\; - \emph{Update $RE_j $}\; - } - % } - } - \Else { Exclude $s_j$ from entering in the current sensing phase} - - % \emph{return X} \; -\caption{MuDiLCO($s_j$)} -\label{alg:MuDiLCO} - -\end{algorithm} - -\iffalse -\textcolor{red}{This integer program can be solved using two approaches:} - -\subsection{\textcolor{red}{Optimization solver for Multiround Lifetime Coverage Optimization}} -\label{glpk} -\textcolor{red}{The modeling language for Mathematical Programming (AMPL)~\cite{AMPL} is employed to generate the integer program instance in a standard format, which is then read and solved by the optimization solver GLPK (GNU linear Programming Kit available in the public domain) \cite{glpk} through a Branch-and-Bound method. We named the protocol which is based on GLPK solver in the decision phase as MuDiLCO.} -\fi - -\iffalse - -\subsection{\textcolor{red}{Genetic Algorithm for Multiround Lifetime Coverage Optimization}} -\label{GA} -\textcolor{red}{Metaheuristics are a generic search strategies for exploring search spaces for solving the complex problems. 
These strategies have to dynamically balance between the exploitation of the accumulated search experience and the exploration of the search space. On one hand, this balance can find regions in the search space with high-quality solutions. On the other hand, it prevents waste too much time in regions of the search space which are either already explored or don’t provide high-quality solutions. Therefore, metaheuristic provides an enough good solution to an optimization problem, especially with incomplete information or limited computation capacity \cite{bianchi2009survey}. Genetic Algorithm (GA) is one of the population-based metaheuristic methods that simulates the process of natural selection \cite{hassanien2015applications}. GA starts with a population of random candidate solutions (called individuals or phenotypes) . GA uses genetic operators inspired by natural evolution, such as selection, mutation, evaluation, crossover, and replacement so as to improve the initial population of candidate solutions. This process repeated until a stopping criterion is satisfied. In comparison with GLPK optimization solver, GA provides a near optimal solution with acceptable execution time, as well as it requires a less amount of memory especially for large size problems. GLPK provides optimal solution, but it requires higher execution time and amount of memory for large problem.} - -\textcolor{red}{In this section, we present a metaheuristic based GA to solve our multiround lifetime coverage optimization problem. The proposed GA provides a near optimal sechedule for multiround sensing per period. The proposed GA is based on the mathematical model which is presented in Section \ref{oa}. Algorithm \ref{alg:GA} shows the proposed GA to solve the coverage lifetime optimization problem. We named the new protocol which is based on GA in the decision phase as GA-MuDiLCO. The proposed GA can be explained in more details as follow:} - -\begin{algorithm}[h!] 
- - \small - \SetKwInput{Input}{\textcolor{red}{Input}} - \SetKwInput{Output}{\textcolor{red}{Output}} - \Input{ \textcolor{red}{$ P, J, T, S_{pop}, \alpha_{j,p}^{ind}, X_{t,j}^{ind}, \Theta_{t,p}^{ind}, U_{t,p}^{ind}, Child_{t,j}^{ind}, Ch.\Theta_{t,p}^{ind}, Ch.U_{t,p}^{ind_1}$}} - \Output{\textcolor{red}{$\left\{\left(X_{1,1},\dots, X_{t,j}, \dots, X_{T,J}\right)\right\}_{t \in T, j \in J}$}} - - \BlankLine - %\emph{Initialize the sensor node and determine it's position and subregion} \; - \ForEach {\textcolor{red}{Individual $ind$ $\in$ $S_{pop}$}} { - \emph{\textcolor{red}{Generate Randomly Chromosome $\left\{\left(X_{1,1},\dots, X_{t,j}, \dots, X_{T,J}\right)\right\}_{t \in T, j \in J}$}}\; - - \emph{\textcolor{red}{Update O-U-Coverage $\left\{(P, J, \alpha_{j,p}^{ind}, X_{t,j}^{ind}, \Theta_{t,p}^{ind}, U_{t,p}^{ind})\right\}_{p \in P}$}}\; - - - \emph{\textcolor{red}{Evaluate Individual $(P, J, X_{t,j}^{ind}, \Theta_{t,p}^{ind}, U_{t,p}^{ind})$}}\; - } - - \While{\textcolor{red}{ Stopping criteria is not satisfied} }{ - - \emph{\textcolor{red}{Selection $(ind_1, ind_2)$}}\; - \emph{\textcolor{red}{Crossover $(P_c, X_{t,j}^{ind_1}, X_{t,j}^{ind_2}, Child_{t,j}^{ind_1}, Child_{t,j}^{ind_2})$}}\; - \emph{\textcolor{red}{Mutation $(P_m, Child_{t,j}^{ind_1}, Child_{t,j}^{ind_2})$}}\; - - - \emph{\textcolor{red}{Update O-U-Coverage $(P, J, \alpha_{j,p}^{ind}, Child_{t,j}^{ind_1}, Ch.\Theta_{t,p}^{ind_1}, Ch.U_{t,p}^{ind_1})$}}\; - \emph{\textcolor{red}{Update O-U-Coverage $(P, J, \alpha_{j,p}^{ind}, Child_{t,j}^{ind_2}, Ch.\Theta_{t,p}^{ind_2}, Ch.U_{t,p}^{ind_2})$}}\; - -\emph{\textcolor{red}{Evaluate New Individual$(P, J, Child_{t,j}^{ind_1}, Ch.\Theta_{t,p}^{ind_1}, Ch.U_{t,p}^{ind_1})$}}\; - \emph{\textcolor{red}{Replacement $(P, J, T, Child_{t,j}^{ind_1}, Ch.\Theta_{t,p}^{ind_1}, Ch.U_{t,p}^{ind_1}, X_{t,j}^{ind}, \Theta_{t,p}^{ind}, U_{t,p}^{ind} )$ }}\; - - \emph{\textcolor{red}{Evaluate New Individual$(P, J, Child_{t,j}^{ind_2}, Ch.\Theta_{t,p}^{ind_2}, Ch.U_{t,p}^{ind_2})$}}\; - - \emph{\textcolor{red}{Replacement $(P, J, T, Child_{t,j}^{ind_2}, Ch.\Theta_{t,p}^{ind_2}, Ch.U_{t,p}^{ind_2}, X_{t,j}^{ind}, \Theta_{t,p}^{ind}, U_{t,p}^{ind} )$ }}\; - - - } - \emph{\textcolor{red}{$\left\{\left(X_{1,1},\dots,X_{t,j},\dots,X_{T,J}\right)\right\}$ = - Select Best Solution ($S_{pop}$)}}\; - \emph{\textcolor{red}{return X}} \; -\caption{\textcolor{red}{GA($T, J$)}} -\label{alg:GA} - -\end{algorithm} - - -\begin{enumerate} [I)] - -\item \textcolor{red}{\textbf{Representation:} Since the proposed GA's goal is to find the optimal schedule of the sensor nodes which take the responsibility of monitoring the subregion for $T$ rounds in the sensing phase, the chromosome is defined as a schedule for alive sensors and each chromosome contains $T$ rounds. The proposed GA uses binary representation, where each round in the schedule includes J genes, the total alive sensors in the subregion. Therefore, the gene of such a chromosome is a schedule of a sensor. In other words, The genes corresponding to active nodes have the value of one, the others are zero. Figure \ref{chromo} shows solution representation in the proposed GA.} -%[scale=0.3] -\begin{figure}[h!] -\centering - \includegraphics [scale=0.35] {rep.pdf} -\caption{Candidate Solution representation by the proposed GA. 
} -\label{chromo} -\end{figure} - - - -\item \textcolor{red}{\textbf{Initialize Population:} The initial population is randomly generated and each chromosome in the GA population represents a possible sensors schedule solution to cover the entire subregion for $T$ rounds during current period. Each sensor in the chromosome is given a random value (0 or 1) for all rounds. If the random value is 1, the remaining energy of this sensor should be adequate to activate this sensor during the current round. Otherwise, the value is set to 0. The energy constraint is applied for each sensor during all rounds. } - - -\item \textcolor{red}{\textbf{Update O-U-Coverage:} -After creating the initial population, The overcoverage $\Theta_{t,p}$ and undercoverage $U_{t,p}$ for each candidate solution are computed (see Algorithm \ref{OU}) so as to use them in the next step.} - -\begin{algorithm}[h!] - - \SetKwInput{Input}{\textcolor{red}{Input}} - \SetKwInput{Output}{\textcolor{red}{Output}} - \Input{ \textcolor{red}{parameters $P, J, ind, \alpha_{j,p}^{ind}, X_{t,j}^{ind}$}} - \Output{\textcolor{red}{$U^{ind} = \left\lbrace U_{1,1}^{ind}, \dots, U_{t,p}^{ind}, \dots, U_{T,P}^{ind} \right\rbrace$ and $\Theta^{ind} = \left\lbrace \Theta_{1,1}^{ind}, \dots, \Theta_{t,p}^{ind}, \dots, \Theta_{T,P}^{ind} \right\rbrace$}} - - \BlankLine - - \For{\textcolor{red}{$t\leftarrow 1$ \KwTo $T$}}{ - \For{\textcolor{red}{$p\leftarrow 1$ \KwTo $P$}}{ - - % \For{$i\leftarrow 0$ \KwTo $I_j$}{ - \emph{\textcolor{red}{$SUM\leftarrow 0$}}\; - \For{\textcolor{red}{$j\leftarrow 1$ \KwTo $J$}}{ - \emph{\textcolor{red}{$SUM \leftarrow SUM + (\alpha_{j,p}^{ind} \times X_{t,j}^{ind})$ }}\; - } - - \If { \textcolor{red}{SUM = 0}} { - \emph{\textcolor{red}{$U_{t,p}^{ind} \leftarrow 0$}}\; - \emph{\textcolor{red}{$\Theta_{t,p}^{ind} \leftarrow 1$}}\; - } - \Else{ - \emph{\textcolor{red}{$U_{t,p}^{ind} \leftarrow SUM -1$}}\; - \emph{\textcolor{red}{$\Theta_{t,p}^{ind} \leftarrow 0$}}\; - } - - } - - } -\emph{\textcolor{red}{return $U^{ind}, \Theta^{ind}$ }} \; -\caption{O-U-Coverage} -\label{OU} - -\end{algorithm} - - - -\item \textcolor{red}{\textbf{Evaluate Population:} -After creating the initial population, each individual is evaluated and assigned a fitness value according to the fitness function is illustrated in Eq. \eqref{eqf}. In the proposed GA, the optimal (or near optimal) candidate solution, is the one with the minimum value for the fitness function. The lower the fitness values been assigned to an individual, the better opportunity it gets survived. In our works, the function rewards the decrease in the sensor nodes which cover the same primary point and penalizes the decrease to zero in the sensor nodes which cover the primary point. } - -\begin{equation} - F^{ind} \leftarrow \sum_{t=1}^{T} \sum_{p=1}^{P} \left(W_{\theta}* \Theta_{t,p} + W_{U} * U_{t,p} \right) \label{eqf} -\end{equation} - - -\item \textcolor{red}{\textbf{Selection:} In order to generate a new generation, a portion of the existing population is elected based on a fitness function that ranks the fitness of each candidate solution and preferentially select the best solutions. Two parents should be selected to the mating pool. In the proposed GA-MuDiLCO algorithm, the first parent is selected by using binary tournament selection to select one of the parents \cite{goldberg1991comparative}. In this method, two individuals are chosen at random from the population and the better of the two -individuals is selected. 
If they have similar fitness values, one of them will be selected randomly. The best individual in the population is selected as a second parent.} - - - -\item \textcolor{red}{\textbf{Crossover:} Crossover is a genetic operator used to take more than one parent solutions and produce a child solution from them. If crossover probability $P_c$ is 100$\%$, then the crossover operation takes place between two individuals. If it is 0$\%$, the two selected individuals in the mating pool will be the new chromosomes without crossover. In the proposed GA, a two-point crossover is used. Figure \ref{cross} gives an example for a two-point crossover for 8 sensors in the subregion and the schedule for 3 rounds.} - - -\begin{figure}[h!] -\centering - \includegraphics [scale = 0.3] {crossover.pdf} -\caption{Two-point crossover. } -\label{cross} -\end{figure} - - -\item \textcolor{red}{\textbf{Mutation:} -Mutation is a divergence operation which introduces random modifications. The purpose of the mutation is to maintain diversity within the population and prevent premature convergence. Mutation is used to add new genetic information (divergence) in order to achieve a global search over the solution search space and avoid to fall in local optima. The mutation operator in the proposed GA-MuDiLCO works as follow: If mutation probability $P_m$ is 100$\%$, then the mutation operation takes place on the new individual. The round number is selected randomly within (1..T) in the schedule solution. After that one sensor within this round is selected randomly within (1..J). If the sensor is scheduled as active "1", it should be rescheduled to sleep "0". If the sensor is scheduled as sleep, it rescheduled to active only if it has adequate remaining energy.} - - -\item \textcolor{red}{\textbf{Update O-U-Coverage for children:} -Before evaluating each new individual, Algorithm \ref{OU} is called for each new individual to compute the new undercoverage $Ch.U$ and overcoverage $Ch.\Theta$ parameters. } - -\item \textcolor{red}{\textbf{Evaluate New Individuals:} -Each new individual is evaluated using Eq. \ref{eqf} but with using the new undercoverage $Ch.U$ and overcoverage $Ch.\Theta$ parameters of the new children.} - -\item \textcolor{red}{\textbf{Replacement:} -After evaluation of new children, Triple Tournament Replacement (TTR) will be applied for each new individual. In TTR strategy, three individuals are selected -randomly from the population. Find the worst from them and then check its fitness with the new individual fitness. If the fitness of the new individual is better than the fitness of the worst individual, replace the new individual with the worst individual. Otherwise, the replacement is not done. } - - -\item \textcolor{red}{\textbf{Stopping criteria:} -The proposed GA-MuDiLCO stops when the stopping criteria is met. It stops after running for an amount of time in seconds equal to \textbf{Time limit}. The \textbf{Time limit} is the execution time obtained by the optimization solver GLPK for solving the same size of problem. The best solution will be selected as a schedule of sensors for $T$ rounds during the sensing phase in the current period.} +The size of the problem depends on the number of variables and constraints. The +number of variables is linked to the number of alive sensors $A \subseteq J$, +the number of rounds $T$, and the number of primary points $P$. Thus the +integer program contains $A*T$ variables of type $X_{t,j}$, $P*T$ overcoverage +variables and $P*T$ undercoverage variables. 
The number of constraints is equal +to $P*T$ (for constraints (\ref{eq16})) $+$ $A$ (for constraints (\ref{eq144})). - -\end{enumerate} - -\fi - -\section{Experimental study} +\section{Experimental framework} \label{exp} + \subsection{Simulation setup} -We conducted a series of simulations to evaluate the efficiency and the -relevance of our approach, using the discrete event simulator OMNeT++ -\cite{varga}. The simulation parameters are summarized in -Table~\ref{table3}. Each experiment for a network is run over 25~different -random topologies and the results presented hereafter are the average of these -25 runs. -%Based on the results of our proposed work in~\cite{idrees2014coverage}, we found as the region of interest are divided into larger subregions as the network lifetime increased. In this simulation, the network are divided into 16 subregions. -We performed simulations for five different densities varying from 50 to -250~nodes deployed over a $50 \times 25~m^2 $ sensing field. More -precisely, the deployment is controlled at a coarse scale in order to ensure -that the deployed nodes can cover the sensing field with the given sensing -range. - -%%RC these parameters are realistic? -%% maybe we can increase the field and sensing range. 5mfor Rs it seems very small... what do the other good papers consider ? +We conducted a series of simulations to evaluate the efficiency and the +relevance of our approach, using the discrete event simulator OMNeT++ +\cite{varga}. The simulation parameters are summarized in Table~\ref{table3}. +Each experiment for a network is run over 25~different random topologies and the +results presented hereafter are the average of these 25 runs. We performed +simulations for five different densities varying from 50 to 250~nodes deployed +over a $50 \times 25~m^2 $ sensing field. More precisely, the deployment is +controlled at a coarse scale in order to ensure that the deployed nodes can +cover the sensing field with the given sensing range. \begin{table}[ht] \caption{Relevant parameters for network initializing.} -% title of Table \centering -% used for centering table \begin{tabular}{c|c} -% centered columns (4 columns) - \hline -%inserts double horizontal lines + \hline Parameter & Value \\ [0.5ex] - -%Case & Strategy (with Two Leaders) & Strategy (with One Leader) & Simple Heuristic \\ [0.5ex] -% inserts table -%heading \hline -% inserts single horizontal line Sensing field size & $(50 \times 25)~m^2 $ \\ -% inserting body of the table -%\hline Network size & 50, 100, 150, 200 and 250~nodes \\ -%\hline Initial energy & 500-700~joules \\ -%\hline Sensing time for one round & 60 Minutes \\ $E_{R}$ & 36 Joules\\ $R_s$ & 5~m \\ -%\hline $W_{\theta}$ & 1 \\ -% [1ex] adds vertical space -%\hline $W_{U}$ & $|P|^2$ \\ -%$P_c$ & 0.95 \\ -%$P_m$ & 0.6 \\ -%$S_{pop}$ & 50 -%inserts single line \end{tabular} \label{table3} -% is used to refer this table in the text -\end{table} - -\textcolor{green}{The MuDilLCO protocol is declined into four versions: MuDiLCO-1, MuDiLCO-3, MuDiLCO-5, -and MuDiLCO-7, corresponding respectively to $T=1,3,5,7$ ($T$ the number of rounds in one sensing period). Since the time resolution may be prohibitif when the size of the problem increases, a time limit treshold has been fixed to solve large instances. In these cases, the solver returns the best solution found, which is not necessary the optimal solution. - Table \ref{tl} shows time limit values. These time limit treshold have been set empirically. 
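A quick way to see why a resolution time limit becomes necessary for the largest instances is to count, following the formulas just given, how the model grows with the number of alive sensors, rounds, and primary points (a small illustrative helper; the example sizes are made up):

\begin{verbatim}
def problem_size(A, T, P):
    # A*T activity variables, P*T overcoverage and P*T undercoverage variables,
    # P*T coverage constraints plus A energy constraints (see the text above).
    variables = A * T + 2 * P * T
    constraints = P * T + A
    return variables, constraints

# Made-up subregion with 15 alive sensors and 75 primary points: moving from
# one round to seven multiplies the number of variables by seven.
print(problem_size(A=15, T=1, P=75))   # (165, 90)
print(problem_size(A=15, T=7, P=75))   # (1155, 540)
\end{verbatim}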
The basic idea consists in considering the average execution time to solve the integer programs to optimality, then by dividing this average time by three to set the threshold value. After that, this treshold value is increased if necessary such that the solver is able to deliver a feasible solution within the time limit. In fact, selecting the optimal values for the time limits will be investigated in future. In Table \ref{tl}, "NO" indicates that the problem has been solved to optimality without time limit. }.

-\begin{table}[ht]
-\caption{Time limit values for MuDiLCO protocol versions }
-\centering
-\begin{tabular}{|c|c|c|c|c|}
- \hline
- WSN size & MuDiLCO-1 & MuDiLCO-3 & MuDiLCO-5 & MuDiLCO-7 \\ [0.5ex]
-\hline
- 50 & NO & NO & NO & NO \\
- \hline
-100 & NO & NO & NO & NO \\
-\hline
-150 & NO & NO & NO & 0.03 \\
-\hline
-200 & NO & NO & NO & 0.06 \\
- \hline
- 250 & NO & NO & NO & 0.08 \\
- \hline
-\end{tabular}
-
-\label{tl}
- \end{table}
-
-
-
- In the following, we will make comparisons with
-two other methods. The first method, called DESK and proposed by \cite{ChinhVu},
-is a full distributed coverage algorithm. The second method, called
-GAF~\cite{xu2001geography}, consists in dividing the region into fixed squares.
-During the decision phase, in each square, one sensor is then chosen to remain
-active during the sensing phase time.
+Our protocol comes in four versions: MuDiLCO-1, MuDiLCO-3, MuDiLCO-5,
+and MuDiLCO-7, corresponding respectively to $T=1,3,5,7$ ($T$ being the number of
+rounds in one sensing period). Since the resolution time may become prohibitive
+when the size of the problem increases, a time limit threshold has been fixed when
+solving large instances. In these cases, the solver returns the best solution
+found, which is not necessarily the optimal one. In practice, we only set time
+limit values for $T=5$ and $T=7$. In fact, for $T=5$ we limited the time for
+250~nodes, whereas for $T=7$ it was for the three largest network sizes.
+Therefore we used the following values (in seconds): 0.03 for 250~nodes when
+$T=5$, while for $T=7$ we chose 0.03, 0.06, and 0.08 for 150, 200, and 250~nodes
+respectively. These time limit thresholds have been set empirically. The basic
+idea is to consider the average execution time to solve the integer programs to
+optimality for 100 nodes and then to adjust the time linearly according to the
+increasing network size. After that, this threshold value is increased if
+necessary so that the solver is able to deliver a feasible solution within the
+time limit. In fact, selecting the optimal values for the time limits will be
+investigated in the future.
+
+ In the following, we will make comparisons with two other methods. The first
+ method, called DESK and proposed by \cite{ChinhVu}, is a fully distributed
+ coverage algorithm. The second method, called GAF~\cite{xu2001geography},
+ consists in dividing the region into fixed squares. During the decision phase,
+ in each square, one sensor is then chosen to remain active during the sensing
+ phase time.

Some preliminary experiments were performed to study the choice of the number of
-subregions which subdivides the sensing field, considering different network
+subregions which subdivides the sensing field, considering different network
sizes. They show that as the number of subregions increases, so does the network
lifetime. Moreover, it makes the MuDiLCO protocol more robust against random
-network disconnection due to node failures.
However, too many subdivisions -reduce the advantage of the optimization. In fact, there is a balance between -the benefit from the optimization and the execution time needed to solve -it. Therefore, we have set the number of subregions to 16 rather than 32. +network disconnection due to node failures. However, too many subdivisions +reduce the advantage of the optimization. In fact, there is a balance between +the benefit from the optimization and the execution time needed to solve it. In +the following we have set the number of subregions to~16 \textcolor{blue}{as + recommended in~\cite{idrees2015distributed}}. \subsection{Energy model} - -We use an energy consumption model proposed by~\cite{ChinhVu} and based on -\cite{raghunathan2002energy} with slight modifications. The energy consumption -for sending/receiving the packets is added, whereas the part related to the -sensing range is removed because we consider a fixed sensing range. - -% We are took into account the energy consumption needed for the high computation during executing the algorithm on the sensor node. -%The new energy consumption model will take into account the energy consumption for communication (packet transmission/reception), the radio of the sensor node, data sensing, computational energy of Micro-Controller Unit (MCU) and high computation energy of MCU. -%revoir la phrase - -For our energy consumption model, we refer to the sensor node Medusa~II which -uses an Atmels AVR ATmega103L microcontroller~\cite{raghunathan2002energy}. The -typical architecture of a sensor is composed of four subsystems: the MCU -subsystem which is capable of computation, communication subsystem (radio) which -is responsible for transmitting/receiving messages, the sensing subsystem that -collects data, and the power supply which powers the complete sensor node -\cite{raghunathan2002energy}. Each of the first three subsystems can be turned -on or off depending on the current status of the sensor. Energy consumption -(expressed in milliWatt per second) for the different status of the sensor is -summarized in Table~\ref{table4}. - -\begin{table}[ht] -\caption{The Energy Consumption Model} -% title of Table -\centering -% used for centering table -\begin{tabular}{|c|c|c|c|c|} -% centered columns (4 columns) - \hline -%inserts double horizontal lines -Sensor status & MCU & Radio & Sensing & Power (mW) \\ [0.5ex] -\hline -% inserts single horizontal line -LISTENING & on & on & on & 20.05 \\ -% inserting body of the table -\hline -ACTIVE & on & off & on & 9.72 \\ -\hline -SLEEP & off & off & off & 0.02 \\ -\hline -COMPUTATION & on & on & on & 26.83 \\ -%\hline -%\multicolumn{4}{|c|}{Energy needed to send/receive a 1-bit} & 0.2575\\ - \hline -\end{tabular} - -\label{table4} -% is used to refer this table in the text -\end{table} - -For the sake of simplicity we ignore the energy needed to turn on the radio, to -start up the sensor node, to move from one status to another, etc. -%We also do not consider the need of collecting sensing data. PAS COMPRIS -Thus, when a sensor becomes active (i.e., it has already chosen its status), it can -turn its radio off to save battery. MuDiLCO uses two types of packets for -communication. The size of the INFO packet and Active-Sleep packet are 112~bits -and 24~bits respectively. 
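As a quick sanity check relating the energy model to the simulation settings, the per-round energy $E_{R}$ of Table~\ref{table3} can be recovered from the ACTIVE power above and the one-hour round duration (a two-line check; the numerical values come from the text):

\begin{verbatim}
P_ACTIVE_mW = 9.72               # power drawn in ACTIVE status (Table 4)
ROUND_DURATION_S = 3600          # one round of the sensing phase lasts one hour
E_round_J = P_ACTIVE_mW * ROUND_DURATION_S / 1000.0
print(round(E_round_J, 1))       # about 35 J, close to the E_R = 36 J of Table 3
print(round(700 / E_round_J))    # about 20 rounds for a node starting at 700 J
\end{verbatim}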
The value of energy spent to send a 1-bit-content -message is obtained by using the equation in ~\cite{raghunathan2002energy} to -calculate the energy cost for transmitting messages and we propose the same -value for receiving the packets. The energy needed to send or receive a 1-bit -packet is equal to 0.2575~mW. - -The initial energy of each node is randomly set in the interval $[500;700]$. A -sensor node will not participate in the next round if its remaining energy is -less than $E_{R}=36~\mbox{Joules}$, the minimum energy needed for the node to -stay alive during one round. This value has been computed by multiplying the -energy consumed in active state (9.72 mW) by the time in second for one round -(3600 seconds). According to the interval of initial energy, a sensor may be -alive during at most 20 rounds. +\textcolor{blue}{The energy consumption model is detailed + in~\cite{raghunathan2002energy}. It is based on the model proposed + by~\cite{ChinhVu}. We refer to the sensor node Medusa~II which uses an Atmels + AVR ATmega103L microcontroller~\cite{raghunathan2002energy} to use numerical + values.} \subsection{Metrics} -To evaluate our approach we consider the following performance metrics: +\textcolor{blue}{To evaluate our approach we consider the performance metrics + detailed in~\cite{idrees2015distributed}, which are: Coverage Ratio, Network + Lifetime and Energy Consumption. Compared to the previous definitions, + formulations of Coverage Ratio and Energy Consumption are enriched with the + index of round $t$.} \begin{enumerate}[i] -\item {{\bf Coverage Ratio (CR)}:} the coverage ratio measures how much of the area - of a sensor field is covered. In our case, the sensing field is represented as - a connected grid of points and we use each grid point as a sample point to - compute the coverage. The coverage ratio can be calculated by: +\item {{\bf Coverage Ratio (CR)}:} the coverage ratio measures how much of the + area of a sensor field is covered. In our case, the sensing field is + represented as a connected grid of points and we use each grid point as a + sample point to compute the coverage. The coverage ratio can be calculated by: \begin{equation*} \scriptsize \mbox{CR}(\%) = \frac{\mbox{$n^t$}}{\mbox{$N$}} \times 100, \end{equation*} where $n^t$ is the number of covered grid points by the active sensors of all -subregions during round $t$ in the current sensing phase and $N$ is the total number -of grid points in the sensing field of the network. In our simulations $N = 51 -\times 26 = 1326$ grid points. -%The accuracy of this method depends on the distance between grids. In our -%simulations, the sensing field has been divided into 50 by 25 grid points, which means -%there are $51 \times 26~ = ~ 1326$ points in total. -% Therefore, for our simulations, the error in the coverage calculation is less than ~ 1 $\% $. +subregions during round $t$ in the current sensing phase and $N$ is the total +number of grid points in the sensing field of the network. In our simulations $N += 51 \times 26 = 1326$ grid points. \item{{\bf Number of Active Sensors Ratio (ASR)}:} it is important to have as - few active nodes as possible in each round, in order to minimize the - communication overhead and maximize the network lifetime. The Active Sensors + few active nodes as possible in each round, in order to minimize the + communication overhead and maximize the network lifetime. 
The Active Sensors Ratio is defined as follows: \begin{equation*} \scriptsize \mbox{ASR}(\%) = \frac{\sum\limits_{r=1}^R @@ -1289,101 +703,85 @@ $t$ in the current sensing phase, $|J|$ is the total number of sensors in the network, and $R$ is the total number of subregions in the network. \item {{\bf Network Lifetime}:} we define the network lifetime as the time until - the coverage ratio drops below a predefined threshold. We denote by - $Lifetime_{95}$ (respectively $Lifetime_{50}$) the amount of time during - which the network can satisfy an area coverage greater than $95\%$ - (respectively $50\%$). We assume that the network is alive until all nodes have - been drained of their energy or the sensor network becomes - disconnected. Network connectivity is important because an active sensor node - without connectivity towards a base station cannot transmit information on an - event in the area that it monitors. + the coverage ratio drops below a predefined threshold. We denote by + $Lifetime_{95}$ (respectively $Lifetime_{50}$) the amount of time during which + the network can satisfy an area coverage greater than $95\%$ (respectively + $50\%$). We assume that the network is alive until all nodes have been drained + of their energy or the sensor network becomes disconnected. Network + connectivity is important because an active sensor node without connectivity + towards a base station cannot transmit information on an event in the area + that it monitors. \item {{\bf Energy Consumption (EC)}:} the average energy consumption can be seen as the total energy consumed by the sensors during the $Lifetime_{95}$ or - $Lifetime_{50}$ divided by the number of rounds. EC can be computed as + $Lifetime_{50}$ divided by the number of rounds. EC can be computed as follows: - % New version with global loops on period \begin{equation*} \scriptsize \mbox{EC} = \frac{\sum\limits_{m=1}^{M} \left[ \left( E^{\mbox{com}}_m+E^{\mbox{list}}_m+E^{\mbox{comp}}_m \right) +\sum\limits_{t=1}^{T_m} \left( E^{a}_t+E^{s}_t \right) \right]}{\sum\limits_{m=1}^{M} T_m}, \end{equation*} - -% Old version with loop on round outside the loop on period -% \begin{equation*} -% \scriptsize -% \mbox{EC} = \frac{\sum\limits_{m=1}^{M_L} \left( E^{\mbox{com}}_m+E^{\mbox{list}}_m+E^{\mbox{comp}}_m \right) +\sum\limits_{t=1}^{T_L} \left( E^{a}_t+E^{s}_t \right)}{T_L}, -% \end{equation*} - -% Ali version -%\begin{equation*} -%\scriptsize -%\mbox{EC} = \frac{\mbox{$\sum\limits_{d=1}^D E^c_d$}}{\mbox{$D$}} + \frac{\mbox{$\sum\limits_{d=1}^D %E^l_d$}}{\mbox{$D$}} + \frac{\mbox{$\sum\limits_{d=1}^D E^a_d$}}{\mbox{$D$}} + %\frac{\mbox{$\sum\limits_{d=1}^D E^s_d$}}{\mbox{$D$}}. -%\end{equation*} - -% Old version -> where $M_L$ and $T_L$ are respectively the number of periods and rounds during -%$Lifetime_{95}$ or $Lifetime_{50}$. -% New version -where $M$ is the number of periods and $T_m$ the number of rounds in a +where $M$ is the number of periods and $T_m$ the number of rounds in a period~$m$, both during $Lifetime_{95}$ or $Lifetime_{50}$. The total energy -consumed by the sensors (EC) comes through taking into consideration four main +consumed by the sensors (EC) comes through taking into consideration four main energy factors. The first one , denoted $E^{\scriptsize \mbox{com}}_m$, -represents the energy consumption spent by all the nodes for wireless -communications during period $m$. 
$E^{\scriptsize \mbox{list}}_m$, the next
-factor, corresponds to the energy consumed by the sensors in LISTENING status
-before receiving the decision to go active or sleep in period $m$.
+represents the energy consumption spent by all the nodes for wireless
+communications during period $m$. $E^{\scriptsize \mbox{list}}_m$, the next
+factor, corresponds to the energy consumed by the sensors in LISTENING status
+before receiving the decision to go active or sleep in period $m$.
$E^{\scriptsize \mbox{comp}}_m$ refers to the energy needed by all the leader
nodes to solve the integer program during a period. Finally, $E^a_t$ and $E^s_t$
indicate the energy consumed by the whole network in round $t$.

%\item {Network Lifetime:} we have defined the network lifetime as the time until all
%nodes have been drained of their energy or each sensor network monitoring an area has become disconnected.
+\end{enumerate}

-\item {{\bf Execution Time}:} a sensor node has limited energy resources and
- computing power, therefore it is important that the proposed algorithm has the
- shortest possible execution time. The energy of a sensor node must be mainly
- used for the sensing phase, not for the pre-sensing ones.
-
-\item {{\bf Stopped simulation runs}:} a simulation ends when the sensor network
- becomes disconnected (some nodes are dead and are not able to send information
- to the base station). We report the number of simulations that are stopped due
- to network disconnections and for which round it occurs.
-\end{enumerate}
-
-\section{Results and analysis}
-\subsection{Performance Analysis for Different Number of Primary Points}
-\label{ch4:sec:04:06}
+\section{Experimental results and analysis}
+\label{analysis}

-In this section, we study the performance of MuDiLCO-1 approach for different numbers of primary points. The objective of this comparison is to select the suitable primary point model to be used by a MuDiLCO protocol. In this comparison, MuDiLCO-1 protocol is used with five models, which are called Model-5 (it uses 5 primary points), Model-9, Model-13, Model-17, and Model-21.
+\subsection{Performance analysis for different number of primary points}
+\label{ch4:sec:04:06}
+In this section, we study the performance of the MuDiLCO-1 approach (with only
+one round as in~\cite{idrees2015distributed}) for different numbers of primary
+points. The objective of this comparison is to select the suitable number of
+primary points to be used by the MuDiLCO protocol. In this comparison, the
+MuDiLCO-1 protocol is used with five primary point models, each model
+corresponding to a number of primary points, which are called Model-5 (it uses
+5 primary points), Model-9, Model-13, Model-17, and Model-21. \textcolor{blue}{Note
+ that the results
+ presented in~\cite{idrees2015distributed} correspond to Model-13 (13 primary
+ points)}.

-%\begin{enumerate}[i)]
-
-%\item {{\bf Coverage Ratio}}
-\subsubsection{Coverage Ratio}
+\subsubsection{Coverage ratio}

+Figure~\ref{Figures/ch4/R2/CR} shows the average coverage ratio for 150 deployed
+nodes. As can be seen, at the beginning the models which use a larger number of
+primary points provide slightly better coverage ratios, but later they become
+the worst. Moreover, when the number of periods increases, the coverage ratio
+produced by all models decreases due to dead nodes. However, Model-5 is the one
+with the slowest decrease due to lower numbers of active sensors in the earlier
+periods.
Overall this model is slightly more efficient than the other ones, +because it offers a good coverage ratio for a larger number of periods. -Figure~\ref{Figures/ch4/R2/CR} shows the average coverage ratio for 150 deployed nodes. -\parskip 0pt -\begin{figure}[h!] +\begin{figure}[t!] \centering \includegraphics[scale=0.5] {R2/CR.pdf} \caption{Coverage ratio for 150 deployed nodes} \label{Figures/ch4/R2/CR} \end{figure} -As can be seen in Figure~\ref{Figures/ch4/R2/CR}, at the beginning the models which use a larger number of primary points provide slightly better coverage ratios, but latter they are the worst. -%Moreover, when the number of periods increases, coverage ratio produced by Model-9, Model-13, Model-17, and Model-21 decreases in comparison with Model-5 due to a larger time computation for the decision process for larger number of primary points. -Moreover, when the number of periods increases, coverage ratio produced by all models decrease, but Model-5 is the one with the slowest decrease due to a smaller time computation of decision process for a smaller number of primary points. -As shown in Figure ~\ref{Figures/ch4/R2/CR}, coverage ratio decreases when the number of periods increases due to dead nodes. Model-5 is slightly more efficient than other models, because it offers a good coverage ratio for a larger number of periods in comparison with other models. - -%\item {{\bf Network Lifetime}} -\subsubsection{Network Lifetime} +\subsubsection{Network lifetime} -Finally, we study the effect of increasing the primary points on the lifetime of the network. -%In Figure~\ref{Figures/ch4/R2/LT95} and in Figure~\ref{Figures/ch4/R2/LT50}, network lifetime, $Lifetime95$ and $Lifetime50$ respectively, are illustrated for different network sizes. -As highlighted by Figures~\ref{Figures/ch4/R2/LT}(a) and \ref{Figures/ch4/R2/LT}(b), the network lifetime obviously increases when the size of the network increases, with Model-5 that leads to the larger lifetime improvement. +Finally, we study the effect of increasing the number of primary points on the +lifetime of the network. As highlighted by Figures~\ref{Figures/ch4/R2/LT}(a) +and \ref{Figures/ch4/R2/LT}(b), the network lifetime obviously increases when +the size of the network increases, with Model-5 which leads to the largest +lifetime improvement. \begin{figure}[h!] \centering @@ -1396,31 +794,28 @@ As highlighted by Figures~\ref{Figures/ch4/R2/LT}(a) and \ref{Figures/ch4/R2/LT} \label{Figures/ch4/R2/LT} \end{figure} -Comparison shows that Model-5, which uses less number of primary points, is the best one because it is less energy consuming during the network lifetime. It is also the better one from the point of view of coverage ratio. Our proposed Model-5 efficiently prolongs the network lifetime with a good coverage ratio in comparison with other models. Therefore, we have chosen the model with five primary points for all the experiments presented thereafter. - -%\end{enumerate} +Comparison shows that Model-5, which uses less number of primary points, is the +best one because it is less energy consuming during the network lifetime. It is +also the better one from the point of view of coverage ratio, as stated +before. Therefore, we have chosen the model with five primary points for all the +experiments presented thereafter. - -\subsection{Comparison Results} - -\subsubsection{Coverage ratio} +\subsection{Coverage ratio} Figure~\ref{fig3} shows the average coverage ratio for 150 deployed nodes. 
We -can notice that for the first thirty rounds both DESK and GAF provide a coverage -which is a little bit better than the one of MuDiLCO. -%%RC : need to uniformize MuDiLCO or MuDiLCO-T? -%%MS : MuDiLCO everywhere -%%RC maybe increase the size of the figure for the reviewers, no? -This is due to the fact that, in comparison with MuDiLCO which uses optimization -to put in SLEEP status redundant sensors, more sensor nodes remain active with -DESK and GAF. As a consequence, when the number of rounds increases, a larger -number of node failures can be observed in DESK and GAF, resulting in a faster -decrease of the coverage ratio. Furthermore, our protocol allows to maintain a -coverage ratio greater than 50\% for far more rounds. Overall, the proposed -sensor activity scheduling based on optimization in MuDiLCO maintains higher -coverage ratios of the area of interest for a larger number of rounds. It also -means that MuDiLCO saves more energy, with less dead nodes, at most for several -rounds, and thus should extend the network lifetime. +can notice that for the first 30~rounds both DESK and GAF provide a coverage +which is a little bit better than the one of MuDiLCO. This is due to the fact +that, in comparison with MuDiLCO which uses optimization to put in SLEEP status +redundant sensors, more sensor nodes remain active with DESK and GAF. As a +consequence, when the number of rounds increases, a larger number of node +failures can be observed in DESK and GAF, resulting in a faster decrease of the +coverage ratio. Furthermore, our protocol allows to maintain a coverage ratio +greater than 50\% for far more rounds. Overall, the proposed sensor activity +scheduling based on optimization in MuDiLCO maintains higher coverage ratios of +the area of interest for a larger number of rounds. It also means that MuDiLCO +saves more energy, with less dead nodes, at most for several rounds, and thus +should extend the network lifetime. MuDiLCO-7 seems to have most of the time +the best coverage ratio up to round~80, after that MuDiLCO-5 is slightly better. \begin{figure}[ht!] \centering @@ -1429,24 +824,17 @@ rounds, and thus should extend the network lifetime. \label{fig3} \end{figure} -\iffalse -\textcolor{red}{ We -can see that for the first thirty nine rounds GA-MuDiLCO provides a little bit better coverage ratio than MuDiLCO. Both DESK and GAF provide a coverage -which is a little bit better than the one of MuDiLCO and GA-MuDiLCO for the first thirty rounds because they activate a larger number of nodes during sensing phase. After that GA-MuDiLCO provides a coverage ratio near to the MuDiLCO and better than DESK and GAF. GA-MuDiLCO gives approximate solution with activation a larger number of nodes than MuDiLCO during sensing phase while it activates a less number of nodes in comparison with both DESK and GAF. MuDiLCO and GA-MuDiLCO clearly outperform DESK and GAF for -a number of periods between 31 and 103. This is because they optimize the coverage and the lifetime in a wireless sensor network by selecting the best representative sensor nodes to take the responsibility of coverage during the sensing phase.} -\fi - - -\subsubsection{Active sensors ratio} +\subsection{Active sensors ratio} It is crucial to have as few active nodes as possible in each round, in order to -minimize the communication overhead and maximize the network lifetime. Figure~\ref{fig4} presents the active sensor ratio for 150 deployed +minimize the communication overhead and maximize the network +lifetime. 
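The ratio plotted in the next figure follows the ASR definition given in the metrics section; a minimal sketch of the corresponding bookkeeping (the per-subregion counts below are made up) is:

\begin{verbatim}
def active_sensors_ratio(active_per_subregion, total_sensors):
    # ASR(%) = (sum over subregions of active nodes in round t) / |J| * 100
    return 100.0 * sum(active_per_subregion) / total_sensors

# Made-up example: 16 subregions, 150 deployed sensors, one given round t.
counts = [3, 2, 2, 3, 1, 2, 3, 2, 2, 3, 2, 2, 3, 2, 3, 2]
print(round(active_sensors_ratio(counts, 150), 1))   # 24.7 % of nodes active
\end{verbatim}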
Figure~\ref{fig4} presents the active sensor ratio for 150 deployed nodes all along the network lifetime. It appears that up to round thirteen, DESK and GAF have respectively 37.6\% and 44.8\% of nodes in ACTIVE status, whereas -MuDiLCO clearly outperforms them with only 24.8\% of active nodes. -%\textcolor{red}{GA-MuDiLCO activates a number of sensor nodes larger than MuDiLCO but lower than both DESK and GAF. GA-MuDiLCO-1, GA-MuDiLCO-3, and GA-MuDiLCO-5 continue in providing a larger number of active sensors until the forty-sixth round after that it provides less number of active nodes due to the died nodes. GA-MuDiLCO-7 provides a larger number of sensor nodes and maintains a better coverage ratio compared to MuDiLCO-7 until the fifty-seventh round. After the thirty-fifth round, MuDiLCO exhibits larger numbers of active nodes compared with DESK and GAF, which agrees with the dual observation of higher level of coverage made previously}. -Obviously, in that case DESK and GAF have less active nodes, since they have activated many nodes at the beginning. Anyway, MuDiLCO activates the available nodes in a more efficient manner. -%\textcolor{red}{GA-MuDiLCO activates near optimal number of sensor nodes also in efficient manner compared with both DESK and GAF}. +MuDiLCO clearly outperforms them with only 24.8\% of active nodes. Obviously, +in that case DESK and GAF have less active nodes, since they have activated many +nodes at the beginning. Anyway, MuDiLCO activates the available nodes in a more +efficient manner. \begin{figure}[ht!] \centering @@ -1455,31 +843,27 @@ Obviously, in that case DESK and GAF have less active nodes, since they have a \label{fig4} \end{figure} -%\textcolor{red}{GA-MuDiLCO activates a sensor nodes larger than MuDiLCO but lower than both DESK and GAF } - - -\subsubsection{Stopped simulation runs} -%The results presented in this experiment, is to show the comparison of our MuDiLCO protocol with other two approaches from the point of view the stopped simulation runs per round. Figure~\ref{fig6} illustrates the percentage of stopped simulation -%runs per round for 150 deployed nodes. - -Figure~\ref{fig6} reports the cumulative percentage of stopped simulations runs -per round for 150 deployed nodes. This figure gives the breakpoint for each method. DESK stops first, after approximately 45~rounds, because it consumes the -more energy by turning on a large number of redundant nodes during the sensing -phase. GAF stops secondly for the same reason than DESK. -%\textcolor{red}{GA-MuDiLCO stops thirdly for the same reason than DESK and GAF.} \textcolor{red}{MuDiLCO and GA-MuDiLCO overcome} -%DESK and GAF because \textcolor{red}{they activate less number of sensor nodes, as well as }the optimization process distributed on several subregions leads to coverage preservation and so extends the network lifetime. -Let us emphasize that the simulation continues as long as a network in a subregion is still connected. +\subsection{Stopped simulation runs} -%%% The optimization effectively continues as long as a network in a subregion is still connected. A VOIR %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +A simulation ends when the sensor network becomes disconnected (some nodes are +dead and are not able to send information to the base station). We report the +number of simulations that are stopped due to network disconnections and for +which round it occurs. Figure~\ref{fig6} reports the cumulative percentage of +stopped simulations runs per round for 150 deployed nodes. 
This figure gives
+the break point for each method. DESK stops first, after approximately
+45~rounds, because it consumes more energy by turning on a large number of
+redundant nodes during the sensing phase. GAF stops second, for the same reason
+as DESK. Let us emphasize that the simulation continues as long as a network
+in a subregion is still connected.

\begin{figure}[ht!]
\centering
\includegraphics[scale=0.5]{F/SR.pdf}
-\caption{Cumulative percentage of stopped simulation runs for 150 deployed nodes }
+\caption{Cumulative percentage of stopped simulation runs for 150 deployed nodes}
\label{fig6}
\end{figure}

-\subsubsection{Energy consumption} \label{subsec:EC}
+\subsection{Energy consumption} \label{subsec:EC}

We measure the energy consumed by the sensors during the communication,
listening, computation, active, and sleep status for different network densities
@@ -1500,28 +884,34 @@ network sizes, for $Lifetime_{95}$ and $Lifetime_{50}$.
\end{figure}

The results show that MuDiLCO is the most competitive from the energy
-consumption point of view. The other approaches have a high energy consumption
-due to activating a larger number of redundant nodes as well as the energy consumed during the different status of the sensor node.
-% Among the different versions of our protocol, the MuDiLCO-7 one consumes more energy than the other
-%versions. This is easy to understand since the bigger the number of rounds and the number of sensors involved in the integer program are, the larger the time computation to solve the optimization problem is. To improve the performances of MuDiLCO-7, we should increase the number of subregions in order to have less sensors to consider in the integer program.
-%\textcolor{red}{As shown in Figure~\ref{fig7}, GA-MuDiLCO consumes less energy than both DESK and GAF, but a little bit higher than MuDiLCO because it provides a near optimal solution by activating a larger number of nodes during the sensing phase. GA-MuDiLCO consumes less energy in comparison with MuDiLCO-7 version, especially for the dense networks. However, MuDiLCO protocol and GA-MuDiLCO protocol are the most competitive from the energy
-%consumption point of view. The other approaches have a high energy consumption
-%due to activating a larger number of redundant nodes.}
-%In fact, a distributed optimization decision, which produces T rounds, on the subregions is greatly reduced the cost of communications and the time of listening as well as the energy needed for sensing phase and computation so thanks to the partitioning of the initial network into several independent subnetworks and producing T rounds for each subregion periodically.
-
-
-\subsubsection{Execution time}
+consumption point of view. The other approaches have a higher energy consumption
+because they activate a larger number of redundant nodes, and because of the
+energy consumed by the sensor nodes in their different statuses.
+
+Energy consumption increases with the size of the networks and the number of
+rounds. The curve Unlimited-MuDiLCO-7 shows that the energy consumption due to
+the time spent to optimally solve the integer program increases drastically with
+the size of the network. When the resolution time is limited for large network
+sizes, the energy consumption remains of the same order whatever the MuDiLCO
+version, as can be seen with MuDiLCO-7.
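The energy values reported in this subsection follow the EC metric defined earlier; the sketch below shows the corresponding bookkeeping on hypothetical per-period logs (field names and numbers are illustrative):

\begin{verbatim}
def energy_consumption(periods):
    # EC = sum over periods m of (E_com + E_list + E_comp
    #      + sum over the rounds of period m of (E_active + E_sleep)),
    # divided by the total number of rounds, as in the metric definition.
    total = sum(p["com"] + p["list"] + p["comp"]
                + sum(a + s for a, s in p["rounds"]) for p in periods)
    nb_rounds = sum(len(p["rounds"]) for p in periods)
    return total / nb_rounds

# Hypothetical log: two periods of T = 3 rounds each, energies in Joules.
log = [{"com": 1.2, "list": 0.8, "comp": 0.5,
        "rounds": [(30.1, 0.4), (29.7, 0.5), (30.0, 0.4)]},
       {"com": 1.1, "list": 0.7, "comp": 0.6,
        "rounds": [(29.9, 0.5), (30.2, 0.4), (29.8, 0.5)]}]
print(round(energy_consumption(log), 2))   # average energy per round
\end{verbatim}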
+ +\subsection{Execution time} \label{et} -We observe the impact of the network size and of the number of rounds on the + +We observe the impact of the network size and of the number of rounds on the computation time. Figure~\ref{fig77} gives the average execution times in -seconds (needed to solve optimization problem) for different values of $T$. The modeling language for Mathematical Programming (AMPL)~\cite{AMPL} is employed to generate the Mixed Integer Linear Program instance in a standard format, which is then read and solved by the optimization solver GLPK (GNU linear Programming Kit available in the public domain) \cite{glpk} through a Branch-and-Bound method. The -original execution time is computed on a laptop DELL with Intel Core~i3~2370~M -(2.4 GHz) processor (2 cores) and the MIPS (Million Instructions Per Second) -rate equal to 35330. To be consistent with the use of a sensor node with Atmels -AVR ATmega103L microcontroller (6 MHz) and a MIPS rate equal to 6 to run the -optimization resolution, this time is multiplied by 2944.2 $\left( -\frac{35330}{2} \times \frac{1}{6} \right)$ and reported on Figure~\ref{fig77} -for different network sizes. +seconds (needed to solve the optimization problem) for different values of +$T$. The modeling language for Mathematical Programming (AMPL)~\cite{AMPL} is +employed to generate the Mixed Integer Linear Program instance in a standard +format, which is then read and solved by the optimization solver GLPK (GNU +linear Programming Kit available in the public domain) \cite{glpk} through a +Branch-and-Bound method. The original execution time is computed on a laptop +DELL with Intel Core~i3~2370~M (2.4 GHz) processor (2 cores) and the MIPS +(Million Instructions Per Second) rate equal to 35330. To be consistent with the +use of a sensor node with Atmels AVR ATmega103L microcontroller (6 MHz) and a +MIPS rate equal to 6 to run the optimization resolution, this time is multiplied +by 2944.2 $\left( \frac{35330}{2} \times \frac{1}{6} \right)$ and reported on +Figure~\ref{fig77} for different network sizes. \begin{figure}[ht!] \centering @@ -1531,98 +921,96 @@ for different network sizes. \end{figure} As expected, the execution time increases with the number of rounds $T$ taken -into account to schedule the sensing phase. The times obtained for $T=1,3$ -or $5$ seem bearable, but for $T=7$ they become quickly unsuitable for a sensor -node, especially when the sensor network size increases. Again, we can notice -that if we want to schedule the nodes activities for a large number of rounds, -we need to choose a relevant number of subregions in order to avoid a complicated -and cumbersome optimization. On the one hand, a large value for $T$ permits to -reduce the energy-overhead due to the three pre-sensing phases, on the other -hand a leader node may waste a considerable amount of energy to solve the -optimization problem. - -%While MuDiLCO-1, 3, and 5 solves the optimization process with suitable execution times to be used on wireless sensor network because it distributed on larger number of small subregions as well as it is used acceptable number of round(s) T. We think that in distributed fashion the solving of the optimization problem to produce T rounds in a subregion can be tackled by sensor nodes. Overall, to be able to deal with very large networks, a distributed method is clearly required. - -\subsubsection{Network lifetime} +into account to schedule the sensing phase. 
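The scaling factor quoted above can be checked in two lines (the MIPS figures come from the text; the 10~ms laptop time is only an example):

\begin{verbatim}
LAPTOP_MIPS, CORES, SENSOR_MIPS = 35330, 2, 6
scaling = (LAPTOP_MIPS / CORES) / SENSOR_MIPS        # about 2944.2
print(round(scaling, 1), round(0.010 * scaling, 2))  # 10 ms on the laptop,
                                                     # about 29.44 s on the node
\end{verbatim}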
Obviously, the number of variables +and constraints of the integer program increases with $T$, as explained in +section~\ref{mom}. The times obtained for $T=1, 3$, or $5$ seem bearable, but for +$T=7$, without any time limitation, they quickly become unsuitable for a +sensor node, especially when the sensor network size increases, as demonstrated +by Unlimited-MuDiLCO-7. Notice that for 250 nodes, we also limited the +execution time for $T=5$; otherwise the execution time, denoted by +Unlimited-MuDiLCO-5, would also be above that of MuDiLCO-7. On the one hand, a large value +for $T$ permits reducing the energy overhead due to the three pre-sensing +phases; on the other hand, a leader node may waste a considerable amount of +energy to solve the optimization problem. Thus, limiting the resolution time for +large instances reduces the energy consumption without any impact on +the coverage quality. + +\subsection{Network lifetime} The next two figures, Figures~\ref{fig8}(a) and \ref{fig8}(b), illustrate the network lifetime for different network sizes, respectively for $Lifetime_{95}$ -and $Lifetime_{50}$. Both figures show that the network lifetime increases +and $Lifetime_{50}$. Both figures show that the network lifetime increases together with the number of sensor nodes, whatever the protocol, thanks to the -node density which results in more and more redundant nodes that can be +node density, which results in more and more redundant nodes that can be deactivated and thus save energy. Compared to the other approaches, our MuDiLCO -protocol maximizes the lifetime of the network. In particular the gain in -lifetime for a coverage over 95\% is greater than 38\% when switching from GAF -to MuDiLCO-3. The slight decrease that can be observed for MuDiLCO-7 in case -of $Lifetime_{95}$ with large wireless sensor networks results from the -difficulty of the optimization problem to be solved by the integer program. -This point was already noticed in subsection \ref{subsec:EC} devoted to the -energy consumption, since network lifetime and energy consumption are directly -linked. -%\textcolor{red}{As can be seen in these figures, the lifetime increases with the size of the network, and it is clearly largest for the MuDiLCO -%and the GA-MuDiLCO protocols. GA-MuDiLCO prolongs the network lifetime obviously in comparison with both DESK and GAF, as well as the MuDiLCO-7 version for $lifetime_{95}$. However, comparison shows that MuDiLCO protocol and GA-MuDiLCO protocol, which use distributed optimization over the subregions are the best ones because they are robust to network disconnection during the network lifetime as well as they consume less energy in comparison with other approaches.} +protocol maximizes the lifetime of the network. In particular, the gain in +lifetime for a coverage over 95\% and a network of 250~nodes is greater than +43\% when switching from GAF to MuDiLCO-5. +%The lower performance that can be observed for MuDiLCO-7 in case +%of $Lifetime_{95}$ with large wireless sensor networks results from the +%difficulty of the optimization problem to be solved by the integer program. +%This point was already noticed in subsection \ref{subsec:EC} devoted to the +%energy consumption, since network lifetime and energy consumption are directly +%linked. +Overall, it clearly appears that computing a scheduling for several rounds is +possible and relevant, provided that the execution time to solve the +optimization problem for large instances is limited, as illustrated by the sketch below.
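The following minimal sketch illustrates such a time-limited resolution. It assumes that the MILP instance of a subregion has been exported in MPS format (the file names and the time-limit value are hypothetical) and simply invokes the stand-alone GLPK solver glpsol with a time limit, so that the Branch-and-Bound search stops at the threshold and the best feasible schedule found so far is written out; this is a sketch of the principle, not the exact tool chain used in our experiments.

\begin{verbatim}
import subprocess

# Hypothetical file names: "subregion.mps" is the exported MILP instance,
# "schedule.sol" receives the best solution found within the time limit.
TIME_LIMIT_S = 120  # example resolution-time threshold (seconds)

# glpsol is the stand-alone GLPK solver; --tmlim bounds the Branch-and-Bound
# time in seconds, and the best feasible solution found so far is reported.
subprocess.run(
    ["glpsol", "--mps", "subregion.mps",
     "--tmlim", str(TIME_LIMIT_S),
     "--output", "schedule.sol"],
    check=True,
)
\end{verbatim}

Bounding the solver time in this way trades strict optimality for a predictable computation cost, which is precisely what matters for the energy budget of the leader node.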
Notice that rather than +limiting the execution time, similar results might be obtained by replacing the +computation of the exact solution with the search for a suboptimal one using a +heuristic approach. For our simulation setup and considering the different +metrics, MuDiLCO-5 seems to be the best-suited method compared to MuDiLCO-7. + \begin{figure}[t!] \centering \begin{tabular}{cl} - \parbox{9.5cm}{\includegraphics[scale=0.5]{F/LT95.pdf}} & (a) \\ + \parbox{9.5cm}{\includegraphics[scale=0.5125]{F/LT95.pdf}} & (a) \\ \verb+ + \\ - \parbox{9.5cm}{\includegraphics[scale=0.5]{F/LT50.pdf}} & (b) + \parbox{9.5cm}{\includegraphics[scale=0.5125]{F/LT50.pdf}} & (b) \end{tabular} \caption{Network lifetime for (a) $Lifetime_{95}$ and (b) $Lifetime_{50}$} \label{fig8} \end{figure} -% By choosing the best suited nodes, for each round, by optimizing the coverage and lifetime of the network to cover the area of interest with a maximum number rounds and by letting the other nodes sleep in order to be used later in next rounds, our MuDiLCO protocol efficiently prolonges the network lifetime. - -%In Figure~\ref{fig8}, Comparison shows that our MuDiLCO protocol, which are used distributed optimization on the subregions with the ability of producing T rounds, is the best one because it is robust to network disconnection during the network lifetime as well as it consume less energy in comparison with other approaches. It also means that distributing the protocol in each sensor node and subdividing the sensing field into many subregions, which are managed independently and simultaneously, is the most relevant way to maximize the lifetime of a network. - - -%We see that our MuDiLCO-7 protocol results in execution times that quickly become unsuitable for a sensor network as well as the energy consumption seems to be huge because it used a larger number of rounds T during performing the optimization decision in the subregions, which is led to decrease the network lifetime. On the other side, our MuDiLCO-1, 3, and 5 protocol seems to be more efficient in comparison with other approaches because they are prolonged the lifetime of the network more than DESK and GAF. - - \section{Conclusion and future works} \label{sec:conclusion} -We have addressed the problem of the coverage and of the lifetime optimization in -wireless sensor networks. This is a key issue as sensor nodes have limited +We have addressed the problem of coverage and lifetime optimization +in wireless sensor networks. This is a key issue as sensor nodes have limited resources in terms of memory, energy, and computational power. To cope with this -problem, the field of sensing is divided into smaller subregions using the +problem, the field of sensing is divided into smaller subregions using the divide-and-conquer method, and then we propose a protocol which -optimizes coverage and lifetime performances in each subregion. Our protocol, -called MuDiLCO (Multiround Distributed Lifetime Coverage Optimization) combines +optimizes coverage and lifetime performances in each subregion. Our protocol, +called MuDiLCO (Multiround Distributed Lifetime Coverage Optimization), combines two efficient techniques: network leader election and sensor activity -scheduling.
-%, where the challenges -%include how to select the most efficient leader in each subregion and -%the best cover sets %of active nodes that will optimize the network lifetime -%while taking the responsibility of covering the corresponding -%subregion using more than one cover set during the sensing phase. -The activity scheduling in each subregion works in periods, where each period -consists of four phases: (i) Information Exchange, (ii) Leader Election, (iii) -Decision Phase to plan the activity of the sensors over $T$ rounds, (iv) Sensing -Phase itself divided into $T$ rounds. - -Simulations results show the relevance of the proposed protocol in terms of +scheduling. The activity scheduling in each subregion works in periods, where +each period consists of four phases: (i) Information Exchange, (ii) Leader +Election, (iii) Decision Phase to plan the activity of the sensors over $T$ +rounds, and (iv) Sensing Phase, itself divided into $T$ rounds. + +Simulation results show the relevance of the proposed protocol in terms of lifetime, coverage ratio, active sensors ratio, energy consumption, and execution time. Indeed, when dealing with large wireless sensor networks, a distributed -approach, like the one we propose, allows to reduce the difficulty of a single +approach, like the one we propose, allows us to reduce the difficulty of a single global optimization problem by partitioning it into many smaller problems, one per -subregion, that can be solved more easily. Nevertheless, results also show that -it is not possible to plan the activity of sensors over too many rounds, because -the resulting optimization problem leads to too high resolution times and thus to -an excessive energy consumption. +subregion, which can be solved more easily. Furthermore, results also show that, +to plan the activity of sensors for large network sizes, an approach providing a +near-optimal solution is needed. Indeed, an exact resolution of the resulting +optimization problem leads to prohibitive computation times and thus to an +excessive energy consumption. %In future work, we plan to study and propose adjustable sensing range coverage optimization protocol, which computes all active sensor schedules in one time, by using %optimization methods. This protocol can prolong the network lifetime by minimizing the number of the active sensor nodes near the borders by optimizing the sensing range of sensor nodes. % use section* for acknowledgement \section*{Acknowledgment} -This work is partially funded by the Labex ACTION program (contract ANR-11-LABX-01-01). -As a Ph.D. student, Ali Kadhum IDREES would like to gratefully acknowledge the -University of Babylon - Iraq for the financial support, Campus France (The -French national agency for the promotion of higher education, international -student services, and international mobility).%, and the University ofFranche-Comt\'e - France for all the support in France. +This work is partially funded by the Labex ACTION program (contract +ANR-11-LABX-01-01). Ali Kadhum IDREES would like to gratefully acknowledge the +University of Babylon - Iraq for the financial support and Campus France (the +French national agency for the promotion of higher education, international +student services, and international mobility) for the support received when he +was a Ph.D. student in France. +%, and the University ofFranche-Comt\'e - France for all the support in France.