-
\documentclass[conference]{IEEEtran}
\ifCLASSINFOpdf
\hyphenation{op-tical net-works semi-conduc-tor}
-\usepackage{float}
+\usepackage{float}
\usepackage{epsfig}
\usepackage{calc}
\usepackage{times,amssymb,amsmath,latexsym}
\usepackage{epsfig}
\usepackage{caption}
\usepackage{multicol}
-
+\usepackage{times}
+\usepackage{graphicx,epstopdf}
+\epstopdfsetup{suffix=}
+\DeclareGraphicsExtensions{.ps}
+\DeclareGraphicsRule{.ps}{pdf}{.pdf}{`ps2pdf -dEPSCrop -dNOSAFER #1 \noexpand\OutputFile}
\begin{document}
-\title{Energy-Efficient Activity Scheduling in Heterogeneous Energy Wireless Sensor Networks}
-
+%\title{ Coverage and Lifetime Optimization in Heterogeneous Energy Wireless Sensor Networks}
+\title{Coverage and Lifetime Optimization in Heterogeneous Energy Wireless Sensor Networks}
+%Activity Scheduling for Coverage and Lifetime Optimization in Wireless Sensor Networks}
% author names and affiliations
% use a multiple column layout for up to three different
% affiliations
-\author{\IEEEauthorblockN{Ali Kadhum Idrees, Karine Deschinkel, Michel Salomon, and Rapha\"el Couturier }
-\IEEEauthorblockA{FEMTO-ST Institute, UMR 6174 CNRS, University of Franche-Comte, Belfort, France \\
+\author{\IEEEauthorblockN{Ali Kadhum Idrees, Karine Deschinkel, Michel Salomon, and Rapha\"el Couturier}
+\IEEEauthorblockA{FEMTO-ST Institute, UMR 6174 CNRS \\
+University of Franche-Comt\'e \\
+Belfort, France \\
Email: ali.idness@edu.univ-fcomte.fr, $\lbrace$karine.deschinkel, michel.salomon, raphael.couturier$\rbrace$@univ-fcomte.fr}
%\email{\{ali.idness, karine.deschinkel, michel.salomon, raphael.couturier\}@univ-fcomte.fr}
%\and
\begin{abstract}
One of the fundamental challenges in Wireless Sensor Networks (WSNs)
-is coverage preservation and extension of the network lifetime
+is to preserve the coverage and to extend the network lifetime
continuously and effectively when monitoring a certain area (or
-region) of interest. In this paper a coverage optimization protocol to
+region) of interest. In this paper, a coverage optimization protocol to
improve the lifetime in heterogeneous energy wireless sensor networks
is proposed. The area of interest is first divided into subregions
-using a divide-and-conquer method and then scheduling of sensor node
+using a divide-and-conquer method and then the scheduling of sensor node
activity is planned for each subregion. The proposed scheduling
considers rounds during which a small number of nodes, remaining
active for sensing, is selected to ensure coverage. Each round
consists of four phases: (i)~Information Exchange, (ii)~Leader
Election, (iii)~Decision, and (iv)~Sensing. The decision process is
-carried out by a leader node which solves an integer program.
+carried out by a leader node, which solves an integer program.
Simulation results show that the proposed approach can prolong the
network lifetime and improve the coverage performance.
\end{abstract}
-%\keywords{Area Coverage, Wireless Sensor Networks, lifetime Optimization, Distributed Protocol.}
+\begin{IEEEkeywords}
+Area Coverage, Network lifetime, Optimization, Scheduling, Distributed Protocol.
+\end{IEEEkeywords}
+%\keywords{Area Coverage, Network lifetime, Optimization, Distributed Protocol}
\IEEEpeerreviewmaketitle
\section{Introduction}
-\noindent Recent years have witnessed significant advances in wireless
-communications and embedded micro-sensing MEMS technologies which have
-made emerge wireless sensor networks as one of the most promising
-technologies~\cite{asc02}. In fact, they present huge potential in
-several domains ranging from health care applications to military
-applications. A sensor network is composed of a large number of tiny
-sensing devices deployed in a region of interest. Each device has
-processing and wireless communication capabilities, which enable to
-sense its environment, to compute, to store information and to deliver
-report messages to a base station.
-%These sensor nodes run on batteries with limited capacities. To achieve a long life of the network, it is important to conserve battery power. Therefore, lifetime optimisation is one of the most critical issues in wireless sensor networks.
-One of the main design issues in Wireless Sensor Networks (WSNs) is to
-prolong the network lifetime, while achieving acceptable quality of
-service for applications. Indeed, sensor nodes have limited resources
-in terms of memory, energy and computational power.
-
-Since sensor nodes have limited battery life and without being able to
-replace batteries, especially in remote and hostile environments, it
-is desirable that a WSN should be deployed with high density because
-spatial redundancy can then be exploited to increase the lifetime of
-the network. In such a high density network, if all sensor nodes were
-to be activated at the same time, the lifetime would be reduced. To
-extend the lifetime of the network, the main idea is to take benefit
-from the overlapping sensing regions of some sensor nodes to save
-energy by turning off some of them during the sensing phase.
-Obviously, the deactivation of nodes is only relevant if the coverage
-of the monitored area is not affected. Consequently, future software
-may need to adapt appropriately to achieve acceptable quality of
-service for applications. In this paper we concentrate on area
+
+\noindent The fast development of low-cost sensor devices and wireless
+communications has allowed the emergence of WSNs. A WSN includes a large
+number of small, limited-power sensors that can sense, process, and
+transmit data over a wireless communication channel. They communicate
+with each other using multi-hop wireless communications and cooperate to
+monitor an area of interest, and the measured data can be reported to a
+monitoring center, called a sink, for analysis~\cite{Ammari01, Sudip03}.
+WSNs are used in many applications, including health, home,
+environmental, military, and industrial applications~\cite{Akyildiz02}.
+The coverage problem is one of the fundamental challenges in
+WSNs~\cite{Nayak04}; it consists in monitoring the area of interest
+efficiently and continuously. The limited energy of sensors is the main
+challenge in WSN design~\cite{Ammari01}, since it is difficult to replace
+and/or recharge their batteries, because of the nature of the area of
+interest (such as hostile environments) and of the cost. It is therefore
+desirable to deploy a WSN with high density, so that spatial redundancy
+can be exploited to increase the lifetime of the network. However,
+turning on all the sensor nodes that monitor the same region at the same
+time reduces the lifetime of the network. To extend the network lifetime,
+the main idea is to take advantage of the overlapping sensing regions of
+some sensor nodes to save energy by turning off some of them during the
+sensing phase~\cite{Misra05}. WSNs thus require energy-efficient
+solutions to improve the network lifetime, which is constrained by the
+limited power of each sensor node~\cite{Akyildiz02}.
+In this paper, we concentrate on the area
coverage problem, with the objective of maximizing the network
lifetime by using an adaptive scheduling. The area of interest is
divided into subregions and an activity scheduling for sensor nodes is
-planned for each subregion. Our scheduling scheme considers rounds,
-where a round starts with a discovery phase to exchange information
-between sensors of the subregion, in order to choose in suitable
-manner a sensor node to carry out a coverage strategy. This coverage
-strategy involves the solving of an integer program which provides
-the activation of the sensors for the sensing phase of the current
-round.
+planned for each subregion.
+ In fact, the nodes in a subregion can be seen as a cluster where
+ each node sends sensing data to the cluster head or the sink node.
+ Furthermore, the activities in a subregion/cluster can continue even
+ if another cluster stops due to too many node failures.
+Our scheduling scheme considers rounds, where a round starts with a
+discovery phase to exchange information between sensors of the
+subregion, in order to choose in a suitable manner a sensor node to
+carry out a coverage strategy. This coverage strategy involves the
+solving of an integer program, which provides the activation of the
+sensors for the sensing phase of the current round.
The remainder of the paper is organized as follows. The next section
% Section~\ref{rw}
reviews the related work in the field. Section~\ref{pd} is devoted to
the scheduling strategy for energy-efficient coverage.
-Section~\ref{cp} gives the coverage model formulation which is used to
+Section~\ref{cp} gives the coverage model formulation, which is used to
schedule the activation of sensors. Section~\ref{exp} shows the
-simulation results obtained using the discrete event simulator on
-OMNET++ \cite{varga}. They fully demonstrate the usefulness of the
+simulation results obtained using the discrete event simulator OMNeT++ \cite{varga}. They fully demonstrate the usefulness of the
proposed approach. Finally, we give concluding remarks and some
suggestions for future works in Section~\ref{sec:conclusion}.
-\section{\uppercase{Related works}}
+\section{Related works}
\label{rw}
-\noindent
-This section is dedicated to the various approaches proposed in the
-literature for the coverage lifetime maximization problem, where the
-objective is to optimally schedule sensors' activities in order to
-extend network lifetime in a randomly deployed network. As this
-problem is subject to a wide range of interpretations, we suggest to
-recall main definitions and assumptions related to our work.
+
+\noindent This section is dedicated to the various approaches proposed
+in the literature for the coverage lifetime maximization problem,
+where the objective is to optimally schedule sensors' activities in
+order to extend network lifetime in a randomly deployed network. As
+this problem is subject to a wide range of interpretations, we have chosen
+to recall the main definitions and assumptions related to our work.
%\begin{itemize}
%\item Area Coverage: The main objective is to cover an area. The area coverage requires
%\item Barrier Coverage An objective to determine the maximal support/breach paths that traverse a sensor field. Barrier coverage is expressed as finding one or more routes with starting position and ending position when the targets pass through the area deployed with sensor nodes~\cite{Santosh04,Ai05}.
%\end{itemize}
-{\bf Coverage}
+\subsection{Coverage}
+%{\bf Coverage}
The most discussed coverage problems in literature can be classified
into two types \cite{ma10}: area coverage (also called full or blanket
coverage) and target coverage. An area coverage problem is to find a
-minimum number of sensors to work such that each physical point in the
+minimum number of sensors to work, such that each physical point in the
area is within the sensing range of at least one working sensor node.
Target coverage problem is to cover only a finite number of discrete
points called targets. This type of coverage has mainly military
applications. Our work will concentrate on the area coverage by design
-and implementation of a strategy which efficiently selects the active
+and implementation of a strategy, which efficiently selects the active
nodes that must maintain both sensing coverage and network
-connectivity and in the same time improve the lifetime of the wireless
-sensor network. But requiring that all physical points of the
+connectivity and at the same time improve the lifetime of the wireless
+sensor network. But, requiring that all physical points of the
considered region are covered may be too strict, especially where the
sensor network is not dense. Our approach represents an area covered
by a sensor as a set of primary points and tries to maximize the total
number of covered primary points, while minimizing overcoverage
(points covered by multiple active sensors
simultaneously).
-\newpage
-
-{\bf Lifetime}
+\subsection{Lifetime}
+%{\bf Lifetime}
Various definitions exist for the lifetime of a sensor
-network~\cite{die09}. Main definitions proposed in the literature are
-related to the remaining energy of the nodes or to the percentage of
-coverage. The lifetime of the network is mainly defined as the amount
-of time that the network can satisfy its coverage objective (the
+network~\cite{die09}. The main definitions proposed in the literature are
+related to the remaining energy of the nodes or to the coverage percentage.
+The lifetime of the network is mainly defined as the amount
+of time during which the network can satisfy its coverage objective (the
amount of time that the network can cover a given percentage of its
area or targets of interest). In this work, we assume that the network
is alive until all nodes have been drained of their energy or until
an active sensor node without connectivity towards a base station can
no longer transmit information on an event in the area that it monitors.
-{\bf Activity scheduling}
+\subsection{Activity scheduling}
+%{\bf Activity scheduling}
Activity scheduling is to schedule the activation and deactivation of
sensor nodes. The basic objective is to decide which sensors are in
-what states (active or sleeping mode) and for how long, such that the
+what states (active or sleeping mode) and for how long, so that the
application coverage requirement can be guaranteed and the network
lifetime can be prolonged. Various approaches, including centralized,
distributed, and localized algorithms, have been proposed for activity
-scheduling. In the distributed algorithms, each node in the network
+scheduling. In distributed algorithms, each node in the network
autonomously makes decisions on whether to turn itself on or off
using only local neighbor information. In centralized algorithms, a
central controller (a node or base station) informs every sensor of
the time intervals to be activated.
-{\bf Distributed approaches}
+\subsection{Distributed approaches}
+%{\bf Distributed approaches}
Some distributed algorithms have been developed
-in~\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02} to perform the schelduling. Distributed
-algorithms typically operate in rounds for predetermined duration. At
-the beginning of each round, a sensor exchange information with its
-neighbors and makes a decision to either remain turned on or to go to
-sleep for the round. This decision is basically based on simple greedy
-criteria like the largest uncovered area
-\cite{Berman05efficientenergy}, maximum uncovered targets
+in~\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02} to perform the
+scheduling. Distributed algorithms typically operate in rounds for
+a predetermined duration. At the beginning of each round, a sensor
+exchanges information with its neighbors and makes a decision to either
+remain turned on or to go to sleep for the round. This decision is
+basically made on simple greedy criteria like the largest uncovered
+area \cite{Berman05efficientenergy}, maximum uncovered targets
\cite{1240799}. In \cite{Tian02}, the scheduling scheme is divided
into rounds, where each round has a self-scheduling phase followed by
-a sensing phase. Each sensor broadcasts a message containing node ID
-and node location to its neighbors at the beginning of each round. A
-sensor determines its status by a rule named off-duty eligible rule
+a sensing phase. Each sensor broadcasts a message containing the node ID
+and the node location to its neighbors at the beginning of each round. A
+sensor determines its status by a rule named off-duty eligible rule,
which tells it to turn off if its sensing area is covered by its
neighbors. A back-off scheme is introduced to let each sensor delay
the decision process with a random period of time, in order to avoid
-that nodes make conflicting decisions simultaneously and that a part
-of the area is no longer covered.
+simultaneous conflicting decisions between nodes, which could leave part of the area uncovered.
\cite{Prasad:2007:DAL:1782174.1782218} defines a model for capturing
the dependencies between different cover sets and proposes a localized
heuristic based on this dependency. The algorithm consists of two
-phases, an initial setup phase during which each sensor computes and
-prioritize the covers and a sensing phase during which each sensor
+phases, an initial setup phase during which each sensor computes and
+prioritizes the covers and a sensing phase during which each sensor
first decides its on/off status, and then remains on or off for the
rest of the duration. Authors in \cite{chin2007} propose a novel
distributed heuristic named Distributed Energy-efficient Scheduling
for k-coverage (DESK) so that the energy consumption among all the
sensors is balanced, and network lifetime is maximized while the
-coverage requirements is being maintained. This algorithm works in
+coverage requirement is being maintained. This algorithm works in
rounds, requires only 1-sensing-hop-neighbor information, and a sensor
decides its status (active/sleep) based on its perimeter coverage
computed through the k-Non-Unit-disk coverage algorithm proposed in
\cite{Huang:2003:CPW:941350.941367}.
-Some others approaches do not consider synchronized and predetermined
+Some other approaches do not consider a synchronized and predetermined
period of time where the sensors are active or not. Indeed, each
-sensor maintains its own timer and its time wake-up is randomized
+sensor maintains its own timer and its wake-up time is randomized
\cite{Ye03} or regulated \cite{cardei05} over time.
%A ecrire \cite{Abrams:2004:SKA:984622.984684}p33
%In this paper we focus on centralized algorithms because distributed algorithms are outside the scope of our work. Note that centralized coverage algorithms have the advantage of requiring very low processing power from the sensor nodes which have usually limited processing capabilities. Moreover, a recent study conducted in \cite{pc10} concludes that there is a threshold in terms of network size to switch from a localized to a centralized algorithm. Indeed the exchange of messages in large networks may consume a considerable amount of energy in a localized approach compared to a centralized one.
-{\bf Centralized approaches}
+\subsection{Centralized approaches}
+%{\bf Centralized approaches}
Power efficient centralized schemes differ according to several
criteria \cite{Cardei:2006:ECP:1646656.1646898}, such as the coverage
where each set completely covers an interest region and to activate
these set covers successively.
-First algorithms proposed in the literature consider that the cover
+The first algorithms proposed in the literature consider that the cover
sets are disjoint: a sensor node appears in exactly one of the
-generated cover sets. For instance, Slijepcevic and Potkonjak
-\cite{Slijepcevic01powerefficient} propose an algorithm which
+generated cover sets. For instance, Slijepcevic and Potkonjak
+\cite{Slijepcevic01powerefficient} propose an algorithm, which
allocates sensor nodes in mutually independent sets to monitor an area
-divided into several fields. Their algorithm builds a cover set by
-including in priority the sensor nodes which cover critical fields,
+divided into several fields. Their algorithm builds a cover set by
+including in priority the sensor nodes, which cover critical fields,
that is to say fields that are covered by the smallest number of
sensors. The time complexity of their heuristic is $O(n^2)$ where $n$
-is the number of sensors. \cite{cardei02}~describes a graph coloring
-technique to achieve energy savings by organizing the sensor nodes
-into a maximum number of disjoint dominating sets which are activated
+is the number of sensors. In~\cite{cardei02}, a graph coloring
+technique is described to achieve energy savings by organizing the sensor nodes
+into a maximum number of disjoint dominating sets, which are activated
successively. The dominating sets do not guarantee the coverage of the
whole region of interest. Abrams et
-al.~\cite{Abrams:2004:SKA:984622.984684} design three approximation
+al.~\cite{Abrams:2004:SKA:984622.984684} design three approximation
algorithms for a variation of the set k-cover problem, where the
objective is to partition the sensors into covers such that the number
-of covers that include an area, summed over all areas, is maximized.
+of covers that include an area, summed over all areas, is maximized.
Their work builds upon previous work
in~\cite{Slijepcevic01powerefficient} and the generated cover sets do
not provide complete coverage of the monitoring zone.
\cite{Cardei:2005:IWS:1160086.1160098} propose a method to efficiently
compute the maximum number of disjoint set covers such that each set
can monitor all targets. They first transform the problem into a
-maximum flow problem which is formulated as a mixed integer
+maximum flow problem, which is formulated as a mixed integer
programming (MIP). Then their heuristic uses the output of the MIP to
-compute disjoint set covers. Results show that these heuristic
+compute disjoint set covers. Results show that this heuristic
provides a number of set covers slightly larger compared to
\cite{Slijepcevic01powerefficient} but with a larger execution time
due to the complexity of the mixed integer programming resolution.
%More recently Manju and Pujari\cite{Manju2011}
In the case of non-disjoint algorithms \cite{Manju2011}, sensors may
-participate in more than one cover set. In some cases this may
+participate in more than one cover set. In some cases, this may
prolong the lifetime of the network in comparison to the disjoint
cover set algorithms, but designing algorithms for non-disjoint cover
sets generally induces a higher order of complexity. Moreover, in
lifetime increases compared with related
work~\cite{Cardei:2005:IWS:1160086.1160098}. In~\cite{berman04}, the
authors have formulated the lifetime problem and suggested another
-(LP) technique to solve this problem. A centralized provably near
-optimal solution based on the Garg-K\"{o}nemann
-algorithm~\cite{garg98} is also proposed.
+(LP) technique to solve this problem. A centralized solution based on the Garg-K\"{o}nemann
+algorithm~\cite{garg98}, provably near
+the optimal solution, is also proposed.
-{\bf Our contribution}
+\subsection{Our contribution}
+%{\bf Our contribution}
-There are three main questions which should be addressed to build a
+There are three main questions, which should be addressed to build a
scheduling strategy. We give a brief answer to these three questions
to describe our approach before going into details in the subsequent
sections.
\begin{itemize}
-\item {\bf How must the phases for information exchange,
- decision and sensing be planned over time?} Our algorithm divides the time
- line into a number of rounds. Each round contains 4 phases:
- Information Exchange, Leader Election, Decision, and Sensing.
+\item {\bf How must the phases for information exchange, decision and
+ sensing be planned over time?} Our algorithm divides the time line
+ into a number of rounds. Each round contains 4 phases: Information
+ Exchange, Leader Election, Decision, and Sensing.
\item {\bf What are the rules to decide which node has to be turned on
or off?} Our algorithm tends to limit the overcoverage of points of
- interest to avoid turning on too much sensors covering the same
+ interest to avoid turning on too many sensors covering the same
areas at the same time, and tries to prevent undercoverage. The
decision is a good compromise between these two conflicting
objectives.
-\item {\bf Which node should make such decision?} As mentioned in
+\item {\bf Which node should make such a decision?} As mentioned in
\cite{pc10}, both centralized and distributed algorithms have their
own advantages and disadvantages. Centralized coverage algorithms
have the advantage of requiring very low processing power from the
- sensor nodes which have usually limited processing capabilities.
+ sensor nodes, which have usually limited processing capabilities.
Distributed algorithms are very adaptable to the dynamic and
scalable nature of sensors network. Authors in \cite{pc10} conclude
that there is a threshold in terms of network size to switch from a
- localized to a centralized algorithm. Indeed the exchange of
+ localized to a centralized algorithm. Indeed, the exchange of
messages in large networks may consume a considerable amount of
- energy in a localized approach compared to a centralized one. Our
+ energy in a centralized approach compared to a distributed one. Our
work does not consider only one leader to compute and to broadcast
- the schedule decision to all the sensors. When the network size
- increases, the network is divided in many subregions and the
+ the scheduling decision to all the sensors. When the network size
+ increases, the network is divided into many subregions and the
decision is made by a leader in each subregion.
\end{itemize}
-\section{\uppercase{Activity scheduling}}
+\section{Activity scheduling}
\label{pd}
We consider a randomly and uniformly deployed network consisting of
Each round is divided into 4 phases: Information (INFO) Exchange,
Leader Election, Decision, and Sensing. For each round there is
-exactly one set cover responsible for sensing task. This protocol is
-more reliable against the unexpectedly node failure because it works
+exactly one set cover responsible for the sensing task. This protocol is
+more reliable against an unexpected node failure because it works
in rounds. On the one hand, if a node failure is detected before
-taking the decision, the node will not participate to this phase, and,
+making the decision, the node will not participate in this phase, and,
on the other hand, if the node failure occurs after the decision, the
-sensing task of the network will be affected temporarily: only during
+sensing task of the network will be temporarily affected: only during
the period of sensing until a new round starts, since a new set cover
will take charge of the sensing task in the next round. The energy
consumption and some other constraints can easily be taken into
round. However, the pre-sensing phases (INFO Exchange, Leader
Election, Decision) are energy consuming for some nodes, even when
they do not join the network to monitor the area. Below, we describe
-each phase in more detail.
+each phase in more detail.
-\subsection{\textbf INFOrmation Exchange Phase}
+\subsection{Information exchange phase}
Each sensor node $j$ sends its position, remaining energy $RE_j$, and
-the number of local neighbors $NBR_j$ to all wireless sensor nodes in
+the number of local neighbours $NBR_j$ to all wireless sensor nodes in
its subregion by using an INFO packet and then listens to the packets
sent from other nodes. After that, each node will have information
about all the sensor nodes in the subregion. In our model, the
%The working phase works in rounding fashion. Each round include 3 steps described as follow :
-\subsection{\textbf Leader Election Phase}
-This step includes choosing the Wireless Sensor Node Leader (WSNL)
-which will be responsible of executing coverage algorithm. Each
+\subsection{Leader election phase}
+This step includes choosing the Wireless Sensor Node Leader (WSNL),
+which will be responsible for executing the coverage algorithm. Each
subregion in the area of interest will select its own WSNL
-independently for each round. All the sensor nodes cooperates to
+independently for each round. All the sensor nodes cooperate to
select WSNL. The nodes in the same subregion will select the leader
based on the received information from all other nodes in the same
subregion. The selection criteria in order of priority are: larger
-number of neighbors, larger remaining energy, and then in case of
+number of neighbours, larger remaining energy, and then in case of
equality, larger index.
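As an illustration (this is not part of the protocol specification
itself), the leader can be deduced locally by every node from the
received INFO packets with a simple lexicographic ranking. In the
following minimal Python sketch, the tuple layout is an assumption
made for readability.
\begin{verbatim}
# Illustrative sketch only. Each entry of info is a tuple
# (node_id, nbr_count, remaining_energy) gathered during the
# information exchange phase. Priority: more neighbours, then
# more remaining energy, then larger node index.
def elect_leader(info):
    best = max(info, key=lambda n: (n[1], n[2], n[0]))
    return best[0]  # identifier of the WSNL
\end{verbatim}
Since every node applies the same deterministic rule to the same
information, all nodes of a subregion agree on the same WSNL without
any additional message exchange.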
-\subsection{\textbf Decision Phase}
+\subsection{Decision phase}
The WSNL will solve an integer program (see section~\ref{cp}) to
select which sensors will be activated in the following sensing phase
to cover the subregion. WSNL will send Active-Sleep packet to each
-sensor in the subregion based on algorithm's results.
+sensor in the subregion based on the algorithm's results.
%The main goal in this step after choosing the WSNL is to produce the best representative active nodes set that will take the responsibility of covering the whole region $A^k$ with minimum number of sensor nodes to prolong the lifetime in the wireless sensor network. For our problem, in each round we need to select the minimum set of sensor nodes to improve the lifetime of the network and in the same time taking into account covering the region $A^k$ . We need an optimal solution with tradeoff between our two conflicting objectives.
%The above region coverage problem can be formulated as a Multi-objective optimization problem and we can use the Binary Particle Swarm Optimization technique to solve it.
-\subsection{\textbf Sensing Phase}
+\subsection{Sensing phase}
Active sensors in the round will execute their sensing task to
preserve maximal coverage in the region of interest. We will assume
-that the cost of keeping a node awake (or sleep) for sensing task is
+that the cost of keeping a node awake (or asleep) for the sensing task is
the same for all wireless sensor nodes in the network. Each sensor
-will receive an Active-Sleep packet from WSNL telling him to stay
-awake or go sleep for a time equal to the period of sensing until
+will receive an Active-Sleep packet from WSNL informing it to stay
+awake or to go to sleep for a time equal to the period of sensing until
starting a new round.
%\subsection{Sensing coverage model}
%\noindent We try to produce an adaptive scheduling which allows sensors to operate alternatively so as to prolong the network lifetime. For convenience, the notations and assumptions are described first.
%The wireless sensor node use the binary disk sensing model by which each sensor node will has a certain sensing range is reserved within a circular disk called radius $R_s$.
-\noindent We consider a boolean disk coverage model which is the most
+\indent We consider a boolean disk coverage model, which is the most
widely used sensor coverage model in the literature. Each sensor has a
constant sensing range $R_s$. All space points within a disk centered
at the sensor with the radius of the sensing range are said to be
covered by this sensor. We also assume that the communication range is
-at least twice of the sensing range. In fact, Zhang and
-Zhou~\cite{Zhang05} prove that if the transmission range fulfills the
+at least twice the size of the sensing range. In fact, Zhang and
+Zhou~\cite{Zhang05} proved that if the transmission range fulfills the
previous hypothesis, a complete coverage of a convex area implies
connectivity among the working nodes in the active mode.
%To calculate the coverage ratio for the area of interest, we can propose the following coverage model which is called Wireless Sensor Node Area Coverage Model to ensure that all the area within each node sensing range are covered. We can calculate the positions of the points in the circle disc of the sensing range of wireless sensor node based on the Unit Circle in figure~\ref{fig:cluster1}:
%We choose to representEach wireless sensor node will be represented into a selected number of primary points by which we can know if the sensor node is covered or not.
% Figure ~\ref{fig:cluster2} shows the selected primary points that represents the area of the sensor node and according to the sensing range of the wireless sensor node.
-\noindent Instead of working with area coverage, we consider for each
+\indent Instead of working with the coverage area, we consider for each
sensor a set of points called primary points. We also assume that the
-sensing disk defined by a sensor is covered if all primary points of
+sensing disk defined by a sensor is covered if all the primary points of
this sensor are covered.
%\begin{figure}[h!]
%\centering
based on the proposed model. We use these primary points (that can be
increased or decreased if necessary) as references to ensure that the
monitored region of interest is covered by the selected set of
-sensors, instead of using all points in the area.
+sensors, instead of using all the points in the area.
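For illustration purposes, the following sketch shows how the coverage
indicators $\alpha_{jp}$ used in the formulation of Section~\ref{cp}
could be precomputed under the boolean disk model, assuming that
sensors and primary points are given as planar coordinates; it is only
one possible way to obtain them.
\begin{verbatim}
# Illustrative sketch only.
import math

def coverage_indicators(sensors, primary_points, r_s):
    # alpha[j][p] = 1 if primary point p lies within the
    # sensing disk (radius r_s) of sensor j, 0 otherwise
    alpha = []
    for (sx, sy) in sensors:
        row = []
        for (px, py) in primary_points:
            d = math.hypot(px - sx, py - sy)
            row.append(1 if d <= r_s else 0)
        alpha.append(row)
    return alpha
\end{verbatim}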
-\noindent We can calculate the positions of the selected primary
+\indent We can calculate the positions of the selected primary
points in the circle disk of the sensing range of a wireless sensor
node (see figure~\ref{fig2}) as follows:\\
$(p_x,p_y)$ = point center of wireless sensor node\\
\label{fig2}
\end{figure}
-\section{\uppercase{Coverage problem formulation}}
+\section{Coverage problem formulation}
\label{cp}
%We can formulate our optimization problem as energy cost minimization by minimize the number of active sensor nodes and maximizing the coverage rate at the same time in each $A^k$ . This optimization problem can be formulated as follow: Since that we use a homogeneous wireless sensor network, we will assume that the cost of keeping a node awake is the same for all wireless sensor nodes in the network. We can define the decision parameter $X_j$ as in \eqref{eq11}:\\
%To satisfy the coverage requirement, the set of the principal points that will represent all the sensor nodes in the monitored region as $PSET= \lbrace P_1,\ldots ,P_p, \ldots , P_{N_P^k} \rbrace $, where $N_P^k = N_{sp} * N^k $ and according to the proposed model in figure ~\ref{fig:cluster2}. These points can be used by the wireless sensor node leader which will be chosen in each region in A to build a new parameter $\alpha_{jp}$ that represents the coverage possibility for each principal point $P_p$ of each wireless sensor node $s_j$ in $A^k$ as in \eqref{eq12}:\\
-\noindent Our model is based on the model proposed by
+\indent Our model is based on the model proposed by
\cite{pedraza2006} where the objective is to find a maximum number of
-disjoint cover sets. To accomplish this goal, authors propose an
-integer program which forces undercoverage and overcoverage of targets
+disjoint cover sets. To accomplish this goal, the authors propose an
+integer program, which forces undercoverage and overcoverage of targets
to become minimal at the same time. They use binary variables
$x_{jl}$ to indicate if sensor $j$ belongs to cover set $l$. In our
-model, we consider binary variables $X_{j}$ which determine the
+model, we consider binary variables $X_{j}$, which determine the
activation of sensor $j$ in the sensing phase of the round. We also
consider primary points as targets. The set of primary points is
denoted by $P$ and the set of sensors by $J$.
\end{array} \right.
%\label{eq12}
\end{equation}
-The number of sensors that are covering point $p$ is equal to
-$\sum_{j \in J} \alpha_{jp} * X_{j}$ where:
+The number of active sensors that cover the primary point $p$ is equal
+to $\sum_{j \in J} \alpha_{jp} * X_{j}$ where:
\begin{equation}
X_{j} = \left \{
\begin{array}{l l}
\begin{equation}
\Theta_{p} = \left \{
\begin{array}{l l}
- 0 & \mbox{if point $p$ is not covered,}\\
+ 0 & \mbox{if the primary point}\\
+ & \mbox{$p$ is not covered,}\\
\left( \sum_{j \in J} \alpha_{jp} * X_{j} \right)- 1 & \mbox{otherwise.}\\
\end{array} \right.
\label{eq13}
\begin{equation}
U_{p} = \left \{
\begin{array}{l l}
- 1 &\mbox{if point $p$ is not covered,} \\
+ 1 &\mbox{if the primary point $p$ is not covered,} \\
0 & \mbox{otherwise.}\\
\end{array} \right.
\label{eq14}
\right.
\end{equation}
\begin{itemize}
-\item $X_{j}$ : indicates whether or not sensor $j$ is actively
+\item $X_{j}$ : indicates whether or not the sensor $j$ is actively
sensing in the round (1 if yes and 0 if not);
\item $\Theta_{p}$ : {\it overcoverage}, the number of sensors minus
- one that are covering point $p$;
-\item $U_{p}$ : {\it undercoverage}, indicates whether or not point
+ one that are covering the primary point $p$;
+\item $U_{p}$ : {\it undercoverage}, indicates whether or not the primary point
$p$ is being covered (1 if not covered and 0 if covered).
\end{itemize}
-The first group of constraints indicates that some point $p$ should be
-covered by at least one sensor and, if it is not always the case,
-overcoverage and undercoverage variables help balance the restriction
-equation by taking positive values. There are two main objectives.
-First we limit overcoverage of primary points in order to activate a
-minimum number of sensors. Second we prevent that parts of the
-subregion are not monitored by minimizing undercoverage. The weights
-$w_\theta$ and $w_U$ must be properly chosen so as to guarantee that
-the maximum number of points are covered during each round.
+The first group of constraints indicates that some primary point $p$
+should be covered by at least one sensor and, if it is not always the
+case, overcoverage and undercoverage variables help balance the
+restriction equations by taking positive values. There are two main
+objectives. First, we limit the overcoverage of primary points in
+order to activate a minimum number of sensors. Second, we prevent the
+absence of monitoring on some parts of the subregion by minimizing the undercoverage. The
+weights $w_\theta$ and $w_U$ must be properly chosen so as to
+guarantee that the maximum number of points are covered during each
+round.
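For the reader's convenience, this model can be handed to any generic
integer programming solver. The sketch below uses the open-source PuLP
library and encodes, for every primary point $p$, the constraint
$\sum_{j \in J} \alpha_{jp} X_{j} - \Theta_{p} + U_{p} = 1$, which
follows from the definitions of $\Theta_{p}$ and $U_{p}$ given above;
it is an illustration, not the solver setup used by the WSNL in our
simulations.
\begin{verbatim}
# Illustrative sketch only.
import pulp

def decide_active_sensors(alpha, w_theta, w_u):
    # alpha[j][p] = 1 if sensor j covers primary point p
    J = range(len(alpha))
    P = range(len(alpha[0]))
    prob = pulp.LpProblem("round_decision", pulp.LpMinimize)
    X = [pulp.LpVariable("X_%d" % j, cat="Binary") for j in J]
    T = [pulp.LpVariable("Theta_%d" % p, lowBound=0) for p in P]
    U = [pulp.LpVariable("U_%d" % p, cat="Binary") for p in P]
    # weighted overcoverage plus undercoverage
    prob += pulp.lpSum(w_theta * T[p] + w_u * U[p] for p in P)
    for p in P:
        prob += (pulp.lpSum(alpha[j][p] * X[j] for j in J)
                 - T[p] + U[p] == 1)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [j for j in J if X[j].value() == 1]
\end{verbatim}
With $w_U$ much larger than $w_{\Theta}$, as in the experiments of
Section~\ref{exp}, covering every primary point takes precedence over
deactivating redundant sensors.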
%In equation \eqref{eq15}, there are two main objectives: the first one using the Overcoverage parameter to minimize the number of active sensor nodes in the produced final solution vector $X$ which leads to improve the life time of wireless sensor network. The second goal by using the Undercoverage parameter to maximize the coverage in the region by means of covering each primary point in $SSET^k$.The two objectives are achieved at the same time. The constraint which represented in equation \eqref{eq16} refer to the coverage function for each primary point $P_p$ in $SSET^k$ , where each $P_p$ should be covered by
%at least one sensor node in $A^k$. The objective function in \eqref{eq15} involving two main objectives to be optimized simultaneously, where optimal decisions need to be taken in the presence of trade-offs between the two conflicting main objectives in \eqref{eq15} and this refer to that our coverage optimization problem is a multi-objective optimization problem and we can use the BPSO to solve it. The concept of Overcoverage and Undercoverage inspired from ~\cite{Fernan12} but we use it with our model as stated in subsection \ref{Sensing Coverage Model} with some modification to be applied later by BPSO.
%\end{itemize}
-\section{\uppercase{Simulation Results}}
+\section{Simulation results}
\label{exp}
-In this section, we conducted a series of simulations, to evaluate the
-efficiency and relevance of our approach, using the discrete event
+In this section, we conducted a series of simulations to evaluate the
+efficiency and the relevance of our approach, using the discrete event
simulator OMNeT++ \cite{varga}. We performed simulations for five
different densities varying from 50 to 250~nodes. Experimental results
were obtained from randomly generated networks in which nodes are
-deployed over a $(50 \times 25)~m^2 $ sensing field. For each network
-deployment, we assume that the deployed nodes can fully cover the
-sensing field with the given sensing range. 10 simulation runs are
-performed with different network topologies for each node density.
-The results presented hereafter are the average of these 10 runs. A
-simulation ends when all the nodes are dead or the sensor network
-becomes disconnected (some nodes may not be able to sent to a base
-station an event they sense).
+deployed over a $(50 \times 25)~m^2 $ sensing field.
+More precisely, the deployment is controlled at a coarse scale in
+ order to ensure that the deployed nodes can fully cover the sensing
+ field with the given sensing range.
+10~simulation runs are performed with
+different network topologies for each node density. The results
+presented hereafter are the average of these 10 runs. A simulation
+ends when all the nodes are dead or the sensor network becomes
+disconnected (some nodes may no longer be able to report a sensed
+event to a base station).
Our proposed coverage protocol uses the radio energy dissipation model
defined by~\cite{HeinzelmanCB02} as energy consumption model for each
wireless sensor node when transmitting or receiving packets. The
energy of each node in a network is initialized randomly within the
range 24-60~joules, and each sensor node will consume 0.2 watts during
-the sensing period which will have a duration of 60 seconds. Thus, an
-active node will consume 12~joules during sensing phase, while a
+the sensing period, which will last 60 seconds. Thus, an
+active node will consume 12~joules during the sensing phase, while a
sleeping node will use 0.002 joules. Each sensor node will not
-participate in the next round if it's remaining energy is less than 12
-joules. In all experiments the parameters are set as follows:
-$R_s=5m$, $w_{\Theta}=1$, and $w_{U}=|P^2|$.
+participate in the next round if its remaining energy is less than 12
+joules. In all experiments, the parameters are set as follows:
+$R_s=5~m$, $w_{\Theta}=1$, and $w_{U}=|P|^2$.
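To make the energy accounting concrete, the following sketch reproduces
the per-round bookkeeping described above. It only models the
sensing-phase costs and deliberately ignores the communication energy
given by the radio dissipation model of~\cite{HeinzelmanCB02}.
\begin{verbatim}
# Illustrative sketch only (sensing-phase energy only).
import random

SENSING_POWER  = 0.2    # watts while sensing
SENSING_PERIOD = 60     # seconds per round
ACTIVE_COST    = SENSING_POWER * SENSING_PERIOD   # 12 J
SLEEP_COST     = 0.002  # joules per round while asleep

def init_energy(n):
    # initial energies drawn in the 24-60 joule range
    return [random.uniform(24.0, 60.0) for _ in range(n)]

def update_energy(energy, active):
    # subtract the round cost; nodes left with less than the
    # cost of one active round (12 J) will not take part in
    # the next round
    for j in range(len(energy)):
        energy[j] -= ACTIVE_COST if j in active else SLEEP_COST
    return [j for j, e in enumerate(energy) if e >= ACTIVE_COST]
\end{verbatim}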
-We evaluate the efficiency of our approach using some performance
+We evaluate the efficiency of our approach by using some performance
metrics such as: coverage ratio, number of active nodes ratio, energy
saving ratio, energy consumption, network lifetime, execution time,
-and number of stopped simulation runs. Our approach called Strategy~2
-(with Two Leaders) works with two subregions, each one having a size
+and number of stopped simulation runs. Our approach called strategy~2
+(with two leaders) works with two subregions, each one having a size
of $(25 \times 25)~m^2$. Our strategy will be compared with two other
-approaches. The first one, called Strategy~1 (with One Leader), works
-as Strategy~2, but considers only one region of $(50 \times 25)$ $m^2$
+approaches. The first one, called strategy~1 (with one leader), works
+as strategy~2, but considers only one region of $(50 \times 25)$ $m^2$
with only one leader. The other approach, called Simple Heuristic,
-consists in dividing uniformly the region into squares of $(5 \times
+consists in uniformly dividing the region into squares of $(5 \times
5)~m^2$. During the decision phase, in each square, a sensor is
randomly chosen; it will remain turned on for the coming sensing
phase.
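For clarity, the simple heuristic can be summarized by the sketch
below; the grid indexing is our own reading of the description, one
sensor being drawn at random in every occupied $(5 \times 5)~m^2$
square.
\begin{verbatim}
# Illustrative sketch only.
import random
from collections import defaultdict

def simple_heuristic(positions, square=5.0):
    # positions: list of (x, y) sensor coordinates in meters
    cells = defaultdict(list)
    for j, (x, y) in enumerate(positions):
        cells[(int(x // square), int(y // square))].append(j)
    # one randomly chosen sensor stays on in each square
    return [random.choice(nodes) for nodes in cells.values()]
\end{verbatim}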
-\subsection{The impact of the Number of Rounds on Coverage Ratio}
+\subsection{The impact of the number of rounds on the coverage ratio}
In this experiment, the coverage ratio measures how much the area of a
sensor field is covered. In our case, the coverage ratio is regarded
for the three approaches. It can be seen that the three approaches
give similar coverage ratios during the first rounds. From the
9th~round the coverage ratio decreases continuously with the simple
-heuristic, while the other two strategies provide superior coverage to
+heuristic, while the two other strategies keep a coverage ratio above
$90\%$ for five more rounds. The coverage ratio decreases when the number
of rounds increases due to dead nodes. Although some nodes are dead,
thanks to strategy~1 or~2, other nodes are preserved to ensure the
coverage. Moreover, a denser sensor network makes it possible to
-maintain the full coverage for larger number of rounds. Strategy~2 is
-slightly more efficient that strategy 1, because strategy~2 subdivides
+maintain the full coverage for a larger number of rounds. Strategy~2 is
+slightly more efficient than strategy 1, because strategy~2 subdivides
the region into 2~subregions and if one of the two subregions becomes
-disconnected, coverage may be still ensured in the remaining
+disconnected, the coverage may be still ensured in the remaining
subregion.
\parskip 0pt
\begin{figure}[h!]
\centering
-\includegraphics[scale=0.55]{TheCoverageRatio150.eps} %\\~ ~ ~(a)
-\caption{The impact of the Number of Rounds on Coverage Ratio for 150 deployed nodes}
+\includegraphics[scale=0.5]{TheCoverageRatio150g.eps} %\\~ ~ ~(a)
+\caption{The impact of the number of rounds on the coverage ratio for 150 deployed nodes}
\label{fig3}
\end{figure}
-\subsection{The impact of the Number of Rounds on Active Sensors Ratio}
+\subsection{The impact of the number of rounds on the active sensors ratio}
It is important to have as few active nodes as possible in each round,
in order to minimize the communication overhead and maximize the
network lifetime. This point is assessed through the Active Sensors
-Ratio, which is defined as follows:
+Ratio (ASR), which is defined as follows:
\begin{equation*}
\scriptsize
\mbox{ASR}(\%) = \frac{\mbox{Number of active sensors
\begin{figure}[h!]
\centering
-\includegraphics[scale=0.55]{TheActiveSensorRatio150.eps} %\\~ ~ ~(a)
-\caption{The impact of the Number of Rounds on Active Sensors Ratio for 150 deployed nodes }
+\includegraphics[scale=0.5]{TheActiveSensorRatio150g.eps} %\\~ ~ ~(a)
+\caption{The impact of the number of rounds on the active sensors ratio for 150 deployed nodes }
\label{fig4}
\end{figure}
The results presented in figure~\ref{fig4} show the superiority of
-both proposed strategies, the Strategy with Two Leaders and the one
-with a single Leader, in comparison with the Simple Heuristic. The
-Strategy with One Leader uses less active nodes than the Strategy with
-Two Leaders until the last rounds, because it uses central control on
-the whole sensing field. The advantage of the Strategy~2 approach is
+both proposed strategies, the strategy with two leaders and the one
+with a single leader, in comparison with the simple heuristic. The
+strategy with one leader uses less active nodes than the strategy with
+two leaders until the last rounds, because it uses central control on
+the whole sensing field. The advantage of the strategy~2 approach is
that even if a network is disconnected in one subregion, the other one
usually continues the optimization process, and this extends the
lifetime of the network.
-\subsection{The impact of the Number of Rounds on Energy Saving Ratio}
+\subsection{The impact of the number of rounds on the energy saving ratio}
In this experiment, we consider a performance metric linked to energy.
-This metric, called Energy Saving Ratio, is defined by:
+This metric, called Energy Saving Ratio (ESR), is defined by:
\begin{equation*}
\scriptsize
\mbox{ESR}(\%) = \frac{\mbox{Number of alive sensors during this round}}
{\mbox{Total number of sensors in the network for the region}} \times 100.
\end{equation*}
-The longer the ratio is high, the more redundant sensor nodes are
-switched off, and consequently the longer the network may be alive.
+The higher the ratio is, the more redundant sensor nodes are
+switched off, and consequently the longer the network may live.
Figure~\ref{fig5} shows the average Energy Saving Ratio versus rounds
for all three approaches and for 150 deployed nodes.
%\centering
% \begin{multicols}{6}
\centering
-\includegraphics[scale=0.55]{TheEnergySavingRatio150.eps} %\\~ ~ ~(a)
-\caption{The impact of the Number of Rounds on Energy Saving Ratio for 150 deployed nodes}
+\includegraphics[scale=0.5]{TheEnergySavingRatio150g.eps} %\\~ ~ ~(a)
+\caption{The impact of the number of rounds on the energy saving ratio for 150 deployed nodes}
\label{fig5}
\end{figure}
The simulation results show that our strategies allow to efficiently
save energy by turning off some sensors during the sensing phase. As
-expected, the Strategy with One Leader is usually slightly better than
-the second strategy, because the global optimization permit to turn
+expected, the strategy with one leader is usually slightly better than
+the second strategy, because the global optimization permits turning
off more sensors. Indeed, when there are two subregions, more nodes
remain awake near the border shared by them. Note that again as the
-number of rounds increase the two leader strategy becomes the most
-performing, since its takes longer to have the two subregion networks
+number of rounds increases the two leaders' strategy becomes the most
+performing one, since it takes longer to have the two subregion networks
simultaneously disconnected.
-\subsection{The Network Lifetime}
+\subsection{The percentage of stopped simulation runs}
-We have defined the network lifetime as the time until all nodes have
-been drained of their energy or each sensor network monitoring a area
-becomes disconnected. In figure~\ref{fig6}, the network lifetime for
-different network sizes and for the three approaches is illustrated.
+We will now study the percentage of simulations that stopped due to
+network disconnections per round for each of the three approaches.
+Figure~\ref{fig6} illustrates the percentage of stopped simulation
+runs per round for 150 deployed nodes. It can be observed that the
+simple heuristic is the approach that stops first because the nodes
+are randomly chosen. Among the two proposed strategies, the
+centralized one first exhibits network disconnections. Thus, as
+explained previously, in the case of the strategy with several subregions,
+the optimization effectively continues as long as a network in a
+subregion is still connected. This longer partial coverage
+optimization participates in extending the network lifetime.
\begin{figure}[h!]
-%\centering
-% \begin{multicols}{6}
\centering
-\includegraphics[scale=0.5]{TheNetworkLifetime.eps} %\\~ ~ ~(a)
-\caption{The Network Lifetime }
+\includegraphics[scale=0.5]{TheNumberofStoppedSimulationRuns150g.eps}
+\caption{The percentage of stopped simulation runs compared to the number of rounds for 150 deployed nodes }
\label{fig6}
\end{figure}
-As highlighted by figure~\ref{fig6}, the network lifetime obviously
-increases when the size of the network increase, with our approaches
-that lead to the larger lifetime improvement. By choosing for each
-round the well suited nodes to cover the region of interest and by
-leaving sleep the other ones to be used later in next rounds, both
-proposed strategies efficiently prolong the lifetime. Comparison shows
-that the larger the sensor number, the more our strategies outperform
-the heuristic. Strategy~2, which uses two leaders, is the best one
-because it is robust to network disconnection in one subregion. It
-also means that distributing the algorithm in each node and
-subdividing the sensing field into many subregions, which are managed
-independently and simultaneously, is the most relevant way to maximize
-the lifetime of a network.
-
-\subsection{The Energy Consumption}
+\subsection{The energy consumption}
In this experiment, we study the effect of the multi-hop communication
-protocol on the performance of the Strategy with Two Leaders and
+protocol on the performance of the strategy with two leaders and
compare it with the other two approaches. The average energy
consumption resulting from wireless communications is calculated
-considering the energy spent by all the nodes when transmitting and
+by taking into account the energy spent by all the nodes when transmitting and
receiving packets during the network lifetime. This average value,
which is obtained for 10~simulation runs, is then divided by the
average number of rounds to define a metric allowing a fair comparison
between networks having different densities.
-Figure~\ref{fig7} illustrates the Energy Consumption for the different
+Figure~\ref{fig7} illustrates the energy consumption for the different
network sizes and the three approaches. The results show that the
-Strategy with Two Leaders is the most competitive from energy
-consumption point of view. A centralized method, like the Strategy
-with One Leader, has a high energy consumption due to the many
+strategy with two leaders is the most competitive from the energy
+consumption point of view. A centralized method, like the strategy
+with one leader, has a high energy consumption due to many
communications. In fact, a distributed method greatly reduces the
number of communications thanks to the partitioning of the initial
network in several independent subnetworks. Let us notice that even if
a centralized method consumes far more energy than the simple
heuristic, since the energy cost of communications during a round is a
small part of the energy spent in the sensing phase, the
-communications have a small impact on the lifetime.
+communications have a small impact on the network lifetime.
\begin{figure}[h!]
\centering
-\includegraphics[scale=0.55]{TheEnergyConsumption.eps}
-\caption{The Energy Consumption }
+\includegraphics[scale=0.5]{TheEnergyConsumptiong.eps}
+\caption{The energy consumption}
\label{fig7}
\end{figure}
-\subsection{The impact of Number of Sensors on Execution Time}
+\subsection{The impact of the number of sensors on execution time}
A sensor node has limited energy resources and computing power,
therefore it is important that the proposed algorithm has the shortest
possible execution time. The energy of a sensor node must be mainly
-used for the sensing phase, not for the pre-sensing ones.
-Table~\ref{table1} gives the average execution times on a laptop of
-the decision phase during one round. They are given for the different
-approaches and various numbers of sensors. The lack of any
-optimization explains why the heuristic has very low execution times.
-Conversely, the Strategy with One Leader which requires to solve an
-optimization problem considering all the nodes presents redhibitory
-execution times. Moreover, increasing of 50~nodes the network size
-multiplies the time by almost a factor of 10. The Strategy with Two
-Leaders has more suitable times. We think that in distributed fashion
-the solving of the optimization problem in a subregion can be tackled
-by sensor nodes. Overall, to be able deal with very large networks a
+used for the sensing phase, not for the pre-sensing ones.
+Table~\ref{table1} gives the average execution times in seconds
+on a laptop of the decision phase (solving of the optimization problem)
+during one round. They are given for the different approaches and
+various numbers of sensors. The lack of any optimization explains why
+the heuristic has very low execution times. Conversely, the strategy
+with one leader, which requires solving an optimization problem
+considering all the nodes, presents prohibitive execution times.
+Moreover, increasing the network size by 50~nodes multiplies the time
+by almost a factor of 10. The strategy with two leaders has more
+suitable times. We think that, in a distributed fashion, the solving of
+the optimization problem in a subregion can be tackled by the sensor
+nodes. Overall, to be able to deal with very large networks, a
distributed method is clearly required.
\begin{table}[ht]
-\caption{The Execution Time(s) vs The Number of Sensors }
+\caption{THE EXECUTION TIME (IN SECONDS) VS THE NUMBER OF SENSORS}
% title of Table
\centering
% centered columns (4 columns)
\hline
%inserts double horizontal lines
-Sensors Number & Strategy & Strategy & Simple Heuristic \\ [0.5ex]
- & (with Two Leaders) & (with One Leader) & \\ [0.5ex]
+Sensors number & Strategy~2 & Strategy~1 & Simple heuristic \\ [0.5ex]
+ & (with two leaders) & (with one leader) & \\ [0.5ex]
%Case & Strategy (with Two Leaders) & Strategy (with One Leader) & Simple Heuristic \\ [0.5ex]
% inserts table
%heading
% is used to refer this table in the text
\end{table}
-\subsection{The Number of Stopped Simulation Runs}
+\subsection{The network lifetime}
-Finally, we will study the number of simulation which stopped due to
-network disconnection, per round for each of the three approaches.
-Figure~\ref{fig8} illustrates the number of stopped simulation runs
-per round for 150 deployed nodes. It can be observed that the
-heuristic is the approach which stops the earlier because the nodes
-are chosen randomly. Among the two proposed strategies, the
-centralized one first exhibits network disconnection. Thus, as
-explained previously, in case of the strategy with several subregions
-the optimization effectively continues as long as a network in a
-subregion is still connected. This longer partial coverage
-optimization participates in extending the lifetime.
+Finally, we have defined the network lifetime as the time until all
+nodes have been drained of their energy or each sensor network
+monitoring an area has become disconnected. In figure~\ref{fig8}, the
+network lifetime for different network sizes and for both the strategy
+with two leaders and the simple heuristic is illustrated.
+ We no longer consider the centralized strategy with one
+ leader, because, as shown above, this strategy results in execution
+ times that quickly become unsuitable for a sensor network.
\begin{figure}[h!]
+%\centering
+% \begin{multicols}{6}
\centering
-\includegraphics[scale=0.55]{TheNumberofStoppedSimulationRuns150.eps}
-\caption{The Number of Stopped Simulation Runs against Rounds for 150 deployed nodes }
+\includegraphics[scale=0.5]{TheNetworkLifetimeg.eps} %\\~ ~ ~(a)
+\caption{The network lifetime }
\label{fig8}
\end{figure}
-\section{\uppercase{Conclusions and Future Works}}
+As highlighted by figure~\ref{fig8}, the network lifetime obviously
+increases when the size of the network increases, with our approach
+leading to the largest lifetime improvement. By choosing the best
+suited nodes, for each round, to cover the region of interest and by
+letting the other ones sleep in order to be used later in next rounds,
+our strategy efficiently prolongs the network lifetime. Comparison shows that
+the larger the sensor number is, the more our strategies outperform
+the simple heuristic. Strategy~2, which uses two leaders, is the best
+one because it is robust to network disconnection in one subregion. It
+also means that distributing the algorithm in each node and
+subdividing the sensing field into many subregions, which are managed
+independently and simultaneously, is the most relevant way to maximize
+the lifetime of a network.
+
+\section{Conclusion and future works}
\label{sec:conclusion}
-In this paper, we have addressed the problem of coverage and lifetime
+In this paper, we have addressed the problem of coverage and lifetime
optimization in wireless sensor networks. This is a key issue as
sensor nodes have limited resources in terms of memory, energy and
computational power. To cope with this problem, the field of sensing
divide-and-conquer method, and then a multi-round coverage protocol
will optimize coverage and lifetime performance in each subregion.
The proposed protocol combines two efficient techniques: network
-Leader Election and sensor activity scheduling, where the challenges
+leader election and sensor activity scheduling, where the challenges
include how to select the most efficient leader in each subregion and
-the best representative active nodes that will optimize the lifetime
+the best representative active nodes that will optimize the network lifetime
while taking the responsibility of covering the corresponding
subregion. The network lifetime in each subregion is divided into
rounds, each round consists of four phases: (i) Information Exchange,
(ii) Leader Election, (iii) an optimization-based Decision in order to
select the nodes remaining active for the last phase, and (iv)
-Sensing. The simulations results show the relevance of the proposed
-protocol in terms of lifetime, coverage ratio, active sensors Ratio,
+Sensing. The simulations show the relevance of the proposed
+protocol in terms of lifetime, coverage ratio, active sensors ratio,
energy saving, energy consumption, execution time, and the number of
stopped simulation runs due to network disconnection. Indeed, when
dealing with large and dense wireless sensor networks, a distributed
approach, like the one we propose, makes it possible to reduce the difficulty of a
-single global optimization problem by partitioning it in many smaller
-problems, one per subregion, that can be solved more easily. In
-future, we plan to study and propose a coverage protocol which
-computes all active sensor schedules in a single round, using
+single global optimization problem by partitioning it into many smaller
+problems, one per subregion, that can be solved more easily.
+
+In future work, we plan to study and propose a coverage protocol, which
+computes all active sensor schedules in a single step, using
optimization methods such as swarm optimization or evolutionary
-algorithms. The computation of all cover sets in one round is far more
+algorithms. The round will still consist of 4 phases, but the
+ decision phase will compute the schedules for several sensing phases,
+ which, aggregated together, define a kind of meta-sensing phase.
+The computation of all cover sets at once is far more
difficult, but will reduce the communication overhead.
-
% use section* for acknowledgement
%\section*{Acknowledgment}