scheduling performed by each elected leader. This two-step process takes
place periodically, in order to choose a small set of nodes that will remain
active for sensing during a time slot. Each set is built to ensure coverage at
a low
energy cost, thereby optimizing the network lifetime. Simulations are conducted
using the discrete event simulator OMNeT++. We refer to the characteristics of
a Medusa II sensor for the energy consumption and the computation time. In
comparison with two other existing methods, our approach is able to increase
the WSN lifetime and provides improved coverage performance.}

%\onecolumn
during the last few years, is the design of energy-efficient approaches for
coverage and connectivity~\cite{conti2014mobile}. Coverage reflects how well a
sensor field is monitored. On the one hand we want to monitor the area of
interest in the most efficient way~\cite{Nayak04}, which means that we want to
maintain the best coverage as long as possible. On the other
hand we want to use as little energy as possible. Sensor nodes are
battery-powered with no means of recharging or replacing them, usually due to
environmental (hostile or impractical environments) or cost reasons. Therefore,
it is desirable to deploy WSNs with high densities so as to exploit the
overlapping sensing regions of some sensor nodes: energy can be saved by
turning off some of them during the sensing phase, which prolongs the network
lifetime. A WSN can use various types of sensors such as
\cite{ref17,ref19}: thermal, seismic, magnetic, visual, infrared, acoustic,
and radar. These sensors are capable of observing different physical
conditions such as: temperature, humidity, pressure, speed, direction,
kinds of objects, and mechanical stress levels on attached objects.
Consequently, there is a wide range of WSN applications such as~\cite{ref22}:
health-care, environment, agriculture, public safety, military, transportation
systems, and industry applications.
In this paper we design a protocol that focuses on the area coverage problem
with the objective of maximizing the network lifetime. Our proposal, the
Distributed Lifetime Coverage Optimization (DiLCO) protocol, maintains the
coverage and improves the lifetime in WSNs. The area of interest is first
divided into subregions using a divide-and-conquer algorithm and an activity
scheduling for sensor nodes is then planned by the elected leader in each
subregion. In fact, the nodes in a subregion can be seen as a cluster where each
node sends sensing data to the cluster head or the sink node. Furthermore, the
activities in a subregion/cluster can continue even if another cluster stops due
to too many node failures. Our DiLCO protocol considers periods, where a period
starts with a discovery phase to exchange information between sensors of the
same subregion, in order to choose a suitable sensor node (the
leader) to carry out the coverage strategy. In each subregion the activation of
the sensors for the sensing phase of the current period is obtained by solving
an integer program. The resulting activation vector is broadcast by a leader to
every node of its subregion.
Our previous paper~\cite{idrees2014coverage} relies almost exclusively on the
paper we made more realistic simulations by taking into account the
characteristics of a Medusa II sensor~\cite{raghunathan2002energy} to measure
the energy consumption and the computation time. We have implemented two other
existing distributed approaches (DESK~\cite{ChinhVu} and
GAF~\cite{xu2001geography}) in order to compare their performance with our
approach. We focus on the DESK and GAF protocols for two reasons. First, our
protocol is inspired by both of them: DiLCO uses a regular division of the area
of interest as in GAF and a temporal division into rounds as in DESK. Second,
DESK and GAF are well-known protocols, easy to implement, and often used as
references for comparison. We also focus on performance analysis based on the
number of subregions.
The remainder of the paper is organized as follows.
Section~\ref{sec:Literature Review} reviews some related works. The next
section describes
the DiLCO protocol, followed in Section~\ref{cp} by the coverage model
formulation which is used to schedule the activation of
sensors. Section~\ref{sec:Simulation Results and Analysis} shows the simulation
results. The paper ends with a conclusion and some suggestions for further work
in Section~\ref{sec:Conclusion and Future Works}.
requirements (e.g. area monitoring, connectivity, power efficiency). For
instance, Jaggi et al. \cite{jaggi2006} address the problem of maximizing
network lifetime by dividing sensors into the maximum number of disjoint subsets
so that each subset can ensure both coverage and connectivity. A greedy
algorithm is applied once to solve this problem and the computed sets are
activated in succession to achieve the desired network lifetime. Vu
\cite{chin2007} and Padmatvathy et al.~\cite{pc10} propose algorithms working in a
smaller subregions, and in each one, a node called the leader is in charge of
selecting the active sensors for the current period.}
Our approach to selecting the leader node in a subregion is quite different
from the cluster head selection methods used in LEACH
\cite{DBLP:conf/hicss/HeinzelmanCB00} or its variants \cite{ijcses11}. Contrary
to LEACH, the division of the area of interest is assumed to be performed
before the leader election. Moreover, we assume that the sensors are deployed
almost uniformly and with high density over the area of interest, so that the
division is fixed and regular. As in LEACH, our protocol works in rounds. In
each round, during the pre-sensing phase, nodes make autonomous decisions. In
LEACH, each sensor elects itself to be a cluster head, and each non-cluster
head then determines its cluster for the round. In our protocol, the nodes of
the same subregion select their leader. In both protocols, the amount of
remaining energy in each node is taken into account so as to promote the nodes
with the most energy to become leader. Contrary to the LEACH protocol, where
all sensors are active during the sensing phase, our protocol deactivates a
subset of sensors through an optimization process, which significantly reduces
the energy consumption.
A large variety of coverage scheduling algorithms has been developed. Many of
the existing algorithms, dealing with the maximization of the number of cover
sets, are heuristics. These heuristics involve the construction of a cover set
primary points are covered. Obviously, the approximation of coverage is more or
less accurate according to the number of primary points.
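
As a concrete illustration of this approximation (the names, the disc sensing
model, and the sensing radius $R_s$ used below are assumptions of this sketch,
not elements of the protocol specification), each primary point can simply be
tested against the set of active sensors:
\begin{verbatim}
#include <vector>

// Illustrative sketch: coverage is approximated by testing whether each
// primary point lies within the sensing radius Rs of at least one active
// sensor. Names, the disc model and Rs are assumptions made for this sketch.
struct Point { double x, y; };

bool isCovered(const Point& p, const std::vector<Point>& active, double Rs) {
    for (const Point& s : active) {
        double dx = s.x - p.x, dy = s.y - p.y;
        if (dx * dx + dy * dy <= Rs * Rs) return true;  // inside the sensing disc
    }
    return false;
}

// Fraction of primary points covered: more primary points give a finer, but
// more costly, approximation of the real area coverage.
double coverageRatio(const std::vector<Point>& primary,
                     const std::vector<Point>& active, double Rs) {
    if (primary.empty()) return 0.0;
    int covered = 0;
    for (const Point& p : primary)
        if (isCovered(p, active, Rs)) ++covered;
    return static_cast<double>(covered) / primary.size();
}
\end{verbatim}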
\subsection{Main idea}
\label{main_idea}
\noindent We start by applying a divide-and-conquer algorithm to partition the
area of interest into smaller areas called subregions and then our protocol is
executed simultaneously in each subregion. Sensor nodes are assumed to be
deployed almost uniformly over the region and the subdivision of the area of
interest is regular.
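
As a minimal sketch of such a regular subdivision (the rectangular area, the
grid dimensions, and all names below are illustrative assumptions, not values
taken from our experiments), each sensor can determine its subregion directly
from its own coordinates:
\begin{verbatim}
// Illustrative sketch: the area of interest is assumed to be a W x H rectangle
// split into nx x ny equal rectangular subregions (a regular grid). Names and
// parameters are hypothetical and only serve to illustrate the subdivision.
struct Area {
    double width;   // W
    double height;  // H
    int nx;         // number of subregions along the x axis
    int ny;         // number of subregions along the y axis
};

// Index (0 .. nx*ny - 1, numbered row by row) of the subregion containing (x, y).
int subregionOf(const Area& a, double x, double y) {
    int col = static_cast<int>(x / (a.width  / a.nx));
    int row = static_cast<int>(y / (a.height / a.ny));
    if (col >= a.nx) col = a.nx - 1;  // clamp points lying on the right border
    if (row >= a.ny) row = a.ny - 1;  // clamp points lying on the top border
    return row * a.nx + col;
}
\end{verbatim}
Since the grid is known beforehand, no communication is needed for a node to
determine its subregion, which is consistent with the fixed and regular
division assumed above.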
\begin{figure}[ht!]
\centering
An outline of the protocol implementation is given by Algorithm~\ref{alg:DiLCO}
which describes the execution of a period by a node (denoted by $s_j$ for a
sensor node indexed by $j$). At the beginning a node checks whether it has
enough energy (its energy should be greater than a fixed threshold $E_{th}$) to
stay active during the next sensing phase. If yes, it
exchanges information with all the other nodes belonging to the same subregion:
it collects from each node its position coordinates, remaining energy ($RE_j$),
ID, and the number of one-hop neighbors still alive. The INFO packet contains
two parts: a header and a data payload. The sensor ID is included
in the header, where the header size is 8 bits. The data part includes
position coordinates (64 bits), remaining energy (32 bits), and the number of
one-hop live neighbors (8 bits). Therefore the size of the INFO packet is 112
bits. Once the first phase is completed, the nodes of a subregion choose a
leader to take the decision based on the following criteria with decreasing
importance: larger number of neighbors, larger remaining energy, and then in
case of equality, larger index. After that, if the sensor node is the leader, it
will solve an integer program (see Section~\ref{cp}). This
integer program contains boolean variables $X_j$, where $X_j=1$ means that
sensor $j$ will be active in the next sensing phase. Only sensors with enough
remaining energy are involved in the integer program ($J$ is the set of all
send an ActiveSleep packet to each sensor in the same subregion to indicate
whether it has to be active or not. Otherwise, if the sensor is not the leader, it
will wait for the ActiveSleep packet to know its state for the coming sensing
phase.
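
The following C++ sketch summarizes the information exchange and leader
election step described above. The field widths follow the INFO packet
description (8-bit ID, 64-bit coordinates, 32-bit remaining energy, 8-bit
neighbor count, i.e. 112 bits in total), but the type and function names, as
well as the exact field encodings, are illustrative assumptions and not part of
the protocol itself.
\begin{verbatim}
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative view of the data carried by an INFO packet
// (8 + 64 + 32 + 8 = 112 bits). Encodings and names are assumptions.
struct InfoPacket {
    uint8_t  id;              // sensor ID (8-bit header)
    float    x, y;            // position coordinates (2 x 32 bits = 64 bits)
    uint32_t remainingEnergy; // remaining energy RE_j (32 bits)
    uint8_t  aliveNeighbors;  // one-hop neighbors still alive (8 bits)
};

// Leader election among the nodes of a subregion, using the criteria given in
// the text with decreasing importance: larger number of alive neighbors, then
// larger remaining energy, then larger index in case of equality.
// Assumes the vector is non-empty (every node knows at least itself).
uint8_t electLeader(const std::vector<InfoPacket>& nodes) {
    return std::max_element(nodes.begin(), nodes.end(),
        [](const InfoPacket& a, const InfoPacket& b) {
            if (a.aliveNeighbors != b.aliveNeighbors)
                return a.aliveNeighbors < b.aliveNeighbors;
            if (a.remainingEnergy != b.remainingEnergy)
                return a.remainingEnergy < b.remainingEnergy;
            return a.id < b.id;
        })->id;
}
\end{verbatim}
Since every node of a subregion applies the same deterministic rule to the same
exchanged information, all nodes agree on the leader without any additional
message.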
%which provides a set of sensors planned to be
%active in the next sensing phase.
\end{array}
\right.
\end{equation}
The objective function is a weighted sum of overcoverage and undercoverage. The
goal is to limit the overcoverage in order to activate a minimal number of
sensors while simultaneously preventing undercoverage. By
choosing $w_{U}$ much larger than $w_{\theta}$, the coverage of a
maximum of primary points is ensured. Then for the same number of covered
primary points, the solution with a minimal number of active sensors is
preferred.
%Both weights $w_\theta$ and $w_U$ must be carefully chosen in
%order to guarantee that the maximum number of points are covered during each
%period.
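
To make this argument explicit, assume for illustration that the objective has
the per-point form below, where $P$ denotes the set of primary points,
$0 \le \Theta_{p} \le |J|$ counts the sensors covering point $p$ beyond the
first one, and $U_{p} \in \{0,1\}$ indicates that $p$ is left uncovered (these
notations and the bound on $w_{U}$ are ours and only serve this sketch):
\[
\min \sum_{p \in P} \left( w_{\theta}\, \Theta_{p} + w_{U}\, U_{p} \right),
\qquad w_{U} > w_{\theta}\, |J|\, |P| .
\]
Under this assumption the total overcoverage term can never exceed
$w_{\theta}\,|J|\,|P|$, so covering one additional primary point, which lowers
the undercoverage term by $w_{U}$, always decreases the objective whatever the
induced overcoverage; only among solutions covering the same number of primary
points does the overcoverage term, and hence the number of active sensors,
make the difference.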
\iffalse
\indent Our model is based on the model proposed by \cite{pedraza2006} where the
\parskip 0pt
\begin{figure}[t!]
\centering
\includegraphics[scale=0.475]{CR.pdf}
\caption{Coverage ratio}
\label{fig3}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.475]{EC.pdf}
\caption{Energy consumption per period}
\label{fig95}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.475]{T.pdf}
\caption{Execution time in seconds}
\label{fig8}
\end{figure}
Figure~\ref{fig8} shows that DiLCO-32 has very low execution times in comparison
with other DiLCO versions, because the activity scheduling is tackled by a
larger number of leaders and each leader solves an integer problem with a
limited number of variables and constraints. Conversely, DiLCO-2 has to solve
an optimization problem involving half of the network nodes and thus presents a
high execution time. Nevertheless if we refer to Figure~\ref{fig3}, we observe
that DiLCO-32 is slightly less efficient than DiLCO-16 at maintaining high
coverage as long as possible. In fact an excessive subdivision of the area of
interest prevents it from ensuring good coverage, especially on the borders of the
subregions. Thus, the optimal number of subregions can be seen as a trade-off
between execution time and coverage performance.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.475]{LT.pdf}
\caption{Network lifetime}
\label{figLT95}
\end{figure}
As highlighted by Figure~\ref{figLT95}, when the coverage level is relaxed
($50\%$) the network lifetime also improves. This observation reflects the fact
that the higher the coverage performance, the more nodes must be active to
ensure wider monitoring. For a similar level of coverage, DiLCO outperforms
DESK and GAF for the lifetime of the network. More specifically, if we focus on
the higher level of coverage ($95\%$) in the case of our protocol, the
subdivision into $16$~subregions seems to be the most appropriate.
\section{\uppercase{Conclusion and future work}}
\label{sec:Conclusion and Future Works}
A crucial problem in WSNs is to schedule the sensing activities of the different
nodes in order to ensure both coverage of the area of interest and longer
network lifetime. The inherent limitations of sensor nodes, in energy provision,
communication and computing capacities, require protocols that optimize the use
of the available resources to fulfill the sensing task. To address this
problem, this paper proposes a two-step approach. Firstly, the field of sensing
is divided into smaller subregions using a divide-and-conquer
method. Secondly, a distributed protocol called Distributed Lifetime Coverage
Optimization is applied in each subregion to optimize the coverage and lifetime
performance. In a subregion, our protocol consists in electing a leader node
which will then perform the sensor activity scheduling. The challenges include how
to select the most efficient leader in each subregion and the best
representative set of active nodes to ensure a high level of coverage. To assess
the performance of our approach, we compared it with two other approaches using
several performance metrics such as the coverage ratio and the network
lifetime. We have also studied the impact of the number of subregions chosen to
subdivide the area of
interest, considering different network sizes. The experiments show that
increasing the number of subregions improves the lifetime. The more subregions
there are, the more robust the network is against random disconnections
resulting from dead nodes. However, for a given sensing field and network size
there is an optimal number of subregions. Therefore, in the case of our
simulation context a subdivision into $16$~subregions seems to be the most
relevant. The optimal number of subregions will be investigated in the future.
\section*{\uppercase{Acknowledgements}}
%\vfill
\bibliographystyle{plain}
{\small
\bibliography{biblio}}
%\vfill
\end{document}