%% \author[label1,label2]{}
%% \address[label1]{}
%% \address[label2]{}
-\author{Ali Kadhum Idrees, Karine Deschinkel, \\
-Michel Salomon, and Rapha\"el Couturier}
+%\author{Ali Kadhum Idrees, Karine Deschinkel, \\
+%Michel Salomon, and Rapha\"el Couturier}
+
%\thanks{are members in the AND team - DISC department - FEMTO-ST Institute, University of Franche-Comt\'e, Belfort, France.
% e-mail: ali.idness@edu.univ-fcomte.fr, $\lbrace$karine.deschinkel, michel.salomon, raphael.couturier$\rbrace$@univ-fcomte.fr.}% <-this % stops a space
%\thanks{}% <-this % stops a space
-\address{FEMTO-ST Institute, University of Franche-Comt\'e, Belfort, France. \\
-e-mail: ali.idness@edu.univ-fcomte.fr, \\
-$\lbrace$karine.deschinkel, michel.salomon, raphael.couturier$\rbrace$@univ-fcomte.fr.}
+%\address{FEMTO-ST Institute, University of Franche-Comt\'e, Belfort, France. \\
+%e-mail: ali.idness@edu.univ-fcomte.fr, \\
+%$\lbrace$karine.deschinkel, michel.salomon, raphael.couturier$\rbrace$@univ-fcomte.fr.}
+
+
+\author{Ali Kadhum Idrees$^{a,b}$, Karine Deschinkel$^{a}$, \\
+Michel Salomon$^{a}$ and Rapha\"el Couturier$^{a}$ \\
+ $^{a}${\em{FEMTO-ST Institute, UMR 6174 CNRS, \\
+ University Bourgogne Franche-Comt\'e, Belfort, France}} \\
+ $^{b}${\em{Department of Computer Science, University of Babylon, Babylon, Iraq}}
+}
+
\begin{abstract}
%One of the fundamental challenges in Wireless Sensor Networks (WSNs)
Optimization protocol (MuDiLCO) is proposed to maintain the coverage and to
improve the lifetime in wireless sensor networks. The area of interest is first
divided into subregions and then the MuDiLCO protocol is distributed on the
-sensor nodes in each subregion. The proposed MuDiLCO protocol works into periods
+sensor nodes in each subregion. The proposed MuDiLCO protocol works in periods
during which sets of sensor nodes are scheduled to remain active for a number of
rounds during the sensing phase, to ensure coverage so as to maximize the
lifetime of the WSN. The decision process is carried out by a leader node, which
\end{abstract}
\begin{keyword}
-Wireless Sensor Networks, Area Coverage, Network lifetime,
+Wireless Sensor Networks, Area Coverage, Network Lifetime,
Optimization, Scheduling, Distributed Computation.
\end{keyword}
\indent The fast development of low-cost sensor devices and wireless
communications has allowed the emergence of WSNs. A WSN includes a large number
-of small, limited-power sensors that can sense, process and transmit data over a
-wireless communication. They communicate with each other by using multi-hop
+of small, limited-power sensors that can sense, process, and transmit data over
+a wireless channel. They communicate with each other by using multi-hop
wireless communications and cooperate to monitor the area of interest,
so that the measured data can be reported to a monitoring center called the sink
-for further analysis~\cite{Sudip03}. There are several fields of application
+for further analysis~\cite{Sudip03}. There are several fields of application
covering a wide spectrum for a WSN, including health, home, environmental,
military, and industrial applications~\cite{Akyildiz02}.
On the one hand, sensor nodes run on batteries with limited capacities, and it is
-often costly or simply impossible to replace and/or recharge batteries,
+often costly or simply impossible to replace and/or recharge batteries,
especially in remote and hostile environments. Obviously, to achieve a long life
-of the network it is important to conserve battery power. Therefore, lifetime
+of the network it is important to conserve battery power. Therefore, lifetime
optimization is one of the most critical issues in wireless sensor networks. On
-the other hand we must guarantee coverage over the area of interest. To fulfill
-these two objectives, the main idea is to take advantage of overlapping sensing
+the other hand, we must guarantee coverage over the area of interest. To fulfill
+these two objectives, the main idea is to take advantage of overlapping sensing
regions to turn off redundant sensor nodes and thus save energy. In this paper,
-we concentrate on the area coverage problem, with the objective of maximizing
-the network lifetime by using an optimized multirounds scheduling.
+we concentrate on the area coverage problem, with the objective of maximizing
+the network lifetime by using an optimized multiround scheduling.
% One of the major scientific research challenges in WSNs, which are addressed by a large number of literature during the last few years is to design energy efficient approaches for coverage and connectivity in WSNs~\cite{conti2014mobile}. The coverage problem is one of the
%fundamental challenges in WSNs~\cite{Nayak04} that consists in monitoring efficiently and continuously
\item Sensor scheduling algorithm implementation, i.e., centralized or
distributed/localized algorithms.
\item The objective of sensor coverage, i.e. to maximize the network lifetime or
- to minimize the number of sensors during the sensing period.
+ to minimize the number of sensors during a sensing round.
\item The homogeneous or heterogeneous nature of the nodes, in terms of sensing
or communication capabilities.
\item The node deployment method, which may be random or deterministic.
-\item Additional requirements for energy-efficient coverage and connected
- coverage.
+\item Additional requirements for energy-efficient and connected coverage.
\end{itemize}
The choice between non-disjoint and disjoint cover sets (i.e., whether a sensor
may participate in several cover sets or not) can be added to the above list.
% The independency in the cover set (i.e. whether the cover sets are disjoint or non-disjoint) \cite{zorbas2010solving} is another design choice that can be added to the above list.
-\subsection{Centralized Approaches}
+\subsection{Centralized approaches}
+
The major approach is to divide/organize the sensors into a suitable number of
-set covers where each set completely covers an interest region and to activate
-these set covers successively. The centralized algorithms always provide nearly
+cover sets where each set completely covers an interest region and to activate
+these cover sets successively. The centralized algorithms always provide nearly
optimal solutions since the algorithm has a global view of the whole
network. Note that centralized algorithms have the advantage of requiring very
low processing power from the sensor nodes, which usually have limited
processing capabilities. The main drawback of this kind of approach is its
-higher cost in communications, since the node that will take the decision needs
+higher cost in communications, since the node that will make the decision needs
information from all the sensor nodes. Moreover, centralized approaches usually
suffer from the scalability problem, making them less competitive as the network
size increases.
The first algorithms proposed in the literature consider that the cover sets are
-disjoint: a sensor node appears in exactly one of the generated cover sets~\cite{abrams2004set,cardei2005improving,Slijepcevic01powerefficient}.
-
-
-In the case of non-disjoint algorithms \cite{pujari2011high}, sensors may
+disjoint: a sensor node appears in exactly one of the generated cover
+sets~\cite{abrams2004set,cardei2005improving,Slijepcevic01powerefficient}. In
+the case of non-disjoint algorithms \cite{pujari2011high}, sensors may
participate in more than one cover set. In some cases, this may prolong the
lifetime of the network in comparison to the disjoint cover set algorithms, but
designing algorithms for non-disjoint cover sets generally induces a higher
order of complexity. Moreover, in case of a sensor's failure, non-disjoint
-scheduling policies are less resilient and less reliable because a sensor may be
-involved in more than one cover sets. For instance, the proposed work in ~\cite{cardei2005energy, berman04}
-
-
+scheduling policies are less resilient and reliable because a sensor may be
+involved in more than one cover set.
+%For instance, the proposed work in ~\cite{cardei2005energy, berman04}
-
-In~\cite{yang2014maximum}, the authors have proposed a linear programming
-approach for selecting the minimum number of working sensor nodes, in order to
-as to preserve a maximum coverage and extend lifetime of the network. Cheng et
+In~\cite{yang2014maximum}, the authors have considered a linear programming
+approach to select the minimum number of working sensor nodes, in order to
+preserve a maximum coverage and to extend the network lifetime. Cheng et
al.~\cite{cheng2014energy} have defined a heuristic algorithm called Cover Sets
-Balance (CSB), which choose a set of active nodes using the tuple (data coverage
-range, residual energy). Then, they have introduced a new Correlated Node Set
-Computing (CNSC) algorithm to find the correlated node set for a given node.
-After that, they proposed a High Residual Energy First (HREF) node selection
-algorithm to minimize the number of active nodes so as to prolong the network
-lifetime. Various centralized methods based on column generation approaches have
-also been proposed~\cite{castano2013column,rossi2012exact,deschinkel2012column}.
-
-
-
-
+Balance (CSB), which chooses a set of active nodes using the tuple (data
+coverage range, residual energy). Then, they have introduced a new Correlated
+Node Set Computing (CNSC) algorithm to find the correlated node set for a given
+node. After that, they proposed a High Residual Energy First (HREF) node
+selection algorithm to minimize the number of active nodes so as to prolong the
+network lifetime. Various centralized methods based on column generation
+approaches have also been
+proposed~\cite{castano2013column,rossi2012exact,deschinkel2012column}.
\subsection{Distributed approaches}
%{\bf Distributed approaches}
WSNs. Localized and distributed algorithms generally result in non-disjoint set
covers.
-Some distributed algorithms have been developed
-in~\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02, yardibi2010distributed, prasad2007distributed,Misra}
-to perform the scheduling so as to preserve coverage. Distributed algorithms
-typically operate in rounds for a predetermined duration. At the beginning of
-each round, a sensor exchanges information with its neighbors and makes a
-decision to either remain turned on or to go to sleep for the round. This
-decision is basically made on simple greedy criteria like the largest uncovered
-area \cite{Berman05efficientenergy} or maximum uncovered targets
-\cite{lu2003coverage}. The authors in \cite{yardibi2010distributed} have developed a Distributed
-Adaptive Sleep Scheduling Algorithm (DASSA) for WSNs with partial coverage.
-DASSA does not require location information of sensors while maintaining
-connectivity and satisfying a user defined coverage target. In DASSA, nodes use
-the residual energy levels and feedback from the sink for scheduling the
-activity of their neighbors. This feedback mechanism reduces the randomness in
-scheduling that would otherwise occur due to the absence of location
-information. In \cite{ChinhVu}, the author have proposed a novel distributed
-heuristic, called Distributed Energy-efficient Scheduling for k-coverage (DESK),
-which ensures that the energy consumption among the sensors is balanced and the
-lifetime maximized while the coverage requirement is maintained. This heuristic
-works in rounds, requires only one-hop neighbor information, and each sensor
-decides its status (active or sleep) based on the perimeter coverage model
-proposed in \cite{Huang:2003:CPW:941350.941367}.
+Many distributed algorithms have been developed to perform the scheduling so as
+to preserve coverage, see for example
+\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02, yardibi2010distributed,
+ prasad2007distributed,Misra}. Distributed algorithms typically operate in
+rounds for a predetermined duration. At the beginning of each round, a sensor
+exchanges information with its neighbors and makes a decision to either remain
+turned on or to go to sleep for the round. This decision is basically made on
+simple greedy criteria like the largest uncovered area
+\cite{Berman05efficientenergy} or maximum uncovered targets
+\cite{lu2003coverage}. The Distributed Adaptive Sleep Scheduling Algorithm
+(DASSA) \cite{yardibi2010distributed} does not require location information of
+sensors while maintaining connectivity and satisfying a user defined coverage
+target. In DASSA, nodes use the residual energy levels and feedback from the
+sink for scheduling the activity of their neighbors. This feedback mechanism
+reduces the randomness in scheduling that would otherwise occur due to the
+absence of location information. In \cite{ChinhVu}, the authors have designed a
+novel distributed heuristic, called Distributed Energy-efficient Scheduling for
+k-coverage (DESK), which ensures that the energy consumption among the sensors
+is balanced and the lifetime maximized while the coverage requirement is
+maintained. This heuristic works in rounds, requires only one-hop neighbor
+information, and each sensor decides its status (active or sleep) based on the
+perimeter coverage model from~\cite{Huang:2003:CPW:941350.941367}.
%Our Work, which is presented in~\cite{idrees2014coverage} proposed a coverage optimization protocol to improve the lifetime in
%heterogeneous energy wireless sensor networks.
%In this work, the coverage protocol distributed in each sensor node in the subregion but the optimization take place over the the whole subregion. We consider only distributing the coverage protocol over two subregions.
-The works presented in \cite{Bang, Zhixin, Zhang} focuses on coverage-aware,
+The works presented in \cite{Bang, Zhixin, Zhang} focus on coverage-aware,
distributed energy-efficient, and distributed clustering methods respectively,
-which aims to extend the network lifetime, while the coverage is ensured. More recently, Shibo et al. \cite{Shibo} have expressed the coverage
-problem as a minimum weight submodular set cover problem and proposed a
-Distributed Truncated Greedy Algorithm (DTGA) to solve it. They take advantage
-from both temporal and spatial correlations between data sensed by different
-sensors, and leverage prediction, to improve the lifetime. In
-\cite{xu2001geography}, Xu et al. have proposed an algorithm, called
-Geographical Adaptive Fidelity (GAF), which uses geographic location information
-to divide the area of interest into fixed square grids. Within each grid, it
-keeps only one node staying awake to take the responsibility of sensing and
-communication.
+which aim at extending the network lifetime, while the coverage is ensured.
+More recently, Shibo et al. \cite{Shibo} have expressed the coverage problem as
+a minimum weight submodular set cover problem and proposed a Distributed
+Truncated Greedy Algorithm (DTGA) to solve it. They take advantage of both
+temporal and spatial correlations between data sensed by different sensors, and
+leverage prediction, to improve the lifetime. In \cite{xu2001geography}, Xu et
+al. have described an algorithm, called Geographical Adaptive Fidelity (GAF),
+which uses geographic location information to divide the area of interest into
+fixed square grids. Within each grid, only one node is kept awake to take the
+responsibility of sensing and communication.
Some other approaches (outside the scope of our work) do not consider a
-synchronized and predetermined period of time where the sensors are active or
-not. Indeed, each sensor maintains its own timer and its wake-up time is
-randomized \cite{Ye03} or regulated \cite{cardei2005maximum} over time.
+synchronized and predetermined time-slot where the sensors are active or not.
+Indeed, each sensor maintains its own timer and its wake-up time is randomized
+\cite{Ye03} or regulated \cite{cardei2005maximum} over time.
-The MuDiLCO protocol (for Multiround Distributed Lifetime Coverage Optimization
+The MuDiLCO protocol (for Multiround Distributed Lifetime Coverage Optimization
protocol) presented in this paper is an extension of the approach introduced
in~\cite{idrees2014coverage}. In~\cite{idrees2014coverage}, the protocol is
deployed over only two subregions. Simulation results have shown that it was
computation complexity. Compared to our previous paper, in this one we study the
possibility of dividing the sensing phase into multiple rounds and we also add
an improved model of energy consumption to assess the efficiency of our
-approach.
-
-
-
-
+approach. In fact, in this paper we perform a multiround optimization, whereas
+our previous work relied on a single-round optimization.
\iffalse
cover sets, both Static-CCF and Dynamic-CCF algorithms, where CCF means that
they use a cost function called Critical Control Factor, provide cover sets
offering longer network lifetime than those produced by \cite{cardei2005energy}.
-Also, they require a smaller number of node participations in order to achieve
+Also, they require a smaller number of participating nodes in order to achieve
these results.
In the case of non-disjoint algorithms \cite{pujari2011high}, sensors may
lifetime. Various centralized methods based on column generation approaches have
also been proposed~\cite{castano2013column,rossi2012exact,deschinkel2012column}.
-
-
\subsection{Distributed approaches}
%{\bf Distributed approaches}
In distributed and localized coverage algorithms, the required computation to
WSNs. Localized and distributed algorithms generally result in non-disjoint set
covers.
-Some distributed algorithms have been developed
-in~\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02, yardibi2010distributed}
-to perform the scheduling so as to preserve coverage. Distributed algorithms
-typically operate in rounds for a predetermined duration. At the beginning of
-each round, a sensor exchanges information with its neighbors and makes a
-decision to either remain turned on or to go to sleep for the round. This
-decision is basically made on simple greedy criteria like the largest uncovered
-area \cite{Berman05efficientenergy} or maximum uncovered targets
-\cite{lu2003coverage}. In \cite{Tian02}, the scheduling scheme is divided into
-rounds, where each round has a self-scheduling phase followed by a sensing
-phase. Each sensor broadcasts a message containing the node~ID and the node
-location to its neighbors at the beginning of each round. A sensor determines
-its status by a rule named off-duty eligible rule, which tells him to turn off
-if its sensing area is covered by its neighbors. A back-off scheme is introduced
-to let each sensor delay the decision process with a random period of time, in
-order to avoid simultaneous conflicting decisions between nodes and lack of
-coverage on any area. In \cite{prasad2007distributed} a model for capturing the
-dependencies between different cover sets is defined and it proposes localized
-heuristic based on this dependency. The algorithm consists of two phases, an
-initial setup phase during which each sensor computes and prioritizes the covers
-and a sensing phase during which each sensor first decides its on/off status,
-and then remains on or off for the rest of the duration.
+Many distributed algorithms have been developed to perform the scheduling so as
+to preserve coverage, see for example
+\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02,yardibi2010distributed}.
+Distributed algorithms typically operate in rounds for a predetermined
+duration. At the beginning of each round, a sensor exchanges information with
+its neighbors and makes a decision to either remain turned on or to go to sleep
+for the round. This decision is basically made on simple greedy criteria like
+the largest uncovered area \cite{Berman05efficientenergy} or maximum uncovered
+targets \cite{lu2003coverage}. In \cite{Tian02}, the scheduling scheme is
+divided into rounds, where each round has a self-scheduling phase followed by a
+sensing phase. Each sensor broadcasts a message containing the node~ID and the
+node location to its neighbors at the beginning of each round. A sensor
+determines its status by a rule named the off-duty eligible rule, which tells it to
+turn off if its sensing area is covered by its neighbors. A back-off scheme is
+introduced to let each sensor delay the decision process with a random period of
+time, in order to avoid simultaneous conflicting decisions between nodes and
+lack of coverage on any area. In \cite{prasad2007distributed} a model for
+capturing the dependencies between different cover sets is defined and it
+proposes localized heuristic based on this dependency. The algorithm consists of
+two phases, an initial setup phase during which each sensor computes and
+prioritizes the covers and a sensing phase during which each sensor first
+decides its on/off status, and then remains on or off for the rest of the
+duration.
The authors in \cite{yardibi2010distributed} have developed a Distributed
Adaptive Sleep Scheduling Algorithm (DASSA) for WSNs with partial coverage.
the residual energy levels and feedback from the sink for scheduling the
activity of their neighbors. This feedback mechanism reduces the randomness in
scheduling that would otherwise occur due to the absence of location
-information. In \cite{ChinhVu}, the author have proposed a novel distributed
+information. In \cite{ChinhVu}, the authors have proposed a novel distributed
heuristic, called Distributed Energy-efficient Scheduling for k-coverage (DESK),
which ensures that the energy consumption among the sensors is balanced and the
lifetime maximized while the coverage requirement is maintained. This heuristic
%heterogeneous energy wireless sensor networks.
%In this work, the coverage protocol distributed in each sensor node in the subregion but the optimization take place over the the whole subregion. We consider only distributing the coverage protocol over two subregions.
-The works presented in \cite{Bang, Zhixin, Zhang} focuses on coverage-aware,
+The works presented in \cite{Bang, Zhixin, Zhang} focus on coverage-aware,
distributed energy-efficient, and distributed clustering methods respectively,
-which aims to extend the network lifetime, while the coverage is ensured. S.
+which aim to extend the network lifetime, while the coverage is ensured. S.
Misra et al. \cite{Misra} have proposed a localized algorithm for coverage in
sensor networks. The algorithm conserves the energy while ensuring the network
coverage by activating the subset of sensors with the minimum overlap area. The
communication range satisfies $R_c \geq 2R_s$. In fact, Zhang and
Zhou~\cite{Zhang05} proved that if the transmission range fulfills the previous
hypothesis, a complete coverage of a convex area implies connectivity among the
-working nodes in the active mode.
+active nodes.
Instead of working with a continuous coverage area, we make it discrete by
considering for each sensor a set of points called primary points. Consequently,
we assume that the sensing disk defined by a sensor is covered if all of its
-primary points are covered. The choice of number and locations of primary points
-is the subject of another study not presented here.
+primary points are covered. The choice of number and locations of primary points is the subject of another study not presented here.
%By knowing the position (point center: ($p_x,p_y$)) of a wireless
%sensor node and its $R_s$, we calculate the primary points directly
As can be seen in Figure~\ref{fig2}, our protocol works in a periodic fashion,
where each period is divided into 4 phases: Information~Exchange, Leader~Election,
Decision, and Sensing. Each sensing phase may be itself divided into $T$ rounds
-and for each round a set of sensors (said a cover set) is responsible for the
-sensing task. A multiround optimization process executed in each period after information exchange and leader election in order to produce a $T$ cover sets of sensors to take the mission of sensing for $T$ rounds.
+and for each round a set of sensors (a cover set) is responsible for the sensing
+task. In this way a multiround optimization process is performed during each
+period after Information~Exchange and Leader~Election phases, in order to
+produce $T$ cover sets that will take the mission of sensing for $T$ rounds.
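In order to give a high-level picture of this periodic behavior, the following
simplified Python sketch summarizes one period for a subregion. It is only an
illustrative sketch: the function names, the energy values, and the random
scheduler used in place of the optimization step are assumptions made for the
example, not elements of our actual implementation.

\begin{verbatim}
import random

def run_period(node_ids, energies, T, schedule_solver):
    # Phase 1: Information Exchange - nodes share (id, energy, #neighbors).
    info = [(j, energies[j], len(node_ids) - 1) for j in node_ids]

    # Phase 2: Leader Election - larger #neighbors, then larger remaining
    # energy, then larger index.
    leader = max(info, key=lambda x: (x[2], x[1], x[0]))[0]

    # Phase 3: Decision - the leader computes T cover sets (one per round)
    # and broadcasts the resulting Active-Sleep schedule.
    schedule = schedule_solver(node_ids, energies, T)

    # Phase 4: Sensing - T rounds; active sensors spend more energy
    # than sleeping ones (toy values).
    for t in range(T):
        for j in node_ids:
            energies[j] -= 1.0 if schedule[t][j] else 0.1
    return leader, schedule

# Toy usage with a random scheduler standing in for the optimization step.
def random_solver(node_ids, energies, T):
    return [{j: random.random() < 0.5 for j in node_ids} for _ in range(T)]

nodes = list(range(8))
battery = {j: random.uniform(500, 700) for j in nodes}
leader, sched = run_period(nodes, battery, T=3, schedule_solver=random_solver)
\end{verbatim}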
\begin{figure}[ht!]
\centering \includegraphics[width=100mm]{Modelgeneral.pdf} % 70mm
\caption{The MuDiLCO protocol scheme executed on each node}
% set cover responsible for the sensing task.
%For each round a set of sensors (said a cover set) is responsible for the sensing task.
-This protocol is reliable against an unexpected node failure, because it works
-in periods.
+This protocol minimizes the impact of unexpected node failure (not due to batteries
+running out of energy), because it works in periods.
+%This protocol is reliable against an unexpected node failure, because it works in periods.
%%RC : why? I am not convinced
On the one hand, if a node failure is detected before making the
decision, the node will not participate in this phase, and, on the other hand,
will be temporarily affected: only during the period of sensing until a new
period starts.
%%RC so if there are at least one failure per period, the coverage is bad...
+%%MS if we want to be reliable against many node failures we need to have an
+%% overcoverage...
The energy consumption and some other constraints can easily be taken into
account, since the sensors can update and then exchange their information
\item LISTENING: sensor node is waiting for a decision (to be active or not);
\item COMPUTATION: sensor node has been elected as leader and applies the
optimization process;
-\item ACTIVE: sensor node participate to the monitoring of the area;
+\item ACTIVE: sensor node is taking part in the monitoring of the area;
\item SLEEP: sensor node is turned off to save energy;
\item COMMUNICATION: sensor node is transmitting or receiving packets.
\end{enumerate}
will be responsible for executing the coverage algorithm. Each subregion in the
area of interest will select its own WSNL independently for each period. All
the sensor nodes cooperate to elect a WSNL. The nodes in the same subregion
-will select the leader based on the received informations from all other nodes
+will select the leader based on the received information from all other nodes
in the same subregion. The selection criteria are, in order of importance:
larger number of neighbors, larger remaining energy, and then in case of
equality, larger index. Observations on previous simulations suggest using the
authors proposed an integer program which forces undercoverage and overcoverage
of targets to become minimal at the same time. They use binary variables
$x_{jl}$ to indicate if sensor $j$ belongs to cover set $l$. In our model, we
-consider binary variables $X_{t,j}$ to determine the possibility of activation
-of sensor $j$ during the round $t$ of a given sensing phase. We also consider
-primary points as targets. The set of primary points is denoted by $P$ and the
-set of sensors by $J$. Only sensors able to be alive during at least one round
-are involved in the integer program.
+consider binary variables $X_{t,j}$ to determine the possibility of activating
+sensor $j$ during round $t$ of a given sensing phase. We also consider primary
+points as targets. The set of primary points is denoted by $P$ and the set of
+sensors by $J$. Only sensors able to be alive during at least one round are
+involved in the integer program.
%parler de la limite en energie Et pour un round
\label{eq13}
\end{equation}
More precisely, $\Theta_{t,p}$ represents the number of active sensor nodes
-minus one that cover the primary point $p$ during the round $t$. The
+minus one that cover the primary point $p$ during round $t$. The
Undercoverage variable $U_{t,p}$ of the primary point $p$ during round $t$ is
defined by:
\begin{equation}
%%RC why W_{\theta} is not defined (only one sentence)? How to define in practice Wtheta and Wu?
-
\begin{itemize}
\item $X_{t,j}$: indicates whether or not the sensor $j$ is actively sensing
- during the round $t$ (1 if yes and 0 if not);
+ during round $t$ (1 if yes and 0 if not);
\item $\Theta_{t,p}$ - {\it overcoverage}: the number of sensors minus one that
- are covering the primary point $p$ during the round $t$;
+ are covering the primary point $p$ during round $t$;
\item $U_{t,p}$ - {\it undercoverage}: indicates whether or not the primary
- point $p$ is being covered during the round $t$ (1 if not covered and 0 if
+ point $p$ is being covered during round $t$ (1 if not covered and 0 if
covered).
\end{itemize}
points in order to activate a minimum number of sensors. Second, we prevent the
absence of monitoring on some parts of the subregion by minimizing the
undercoverage. The weights $W_\theta$ and $W_U$ must be properly chosen so as
-to guarantee that the maximum number of points are covered during each round. In
-our simulations priority is given to the coverage by choosing $W_{\theta}$ very
-large compared to $W_U$.
+to guarantee that the maximum number of points are covered during each round.
+%% MS W_theta is smaller than W_u => problem with the following sentence
+In our simulations, priority is given to the coverage by choosing $W_{U}$ very
+large compared to $W_{\theta}$.
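For the sake of readability, the weighted objective resulting from this
discussion, which also corresponds to the fitness function used later in
Eq.~\eqref{eqf}, can be summarized as follows. This is only a compact
restatement, consistent with the definitions of $\Theta_{t,p}$ and $U_{t,p}$
given above, where $\alpha_{j,p}$ indicates whether sensor $j$ covers primary
point $p$:
\begin{equation*}
\scriptsize
\mbox{Minimize} \sum_{t=1}^{T} \sum_{p \in P} \left( W_{\theta} \, \Theta_{t,p}
+ W_{U} \, U_{t,p} \right)
\quad \mbox{with} \quad
\sum_{j \in J} \alpha_{j,p} \, X_{t,j} - \Theta_{t,p} + U_{t,p} = 1,
\; \forall p \in P, \; \forall t \in \{1,\dots,T\}.
\end{equation*}
Since $W_{U}$ is much larger than $W_{\theta}$, leaving a single primary point
uncovered during a round is penalized far more heavily than any overcoverage.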
%The Active-Sleep packet includes the schedule vector with the number of rounds that should be applied by the receiving sensor node during the sensing phase.
\subsection{Sensing phase}
\end{algorithm}
+%\textcolor{red}{\textbf{\textsc{Answer:} ali }}
+
+
+\section{Genetic Algorithm (GA) for Multiround Lifetime Coverage Optimization}
+\label{GA}
+Metaheuristics are generic search strategies for exploring search spaces and solving complex problems. These strategies have to dynamically balance the exploitation of the accumulated search experience with the exploration of the search space. On the one hand, this balance can find regions of the search space containing high-quality solutions. On the other hand, it prevents wasting too much time in regions which are either already explored or do not provide high-quality solutions. Therefore, a metaheuristic provides a sufficiently good solution to an optimization problem, especially with incomplete information or limited computation capacity \cite{bianchi2009survey}. The Genetic Algorithm (GA) is a population-based metaheuristic that simulates the process of natural selection \cite{hassanien2015applications}. A GA starts with a population of random candidate solutions (called individuals or phenotypes). It uses genetic operators inspired by natural evolution, such as selection, crossover, mutation, evaluation, and replacement, so as to improve the initial population of candidate solutions. This process is repeated until a stopping criterion is satisfied.
+
+In this section, we present a GA-based metaheuristic to solve our multiround lifetime coverage optimization problem. The proposed GA provides a near-optimal schedule for the multiround sensing per period. It is based on the mathematical model presented in Section \ref{pd}. Algorithm \ref{alg:GA} shows the proposed GA for solving the coverage lifetime optimization problem. We call GA-MuDiLCO the new protocol that relies on the GA in the decision phase. The proposed GA can be explained in more detail as follows:
+
+\begin{algorithm}[h!]
+ \small
+ \SetKwInput{Input}{Input}
+ \SetKwInput{Output}{Output}
+ \Input{ $ P, J, T, S_{pop}, \alpha_{j,p}^{ind}, X_{t,j}^{ind}, \Theta_{t,p}^{ind}, U_{t,p}^{ind}, Child_{t,j}^{ind}, Ch.\Theta_{t,p}^{ind}, Ch.U_{t,p}^{ind}$}
+ \Output{$\left\{\left(X_{1,1},\dots, X_{t,j}, \dots, X_{T,J}\right)\right\}_{t \in T, j \in J}$}
+
+ \BlankLine
+ %\emph{Initialize the sensor node and determine it's position and subregion} \;
+ \ForEach {Individual $ind$ $\in$ $S_{pop}$} {
+ \emph{Generate Randomly Chromosome $\left\{\left(X_{1,1},\dots, X_{t,j}, \dots, X_{T,J}\right)\right\}_{t \in T, j \in J}$}\;
+
+ \emph{Update O-U-Coverage $\left\{(P, J, \alpha_{j,p}^{ind}, X_{t,j}^{ind}, \Theta_{t,p}^{ind}, U_{t,p}^{ind})\right\}_{p \in P}$}\;
+
+
+ \emph{Evaluate Individual $(P, J, X_{t,j}^{ind}, \Theta_{t,p}^{ind}, U_{t,p}^{ind})$}\;
+ }
+
+ \While{ Stopping criteria is not satisfied }{
+
+ \emph{Selection $(ind_1, ind_2)$}\;
+ \emph{Crossover $(P_c, X_{t,j}^{ind_1}, X_{t,j}^{ind_2}, Child_{t,j}^{ind_1}, Child_{t,j}^{ind_2})$}\;
+ \emph{Mutation $(P_m, Child_{t,j}^{ind_1}, Child_{t,j}^{ind_2})$}\;
+
+
+ \emph{Update O-U-Coverage $(P, J, \alpha_{j,p}^{ind}, Child_{t,j}^{ind_1}, Ch.\Theta_{t,p}^{ind_1}, Ch.U_{t,p}^{ind_1})$}\;
+ \emph{Update O-U-Coverage $(P, J, \alpha_{j,p}^{ind}, Child_{t,j}^{ind_2}, Ch.\Theta_{t,p}^{ind_2}, Ch.U_{t,p}^{ind_2})$}\;
+
+\emph{Evaluate New Individual$(P, J, Child_{t,j}^{ind_1}, Ch.\Theta_{t,p}^{ind_1}, Ch.U_{t,p}^{ind_1})$}\;
+ \emph{Replacement $(P, J, T, Child_{t,j}^{ind_1}, Ch.\Theta_{t,p}^{ind_1}, Ch.U_{t,p}^{ind_1}, X_{t,j}^{ind}, \Theta_{t,p}^{ind}, U_{t,p}^{ind} )$ }\;
+
+ \emph{Evaluate New Individual$(P, J, Child_{t,j}^{ind_2}, Ch.\Theta_{t,p}^{ind_2}, Ch.U_{t,p}^{ind_2})$}\;
+
+ \emph{Replacement $(P, J, T, Child_{t,j}^{ind_2}, Ch.\Theta_{t,p}^{ind_2}, Ch.U_{t,p}^{ind_2}, X_{t,j}^{ind}, \Theta_{t,p}^{ind}, U_{t,p}^{ind} )$ }\;
+
+
+ }
+ \emph{$\left\{\left(X_{1,1},\dots,X_{t,j},\dots,X_{T,J}\right)\right\}$ =
+ Select Best Solution ($S_{pop}$)}\;
+ \emph{return X} \;
+\caption{GA-MuDiLCO($s_j$)}
+\label{alg:GA}
+
+\end{algorithm}
+
+
+\begin{enumerate} [I)]
+\item \textbf{Representation:} Since the goal of the proposed GA is to find the optimal schedule of the sensor nodes which take the responsibility of monitoring the subregion for $T$ rounds in the next phase, a chromosome is defined as a schedule of the alive sensors over $T$ rounds. Each round in the schedule includes $J$ genes, where $J$ is the total number of alive sensors in the subregion. Therefore, a gene of such a chromosome is the schedule of one sensor for one round: genes corresponding to active nodes have the value one, the others are zero. Figure \ref{chromo} shows the solution representation in the proposed GA.
+%[scale=0.3]
+\begin{figure}[h!]
+\centering
+ \includegraphics [scale=0.35] {rep.eps}
+\caption{Candidate solution representation in the proposed GA.}
+\label{chromo}
+\end{figure}
+
+
+
+\item \textbf{Initialize Population:} The initial population is randomly generated: each chromosome in the GA population represents a possible sensor schedule covering the entire subregion for $T$ rounds during the current period. Each sensor in the chromosome is given a random value (0 or 1) for every round. A value of 1 is kept only if the remaining energy of the sensor is adequate to keep it active during the corresponding round; otherwise, the value is set to 0. This energy constraint is applied to each sensor over all rounds.
+
+
+\item \textbf{Update O-U-Coverage:}
+After creating the initial population, the overcoverage $\Theta_{t,p}$ and undercoverage $U_{t,p}$ values of each candidate solution are computed (see Algorithm \ref{OU}) so as to use them in the next step.
+
+\begin{algorithm}[h!]
+
+ \SetKwInput{Input}{Input}
+ \SetKwInput{Output}{Output}
+ \Input{ parameters $P, J, ind, \alpha_{j,p}^{ind}, X_{t,j}^{ind}$}
+ \Output{$U^{ind} = \left\lbrace U_{1,1}^{ind}, \dots, U_{t,p}^{ind}, \dots, U_{T,P}^{ind} \right\rbrace$ and $\Theta^{ind} = \left\lbrace \Theta_{1,1}^{ind}, \dots, \Theta_{t,p}^{ind}, \dots, \Theta_{T,P}^{ind} \right\rbrace$}
+
+ \BlankLine
+
+ \For{$t\leftarrow 1$ \KwTo $T$}{
+ \For{$p\leftarrow 1$ \KwTo $P$}{
+
+ % \For{$i\leftarrow 0$ \KwTo $I_j$}{
+ \emph{$SUM\leftarrow 0$}\;
+ \For{$j\leftarrow 1$ \KwTo $J$}{
+ \emph{$SUM \leftarrow SUM + (\alpha_{j,p}^{ind} \times X_{t,j}^{ind})$ }\;
+ }
+
+ \If { SUM = 0} {
+      \emph{$U_{t,p}^{ind} \leftarrow 1$}\;
+      \emph{$\Theta_{t,p}^{ind} \leftarrow 0$}\;
+ }
+ \Else{
+      \emph{$U_{t,p}^{ind} \leftarrow 0$}\;
+      \emph{$\Theta_{t,p}^{ind} \leftarrow SUM - 1$}\;
+ }
+
+ }
+
+ }
+\emph{return $U^{ind}, \Theta^{ind}$ } \;
+\caption{O-U-Coverage}
+\label{OU}
+
+\end{algorithm}
+
+
+
+\item \textbf{Evaluate Population:}
+After creating the initial population, each individual is evaluated and assigned a fitness value according to the fitness function illustrated in Eq. \eqref{eqf}. In the proposed GA, the optimal (or near-optimal) candidate solution is the one with the minimum value of the fitness function. The lower the fitness value assigned to an individual, the better its chance of survival. In our work, the function rewards a reduction of the number of sensor nodes covering the same primary point and penalizes leaving a primary point uncovered.
+
+\begin{equation}
+  F^{ind} \leftarrow \sum_{t=1}^{T} \sum_{p=1}^{P} \left(W_{\theta} \times \Theta_{t,p}^{ind} + W_{U} \times U_{t,p}^{ind} \right) \label{eqf}
+\end{equation}
+
+
+\item \textbf{Selection:} In order to generate a new generation, a portion of the existing population is selected based on the fitness function, which ranks the candidate solutions and preferentially keeps the best ones. Two parents are selected for the mating pool. In the proposed GA-MuDiLCO algorithm, the first parent is chosen by binary tournament selection \cite{goldberg1991comparative}: two individuals are picked at random from the population and the better of the two
+is kept; if they have similar fitness values, one of them is selected randomly. The best individual in the population is selected as the second parent.
+
+
+
+\item \textbf{Crossover:} Crossover is a genetic operator that combines two parent solutions to produce child solutions. If the crossover probability $P_c$ is 100$\%$, the crossover operation always takes place between the two selected individuals; if it is 0$\%$, the two individuals of the mating pool become the new chromosomes without crossover. In the proposed GA, a two-point crossover is used. Figure \ref{cross} gives an example of a two-point crossover for 8 sensors in the subregion and a schedule of 3 rounds (the main operators are also illustrated in the Python sketch given after this list).
+
+
+\begin{figure}[h!]
+\centering
+ \includegraphics [scale = 0.3] {crossover.eps}
+\caption{Two-point crossover. }
+\label{cross}
+\end{figure}
+
+
+\item \textbf{Mutation:}
+Mutation is a divergence operation which introduces random modifications. The purpose of mutation is to maintain diversity within the population and to prevent premature convergence, by adding new genetic information that enables a global search over the solution space and avoids getting trapped in local optima. The mutation operator in the proposed GA-MuDiLCO works as follows: if the mutation probability $P_m$ is 100$\%$, the mutation operation always takes place on the new individual. A round number is selected randomly within $(1..T)$ in the schedule solution, then one sensor within this round is selected randomly within $(1..J)$. If the sensor is scheduled as active (``1''), it is rescheduled to sleep (``0''). If the sensor is scheduled as sleep, it is rescheduled to active only if it has adequate remaining energy.
+
+
+\item \textbf{Update O-U-Coverage for children:}
+Before evaluating each new individual, Algorithm \ref{OU} is called to compute its new undercoverage $Ch.U$ and overcoverage $Ch.\Theta$ parameters.
+
+\item \textbf{Evaluate New Individuals:}
+Each new individual is evaluated using Eq. \eqref{eqf}, but with the new undercoverage $Ch.U$ and overcoverage $Ch.\Theta$ parameters of the new children.
+
+\item \textbf{Replacement:}
+After the evaluation of the new children, Triple Tournament Replacement (TTR) is applied for each new individual. In the TTR strategy, three individuals are selected
+randomly from the population, the worst of them is identified, and its fitness is compared with the fitness of the new individual. If the fitness of the new individual is better than the fitness of this worst individual, the worst individual is replaced by the new one; otherwise, no replacement is done (see also the sketch after this list).
+
+
+\item \textbf{Stopping criteria:}
+The proposed GA-MuDiLCO stops when the stopping criterion is met, that is after running for an amount of time (in seconds) equal to \textbf{Time limit}. The \textbf{Time limit} is set to half the execution time needed by the optimization solver GLPK to solve a problem of the same size. The best solution is then selected as the schedule of sensors for the $T$ rounds of the sensing phase in the current period.
+
+
+
+\end{enumerate}
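To complement this step-by-step description, the following self-contained
Python sketch illustrates the fitness evaluation and the main GA-MuDiLCO
operators (binary tournament selection, two-point crossover, mutation, and TTR
replacement) on a toy instance. It is only an illustrative sketch of the
principle: the instance, the population size, the probabilities $P_c$ and
$P_m$, and the time limit are arbitrary values, and the energy constraint used
in the real initialization and mutation steps is omitted for brevity.

\begin{verbatim}
import random, time

T, J, P = 3, 8, 20                   # rounds, sensors, primary points (toy)
W_THETA, W_U = 1, P * P              # weights, as in Table 3
POP_SIZE, P_C, P_M = 10, 0.9, 0.1    # illustrative values only
TIME_LIMIT = 0.5                     # stands for half of the GLPK time

# alpha[j][p] = 1 if sensor j covers primary point p (toy random instance).
alpha = [[random.randint(0, 1) for _ in range(P)] for _ in range(J)]

def fitness(X):
    # Eq. (eqf): sum over rounds and points of W_theta*Theta + W_U*U.
    f = 0
    for t in range(T):
        for p in range(P):
            s = sum(alpha[j][p] * X[t][j] for j in range(J))
            theta, u = (s - 1, 0) if s > 0 else (0, 1)
            f += W_THETA * theta + W_U * u
    return f

def random_individual():
    return [[random.randint(0, 1) for _ in range(J)] for _ in range(T)]

def binary_tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) <= fitness(b) else b

def two_point_crossover(p1, p2):
    # Flatten the T x J schedules, swap the middle segment, rebuild.
    f1 = [g for row in p1 for g in row]
    f2 = [g for row in p2 for g in row]
    i, k = sorted(random.sample(range(T * J), 2))
    c1 = f1[:i] + f2[i:k] + f1[k:]
    c2 = f2[:i] + f1[i:k] + f2[k:]
    return ([c1[t * J:(t + 1) * J] for t in range(T)],
            [c2[t * J:(t + 1) * J] for t in range(T)])

def mutate(child):
    t, j = random.randrange(T), random.randrange(J)
    child[t][j] = 1 - child[t][j]    # the energy check is omitted here

def ttr_replacement(pop, child):
    # Triple Tournament Replacement: the worst of three randomly chosen
    # individuals is replaced by the child if the child is better.
    trio = random.sample(range(len(pop)), 3)
    worst = max(trio, key=lambda i: fitness(pop[i]))
    if fitness(child) < fitness(pop[worst]):
        pop[worst] = child

population = [random_individual() for _ in range(POP_SIZE)]
start = time.time()
while time.time() - start < TIME_LIMIT:
    parent1 = binary_tournament(population)     # first parent
    parent2 = min(population, key=fitness)      # best individual
    if random.random() < P_C:
        child1, child2 = two_point_crossover(parent1, parent2)
    else:
        child1, child2 = [r[:] for r in parent1], [r[:] for r in parent2]
    if random.random() < P_M:
        mutate(child1)
        mutate(child2)
    ttr_replacement(population, child1)
    ttr_replacement(population, child2)

best_schedule = min(population, key=fitness)    # T rounds x J sensors
\end{verbatim}

In the actual protocol, the best schedule found within the time limit is the
one broadcast by the leader to the sensors of its subregion in the
Active-Sleep packets.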
+
+
+
\section{Experimental study}
\label{exp}
\subsection{Simulation setup}
25 runs.
%Based on the results of our proposed work in~\cite{idrees2014coverage}, we found as the region of interest are divided into larger subregions as the network lifetime increased. In this simulation, the network are divided into 16 subregions.
We performed simulations for five different densities varying from 50 to
-250~nodes. Experimental results are obtained from randomly generated networks in
-which nodes are deployed over a $50 \times 25~m^2 $ sensing field. More
+250~nodes deployed over a $50 \times 25~m^2 $ sensing field. More
precisely, the deployment is controlled at a coarse scale in order to ensure
that the deployed nodes can cover the sensing field with the given sensing
range.
$E_{R}$ & 36 Joules\\
$R_s$ & 5~m \\
%\hline
-$w_{\Theta}$ & 1 \\
+$W_{\theta}$ & 1 \\
% [1ex] adds vertical space
%\hline
-$w_{U}$ & $|P^2|$
+$W_{U}$ & $|P|^2$
%inserts single line
\end{tabular}
\label{table3}
Our protocol comes in four versions: MuDiLCO-1, MuDiLCO-3, MuDiLCO-5,
and MuDiLCO-7, corresponding respectively to $T=1,3,5,7$ ($T$ being the number of
-rounds in one sensing period). In the following, the general case will be
-denoted by MuDiLCO-T and we will make comparisons with two other methods. The
-first method, called DESK and proposed by \cite{ChinhVu}, is a full distributed
-coverage algorithm. The second method, called GAF~\cite{xu2001geography},
-consists in dividing the region into fixed squares. During the decision phase,
-in each square, one sensor is then chosen to remain active during the sensing
-phase time.
+rounds in one sensing period). In the following, we will make comparisons with
+two other methods. The first method, called DESK and proposed by \cite{ChinhVu},
+is a full distributed coverage algorithm. The second method, called
+GAF~\cite{xu2001geography}, consists in dividing the region into fixed squares.
+During the decision phase, in each square, one sensor is then chosen to remain
+active during the sensing phase time.
Some preliminary experiments were performed to study the choice of the number of
-subregions which subdivide the sensing field, considering different network
+subregions which subdivides the sensing field, considering different network
sizes. They show that as the number of subregions increases, so does the network
-lifetime. Moreover, it makes the MuDiLCO-T protocol more robust against random
-network disconnection due to node failures. However, too much subdivisions
-reduces the advantage of the optimization. In fact, there is a balance between
+lifetime. Moreover, it makes the MuDiLCO protocol more robust against random
+network disconnection due to node failures. However, too many subdivisions
+reduce the advantage of the optimization. In fact, there is a balance between
the benefit from the optimization and the execution time needed to solve
-it. Therefore, we have set the number of subregions to 16 rather than 32.
+it. Therefore, we have set the number of subregions to 16 rather than 32.
-\subsection{Energy Model}
+\subsection{Energy model}
We use an energy consumption model proposed by~\cite{ChinhVu} and based on
\cite{raghunathan2002energy} with slight modifications. The energy consumption
uses an Atmel AVR ATmega103L microcontroller~\cite{raghunathan2002energy}. The
typical architecture of a sensor is composed of four subsystems: the MCU
subsystem which is capable of computation, communication subsystem (radio) which
-is responsible for transmitting/receiving messages, sensing subsystem that
+is responsible for transmitting/receiving messages, the sensing subsystem that
collects data, and the power supply which powers the complete sensor node
\cite{raghunathan2002energy}. Each of the first three subsystems can be turned
on or off depending on the current status of the sensor. Energy consumption
(expressed in milliWatt per second) for the different status of the sensor is
-summarized in Table~\ref{table4}. The energy needed to send or receive a 1-bit
-packet is equal to $0.2575~mW$.
+summarized in Table~\ref{table4}.
\begin{table}[ht]
\caption{The Energy Consumption Model}
For the sake of simplicity we ignore the energy needed to turn on the radio, to
start up the sensor node, to move from one status to another, etc.
%We also do not consider the need of collecting sensing data. PAS COMPRIS
-Thus, when a sensor becomes active (i.e., it already decides its status), it can
+Thus, when a sensor becomes active (i.e., it has already chosen its status), it can
turn its radio off to save battery. MuDiLCO uses two types of packets for
communication. The sizes of the INFO and Active-Sleep packets are 112~bits
and 24~bits respectively. The value of the energy spent to send a 1-bit-content
message is obtained by using the equation in~\cite{raghunathan2002energy} to
calculate the energy cost for transmitting messages and we propose the same
-value for receiving the packets.
+value for receiving the packets. The energy needed to send or receive a 1-bit
+packet is equal to 0.2575~mW.
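For instance, with these values, sending (or receiving) an INFO packet of
112~bits costs $112 \times 0.2575 = 28.84$, while an Active-Sleep packet of
24~bits costs $24 \times 0.2575 = 6.18$, expressed in the same unit as the
1-bit cost.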
The initial energy of each node is randomly set in the interval $[500;700]$. A
sensor node will not participate in the next round if its remaining energy is
\begin{enumerate}[i]
-\item {{\bf Coverage Ratio (CR)}:} the coverage ratio measures how much the area
+\item {{\bf Coverage Ratio (CR)}:} the coverage ratio measures how much of the area
of a sensor field is covered. In our case, the sensing field is represented as
- a connected grid of points and we use each grid point as a sample point for
- calculating the coverage. The coverage ratio can be calculated by:
+ a connected grid of points and we use each grid point as a sample point to
+ compute the coverage. The coverage ratio can be calculated by:
\begin{equation*}
\scriptsize
\mbox{CR}(\%) = \frac{\mbox{$n^t$}}{\mbox{$N$}} \times 100,
\end{equation*}
where $n^t$ is the number of covered grid points by the active sensors of all
-subregions during round $t$ in the current sensing phase and $N$ is total number
+subregions during round $t$ in the current sensing phase and $N$ is the total number
of grid points in the sensing field of the network. In our simulations $N = 51
\times 26 = 1326$ grid points.
%The accuracy of this method depends on the distance between grids. In our
\end{equation*}
where $A_r^t$ is the number of active sensors in the subregion $r$ during round
$t$ in the current sensing phase, $|J|$ is the total number of sensors in the
-network, and $R$ is the total number of the subregions in the network.
+network, and $R$ is the total number of subregions in the network.
\item {{\bf Network Lifetime}:} we define the network lifetime as the time until
the coverage ratio drops below a predefined threshold. We denote by
- $Lifetime_{95}$ (respectively $Lifetime_{50}$) as the amount of time during
+ $Lifetime_{95}$ (respectively $Lifetime_{50}$) the amount of time during
which the network can satisfy an area coverage greater than $95\%$
(respectively $50\%$). We assume that the network is alive until all nodes have
been drained of their energy or the sensor network becomes
seen as the total energy consumed by the sensors during the $Lifetime_{95}$ or
$Lifetime_{50}$ divided by the number of rounds. EC can be computed as
follows:
- \begin{equation*}
-\scriptsize
-\mbox{EC} = \frac{\sum\limits_{m=1}^{M_L} \left( E^{\mbox{com}}_m+E^{\mbox{list}}_m+E^{\mbox{comp}}_m \right) +
- \sum\limits_{t=1}^{T_L} \left( E^{a}_t+E^{s}_t \right)}{T_L},
-\end{equation*}
+ % New version with global loops on period
+ \begin{equation*}
+ \scriptsize
+ \mbox{EC} = \frac{\sum\limits_{m=1}^{M} \left[ \left( E^{\mbox{com}}_m+E^{\mbox{list}}_m+E^{\mbox{comp}}_m \right) +\sum\limits_{t=1}^{T_m} \left( E^{a}_t+E^{s}_t \right) \right]}{\sum\limits_{m=1}^{M} T_m},
+ \end{equation*}
+
+
+% Old version with loop on round outside the loop on period
+% \begin{equation*}
+% \scriptsize
+% \mbox{EC} = \frac{\sum\limits_{m=1}^{M_L} \left( E^{\mbox{com}}_m+E^{\mbox{list}}_m+E^{\mbox{comp}}_m \right) +\sum\limits_{t=1}^{T_L} \left( E^{a}_t+E^{s}_t \right)}{T_L},
+% \end{equation*}
+
+% Ali version
%\begin{equation*}
%\scriptsize
%\mbox{EC} = \frac{\mbox{$\sum\limits_{d=1}^D E^c_d$}}{\mbox{$D$}} + \frac{\mbox{$\sum\limits_{d=1}^D %E^l_d$}}{\mbox{$D$}} + \frac{\mbox{$\sum\limits_{d=1}^D E^a_d$}}{\mbox{$D$}} + %\frac{\mbox{$\sum\limits_{d=1}^D E^s_d$}}{\mbox{$D$}}.
%\end{equation*}
-where $M_L$ and $T_L$ are respectively the number of periods and rounds during
-$Lifetime_{95}$ or $Lifetime_{50}$. The total energy consumed by the sensors
-(EC) comes through taking into consideration four main energy factors. The first
-one , denoted $E^{\scriptsize \mbox{com}}_m$, represent the energy consumption
-spent by all the nodes for wireless communications during period $m$.
-$E^{\scriptsize \mbox{list}}_m$, the next factor, corresponds to the energy
-consumed by the sensors in LISTENING status before receiving the decision to go
-active or sleep in period $m$. $E^{\scriptsize \mbox{comp}}_m$ refers to the
-energy needed by all the leader nodes to solve the integer program during a
-period. Finally, $E^a_t$ and $E^s_t$ indicate the energy consummed by the whole
-network in round $t$.
+% Old version -> where $M_L$ and $T_L$ are respectively the number of periods and rounds during
+%$Lifetime_{95}$ or $Lifetime_{50}$.
+% New version
+where $M$ is the number of periods and $T_m$ the number of rounds in a
+period~$m$, both during $Lifetime_{95}$ or $Lifetime_{50}$. The total energy
+consumed by the sensors (EC) is computed by taking into consideration four main
+energy factors (a short computation sketch is given after this list). The first
+one, denoted $E^{\scriptsize \mbox{com}}_m$,
+represents the energy consumption spent by all the nodes for wireless
+communications during period $m$. $E^{\scriptsize \mbox{list}}_m$, the next
+factor, corresponds to the energy consumed by the sensors in LISTENING status
+before receiving the decision to go active or sleep in period $m$.
+$E^{\scriptsize \mbox{comp}}_m$ refers to the energy needed by all the leader
+nodes to solve the integer program during a period. Finally, $E^a_t$ and $E^s_t$
+indicate the energy consumed by the whole network in round $t$.
%\item {Network Lifetime:} we have defined the network lifetime as the time until all
%nodes have been drained of their energy or each sensor network monitoring an area has become disconnected.
\end{enumerate}
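To illustrate how EC is evaluated in practice, the following short Python
sketch computes it from per-period logs. The data structure, the field names,
and the numerical values are hypothetical and only serve as an example; they
are not taken from our simulator.

\begin{verbatim}
# Each period m provides its communication, listening and computation
# energies, plus (active, sleep) energies for each of its T_m rounds.
periods = [
    {"com": 12.0, "list": 3.0, "comp": 5.0,
     "rounds": [(40.0, 2.0), (38.0, 2.1)]},
    {"com": 11.5, "list": 2.8, "comp": 4.9,
     "rounds": [(39.0, 2.0), (37.5, 2.2), (36.0, 2.3)]},
]

def energy_consumption(periods):
    total, nb_rounds = 0.0, 0
    for p in periods:                                   # periods m = 1..M
        total += p["com"] + p["list"] + p["comp"]
        total += sum(e_a + e_s for e_a, e_s in p["rounds"])  # rounds 1..T_m
        nb_rounds += len(p["rounds"])
    return total / nb_rounds    # EC = total energy / total number of rounds

print(energy_consumption(periods))
\end{verbatim}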
+\subsection{Results and analysis}
-\section{Results and analysis}
-
-\subsection{Coverage ratio}
+\subsubsection{Coverage ratio}
Figure~\ref{fig3} shows the average coverage ratio for 150 deployed nodes. We
can notice that for the first thirty rounds both DESK and GAF provide a coverage
-which is a little bit better than the one of MuDiLCO-T.
-%%RC : need to uniformize MuDiLCO or MuDiLCO-T?
-
+which is a little bit better than the one of MuDiLCO.
+%%RC : need to uniformize MuDiLCO or MuDiLCO-T?
+%%MS : MuDiLCO everywhere
%%RC maybe increase the size of the figure for the reviewers, no?
+This is due to the fact that, in comparison with MuDiLCO which uses optimization
+to put redundant sensors in SLEEP status, more sensor nodes remain active with
+DESK and GAF. As a consequence, when the number of rounds increases, a larger
+number of node failures can be observed in DESK and GAF, resulting in a faster
+decrease of the coverage ratio. Furthermore, our protocol makes it possible to
+maintain a coverage ratio greater than 50\% for far more rounds. Overall, the
+proposed sensor activity scheduling based on optimization in MuDiLCO maintains
+higher coverage ratios of the area of interest for a larger number of rounds.
+It also means that MuDiLCO saves more energy, with fewer dead nodes, during
+most of the rounds, and thus should extend the network lifetime.
-This is due to the fact
-that in comparison with MuDiLCO-T that uses optimization to put in SLEEP status
-redundant sensors, more sensor nodes remain active with DESK and GAF. As a
-consequence, when the number of rounds increases, a larger number of node
-failures can be observed in DESK and GAF, resulting in a faster decrease of the
-coverage ratio. Furthermore, our protocol allows to maintain a coverage ratio
-greater than 50\% for far more rounds. Overall, the proposed sensor activity
-scheduling based on optimization in MuDiLCO maintains higher coverage ratios of
-the area of interest for a larger number of rounds. It also means that MuDiLCO-T
-saves more energy, with less dead nodes, at most for several rounds, and thus
-should extend the network lifetime.
-
-\begin{figure}[t!]
+\begin{figure}[ht!]
\centering
- \includegraphics[scale=0.5] {R1/CR.pdf}
+ \includegraphics[scale=0.5] {R/CR.pdf}
\caption{Average coverage ratio for 150 deployed nodes}
\label{fig3}
\end{figure}
-\subsection{Active sensors ratio}
+\subsubsection{Active sensors ratio}
It is crucial to have as few active nodes as possible in each round, in order to
minimize the communication overhead and maximize the network
lifetime. Figure~\ref{fig4} presents the active sensor ratio for 150 deployed
nodes all along the network lifetime. It appears that up to round thirteen, DESK
and GAF have respectively 37.6\% and 44.8\% of nodes in ACTIVE status, whereas
-MuDiLCO-T clearly outperforms them with only 24.8\% of active nodes. After the
-thirty fifth round, MuDiLCO-T exhibits larger number of active nodes, which
-agrees with the dual observation of higher level of coverage made previously.
+MuDiLCO clearly outperforms them with only 24.8\% of active nodes. After the
+thirty-fifth round, MuDiLCO exhibits larger numbers of active nodes, which agrees
+with the dual observation of higher level of coverage made previously.
Obviously, in that case DESK and GAF have fewer active nodes, since they have
-activated many nodes at the beginning. Anyway, MuDiLCO-T activates the available
+activated many nodes at the beginning. Anyway, MuDiLCO activates the available
nodes in a more efficient manner.
-\begin{figure}[t!]
+\begin{figure}[ht!]
\centering
-\includegraphics[scale=0.5]{R1/ASR.pdf}
+\includegraphics[scale=0.5]{R/ASR.pdf}
\caption{Active sensors ratio for 150 deployed nodes}
\label{fig4}
\end{figure}
-\subsection{Stopped simulation runs}
+\subsubsection{Stopped simulation runs}
%The results presented in this experiment, is to show the comparison of our MuDiLCO protocol with other two approaches from the point of view the stopped simulation runs per round. Figure~\ref{fig6} illustrates the percentage of stopped simulation
%runs per round for 150 deployed nodes.
Figure~\ref{fig6} reports the cumulative percentage of stopped simulation runs
-per round for 150 deployed nodes. This figure gives the breakpoint for each of
-the methods. DESK stops first, after around 45~rounds, because it consumes the
+per round for 150 deployed nodes. This figure gives the breakpoint for each method. DESK stops first, after approximately 45~rounds, because it consumes
more energy by turning on a large number of redundant nodes during the sensing
-phase. GAF stops secondly for the same reason than DESK. MuDiLCO-T overcomes
+phase. GAF stops second for the same reason as DESK. MuDiLCO outperforms
DESK and GAF because the optimization process distributed over several subregions
leads to coverage preservation and so extends the network lifetime. Let us
emphasize that the simulation continues as long as a network in a subregion is
%%% The optimization effectively continues as long as a network in a subregion is still connected. A VOIR %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\begin{figure}[t!]
+\begin{figure}[ht!]
\centering
-\includegraphics[scale=0.5]{R1/SR.pdf}
+\includegraphics[scale=0.5]{R/SR.pdf}
\caption{Cumulative percentage of stopped simulation runs for 150 deployed nodes }
\label{fig6}
\end{figure}
-\subsection{Energy Consumption} \label{subsec:EC}
+\subsubsection{Energy consumption} \label{subsec:EC}
We measure the energy consumed by the sensors during the communication,
listening, computation, active, and sleep status for different network densities
\begin{figure}[h!]
\centering
\begin{tabular}{cl}
- \parbox{9.5cm}{\includegraphics[scale=0.5]{R1/EC95.pdf}} & (a) \\
+ \parbox{9.5cm}{\includegraphics[scale=0.5]{R/EC95.pdf}} & (a) \\
\verb+ + \\
- \parbox{9.5cm}{\includegraphics[scale=0.5]{R1/EC50.pdf}} & (b)
+ \parbox{9.5cm}{\includegraphics[scale=0.5]{R/EC50.pdf}} & (b)
\end{tabular}
\caption{Energy consumption for (a) $Lifetime_{95}$ and
(b) $Lifetime_{50}$}
\label{fig7}
\end{figure}
-The results show that MuDiLCO-T is the most competitive from the energy
+The results show that MuDiLCO is the most competitive from the energy
consumption point of view. The other approaches have a high energy consumption
due to activating a larger number of redundant nodes, as well as the energy
consumed during the different statuses of the sensor node. Among the different
MuDiLCO versions, the energy consumption increases with the number of rounds $T$,
since the leader node of each subregion must solve a larger optimization problem.
%In fact, a distributed optimization decision, which produces T rounds, on the subregions is greatly reduced the cost of communications and the time of listening as well as the energy needed for sensing phase and computation so thanks to the partitioning of the initial network into several independent subnetworks and producing T rounds for each subregion periodically.
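
The per-status accounting described above amounts to computing, for every node,
$E = \sum_{s} P_s \, t_s$, where $s$ ranges over the communication, listening,
computation, active, and sleep statuses. The Python sketch below illustrates this
bookkeeping; the power values are placeholders chosen for the example, not the
ones used in the reported simulations.
\begin{verbatim}
# Illustration only: the power values below are placeholders, NOT the
# ones used in the reported simulations.
POWER_MW = {"communication": 60.0, "listening": 30.0,
            "computation": 25.0, "active": 20.0, "sleep": 0.02}

def node_energy_joules(seconds_per_status):
    """Energy (J) of one node, given the time (s) spent in each status."""
    return sum(POWER_MW[s] * 1e-3 * t for s, t in seconds_per_status.items())

# Example: one node over one period.
print(node_energy_joules({"listening": 10.0, "active": 3600.0, "sleep": 0.0}))
\end{verbatim}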
-\subsection{Execution time}
+\subsubsection{Execution time}
We observe the impact of the network size and of the number of rounds on the
computation time. Figure~\ref{fig77} gives the average execution times in
-seconds (needed to solve optimization problem) for different values of $T$. The
+seconds (needed to solve the optimization problem) for different values of $T$. The AMPL modeling language (A Modeling Language for Mathematical Programming)~\cite{AMPL} is employed to generate the Mixed Integer Linear Program instance in a standard format, which is then read and solved by the freely available optimization solver GLPK (GNU Linear Programming Kit)~\cite{glpk} through a branch-and-bound method. The
original execution time is computed on a DELL laptop with an Intel Core~i3-2370M
(2.4~GHz, 2 cores) processor and a MIPS (Million Instructions Per Second) rate
equal to 35330. To be consistent with the use of a sensor node built around an
Atmel AVR microcontroller with a MIPS rate of about 6, the laptop execution time
is multiplied by $\left( \frac{35330}{2} \times \frac{1}{6} \right) \approx 2944$
and reported in Figure~\ref{fig77} for different network sizes.
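
To give a concrete flavour of this tool chain without reproducing the full integer
program, the sketch below builds a deliberately tiny cover-scheduling toy model
(much simpler than the MuDiLCO program) with the PuLP Python library and hands it
to GLPK, then rescales a hypothetical laptop solving time with the same
$\frac{35330}{2} \times \frac{1}{6}$ factor as above. PuLP, the toy data, and the
measured time are illustrative assumptions; the experiments themselves rely on AMPL.
\begin{verbatim}
import pulp

# Toy cover-scheduling model (illustration only, not the MuDiLCO program):
# pick, for each of T rounds, active sensors so that every point is covered,
# while respecting each sensor's remaining energy budget (in rounds).
T, sensors, points = 2, ["s1", "s2", "s3"], ["p1", "p2"]
covers = {"s1": {"p1"}, "s2": {"p2"}, "s3": {"p1", "p2"}}  # assumed coverage
budget = {"s1": 1, "s2": 1, "s3": 1}

x = pulp.LpVariable.dicts("x", (sensors, range(T)), cat="Binary")
prob = pulp.LpProblem("toy_cover", pulp.LpMinimize)
prob += pulp.lpSum(x[s][t] for s in sensors for t in range(T))  # few active nodes
for t in range(T):
    for p in points:  # every point covered at every round
        prob += pulp.lpSum(x[s][t] for s in sensors if p in covers[s]) >= 1
for s in sensors:     # energy budget over the T rounds
    prob += pulp.lpSum(x[s][t] for t in range(T)) <= budget[s]
prob.solve(pulp.GLPK_CMD(msg=False))  # needs the glpsol binary on the PATH

laptop_time_s = 0.01  # hypothetical measured solving time on the laptop
print(laptop_time_s * (35330 / 2) * (1 / 6))  # rescaled to the sensor node
\end{verbatim}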
-\begin{figure}[t!]
+\begin{figure}[ht!]
\centering
-\includegraphics[scale=0.5]{R1/T.pdf}
+\includegraphics[scale=0.5]{R/T.pdf}
\caption{Execution time (in seconds)}
\label{fig77}
\end{figure}
As expected, the execution time increases with the number of rounds $T$ taken
-into account for scheduling of the sensing phase. The times obtained for $T=1,3$
-or $5$ seems bearable, but for $T=7$ they become quickly unsuitable for a sensor
+into account to schedule the sensing phase. The times obtained for $T=1$, $3$,
+or $5$ seem bearable, but for $T=7$ they quickly become unsuitable for a sensor
node, especially when the sensor network size increases. Again, we can notice
that if we want to schedule the node activities for a large number of rounds,
-we need to choose a relevant number of subregion in order to avoid a complicated
+we need to choose a relevant number of subregions in order to avoid a complicated
and cumbersome optimization. On the one hand, a large value for $T$ makes it
possible to reduce the energy overhead due to the three pre-sensing phases; on
the other hand, a leader node may waste a considerable amount of energy solving
the resulting optimization problem.
%While MuDiLCO-1, 3, and 5 solves the optimization process with suitable execution times to be used on wireless sensor network because it distributed on larger number of small subregions as well as it is used acceptable number of round(s) T. We think that in distributed fashion the solving of the optimization problem to produce T rounds in a subregion can be tackled by sensor nodes. Overall, to be able to deal with very large networks, a distributed method is clearly required.
-\subsection{Network Lifetime}
+\subsubsection{Network lifetime}
The next two figures, Figures~\ref{fig8}(a) and \ref{fig8}(b), illustrate the
network lifetime for different network sizes, respectively for $Lifetime_{95}$
and $Lifetime_{50}$. Both figures show that the network lifetime increases
together with the number of sensor nodes, whatever the protocol, thanks to the
-node density which result in more and more redundant nodes that can be
-deactivated and thus save energy. Compared to the other approaches, our
-MuDiLCO-T protocol maximizes the lifetime of the network. In particular the
-gain in lifetime for a coverage over 95\% is greater than 38\% when switching
-from GAF to MuDiLCO-3. The slight decrease that can bee observed for MuDiLCO-7
-in case of $Lifetime_{95}$ with large wireless sensor networks result from the
+node density, which results in more and more redundant nodes that can be
+deactivated and thus save energy. Compared to the other approaches, our MuDiLCO
+protocol maximizes the lifetime of the network. In particular, the gain in
+lifetime for a coverage over 95\% is greater than 38\% when switching from GAF
+to MuDiLCO-3. The slight decrease that can be observed for MuDiLCO-7 in the case
+of $Lifetime_{95}$ with large wireless sensor networks results from the
increased difficulty of solving the underlying integer program. This point was
already noticed in subsection~\ref{subsec:EC} devoted to the energy consumption,
since network lifetime and energy consumption are directly linked.
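
For completeness, $Lifetime_{95}$ and $Lifetime_{50}$ can be extracted from the
per-round coverage ratios by a simple threshold test, as sketched below in Python.
The coverage series and the expression of the lifetime in rounds are illustrative
assumptions made only for this example.
\begin{verbatim}
# Illustration only: rounds elapsed before the coverage ratio first drops
# below a given threshold (95% or 50%).
def lifetime(coverage_per_round, threshold):
    for rnd, ratio in enumerate(coverage_per_round, start=1):
        if ratio < threshold:
            return rnd - 1
    return len(coverage_per_round)

coverage = [99.5, 98.7, 96.2, 94.8, 80.1, 49.0]  # assumed per-round ratios
print(lifetime(coverage, 95.0), lifetime(coverage, 50.0))  # 3 5
\end{verbatim}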
\begin{figure}[t!]
\centering
\begin{tabular}{cl}
- \parbox{9.5cm}{\includegraphics[scale=0.5]{R1/LT95.pdf}} & (a) \\
+ \parbox{9.5cm}{\includegraphics[scale=0.5]{R/LT95.pdf}} & (a) \\
\verb+ + \\
- \parbox{9.5cm}{\includegraphics[scale=0.5]{R1/LT50.pdf}} & (b)
+ \parbox{9.5cm}{\includegraphics[scale=0.5]{R/LT50.pdf}} & (b)
\end{tabular}
\caption{Network lifetime for (a) $Lifetime_{95}$ and
(b) $Lifetime_{50}$}
\label{fig8}
\end{figure}
-% By choosing the best suited nodes, for each round, by optimizing the coverage and lifetime of the network to cover the area of interest with a maximum number rounds and by letting the other nodes sleep in order to be used later in next rounds, our MuDiLCO-T protocol efficiently prolonges the network lifetime.
+% By choosing the best suited nodes, for each round, by optimizing the coverage and lifetime of the network to cover the area of interest with a maximum number rounds and by letting the other nodes sleep in order to be used later in next rounds, our MuDiLCO protocol efficiently prolonges the network lifetime.
-%In Figure~\ref{fig8}, Comparison shows that our MuDiLCO-T protocol, which are used distributed optimization on the subregions with the ability of producing T rounds, is the best one because it is robust to network disconnection during the network lifetime as well as it consume less energy in comparison with other approaches. It also means that distributing the protocol in each sensor node and subdividing the sensing field into many subregions, which are managed independently and simultaneously, is the most relevant way to maximize the lifetime of a network.
+%In Figure~\ref{fig8}, Comparison shows that our MuDiLCO protocol, which are used distributed optimization on the subregions with the ability of producing T rounds, is the best one because it is robust to network disconnection during the network lifetime as well as it consume less energy in comparison with other approaches. It also means that distributing the protocol in each sensor node and subdividing the sensing field into many subregions, which are managed independently and simultaneously, is the most relevant way to maximize the lifetime of a network.
%We see that our MuDiLCO-7 protocol results in execution times that quickly become unsuitable for a sensor network as well as the energy consumption seems to be huge because it used a larger number of rounds T during performing the optimization decision in the subregions, which is led to decrease the network lifetime. On the other side, our MuDiLCO-1, 3, and 5 protocol seems to be more efficient in comparison with other approaches because they are prolonged the lifetime of the network more than DESK and GAF.
-\section{Conclusion and Future Works}
+\section{Conclusion and future works}
\label{sec:conclusion}
-In this paper, we have addressed the problem of the coverage and the lifetime
-optimization in wireless sensor networks. This is a key issue as sensor nodes
-have limited resources in terms of memory, energy, and computational power. To
-cope with this problem, the field of sensing is divided into smaller subregions
-using the concept of divide-and-conquer method, and then we propose a protocol
-which optimizes coverage and lifetime performances in each subregion. Our
-protocol, called MuDiLCO (Multiround Distributed Lifetime Coverage
-Optimization) combines two efficient techniques: network leader election and
-sensor activity scheduling.
+We have addressed the problem of coverage and lifetime optimization in wireless
+sensor networks. This is a key issue as sensor nodes have limited resources in
+terms of memory, energy, and computational power. To cope with this problem, the
+field of sensing is divided into smaller subregions following a
+divide-and-conquer approach, and then we propose a protocol that optimizes
+coverage and lifetime performances in each subregion. Our protocol, called
+MuDiLCO (Multiround Distributed Lifetime Coverage Optimization), combines two
+efficient techniques: network leader election and sensor activity scheduling.
%, where the challenges
%include how to select the most efficient leader in each subregion and
%the best cover sets %of active nodes that will optimize the network lifetime
%subregion using more than one cover set during the sensing phase.
The activity scheduling in each subregion works in periods, where each period
consists of four phases: (i) Information Exchange, (ii) Leader Election, (iii)
-Decision Phase to plan the activity of the sensors over $T$ rounds (iv) Sensing
-Phase itself divided into T rounds.
+Decision Phase to plan the activity of the sensors over $T$ rounds, and (iv)
+Sensing Phase, itself divided into $T$ rounds.
Simulation results show the relevance of the proposed protocol in terms of
lifetime, coverage ratio, active sensors ratio, energy consumption, and execution
time. Indeed, when dealing with large wireless sensor networks, a distributed
-approach like the one we propose allows to reduce the difficulty of a single
+approach, like the one we propose, makes it possible to reduce the difficulty of a single
global optimization problem by partitioning it into many smaller problems, one per
subregion, that can be solved more easily. Nevertheless, results also show that
it is not possible to plan the activity of sensors over too many rounds, because
-the resulting optimization problem leads to too high resolution time and thus to
+the resulting optimization problem leads to prohibitively long resolution times and thus to
excessive energy consumption.
%In future work, we plan to study and propose adjustable sensing range coverage optimization protocol, which computes all active sensor schedules in one time, by using