-\iffalse
-
-\subsection{Centralized Approaches}
-The major approach is to divide or organize the sensors into a suitable number
-of set covers, where each set completely covers the region of interest, and to
-activate these set covers successively. Centralized algorithms usually provide
-solutions that are close to optimal since they have a global view of the whole
-network. Note that centralized algorithms have the advantage of requiring very
-low processing power from the sensor nodes, which usually have limited
-processing capabilities. The main drawback of this kind of approach is its
-higher communication cost, since the node that makes the decision needs
-information from all the sensor nodes. Moreover, centralized approaches usually
-suffer from scalability problems, making them less competitive as the network
-size increases.
-
-The first algorithms proposed in the literature consider that the cover sets
-are disjoint: a sensor node appears in exactly one of the generated cover sets.
-For instance, Slijepcevic and Potkonjak \cite{Slijepcevic01powerefficient} have
-proposed an algorithm that allocates sensor nodes to mutually independent sets
-to monitor an area divided into several fields. Their algorithm builds a cover
-set by first including the sensor nodes that cover critical fields, that is to
-say fields covered by the smallest number of sensors. The time complexity of
-their heuristic is $O(n^2)$, where $n$ is the number of sensors. Abrams et
-al.~\cite{abrams2004set} have designed three approximation algorithms for a
-variation of the set $k$-cover problem, where the objective is to partition the
-sensors into covers such that the number of covers that include an area, summed
-over all areas, is maximized. Their work builds upon the previous work
-in~\cite{Slijepcevic01powerefficient}, but the generated cover sets do not
-provide complete coverage of the monitoring zone.
-
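-As an illustration of this family of greedy constructions, the following Python
-sketch builds disjoint cover sets by repeatedly selecting, among the sensors
-covering the most critical uncovered field, the one that covers the largest
-number of still uncovered fields. It is only a simplified sketch in the spirit
-of \cite{Slijepcevic01powerefficient}, not the authors' actual heuristic, and
-the data structures (sensor and field identifiers, coverage map) are our own
-assumptions.
-\begin{verbatim}
-# Simplified greedy sketch (not the actual heuristic of Slijepcevic
-# and Potkonjak): build disjoint cover sets, giving priority to
-# critical fields.
-# coverage: dict mapping a sensor id to the set of field ids it covers.
-def disjoint_cover_sets(coverage, fields):
-    remaining = set(coverage)          # sensors not yet assigned to a cover
-    covers = []
-    while remaining and fields:
-        uncovered = set(fields)
-        cover, pool = [], set(remaining)
-        while uncovered and pool:
-            # critical field: uncovered field seen by the fewest sensors
-            crit = min(uncovered,
-                       key=lambda f: sum(f in coverage[s] for s in pool))
-            candidates = [s for s in pool if crit in coverage[s]]
-            if not candidates:
-                return covers          # this field can no longer be covered
-            best = max(candidates,     # prefer covering most uncovered fields
-                       key=lambda s: len(coverage[s] & uncovered))
-            cover.append(best)
-            pool.discard(best)
-            uncovered -= coverage[best]
-        if uncovered:
-            return covers              # remaining sensors form no full cover
-        covers.append(cover)
-        remaining -= set(cover)        # disjointness: a sensor is used once
-    return covers
-\end{verbatim}
-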
-In \cite{cardei2005improving}, the authors have proposed a method to
-efficiently compute the maximum number of disjoint set covers such that each
-set can monitor all targets. They first transform the problem into a maximum
-flow problem, which is formulated as a mixed integer program (MIP). Their
-heuristic then uses the output of the MIP to compute disjoint set covers.
-Results show that this heuristic provides a slightly larger number of set
-covers than \cite{Slijepcevic01powerefficient}, but at the cost of a higher
-execution time due to the complexity of solving the mixed integer program.
-
-Zorbas et al. \cite{zorbas2010solving} presented a centralized greedy algorithm
-for the efficient production of both node-disjoint and non-disjoint cover sets.
-Compared to the results of the algorithm of Slijepcevic and Potkonjak
-\cite{Slijepcevic01powerefficient}, their heuristic produces more disjoint
-cover sets with only a slight increase in execution time. When producing
-non-disjoint cover sets, both their Static-CCF and Dynamic-CCF algorithms,
-where CCF stands for the cost function called Critical Control Factor, provide
-cover sets offering a longer network lifetime than those produced by
-\cite{cardei2005energy}. Moreover, they require a smaller number of
-participating nodes to achieve these results.
-
-In the case of non-disjoint algorithms \cite{pujari2011high}, sensors may
-participate in more than one cover set. In some cases, this may prolong the
-lifetime of the network in comparison to disjoint cover set algorithms, but
-designing algorithms for non-disjoint cover sets generally induces a higher
-order of complexity. Moreover, in case of a sensor failure, non-disjoint
-scheduling policies are less resilient and less reliable because a sensor may
-be involved in more than one cover set. For instance, Cardei et
-al.~\cite{cardei2005energy} present a linear programming (LP) solution and a
-greedy approach to extend the sensor network lifetime by organizing the sensors
-into a maximal number of non-disjoint cover sets. Simulation results show that
-by allowing sensors to participate in multiple sets, the network lifetime
-increases compared with related work~\cite{cardei2005improving}.
-In~\cite{berman04}, the authors have formulated the lifetime problem and
-suggested another LP technique to solve it. A centralized solution based on the
-Garg-K\"{o}nemann algorithm~\cite{garg98}, which is provably close to the
-optimal solution, is also proposed.
-
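-To illustrate the general shape of these LP-based formulations, assume that a
-collection of cover sets $\mathcal{C}_1,\dots,\mathcal{C}_m$ has been
-generated, that $t_j$ denotes the time during which cover set $\mathcal{C}_j$
-is activated, and that $E_i$ is the battery lifetime of sensor $s_i$ expressed
-in units of activity time. The lifetime maximization problem can then be
-sketched as the packing linear program below; the notation is ours and does not
-reproduce the exact models of the cited works:
-\[
-\max \sum_{j=1}^{m} t_j
-\quad \mbox{subject to} \quad
-\sum_{j \,:\, s_i \in \mathcal{C}_j} t_j \le E_i \;\; \forall i,
-\qquad t_j \ge 0 \;\; \forall j.
-\]
-The cited approaches differ mainly in how the cover sets entering this program
-are generated: explicitly, through column generation, or implicitly within an
-approximation framework such as the Garg-K\"{o}nemann algorithm.
-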
-In~\cite{yang2014maximum}, the authors have proposed a linear programming
-approach for selecting the minimum number of working sensor nodes, so as to
-preserve maximum coverage and extend the lifetime of the network. Cheng et
-al.~\cite{cheng2014energy} have defined a heuristic algorithm called Cover Sets
-Balance (CSB), which chooses a set of active nodes using the tuple (data
-coverage range, residual energy). They have then introduced a new Correlated
-Node Set Computing (CNSC) algorithm to find the correlated node set for a given
-node. Finally, they have proposed a High Residual Energy First (HREF) node
-selection algorithm to minimize the number of active nodes so as to prolong the
-network lifetime. Various centralized methods based on column generation
-approaches have also been
-proposed~\cite{castano2013column,rossi2012exact,deschinkel2012column}.
-
-\subsection{Distributed approaches}
-In distributed and localized coverage algorithms, the computation required to
-schedule the activity of sensor nodes is performed through cooperation among
-neighboring nodes. These algorithms may require more processing power from the
-cooperating sensor nodes, but they are more scalable for large WSNs. Localized
-and distributed algorithms generally result in non-disjoint set covers.
-
-Many distributed algorithms have been developed to perform the scheduling so as
-to preserve coverage, see for example
-\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02,yardibi2010distributed}.
-Distributed algorithms typically operate in rounds for a predetermined
-duration. At the beginning of each round, a sensor exchanges information with
-its neighbors and makes a decision to either remain turned on or to go to sleep
-for the round. This decision is basically made according to simple greedy
-criteria such as the largest uncovered area \cite{Berman05efficientenergy} or
-the maximum number of uncovered targets \cite{lu2003coverage}. In
-\cite{Tian02}, the scheduling scheme is divided into rounds, where each round
-has a self-scheduling phase followed by a sensing phase. Each sensor broadcasts
-a message containing its node~ID and location to its neighbors at the beginning
-of each round. A sensor then determines its status by a so-called off-duty
-eligible rule, which tells it to turn off if its sensing area is covered by its
-neighbors. A back-off scheme is introduced to let each sensor delay the
-decision process by a random period of time, in order to avoid simultaneous
-conflicting decisions between nodes and the resulting lack of coverage in some
-areas. In \cite{prasad2007distributed}, a model capturing the dependencies
-between different cover sets is defined and a localized heuristic based on
-these dependencies is proposed. The algorithm consists of two phases: an
-initial setup phase, during which each sensor computes and prioritizes the
-covers, and a sensing phase, during which each sensor first decides its on/off
-status and then remains on or off for the rest of the duration.
-
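-To make the self-scheduling idea above concrete, the following Python sketch
-shows how a node could check its eligibility to turn off once its random
-back-off has expired. The coverage test is only approximated by sampling points
-inside the node's own sensing disk; it is not the exact geometric rule of
-\cite{Tian02}, and the function names and the sampling-based test are our own
-assumptions.
-\begin{verbatim}
-import math, random
-
-# Simplified sketch of the round-based self-scheduling decision: after
-# its random back-off, a node declares itself off-duty eligible if its
-# sensing disk appears fully covered by its active neighbors.  The
-# point-sampling test is an approximation of the real geometric rule.
-def covered(point, neighbor_positions, r):
-    return any(math.dist(point, n) <= r for n in neighbor_positions)
-
-def off_duty_eligible(own_pos, neighbor_positions, r, samples=200):
-    for _ in range(samples):
-        # draw a random point uniformly inside the node's sensing disk
-        a = random.uniform(0.0, 2.0 * math.pi)
-        d = r * math.sqrt(random.random())
-        p = (own_pos[0] + d * math.cos(a), own_pos[1] + d * math.sin(a))
-        if not covered(p, neighbor_positions, r):
-            return False   # part of the node's own area would be uncovered
-    return True            # eligible to turn off for this round
-\end{verbatim}
-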
-The authors in \cite{yardibi2010distributed} have developed a Distributed
-Adaptive Sleep Scheduling Algorithm (DASSA) for WSNs with partial coverage.
-DASSA does not require location information of sensors while maintaining
-connectivity and satisfying a user-defined coverage target. In DASSA, nodes use
-the residual energy levels and feedback from the sink to schedule the activity
-of their neighbors. This feedback mechanism reduces the randomness in
-scheduling that would otherwise occur due to the absence of location
-information. In \cite{ChinhVu}, the authors have proposed a novel distributed
-heuristic, called Distributed Energy-efficient Scheduling for k-coverage
-(DESK), which ensures that the energy consumption among the sensors is balanced
-and the lifetime is maximized while the coverage requirement is maintained.
-This heuristic works in rounds, requires only one-hop neighbor information, and
-lets each sensor decide its status (active or sleep) based on the perimeter
-coverage model proposed in \cite{Huang:2003:CPW:941350.941367}.
-
-%Our Work, which is presented in~\cite{idrees2014coverage} proposed a coverage optimization protocol to improve the lifetime in
-%heterogeneous energy wireless sensor networks.
-%In this work, the coverage protocol distributed in each sensor node in the subregion but the optimization take place over the the whole subregion. We consider only distributing the coverage protocol over two subregions.
-
-The works presented in \cite{Bang, Zhixin, Zhang} focus on coverage-aware,
-distributed energy-efficient, and distributed clustering methods, respectively,
-which aim to extend the network lifetime while ensuring coverage. S.
-Misra et al. \cite{Misra} have proposed a localized algorithm for coverage in
-sensor networks. The algorithm conserves energy while ensuring network coverage
-by activating the subset of sensors with the minimum overlap area. The proposed
-method preserves network connectivity by forming a network backbone. More
-recently, Shibo et al. \cite{Shibo} have expressed the coverage problem as a
-minimum weight submodular set cover problem and proposed a Distributed
-Truncated Greedy Algorithm (DTGA) to solve it. They take advantage of both
-temporal and spatial correlations between data sensed by different sensors, and
-leverage prediction to improve the lifetime. In \cite{xu2001geography}, Xu et
-al. have proposed an algorithm, called Geographical Adaptive Fidelity (GAF),
-which uses geographic location information to divide the area of interest into
-fixed square grids. Within each grid cell, only one node stays awake and takes
-responsibility for sensing and communication.
-
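-As a rough illustration of this grid-based idea, the sketch below assigns each
-node to a fixed square cell computed from its coordinates and keeps a single
-node awake per cell. The cell side and the election criterion (highest residual
-energy) are simplifying assumptions of ours, not GAF's actual ranking rules.
-\begin{verbatim}
-from collections import defaultdict
-
-# Illustrative sketch of grid-based duty selection in the spirit of GAF.
-# Electing the node with the highest residual energy in each cell is a
-# simplifying assumption, not GAF's exact state machine.
-def awake_nodes(nodes, cell_side):
-    """nodes: iterable of (node_id, x, y, residual_energy) tuples."""
-    cells = defaultdict(list)
-    for node_id, x, y, energy in nodes:
-        cell = (int(x // cell_side), int(y // cell_side))
-        cells[cell].append((energy, node_id))
-    # keep only the most energetic node of each cell awake
-    return [max(members)[1] for members in cells.values()]
-\end{verbatim}
-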
-Some other approaches (outside the scope of our work) do not consider a
-synchronized and predetermined period of time during which the sensors are
-active or not. Instead, each sensor maintains its own timer and its wake-up
-time is randomized \cite{Ye03} or regulated \cite{cardei2005maximum} over time.
-
-The MuDiLCO protocol (Multiround Distributed Lifetime Coverage Optimization
-protocol) presented in this paper is an extension of the approach introduced
-in~\cite{idrees2014coverage}, where the protocol is deployed over only two
-subregions. Simulation results have shown that it is more advantageous to
-divide the area into several subregions, given the computational complexity.
-Compared to our previous paper, in this one we study the possibility of
-dividing the sensing phase into multiple rounds, and we also add an improved
-energy consumption model to assess the efficiency of our approach.
-
-
-
-
-\fi
-%The main contributions of our MuDiLCO Protocol can be summarized as follows:
-%(1) The high coverage ratio, (2) The reduced number of active nodes, (3) The distributed optimization over the subregions in the area of interest, (4) The distributed dynamic leader election at each round based on some priority factors that led to energy consumption balancing among the nodes in the same subregion, (5) The primary point coverage model to represent each sensor node in the network, (6) The activity scheduling based optimization on the subregion, which are based on the primary point coverage model to activate as less number as possible of sensor nodes for a multirounds to take the mission of the coverage in each subregion, (7) The very low energy consumption, (8) The higher network lifetime.
-%\section{Preliminaries}
-%\label{Pr}
-
-%Network Lifetime
-
-%\subsection{Network Lifetime}
-%Various definitions exist for the lifetime of a sensor
-%network~\cite{die09}. The main definitions proposed in the literature are
-%related to the remaining energy of the nodes or to the coverage percentage.
-%The lifetime of the network is mainly defined as the amount
-%of time during which the network can satisfy its coverage objective (the
-%amount of time that the network can cover a given percentage of its
-%area or targets of interest). In this work, we assume that the network
-%is alive until all nodes have been drained of their energy or the
-%sensor network becomes disconnected, and we measure the coverage ratio
-%during the WSN lifetime. Network connectivity is important because an
-%active sensor node without connectivity towards a base station cannot
-%transmit information on an event in the area that it monitors.