+
+\subsection{Centralized approaches}
+
+The major approach is to divide/organize the sensors into a suitable number of
+cover sets, where each set completely covers the region of interest, and to
+activate these cover sets successively. Centralized algorithms generally
+provide optimal or near-optimal solutions since they have a global view of the
+whole network. They also have the advantage of requiring very little processing
+power from the sensor nodes, which usually have limited processing
+capabilities. The main drawback of this kind of approach is its higher
+communication cost, since the node that makes the decision needs information
+from all the sensor nodes. \textcolor{green}{Exact and heuristic approaches have been designed to compute such cover sets.
+ %(Moreover, centralized approaches usually
+%suffer from the scalability problem, making them less competitive as the network
+%size increases.)
+Contrary to exact methods, heuristic methods can handle very large instances of
+the centralized problem. They are designed to reduce the computational
+overhead, and thereby the energy consumption and the delay, and generally to
+increase the network lifetime. }
+
+The first algorithms proposed in the literature consider that the cover sets are
+disjoint: a sensor node appears in exactly one of the generated cover
+sets~\cite{abrams2004set,cardei2005improving,Slijepcevic01powerefficient}. In
+the case of non-disjoint algorithms \cite{pujari2011high}, sensors may
+participate in more than one cover set. In some cases, this may prolong the
+lifetime of the network in comparison to the disjoint cover set algorithms, but
+designing algorithms for non-disjoint cover sets generally induces a higher
+order of complexity. Moreover, in case of a sensor failure, non-disjoint
+scheduling policies are less resilient and less reliable because a sensor may be
+involved in more than one cover set.
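To make the disjoint case concrete, the following sketch builds disjoint cover sets greedily from a target-coverage relation. It is only an illustration under simplifying assumptions: the sensor/target data and the greedy rule are hypothetical, not the exact algorithms of the cited works.

```python
# Illustrative greedy construction of disjoint cover sets for target coverage.
# The data and the greedy rule are simplifying assumptions, not the exact
# algorithms of the cited works.

def disjoint_cover_sets(coverage):
    """coverage maps each sensor to the set of targets it monitors.
    Returns a list of disjoint cover sets, each covering every target."""
    targets = set().union(*coverage.values())
    available = dict(coverage)   # sensors not yet assigned to any cover set
    cover_sets = []
    while True:
        uncovered = set(targets)
        current = []
        pool = dict(available)
        while uncovered:
            # greedily pick the sensor covering the most uncovered targets
            best = max(pool, key=lambda s: len(pool[s] & uncovered), default=None)
            if best is None or not (pool[best] & uncovered):
                return cover_sets   # no further complete cover set exists
            uncovered -= pool[best]
            current.append(best)
            del pool[best]
        for s in current:
            del available[s]        # disjoint: a sensor joins one set only
        cover_sets.append(current)

coverage = {"s1": {"t1", "t2"}, "s2": {"t3"},
            "s3": {"t1"}, "s4": {"t2", "t3"}}
cover_sets = disjoint_cover_sets(coverage)  # two disjoint cover sets
```

Activating the resulting sets one after another, instead of keeping every sensor on, is what extends the network lifetime in this family of approaches.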
+%For instance, the proposed work in ~\cite{cardei2005energy, berman04}
+
+In~\cite{yang2014maximum}, the authors have considered a linear programming
+approach to select the minimum number of working sensor nodes, in order to
+preserve a maximum coverage and to extend the network lifetime. Cheng et
+al.~\cite{cheng2014energy} have defined a heuristic algorithm called Cover Sets
+Balance (CSB), which chooses a set of active nodes using the tuple (data
+coverage range, residual energy). Then, they have introduced a new Correlated
+Node Set Computing (CNSC) algorithm to find the correlated node set for a given
+node. After that, they proposed a High Residual Energy First (HREF) node
+selection algorithm to minimize the number of active nodes so as to prolong the
+network lifetime. Various centralized methods based on column generation
+approaches have also been
+proposed~\cite{gentili2013,castano2013column,rossi2012exact,deschinkel2012column}.
+\textcolor{green}{In~\cite{gentili2013}, the authors highlight the trade-off between the network lifetime and the coverage percentage. They show that the network lifetime can be significantly improved by decreasing the coverage ratio. }
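The selection problem addressed by these works can be written, in a generic form given only for illustration (the symbols $S$, $T$, $C_s$ and $x_s$ are ours, not the exact models of the cited papers), as the integer program
\[
\min \sum_{s \in S} x_s
\quad \mbox{subject to} \quad
\sum_{s \in S :\, t \in C_s} x_s \geq 1 \quad \forall t \in T,
\qquad x_s \in \{0,1\} \quad \forall s \in S,
\]
where $S$ is the set of sensors, $T$ the set of targets, $C_s$ the subset of targets covered by sensor $s$, and $x_s$ indicates whether sensor $s$ is kept active.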
+
+\subsection{Distributed approaches}
+%{\bf Distributed approaches}
+In distributed and localized coverage algorithms, the computation required to
+schedule the activity of sensor nodes is performed through cooperation among
+neighboring nodes. These algorithms may demand more processing power from the
+cooperating sensor nodes, but they are more scalable for large WSNs. Localized
+and distributed algorithms generally result in non-disjoint cover sets.
+
+Many distributed algorithms have been developed to perform the scheduling so as
+to preserve coverage, see for example
+\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02, yardibi2010distributed,
+ prasad2007distributed,Misra}. Distributed algorithms typically operate in
+rounds for a predetermined duration. At the beginning of each round, a sensor
+exchanges information with its neighbors and makes a decision to either remain
+turned on or to go to sleep for the round. This decision is typically based on
+simple greedy criteria like the largest uncovered area
+\cite{Berman05efficientenergy} or maximum uncovered targets
+\cite{lu2003coverage}. The Distributed Adaptive Sleep Scheduling Algorithm
+(DASSA) \cite{yardibi2010distributed} does not require location information of
+sensors while maintaining connectivity and satisfying a user-defined coverage
+target. In DASSA, nodes use the residual energy levels and feedback from the
+sink for scheduling the activity of their neighbors. This feedback mechanism
+reduces the randomness in scheduling that would otherwise occur due to the
+absence of location information. In \cite{ChinhVu}, the authors have designed a
+novel distributed heuristic, called Distributed Energy-efficient Scheduling for
+k-coverage (DESK), which ensures that the energy consumption among the sensors
+is balanced and the lifetime maximized while the coverage requirement is
+maintained. This heuristic works in rounds, requires only one-hop neighbor
+information, and each sensor decides its status (active or sleep) based on the
+perimeter coverage model from~\cite{Huang:2003:CPW:941350.941367}.
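The round-based decision described above can be sketched as follows. This is a hedged illustration of a generic localized rule, under which a sensor sleeps when a higher-priority neighbor already covers its targets; it is not the exact criterion of DASSA or DESK, and all names and data are hypothetical.

```python
# Hedged sketch of a localized, round-based sleep decision. The rule below
# (sleep if neighbors with more residual energy cover all of your targets)
# and all data are illustrative assumptions, not the exact criteria of
# DASSA or DESK.

def round_decisions(sensors):
    """sensors maps a name to (residual_energy, covered_targets, neighbors).
    Returns the decision ('active' or 'sleep') of each sensor for the round."""
    decision = {}
    for name, (energy, targets, neighbors) in sensors.items():
        covered_by_others = set()
        for n in neighbors:
            n_energy, n_targets, _ = sensors[n]
            if (n_energy, n) > (energy, name):   # higher-priority neighbor
                covered_by_others |= n_targets
        decision[name] = "sleep" if targets <= covered_by_others else "active"
    return decision

sensors = {"a": (5.0, {"t1"}, {"b"}),
           "b": (9.0, {"t1", "t2"}, {"a"})}
decisions = round_decisions(sensors)  # "a" sleeps, "b" stays active
```

Only one-hop neighbor information is used, which is what keeps this family of rules cheap in communication and scalable.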
+
+%Our Work, which is presented in~\cite{idrees2014coverage} proposed a coverage optimization protocol to improve the lifetime in
+%heterogeneous energy wireless sensor networks.
+%In this work, the coverage protocol distributed in each sensor node in the subregion but the optimization take place over the the whole subregion. We consider only distributing the coverage protocol over two subregions.
+
+The works presented in \cite{Bang, Zhixin, Zhang} focus on coverage-aware,
+distributed energy-efficient, and distributed clustering methods respectively,
+all of which aim at extending the network lifetime while ensuring coverage.
+More recently, Shibo et al. \cite{Shibo} have expressed the coverage problem as
+a minimum weight submodular set cover problem and proposed a Distributed
+Truncated Greedy Algorithm (DTGA) to solve it. They take advantage of both
+temporal and spatial correlations between data sensed by different sensors, and
+leverage prediction to improve the lifetime. In \cite{xu2001geography}, Xu et
+al. have described an algorithm, called Geographical Adaptive Fidelity (GAF),
+which uses geographic location information to divide the area of interest into
+fixed square grids. Within each grid, it keeps only one node staying awake to
+take the responsibility of sensing and communication.
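The grid-based idea behind GAF can be sketched as below; the cell side and the election rule (keeping the node with most residual energy awake) are simplifying assumptions for illustration, not the exact GAF parameters.

```python
# Illustrative GAF-style grid assignment: divide the area into fixed square
# cells and keep a single node awake per cell. The cell side and the
# election rule (most residual energy) are simplifying assumptions, not
# the exact GAF parameters.
import math

def gaf_leaders(nodes, cell_side):
    """nodes maps a name to (x, y, residual_energy).
    Returns, for each grid cell, the name of the node kept awake."""
    leaders = {}
    for name, (x, y, energy) in nodes.items():
        cell = (math.floor(x / cell_side), math.floor(y / cell_side))
        current = leaders.get(cell)
        if current is None or nodes[current][2] < energy:
            leaders[cell] = name   # elect the node with most residual energy
    return leaders

nodes = {"n1": (0.5, 0.5, 3.0), "n2": (0.8, 0.2, 7.0), "n3": (1.5, 0.4, 2.0)}
leaders = gaf_leaders(nodes, 1.0)  # n1 and n2 share cell (0, 0); n2 wins
```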
+
+Some other approaches (outside the scope of our work) do not consider a
+synchronized and predetermined time-slot where the sensors are active or not.
+Indeed, each sensor maintains its own timer and its wake-up time is randomized
+\cite{Ye03} or regulated \cite{cardei2005maximum} over time.
+
+The MuDiLCO protocol (for Multiround Distributed Lifetime Coverage Optimization
+protocol) presented in this paper is an extension of the approach introduced
+in~\cite{idrees2014coverage}. In~\cite{idrees2014coverage}, the protocol is
+deployed over only two subregions. Simulation results have shown that, given
+the computational complexity, it is more beneficial to divide the area into
+several subregions. Compared to our previous paper, in this one we study the
+possibility of dividing the sensing phase into multiple rounds and we also add
+an improved model of energy consumption to assess the efficiency of our
+approach. In fact, in this paper we perform a multiround optimization, whereas
+it was a single-round optimization in our previous work. \textcolor{green}{The idea is to take advantage of the pre-sensing phase
+to plan the sensors' activity for several rounds instead of one, thus saving energy. In addition, when the optimization problem becomes more complex, its resolution is stopped after a given time threshold}.
+
+\iffalse