{\bf Coverage}

The coverage problems most discussed in the literature can be classified
into two types~\cite{ma10}: area coverage (also called full or blanket
coverage) and target coverage. The area coverage problem consists in
finding a minimum number of working sensors such that every physical
point of the area lies within the sensing range of at least one working
sensor node. The target coverage problem requires covering only a finite
number of discrete points, called targets; this type of coverage has
mainly military applications. Our work concentrates on area coverage
through the design and implementation of a strategy that efficiently
selects the active nodes which must maintain both sensing coverage and
network connectivity while improving the lifetime of the wireless sensor
network. Requiring that all physical points of the considered region be
covered may, however, be too strict, especially when the sensor network
is not dense. Our approach therefore represents the area covered by a
sensor as a set of primary points and tries to maximize the total number
of primary points covered in each round, while minimizing overcoverage
(points covered by several active sensors simultaneously).

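This trade-off between coverage and overcoverage can be made explicit
with a small optimization sketch. The notation below ($P$ the set of
primary points, $A$ the set of active sensors, $c_p(A)$ the number of
active sensors whose sensing range contains the primary point $p$, and
$\alpha>0$ a penalty weight) is introduced here for illustration only and
is not the exact model used in this work:
\[
\max_{A}\ \sum_{p\in P}\min\big(c_p(A),1\big)\;-\;\alpha\sum_{p\in P}\max\big(c_p(A)-1,\,0\big),
\]
where the first term counts the primary points covered by at least one
active sensor and the second term penalizes the points covered by more
than one active sensor.
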
{\bf Lifetime}

Various definitions exist for the lifetime of a sensor
network~\cite{die09}. The main definitions proposed in the literature are
related to the remaining energy of the nodes or to the coverage
percentage. The lifetime of the network is mainly defined as the amount
of time during which the network can satisfy its coverage objective (the
amount of time during which the network can cover a given percentage of
its area or targets of interest). In this work, we assume that the
network is alive until all nodes have been drained of their energy or the
sensor network becomes disconnected, and we measure the coverage ratio
during the WSN lifetime. Network connectivity is important because an
active sensor node without connectivity towards a base station cannot
transmit information about an event occurring in the area it monitors.

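A coverage-based lifetime definition consistent with the description
above can be sketched as follows; the notation (coverage ratio
$\mathcal{C}(t)$ at time $t$ and threshold $\tau$) is introduced here
only as an illustration and is not necessarily the exact definition
adopted later in this work:
\[
\mathcal{C}(t)=\frac{\mbox{number of primary points covered at time } t}{\mbox{total number of primary points}},
\qquad
\mathcal{L}_{\tau}=\max\{\, t \mid \mathcal{C}(t)\ge\tau \mbox{ and the network is still connected} \,\}.
\]
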
{\bf Activity scheduling}

Activity scheduling consists in scheduling the activation and
deactivation of sensor nodes. The basic objective is to decide which
sensors are in which state (active or sleeping mode) and for how long, so
that the application coverage requirement can be guaranteed and the
network lifetime can be prolonged. Various approaches, including
centralized, distributed, and localized algorithms, have been proposed
for activity scheduling. In distributed algorithms, each node in the
network autonomously decides whether to turn itself on or off, using only
local neighbor information. In centralized algorithms, a central
controller (a node or base station) informs every sensor of the time
intervals during which it has to be activated.

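Most of the round-based protocols discussed in the remainder of this
section share the same overall skeleton, sketched below in Python for
illustration purposes only; the method names
(\texttt{exchange\_with\_neighbors}, \texttt{decide\_status},
\texttt{sense}, \texttt{sleep}) are placeholders and do not refer to an
actual implementation.
\begin{verbatim}
def round_based_scheduling(node, nb_rounds, round_duration):
    # Generic skeleton of a round-based activity-scheduling protocol;
    # all node methods are illustrative placeholders.
    for r in range(nb_rounds):
        info = node.exchange_with_neighbors()  # short information-exchange phase
        active = node.decide_status(info)      # local or leader-based decision
        if active:
            node.sense(round_duration)         # monitor the area during this round
        else:
            node.sleep(round_duration)         # save energy until the next round
\end{verbatim}
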
{\bf Distributed approaches}

Some distributed algorithms have been developed
in~\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02} to perform the
scheduling. Distributed algorithms typically operate in rounds of a
predetermined duration. At the beginning of each round, a sensor
exchanges information with its neighbors and decides either to remain
turned on or to go to sleep for the round. This decision is basically
made according to simple greedy criteria such as the largest uncovered
area~\cite{Berman05efficientenergy} or the maximum number of uncovered
targets~\cite{1240799}. In~\cite{Tian02}, the scheduling scheme is
divided into rounds, where each round has a self-scheduling phase
followed by a sensing phase. Each sensor broadcasts a message containing
its node ID and location to its neighbors at the beginning of each
round. A sensor determines its status by a rule named the off-duty
eligible rule, which tells it to turn off if its sensing area is covered
by its neighbors. A back-off scheme is introduced to let each sensor
delay the decision process by a random period of time, in order to avoid
simultaneous conflicting decisions between nodes and a resulting lack of
coverage in some areas. \cite{Prasad:2007:DAL:1782174.1782218} defines a
model for capturing the dependencies between different cover sets and
proposes a localized heuristic based on this dependency. The algorithm
consists of two phases: an initial setup phase during which each sensor
computes and prioritizes the covers, and a sensing phase during which
each sensor first decides its on/off status and then remains on or off
for the rest of the duration. The authors in~\cite{chin2007} propose a
novel distributed heuristic, named Distributed Energy-efficient
Scheduling for k-coverage (DESK), so that the energy consumption among
all the sensors is balanced and the network lifetime is maximized while
the coverage requirement is maintained. This algorithm works in rounds,
requires only 1-sensing-hop neighbor information, and lets a sensor
decide its status (active/sleep) based on its perimeter coverage,
computed through the k-Non-Unit-disk coverage algorithm proposed
in~\cite{Huang:2003:CPW:941350.941367}.

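The off-duty eligible rule of~\cite{Tian02} can be approximated by the
following Python sketch, which tests the coverage of a node's sensing
disk on sampled points instead of using the exact geometric computation
of the original paper; all names and parameters are illustrative.
\begin{verbatim}
import math, random

def disk_covered_by_neighbors(node, neighbors, r_sense, samples=64):
    # Approximate test: is every sampled point of the node's sensing
    # disk inside the sensing disk of at least one neighbor?
    for i in range(samples):
        angle = 2 * math.pi * i / samples
        radius = r_sense * math.sqrt(random.random())
        px = node[0] + radius * math.cos(angle)
        py = node[1] + radius * math.sin(angle)
        if not any(math.hypot(px - nx, py - ny) <= r_sense
                   for (nx, ny) in neighbors):
            return False   # at least one sampled point is left uncovered
    return True

def off_duty_decision(node, active_neighbors, r_sense):
    # In the protocol, a random back-off delay precedes this check so
    # that neighboring nodes do not switch off simultaneously; the delay
    # itself is omitted from this sketch.
    return disk_covered_by_neighbors(node, active_neighbors, r_sense)
\end{verbatim}
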
Some other approaches do not consider a synchronized and predetermined
period of time during which the sensors are active or not. Instead, each
sensor maintains its own timer, and its wake-up time is randomized
\cite{Ye03} or regulated \cite{cardei05} over time.
{\bf Centralized approaches}

Power-efficient centralized schemes differ according to several
criteria~\cite{Cardei:2006:ECP:1646656.1646898}, such as the coverage
objective (target coverage or area coverage), the node deployment method
(random or deterministic), and the heterogeneity of sensor nodes (common
sensing range, common battery lifetime). The major approach is to
divide/organize the sensors into a suitable number of set covers, where
each set completely covers the region of interest, and to activate these
set covers successively.

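The rationale behind maximizing the number of covers can be summarized by
a back-of-the-envelope relation. Assuming, for illustration only,
homogeneous sensors whose batteries allow a sensing time
$T_{\mathrm{batt}}$ and covers that each satisfy the coverage
requirement, activating $K$ disjoint covers one after the other yields a
network lifetime of roughly
\[
\mathcal{L} \approx K \cdot T_{\mathrm{batt}},
\]
so every additional cover directly extends the lifetime by one battery
duration.
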
The first algorithms proposed in the literature consider that the cover
sets are disjoint: a sensor node appears in exactly one of the generated
cover sets. For instance, Slijepcevic and Potkonjak
\cite{Slijepcevic01powerefficient} propose an algorithm which allocates
sensor nodes to mutually independent sets to monitor an area divided into
several fields. Their algorithm builds a cover set by including in
priority the sensor nodes which cover critical fields, that is to say
fields covered by the smallest number of sensors. The time complexity of
their heuristic is $O(n^2)$, where $n$ is the number of
sensors. \cite{cardei02}~describes a graph coloring technique to achieve
energy savings by organizing the sensor nodes into a maximum number of
disjoint dominating sets which are activated successively. The dominating
sets do not guarantee the coverage of the whole region of interest.
Abrams et al.~\cite{Abrams:2004:SKA:984622.984684} design three
approximation algorithms for a variation of the set k-cover problem,
where the objective is to partition the sensors into covers such that the
number of covers that include an area, summed over all areas, is
maximized. Their work builds upon previous work
in~\cite{Slijepcevic01powerefficient}, and the generated cover sets do
not provide complete coverage of the monitored zone.

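A greedy construction of disjoint covers in the spirit of the
critical-field priority described above can be sketched in Python as
follows; this fragment is a simplified illustration and not the algorithm
of~\cite{Slijepcevic01powerefficient}.
\begin{verbatim}
def greedy_disjoint_covers(fields, coverage):
    # fields:   list of field identifiers that must be monitored
    # coverage: dict mapping each sensor id to the set of fields it covers
    covers = []
    if not fields:
        return covers
    available = set(coverage)         # sensors not yet assigned to a cover
    while True:
        uncovered = set(fields)
        cover = set()
        while uncovered:
            # Critical field = uncovered field seen by the fewest available sensors.
            candidates = {f: [s for s in available - cover if f in coverage[s]]
                          for f in uncovered}
            critical = min(candidates, key=lambda f: len(candidates[f]))
            if not candidates[critical]:
                return covers         # some field can no longer be covered
            # Pick the candidate covering the largest number of uncovered fields.
            best = max(candidates[critical],
                       key=lambda s: len(coverage[s] & uncovered))
            cover.add(best)
            uncovered -= coverage[best]
        covers.append(cover)
        available -= cover            # disjointness: a sensor serves one cover only
\end{verbatim}
Giving priority to the most constrained (critical) field mirrors the
selection rule described above, while disjointness is enforced by
removing assigned sensors from the pool of available sensors.
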
In~\cite{Cardei:2005:IWS:1160086.1160098}, the authors propose a
heuristic to compute the disjoint set covers (DSC). In order to compute
the maximum number of covers, they first transform DSC into a
maximum-flow problem, which is then formulated as a mixed integer program
(MIP). Based on the solution of the MIP, they design a heuristic to
compute the final number of covers. The results show a slight performance
improvement in terms of the number of produced DSC in comparison
to~\cite{Slijepcevic01powerefficient}, but at the cost of a higher
execution time due to the complexity of solving the mixed integer
program.
Zorbas et al.~\cite{Zorbas2007} present B\{GOP\}, a centralized coverage
algorithm introducing a categorization of candidate sensors depending on
their coverage status and the notion of critical target, i.e., a target
associated with only a small number of sensors. The total running time of
their heuristic is $O(m n^2)$, where $n$ is the number of sensors and $m$
the number of targets. Compared to the results of the algorithm of
Slijepcevic and Potkonjak~\cite{Slijepcevic01powerefficient}, their
heuristic produces more cover sets with only a slight increase in
execution time.

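The disjoint set cover problem targeted by these centralized heuristics
admits a generic integer-programming formulation. The version below is
only a textbook-style sketch, with binary variables $x_{s,k}$ assigning
sensor $s$ to cover $k$ and $y_k$ activating cover $k$, for an upper
bound $K$ on the number of covers; it is not the exact MIP
of~\cite{Cardei:2005:IWS:1160086.1160098}:
\[
\begin{array}{lll}
\max & \displaystyle\sum_{k=1}^{K} y_k & \\[1mm]
\mbox{s.t.} & \displaystyle\sum_{k=1}^{K} x_{s,k} \,\le\, 1 & \mbox{for every sensor } s \mbox{ (disjointness)},\\[1mm]
            & \displaystyle\sum_{s\in S_j} x_{s,k} \,\ge\, y_k & \mbox{for every target } j \mbox{ and every cover } k,\\[1mm]
            & x_{s,k},\ y_k \in \{0,1\}, &
\end{array}
\]
where $S_j$ denotes the set of sensors able to cover target $j$.
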
In the case of non-disjoint algorithms~\cite{Manju2011}, sensors may
participate in more than one cover set. In some cases this may prolong
the lifetime of the network in comparison to disjoint cover set
algorithms, but designing algorithms for non-disjoint cover sets
generally induces a higher order of complexity. Moreover, in case of a
sensor failure, non-disjoint scheduling policies are less resilient and
less reliable, because a sensor may be involved in more than one cover
set. For instance, Cardei et al.~\cite{cardei05bis} present a linear
programming (LP) solution and a greedy approach to extend the sensor
network lifetime by organizing the sensors into a maximal number of
non-disjoint cover sets. Simulation results show that by allowing sensors
to participate in multiple sets, the network lifetime increases compared
with related work~\cite{Cardei:2005:IWS:1160086.1160098}.
In~\cite{berman04}, the authors have formulated the lifetime problem and
suggested another LP technique to solve it. A centralized solution based
on the Garg-K\"{o}nemann algorithm~\cite{garg98}, provably close to the
optimal solution, is also proposed.

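The LP view underlying these non-disjoint approaches can be sketched as
follows; the notation (candidate cover sets $C_1,\dots,C_m$, activation
durations $t_k$, and energy budget $E_s$ of sensor $s$ expressed as
sensing time) is introduced here only to illustrate the general idea and
is not the exact formulation of~\cite{cardei05bis} or~\cite{berman04}:
\[
\begin{array}{lll}
\max & \displaystyle\sum_{k=1}^{m} t_k & \\[1mm]
\mbox{s.t.} & \displaystyle\sum_{k \,:\, s \in C_k} t_k \,\le\, E_s & \mbox{for every sensor } s,\\[1mm]
            & t_k \ge 0 & \mbox{for } k=1,\dots,m.
\end{array}
\]
Because a sensor may belong to several cover sets, its energy budget is
shared among all the covers that use it, which is what allows
non-disjoint schedules to outlast disjoint ones in some configurations.
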
{\bf Our contribution}

There are three main questions which should be addressed to build a
scheduling strategy. We give a brief answer to these three questions to
describe our approach, before going into details in the subsequent
sections.
\begin{itemize}
\item {\bf How must the phases for information exchange, decision, and
  sensing be planned over time?} Our algorithm divides the timeline into
  a number of rounds. Each round contains four phases: Information
  Exchange, Leader Election, Decision, and Sensing.