X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/ThesisAli.git/blobdiff_plain/28fe5f530e0e9ae044a4463fd7eb3646cc5dd04c..0046bea715d2cca04631faad903ba3d2af54de4b:/CHAPITRE_02.tex?ds=inline diff --git a/CHAPITRE_02.tex b/CHAPITRE_02.tex index 0c823df..43a8692 100644 --- a/CHAPITRE_02.tex +++ b/CHAPITRE_02.tex @@ -9,12 +9,13 @@ \section{Introduction} \label{ch2:sec:01} -The main objective of deploying a large number of wireless sensor nodes in the target area of interest is to construct a WSN, which is responsible for monitoring and observation the sensing field, and detecting the required important event in the area of interest. The coverage problem represents the principle requirement in these applications. The main question that shared by these applications is how can the deployed wireless sensor nodes monitor the physical phenomenon properly. The coverage can be considered as one of the QoS (Quality of Service) parameters, and it is closely related to energy consumption. It represents the sensing task supplied by the wireless sensors in WSNs. +The main objective of deploying a large number of wireless sensor nodes in the target area of interest is to construct a WSN, which is responsible for monitoring the sensing field. The coverage problem represents the principal requirement in these applications. The main question shared by these applications is how the deployed wireless sensor nodes can properly monitor the physical phenomenon. Coverage can be considered as one of the QoS (Quality of Service) parameters, and it is closely related to energy consumption. It represents the sensing task supplied by the wireless sensors in WSNs. -The energy resource limitation of wireless sensor nodes has been considered as a big challenge in order to operate the WSN with less energy consumption whilst fulfill the coverage requirement. The main objective of scattering the wireless sensor nodes over the area of interest is to collect the sensed data of the physical phenomena for processing or reporting, where there are two types of reporting for sensed data in WSNs~\cite{ref138} like event-driven and on-demand. In the latter, the monitoring base station start the reporting operation by transmitting a request to the wireless sensor nodes so as to send their sensed data to the base station; for example, the inventory tracking application. In the former, the reporting operation is triggered by one or more wireless sensor nodes within the physical phenomena by transmitting their sensed data to the controlling base station; for instance, the forest fire detection application. The hybrid scheme of the two types is more flexible. +The energy resource limitation of wireless sensor nodes is a major challenge, so it is desirable to operate the WSN with as little energy consumption as possible whilst fulfilling the coverage requirement. The main objective of scattering the wireless sensor nodes over the area of interest is to collect the sensed data of the physical phenomena for processing or reporting, and there are two types of reporting of sensed data in WSNs~\cite{ref138}: event-driven and on-demand. In the latter, the monitoring base station starts the reporting operation by transmitting a request to the wireless sensor nodes so as to send their sensed data to the base station, as in an inventory tracking application.
In the former, the reporting operation is triggered by one or more wireless sensor nodes within the physical phenomena by transmitting their sensed data to the controlling base station; for instance, the forest fire detection application. +%In hybrid scheme of the two types is more flexible. -The ultimate goal of the coverage is to ensure that each point in the sensing field is within the sensing range of at least one sensor node. Some applications require high reliability to perform their tasks, so they need that every point in the sensing field is covered by more than one sensor node. In order to avoid the lack in monitoring the area of interest, it is necessary that the WSN are deployed with high density so as to exploit the overlapping among the sensor nodes and to prevent malfunction of sensor nodes in severe environments. The overlap can be exploited by choosing the minimum number of sensor nodes to perform the main tasks of the WSN in the sensing field and putting the rest sensor nodes in very low power sleep mode so as to prolong the network lifetime. This exploitation manner is called sensor activity scheduling that aims to set the activity state of each sensor node in the WSN so that the sensing field can be monitored for a long time as possible. The required level of coverage should be guaranteed by the activity-based scheduling scheme~\cite{ref139}. Many scheduling algorithms have been described in~\cite{ref58,ref57}. +The ultimate goal of the coverage is to ensure that each point in the sensing field is within the sensing range of at least one sensor node. Some applications require high reliability to perform their tasks, so they need that each point in the sensing field is covered by more than one sensor node. In order to avoid a lack of monitoring in the area of interest, it is necessary that WSNs are deployed with high density so as to exploit the overlapping among the sensor nodes and to prevent malfunction of sensor nodes in severe environments. The overlap can be exploited by choosing the minimum number of sensor nodes to perform the main tasks of the WSN in the sensing field and putting the remaining sensor nodes in very low power sleep mode so as to prolong the network lifetime. This exploitation manner, which is called sensor activity scheduling, aims to set the activity state of each sensor node in the WSN so that the sensing field can be monitored for as long as possible. The required level of coverage should be guaranteed by the activity-based scheduling scheme~\cite{ref139}. Many scheduling algorithms have been described in~\cite{ref58,ref57}. %This dissertation focuses on the problem of covering the area of interest as long as possible. Several proposed approaches to extend the network lifetime whilst maintaining the coverage have been viewed in this chapter. M. Cardei and J. Wu~\cite{ref113} have been surveyed the different coverage formulation models and their assumptions, as well as the solutions provided. In~\cite{ref105}, several coverage problems are presented from different angles, where the models and assumptions, as well as proposed solutions in the literatures, are described. In this dissertation, the main contribution of previous works that deal with the coverage problem have been addressed. We end this chapter by focusing on two algorithms, GAF~\cite{GAF} and DESK~\cite{DESK}, since they have been used for comparison against our coverage protocols. 
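To make the coverage requirement stated above concrete, the short sketch below is purely illustrative (it is not taken from the cited works); it assumes a Boolean disk sensing model with a common sensing range $R_s$, a discretized sensing field, and hypothetical function names. It checks whether every point of the field is covered by at least $k$ active sensors:

\begin{verbatim}
from math import hypot

def coverage_degree(point, active_sensors, Rs):
    """Number of active sensors whose sensing disk contains the point."""
    px, py = point
    return sum(1 for (sx, sy) in active_sensors
               if hypot(px - sx, py - sy) <= Rs)

def field_is_k_covered(grid_points, active_sensors, Rs, k=1):
    """True if every grid point of the discretized field is covered by at
    least k active sensors (k = 1 corresponds to simple full coverage)."""
    return all(coverage_degree(p, active_sensors, Rs) >= k
               for p in grid_points)
\end{verbatim}

An activity scheduling scheme then amounts to selecting, in each period, the smallest possible subset of active sensors for which such a test still holds.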
@@ -22,12 +23,8 @@ The ultimate goal of the coverage is to ensure that each point in the sensing fi %\section{Coverage Algorithms} %\label{ch2:sec:02} -\indent This chapter is dedicated to the various approaches proposed in the -literature for the coverage lifetime maximization problem, where the objective -is to optimally schedule sensors' activities in order to extend network lifetime -in WSNs. -In~\cite{ref105}, several coverage problems are presented from different angles, where the models and assumptions, as well as proposed solutions in the literatures, are described. -M. Cardei and J. Wu~\cite{ref113} survey the different coverage formulation models and their assumptions, as well as the solutions provided. They provide a taxonomy for coverage algorithms in WSNs according to several design choices: +\indent This chapter is dedicated to the various approaches proposed in the literature for the coverage lifetime maximization problem, where the objective +is to optimally schedule sensors' activities in order to extend network lifetime in WSNs. In~\cite{ref105}, several coverage problems are presented from different points of view, where the models and assumptions, as well as proposed solutions in the literatures, are described. M. Cardei and J. Wu~\cite{ref113} survey the different coverage formulation models and their assumptions, as well as the solutions provided. They provide a taxonomy for coverage algorithms in WSNs according to several design choices: \begin{enumerate} [(i)] \item Sensors scheduling algorithm implementation, i.e. centralized or distributed/localized algorithms. @@ -38,19 +35,28 @@ M. Cardei and J. Wu~\cite{ref113} survey the different coverage formulation mode \item Additional requirements for energy-efficient and connected coverage. \end{enumerate} -From our point of view, the choice of non-disjoint or disjoint cover sets (sensors participate or not in many cover sets), coverage type ( area, target, or barrier), coverage ratio, coverage degree (how many sensors are required to cover a target or an area) can be added to the above list. +From our point of view, the choice of non-disjoint or disjoint cover sets (sensors participate or not in many cover sets), coverage type (area, target, or barrier), coverage ratio, coverage degree (how many sensors are required to cover a target or an area) can be added to the above list. -Once a sensor nodes are deployed, a coverage algorithm is run to schedule the sensor nodes into cover sets so as to maintain sufficient coverage in the area of interest and extend the network lifetime. The WSN applications require either complete or partial to area coverage and for target coverage, all the target should be covered. This chapter concentrates only on area coverage and target coverage problems because it is possible to transform the area coverage problem to target ( or point) coverage problem and vice versa. We have excluded the barrier coverage problem from this discussion about the coverage problems because it is outside the scope of this dissertation. -This dissertation focuses mainly on the area coverage problem, where the ultimate goal of the area coverage problem is to choose the minimum number of sensor nodes to cover the whole sensing field. +Once sensor nodes are deployed, a coverage algorithm is run to schedule the sensor nodes into cover sets so as to maintain sufficient coverage in the area of interest and extend the network lifetime. 
+%The WSN applications require either complete or partial area coverage, while for target coverage, all the target should be covered. +This chapter concentrates only on area coverage and target coverage problems because it is possible to transform the area coverage problem into the target (or point) coverage problem and vice versa. We have excluded the barrier coverage problem from this discussion because it is outside the scope of this dissertation. +This dissertation mainly focuses on the area coverage problem, where the ultimate goal is to choose the minimum number of sensor nodes to cover the whole sensing field. %We have focused mainly on the area coverage problem. Therefore, we represent the sensing area of each sensor node in the sensing field as a set of primary points and then achieving full area coverage by covering all the points in the sensing field. The ultimate goal of the area coverage problem is to choose the minimum number of sensor nodes to cover the whole sensing region and prolonging the lifetime of the WSN. -Many centralized and distributed coverage algorithms for activity scheduling have been proposed in the literature and based on different assumptions and objectives. In centralized algorithms, a central controller makes all decisions and distributes the results to sensor nodes. The centralized algorithms have the advantage of requiring very low processing power from the sensor nodes which have usually limited processing capabilities. On the contrary, the exchange of packets in large WSNs may consume a considerable amount of energy in a centralized approach compared to a distributed one. Moreover, centralized approaches usually suffer from the scalability and reliability problems, making them less competitive as the network size increases. +Many centralized and distributed coverage algorithms for activity scheduling have been proposed in the literature, based on different assumptions and objectives. In centralized algorithms, a central controller (base station) makes all decisions and distributes the results to the sensor nodes. Centralized algorithms have the advantage of requiring very low processing power from the sensor nodes (except for the base station), which usually have limited processing capabilities. In contrast, the exchange of packets in large WSNs may consume a considerable amount of energy in a centralized approach compared to a distributed one, since packets are exchanged between the sensor nodes and the base station. Centralized algorithms provide solutions close to the optimal ones and activate fewer redundant sensor nodes while monitoring the sensing field. However, centralized approaches usually suffer from scalability and reliability problems, making them less competitive as the network size increases. -In a distributed algorithms, on the other hand, the decision process is localized in each individual sensor node, and the only information from neighboring nodes are used for the activity decision. Compared to centralized algorithms, distributed algorithms reduce the energy consumption required for radio communication and detection accuracy whilst increase the energy consumption for computation. Overall, distributed algorithms are more suitable for large-scale networks, but it can not give optimal (or near-optimal) solution based only on local information. Moreover, a recent study conducted in \cite{ref226} concludes that there is a threshold in terms of network size to switch from a distributed to a centralized algorithm.
Table~\ref{Table0:ch2} shows a comparison between the centralized coverage algorithms and the distributed coverage algorithms. +In distributed algorithms, on the other hand, the decision process is localized in each individual sensor node, and only information from neighboring nodes is used for the activity decision. Overall, distributed algorithms are more suitable for large-scale networks, but they cannot give an optimal (or near-optimal) solution based only on local information. They activate more redundant sensor nodes while monitoring the sensing field, and packets are exchanged only between sensor nodes and their neighbors. Distributed algorithms are also more robust against sensor failure. Moreover, a recent study conducted in \cite{ref226} concludes that there is a threshold in terms of network size to switch from a distributed to a centralized algorithm. +%%Table~\ref{Table0:ch2} shows a comparison between centralized coverage algorithms and distributed coverage algorithms. -\begin{table}[h] + +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% + +\iffalse + +\begin{table}[h!] \caption{Centralized Coverage Algorithms vs Distributed Coverage Algorithms} \begin{center} \begin{tabular}{ |p{3cm}|p{5cm}|p{5cm}|} @@ -58,7 +64,7 @@ In a distributed algorithms, on the other hand, the decision process is localize \textbf{\begin{center} Characteristics \end{center}} & \textbf{\begin{center} Centralized Coverage Algorithms \end{center}} & \textbf{\begin{center} Distributed Coverage Algorithms \end{center}}\\ \hline -\textbf{\begin{center} Computation \end{center}} & Require low processing power where the algorithm is executed only in one elected node. & Require large processing power due to execution the algorithm in every node in WSN. \\ \hline +\textbf{\begin{center} Computation \end{center}} & Require low processing power, since the algorithm is executed in only one node. & Require more processing power, since the algorithm is executed in every node of the WSN. \\ \hline \textbf{\begin{center} Communication \end{center}} & Sensor nodes communicate directly with the base station, therefore, they require low-power consumption for communication. & Sensor nodes require high power consumption for communication because of the frequent exchange of hello packets. \\ \hline @@ -68,7 +74,7 @@ In a distributed algorithms, on the other hand, the decision process is localize \textbf{\begin{center} Energy Consumption \end{center}} & Energy consumption is large especially when the network size and/or density increase. & Energy consumption is low because they have lower communication cost. \\ \hline -\textbf{\begin{center} Scalability \end{center}} & Scalable only with dividing the sensing field into smaller subregions. & More scalable for large networks. \\ \hline +\textbf{\begin{center} Scalability \end{center}} & Not scalable, but this problem can be overcome by dividing the sensing field into smaller subregions. & More scalable for large networks. \\ \hline \textbf{\begin{center} Reliability \end{center}} & Less robust against sensor failure. & More robust against sensor failure.
\\ \hline @@ -77,44 +83,46 @@ In a distributed algorithms, on the other hand, the decision process is localize \label{Table0:ch2} \end{table} +\fi -In this dissertation, the sensing field is divided into smaller subregions using divide-and-conquer method. The division continues until the distance between each two sensors inside the subregion is 3 or 2 hops maximum. This division makes our protocols more scalable for large networks, less energy consumer for communication, less processing power for decision, and more reliable against network failure. Our proposed protocols are distributed on the sensor nodes of the subregions. The protocols in each subregion work in independent and simultaneous way with the protocols in the other subregions. If the network is disconnected in one subregion, it will not affect the other subregions of the sensing field. There is no a fixed sensor node in the subregion for executing the optimization algorithm. Each period of the network lifetime, the sensor nodes in the subregion cooperate in order to select a sensor node to execute the optimization algorithm according to a predefined priority metrics. The resulted local optimal schedule of optimization algorithm is applied within the subregion. The elected sensor node sends a packet to every sensor within the subregion to inform him to stay active or sleep during this period. Each optimization algorithm in a subregion provides locally optimal solution, so the solution for all the sensing field is near-optimal. +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -Several algorithms to retain the coverage and maximize the network lifetime were proposed in~\cite{ref113,ref101,ref103,ref105}. Table~\ref{Table1:ch2} summarizes the main characteristics of some coverage approaches in previous literatures. In table~\ref{Table1:ch2}, the "SET K-COVER" characteristic refers to the maximum number of disjoint or non-disjoint sets of sensors such that each set cover can assure the coverage for the whole region. The K-COVER algorithm provides a solution with K cover sets in each execution. The k-coverage characteristic refers to that every point inside the monitored area is always covered by at least k active sensors. +In this dissertation, the sensing field is divided into smaller subregions using a divide-and-conquer method. The division continues until the distance between any two sensors inside a subregion is at most two or three hops. This division makes our protocols more scalable for large networks, less energy-consuming for communication and processing, and more reliable against network failure. Our proposed protocols are distributed on the sensor nodes of the subregions. The protocol in each subregion works independently of and simultaneously with the protocols in the other subregions. If the network is disconnected in one subregion, it will not affect the other subregions of the sensing field. There is no fixed sensor node in the subregion for executing the optimization algorithm. In each period of the network lifetime, the sensor nodes in the subregion cooperate to select a sensor node that executes the optimization algorithm, according to predefined priority metrics. The resulting locally optimal schedule of the optimization algorithm is applied within the subregion.
The elected sensor node sends a packet to every sensor within the subregion to inform it whether to stay active or go to sleep during this period. Each optimization algorithm in a subregion provides a locally optimal solution, so the solution for the whole sensing field is near-optimal. + +Several algorithms to maintain the coverage and maximize the network lifetime were proposed in~\cite{ref113,ref101,ref103,ref105}. +Table \ref{x11} summarizes the main characteristics of some coverage approaches in the previous literature. +In this table, the "SET K-COVER" characteristic refers to the maximum number of disjoint or non-disjoint sets of sensors such that each set cover can ensure the coverage of the whole region. The K-COVER algorithm provides a solution with K cover sets in each execution. The k-coverage characteristic refers to the fact that every point inside the monitored area is always covered by at least k active sensors. -\section{Centralized Algorithms} -\label{ch2:sec:02} -The major idea of most centralized algorithms is to divide/organize the sensors into a suitable number of cover sets, where each set completely covers an interest region and to activate these cover sets successively. The centralized algorithms always provide optimal or near-optimal solution since the algorithm has a global view of the whole network. Energy-efficient centralized approaches differ according to several criteria \cite{ref113}, such as the coverage objective (target coverage or area coverage), the node deployment method (random or deterministic), and the heterogeneity of sensor nodes (common sensing range, common battery lifetime). -The first algorithms proposed in the literature consider that the cover sets are disjoint: a sensor node appears in exactly one of the generated cover sets~\cite{ref114,ref115,ref116}. For instance, Slijepcevic and Potkonjak \cite{ref116} propose an algorithm, which allocates sensor nodes in mutually independent sets to monitor an area divided into several fields. Their algorithm builds a cover set by including in priority the sensor nodes, which cover critical fields, that is to say, fields that are covered by the smallest number of sensors. The time complexity of their heuristic is $O(n^2)$ where $n$ is the number of sensors. M. Cardei et al.~\cite{ref227}, suggest a graph coloring technique to achieve energy savings by organizing the sensor nodes into a maximum number of disjoint dominating sets, which are activated successively. They have defined the maximum disjoint dominating sets problem and they have produced a heuristic that computes the disjoint cover sets so as to monitor the area of interest. The dominating sets do not guarantee the coverage of the whole region of interest. Abrams et al.~\cite{ref114} design three approximation algorithms for a variation of the set k-cover problem, where the objective is to partition the sensors into covers such that the number of covers that include an area, summed over all areas, is maximized. -Their work builds upon previous work in~\cite{ref116} and the generated cover sets do not provide complete coverage of the monitoring zone. -The authors in~\cite{ref115} propose a heuristic to compute the Disjoint Set Covers (DSC). In order to compute the maximum number of covers, they first transform DSC into a maximum-flow problem, which is then formulated as a Mixed Integer Programming problem (MIP). Based on the solution of the MIP, they design a heuristic to compute the final number of covers.
The results show a slight performance improvement in terms of the number of produced DSC in comparison to~\cite{ref116}, but it incurs higher execution time due to the complexity of the mixed integer programming solving. Zorbas et al. \cite{ref228} present B\{GOP\}, a centralized target coverage algorithm introducing sensor candidate categorization depending on their coverage status and the notion of critical target to call targets that are associated with a small number of sensors. The total running time of their heuristic is $0(m n^2)$ where -$n$ is the number of sensors and $m$ the number of targets. Compared to algorithm's results of Slijepcevic and Potkonjak \cite{ref116}, their heuristic produces more cover sets with a slight growth rate in execution time. L. Liu et al.~\cite{ref150} formulate the maximum disjoint sets for maintaining target coverage and connectivity problem in WSN. They propose a graph theoretical framework to study the joint problem of connectivity and coverage in a WSN with random deployment of nodes with no restrictions on the sensing and communication ranges of nodes. They propose heuristic algorithms to find the suitable number of nodes to guarantee connectivity and coverage while maximizing network lifetime. -%This work did not take into account the sensor node failure, which is an unpredictable event because the two solutions are full centralized algorithms. -Y. Li et al.~\cite{ref142} present a framework with heuristic strategies to solve the area coverage problem. The framework converts any complete coverage problem to a partial coverage one with any coverage ratio. They execute a complete coverage algorithm to find full coverage sets with virtual radii and then transform to the coverage sets to a partial coverage sets by adjusting sensing radii . This framework has four strategies, two of them are designed for the network where the sensors have fixed sensing range and the other two are for the network where the sensors have adjustable sensing range. The properties of the algorithms can be maintained by this framework and the transformation process has a low execution time. The simulation results validate the efficiency of the four proposed strategies. More recently, Deschinkel and Hakem \cite{ref229} introduce a near-optimal heuristic algorithm for solving the target coverage problem in WSN. The sensor nodes are organized into disjoint cover sets, each capable of monitoring all the targets of the region of interest. %Those covers sets are scheduled periodically. -Their algorithm is able to construct the different cover sets in parallel. The results show that their algorithm achieves near-optimal solutions compared to the optimal ones obtained by the resolution of an integer programming. -%exact method. +\section{Centralized Algorithms} +\label{ch2:sec:02} +The major idea of most centralized algorithms is to divide/organize the sensors into a suitable number of cover sets and to activate these cover sets successively. The centralized algorithms always provide optimal or near-optimal solution since the algorithm has a global view of the whole network. Energy-efficient centralized approaches differ according to several criteria \cite{ref113}, such as the coverage objective (target coverage or area coverage), the node deployment method (random or deterministic), and the heterogeneity of sensor nodes (common sensing range, common battery lifetime). 
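To give a concrete picture of the cover set idea described above, the following sketch builds disjoint cover sets with a simple greedy rule. It is only an illustration and not a reimplementation of any of the algorithms discussed in this section; it assumes that each sensor knows the set of elementary fields (or targets) it covers, and all names are hypothetical.

\begin{verbatim}
# Illustrative greedy construction of disjoint cover sets.
# 'coverage' maps a sensor id to the set of field ids it monitors.
def disjoint_cover_sets(coverage, all_fields):
    unused = set(coverage)                 # sensors not yet assigned to a cover
    cover_sets = []
    while True:
        uncovered, current = set(all_fields), []
        candidates = set(unused)
        while uncovered and candidates:
            # pick the sensor covering the most still-uncovered fields
            best = max(candidates, key=lambda s: len(coverage[s] & uncovered))
            if not (coverage[best] & uncovered):
                break                      # no remaining sensor is useful
            current.append(best)
            uncovered -= coverage[best]
            candidates.remove(best)
        if uncovered or not current:       # remaining sensors cannot cover
            break                          # the whole region: stop
        cover_sets.append(current)
        unused -= set(current)             # disjointness: consume these sensors
    return cover_sets
\end{verbatim}

Activating the resulting cover sets one after the other extends the network lifetime roughly in proportion to the number of covers obtained, which is why most of the works below aim at maximizing that number.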
+The first algorithms proposed in the literature consider that the cover sets are disjoint: a sensor node appears in exactly one of the generated cover sets~\cite{ref114,ref115,ref116,ref227}. For instance, Slijepcevic and Potkonjak \cite{ref116} propose an algorithm, which allocates sensor nodes in mutually independent sets to monitor an area divided into several fields. Their algorithm builds a cover set by including in priority the sensor nodes which cover critical fields, that is to say, fields that are covered by the smallest number of sensors. The time complexity of their heuristic is $O(n^2)$ where $n$ is the number of sensors. +%%%M. Cardei et al.~\cite{ref227}, suggest a graph coloring technique to achieve energy savings by organizing the sensor nodes into a maximum number of disjoint dominating sets, which are activated successively. They have defined the maximum disjoint dominating sets problem and they have produced a heuristic that computes the disjoint cover sets so as to monitor the area of interest. The dominating sets do not guarantee the coverage of the whole region of interest. Abrams et al.~\cite{ref114} design three approximation algorithms for a variation of the set k-cover problem, where the objective is to partition the sensors into covers so that the number of covers that include an area, summed over all areas, is maximized. Their work is built upon previous work in~\cite{ref116} and the generated cover sets do not provide complete coverage of the monitoring zone. +%%%The authors in~\cite{ref115} propose a heuristic to compute the Disjoint Set Covers (DSC). In order to compute the maximum number of covers, they first transform DSC into a maximum-flow problem, which is then formulated as a Mixed Integer Programming problem (MIP). Based on the solution of the MIP, they design a heuristic to compute the final number of covers. The results show a slight performance improvement in terms of the number of produced DSC in comparison to~\cite{ref116}, but it incurs higher execution time due to the complexity of the mixed integer programming solving. Zorbas et al. \cite{ref228} present B\{GOP\}, a centralized target coverage algorithm introducing sensor candidate categorization depending on their coverage status and the notion of critical target to call targets that are associated with a small number of sensors. The total running time of their heuristic is $0(m n^2)$, where $n$ is the number of sensors and $m$ the number of targets. Compared to algorithm's results of Slijepcevic and Potkonjak \cite{ref116}, their heuristic produces more cover sets with a slight growth rate in execution time. +L. Liu et al.~\cite{ref150} formulate the maximum disjoint sets for maintaining target coverage and connectivity problem in WSN. They propose a graph theoretical framework to study the joint problem of connectivity and coverage in a WSN with random deployment of nodes with no restrictions on the sensing and communication ranges of nodes. They propose heuristic algorithms to find the suitable number of nodes to guarantee connectivity and coverage while maximizing network lifetime. +Y. Li et al.~\cite{ref142} present a framework with heuristic strategies to solve the area coverage problem. The framework converts any complete coverage problem into a partial coverage one with any coverage ratio. They execute a complete coverage algorithm to find full coverage sets with virtual radii and then transform the coverage sets to a partial coverage sets by adjusting sensing radii. 
This framework has four strategies, two of them are designed for network, where the sensors have fixed sensing range and the other two are for network, where the sensors have adjustable sensing range. The properties of the algorithms can be maintained by this framework and the transformation process has a low execution time. The simulation results validate the efficiency of the four proposed strategies. More recently, Deschinkel and Hakem \cite{ref229} introduce a near-optimal heuristic algorithm for solving the target coverage problem in WSN. The sensor nodes are organized into disjoint cover sets, each one is capable of monitoring all the targets of the region of interest. Their algorithm is able to construct the different cover sets in parallel. The results show that their algorithm achieves near-optimal solutions compared to the optimal ones obtained by the resolution of an integer programming problem. +%exact method. -In the case of non-disjoint algorithms~\cite{ref117}, sensors may participate in more than one cover set. In some cases, this may prolong the lifetime of the network in comparison to the disjoint cover set algorithms, but designing algorithms for non-disjoint cover sets generally induces a higher order of complexity. Moreover, in case of a sensor's failure, non-disjoint scheduling policies are less resilient and reliable because a sensor may be involved in more than one cover sets. For instance, Cardei et al.~\cite{ref167} -present a Linear Programming (LP) solution and a greedy approach to -extend the sensor network lifetime by organizing the sensors into a -maximal number of non-disjoint cover sets. Simulation results show -that by allowing sensors to participate in multiple sets, the network -lifetime increases compared with related work~\cite{ref115}. The authors in~\cite{ref148}, address the problem of minimum cost area coverage in which full coverage is performed by using the minimum number of sensors for an arbitrary geometric shape region. A geometric solution to the minimum cost coverage problem under a deterministic deployment is proposed. The probabilistic coverage solution which provides a relationship between the probability of coverage and the number of randomly deployed sensors in an arbitrarily-shaped region is suggested. -%The authors explained that with a random deployment about seven times more nodes are required to supply full coverage compared to deterministic deployment. -The work in~\cite{ref144} address the area coverage problem by proposing a Geometrically based Activity Scheduling scheme, named GAS, to fully cover the area of interest in WSNs. The authors deal with a small area, called target area coverage, which can be monitored by a single sensor instead of area coverage, which focuses on a large area that should be monitored by many sensors cooperatively. They explain that GAS is capable to monitor the target area by using a few sensors as possible and it can produce as many cover sets as possible. A novel area coverage method to divide the sensors called Node Coverage Grouping (NCG) is suggested~\cite{ref147}. The sensors in the connectivity group are within sensing range of each other, and the data collected by them in the same group are supposed to be similar. They prove that dividing N sensors via NCG into connectivity groups is an NP-hard problem. So, a heuristic algorithm of NCG with time complexity of $O(n^3)$ is proposed. 
-For some applications, such as monitoring an ecosystem with extremely diversified environment, it might be premature assumption that sensors near to each other sense similar data. The problem of k-coverage over the area of interest in WSNs is addressed in~\cite{ref152}. It is mathematically formulated and the spatial sensor density for full k-coverage is determined. The relation between the communication range and the sensing range is constructed by this work to retain the k-coverage and connectivity in WSN. After that, four configuration protocols are proposed for treating the k-coverage in WSNs. Simulation results show that their protocols outperform an existing distributed k-coverage configuration protocol. The work presented in~\cite{ref151} solves the area coverage and connectivity problem in sensor networks in an integrated way. The network lifetime is divided into a fixed number of rounds. A coverage bitmap of sensors of the domain is generated in each round and based on this bitmap, it is decided which sensors stay active or go to sleep. They check the connection of the graph via laplacian of the adjacency graph of active sensors in each round. %The generation of coverage bitmap by using Minkowski technique, the network is able to providing the desired ratio of coverage. -They define the connected coverage problem as an optimization problem and a centralized genetic algorithm is used to find the solution. -Recent studies show an increasing interest in the use of exact schemes to solve optimization problems in WSN \cite{ref230,ref231,ref121,ref122,ref120}. Column Generation (CG) has been widely used to address different versions of Maximum-network Lifetime Problem (MLP). CG decomposes the problem into a Restricted Master Problem (RMP) and a Pricing Subproblem (PS). The former maximizes lifetime using an incomplete set of columns, and the latter is used to identify new profitable columns. A. Rossi et al.~\cite{ref121} introduce an efficient implementation of a genetic algorithm based on CG to extend the lifetime and maximize target coverage in wireless sensor networks under bandwidth constraints. The authors show that the use of metaheuristic methods to solve PS in the context of CG allows to obtain optimal solutions quite fast and to produce high-quality solutions when the algorithm is stopped before returning an optimal solution. More recently, F. Castano et al. \cite{ref120} address the maximum network lifetime and the target coverage problem in WSNs with connectivity and coverage constraints. They consider two cases to schedule the activity of a set of sensor nodes, keeping them connected while network lifetime is maximized. First, the full coverage of the targets is required, and second only a fraction of the targets has to be monitored at any instant of time. They propose an exact approach based on column generation and boosted by a Greedy Randomized Adaptive Search Procedure (GRASP) and a Variable Neighborhood Search (VNS) to address both of these problems. Finally, a multiphase framework combining these two approaches is constructed sequentially using these two heuristics at each iteration of the column generation algorithm. The results show that combining the two heuristic methods enhances the results significantly. +In the case of non-disjoint algorithms~\cite{ref117,ref167,ref144,ref147,ref118}, sensors may participate in more than one cover set. 
In some cases, this may prolong the lifetime of the network in comparison to the disjoint cover set algorithms, but designing algorithms for non-disjoint cover sets generally induces a higher order of complexity. Moreover, in case of a sensor's failure, non-disjoint scheduling policies are less resilient and reliable because a sensor may be involved in more than one cover sets. For instance, +%%%Cardei et al.~\cite{ref167} present a Linear Programming (LP) solution and a greedy approach to extend the sensor network lifetime by organizing the sensors into a maximal number of non-disjoint cover sets. Simulation results show that by allowing sensors to participate in multiple sets, the network lifetime increases compared with related work~\cite{ref115}. +the authors in~\cite{ref148}, address the problem of minimum cost area coverage in which full coverage is performed by using the minimum number of sensors for an arbitrary geometric shape region. A geometric solution to the minimum cost coverage problem under a deterministic deployment is proposed. The probabilistic coverage solution which provides a relationship between the probability of coverage and the number of randomly deployed sensors in an arbitrarily-shaped region is suggested. +%%%The work in~\cite{ref144} addresses the area coverage problem by proposing a Geometrically based Activity Scheduling scheme, named GAS, to fully cover the area of interest in WSNs. The authors deal with a small area, called target area coverage, which can be monitored by a single sensor instead of area coverage, which focuses on a large area that should be monitored by many sensors cooperatively. They explain that GAS is capable to monitor the target area by using the fewest number of sensors and it can produce as many cover sets as possible. A novel area coverage method to divide the sensors called Node Coverage Grouping (NCG) is suggested~\cite{ref147}. The sensors in the connectivity group are within sensing range of each other and the data collected by those in the same group are supposed to be similar. They prove that dividing N sensors via NCG into connectivity groups is an NP-hard problem. So, a heuristic algorithm of NCG with time complexity of $O(n^3)$ is proposed. For some applications, such as monitoring an ecosystem with extremely diversified environment, it might be a premature assumption that sensors near to each other sense similar data. +The problem of k-coverage over the area of interest in WSNs is addressed in~\cite{ref152}. It is mathematically formulated and the spatial sensor density for full k-coverage is determined. The relation between the communication range and the sensing range is constructed by this work to retain the k-coverage and connectivity in WSN. After that, four configuration protocols are proposed for treating the k-coverage in WSNs. Simulation results show that their protocols outperform an existing distributed k-coverage configuration protocol. The work presented in~\cite{ref151} solves the area coverage and connectivity problem in sensor networks in an integrated way. The network lifetime is divided into a fixed number of rounds. A coverage bitmap of sensors of the domain is generated in each round and based on this bitmap, it is decided which sensors stay active or go to sleep. They check the connection of the graph via laplacian of the adjacency graph of active sensors in each round. 
They define the connected coverage problem as an optimization problem and a centralized genetic algorithm is used to find the solution. Recent studies show an increasing interest in the use of exact schemes to solve optimization problems in WSNs \cite{ref230,ref231,ref121,ref122,ref120}. Column Generation (CG) has been widely used to address different versions of Maximum-network Lifetime Problem (MLP). CG decomposes the problem into a Restricted Master Problem (RMP) and a Pricing Subproblem (PS). The former maximizes lifetime using an incomplete set of columns and the latter is used to identify new profitable columns. +%%%A. Rossi et al.~\cite{ref121} introduce an efficient implementation of a genetic algorithm based on CG to extend the lifetime and maximize target coverage in wireless sensor networks under bandwidth constraints. The authors show that the use of metaheuristic methods to solve PS in the context of CG allows to obtain optimal solutions quite fast and to produce high-quality solutions when the algorithm is stopped before returning an optimal solution. More recently, + F. Castano et al. \cite{ref120} address the maximum network lifetime and the target coverage problem in WSNs with connectivity and coverage constraints. They consider two cases to schedule the activity of a set of sensor nodes, keeping them connected while network lifetime is maximized. First, the full coverage of the targets is required, second only a fraction of the targets has to be monitored at any instant of time. They propose an exact approach based on column generation boosted by a Greedy Randomized Adaptive Search Procedure (GRASP) and a Variable Neighborhood Search (VNS) to address both of these problems. Finally, a multiphase framework combining these two approaches is constructed sequentially using these two heuristics at each iteration of the column generation algorithm. The results show that combining the two heuristic methods enhances the results significantly. -More recently, the authors in~\cite{ref118}, consider an area coverage optimization algorithm based on linear programming approach to select the minimum number of working sensor nodes, in order to preserve a maximum coverage and to extend the lifetime of the network. The experimental results show that linear programming can provide a fewest number of active nodes and maximize the network lifetime coverage. M. Rebai et al.~\cite{ref141}, formulate the problem of full grid area coverage problem using two integer linear programming models: the first, a model that takes into account only the overall coverage constraint; the second, both the connectivity and the full grid coverage constraints are taken into consideration. This work does not take into account the energy constraint. H. Cheng et al.~\cite{ref119} define a heuristic area coverage algorithm called Cover Sets Balance (CSB), which chooses a set of active nodes using the tuple (data coverage range, residual energy). Then, they introduce a new Correlated Node Set Computing (CNSC) algorithm to find the correlated node set for a given node. After that, they propose a High Residual Energy First (HREF) node selection algorithm to minimize the number of active nodes so as to prolong the network lifetime. X. Liu et al.~\cite{ref143} explain that in some applications of WSNs such as Structural Health Monitoring (SHM) and volcano monitoring, the traditional coverage model which is a geographic area defined for individual sensors is not always valid. 
For this reason, they define a generalized area coverage model, which is not required to have the coverage area of individual nodes, but only based on a function to determine whether a set of sensor nodes is capable of satisfy the requested monitoring task for a certain area. They propose two approaches for dividing the deployed nodes into suitable cover sets. +More recently, +%%%the authors in~\cite{ref118} consider an area coverage optimization algorithm based on linear programming approach to select the minimum number of working sensor nodes, in order to preserve a maximum coverage and to extend the lifetime of the network. The experimental results show that linear programming can provide a fewest number of active nodes and maximize the network lifetime coverage. +M. Rebai et al.~\cite{ref141}, formulate the problem of full grid area coverage problem using two integer linear programming models: the first, is a model that takes into account only the overall coverage constraint; the second, both the connectivity and the full grid coverage constraints are taken into consideration. This work does not consider the energy constraint. H. Cheng et al.~\cite{ref119} define a heuristic area coverage algorithm called Cover Sets Balance (CSB), which chooses a set of active nodes using the tuple (data coverage range, residual energy). Then, they introduce a new Correlated Node Set Computing (CNSC) algorithm to find the correlated node set for a given node. After that, they propose a High Residual Energy First (HREF) node selection algorithm to minimize the number of active nodes. X. Liu et al.~\cite{ref143} explain that in some applications of WSNs such as Structural Health Monitoring (SHM) and volcano monitoring, the traditional coverage model which is a geographic area defined for individual sensors is not always valid. For this reason, they define a generalized area coverage model, which is not required to have the coverage area of individual nodes, but only based on a function deciding whether a set of sensor nodes is capable of satisfy the requested monitoring task for a certain area. They propose two approaches for dividing the deployed nodes into suitable cover sets. @@ -127,25 +135,27 @@ More recently, the authors in~\cite{ref118}, consider an area coverage optimizat %In distributed and localized coverage algorithms, the required computation to schedule the activity of sensor nodes will be done by the cooperation among neighboring nodes. These algorithms may require more computation power for the processing by the cooperating sensor nodes, but they are more scalable for large WSNs. -Many distributed algorithms have been developed to perform the scheduling so as to preserve coverage, see for example \cite{ref123,ref124,ref125,ref126,ref109,ref127,ref128,ref97}. Localized and distributed algorithms generally result in non-disjoint set covers. +Many distributed algorithms have been developed to perform the scheduling to preserve coverage, see for example \cite{ref123,ref124,ref125,ref126,ref109,ref127,ref128,ref97}. Localized and distributed algorithms generally result in non-disjoint set covers. -X. Deng et al. \cite{ref133} formulate the area coverage problem as a decision problem, whose goal is to determine whether every point in the area of interest is monitored by at least k sensors. The authors prove that if the perimeters of sensors are sufficiently covered it will be the case for the whole area. 
They provide an algorithm in $O(nd~log~d)$ time to compute the perimeter coverage of each sensor, where $d$ denotes the maximum number of sensors that are neighboring to a sensor and $n$ is the total number of sensors in the network. +X. Deng et al. \cite{ref133} formulate the area coverage problem as a decision problem, whose goal is to determine if all points in the area of interest are monitored by at least k sensors. The authors prove that if the perimeters of sensors are sufficiently covered it will be the case for the whole area. They provide an algorithm in $O(nd~log~d)$ time to compute the perimeter coverage of each sensor, where $d$ denotes the maximum number of sensors that are neighbors of a sensor and $n$ is the total number of sensors in the network. %Their solutions can be translated to distributed protocols to solve the coverage problem. Distributed algorithms typically operate in rounds for a predetermined duration. At the beginning of each round, a sensor exchanges information with its neighbors and makes a decision to either remain turned on or to go to sleep for the round. This decision is basically made on simple greedy criteria like the largest uncovered area \cite{ref130} or maximum uncovered targets \cite{ref131}. -Cho et al.~\cite{ref145} propose a distributed node scheduling protocol, which can retain sensing coverage needed by applications and increase network lifetime via putting in sleep mode some redundant nodes. In this work, the Effective Sensing Area (ESA) concept of a sensor node is used, which refers to the sensing area that is not overlapping with another sensor's sensing area. A sensor node can determine whether it will be active or turned off by computing its ESA. The suggested work permits sensor nodes to be in sleep mode opportunistically whilst fulfill the needed sensing coverage. The authors in~\cite{ref146}, define a Maximum Sensing Coverage Region problem (MSCR) in WSNs and then propose a distributed algorithm to solve it. The maximum observed area is fully covered by a minimum active sensors. In this work, the major property is to get rid of the redundant sensors in high-density WSNs and putting them in sleep mode, and choosing a smaller number of active sensors so as to ensure the full area is k-covered, and all events appearing in that area can be precisely and timely detected. This algorithm minimizes the total energy consumption and increases the network lifetime. The Distributed Adaptive Sleep Scheduling Algorithm (DASSA) \cite{ref127} does not require location information of sensors while maintaining connectivity and satisfying a user-defined coverage target. In DASSA, nodes use the residual energy levels and feedback from the sink for scheduling the activity of their neighbors. This feedback mechanism reduces the randomness in scheduling that would otherwise occur due to the absence of location information. -A graph theoretical framework for connectivity-based area coverage with configurable coverage granularity is proposed~\cite{ref149}. A new coverage criterion and scheduling approach is proposed based on cycle partition. This method is capable of build a sparse coverage set in distributed way by means of only connectivity information. This work considers only that the communication range of the sensor is smaller two times the sensing range of sensor. Shibo et al.~\cite{ref137} express the area coverage problem as a minimum weight submodular set cover problem and propose a Distributed Truncated Greedy Algorithm (DTGA) to solve it. 
They take advantage from both temporal and spatial correlations between data sensed by different sensors, and leverage prediction, to improve the lifetime. The authors in \cite{ref160}, design an energy-efficient approach to area coverage problems by transforming the area coverage problem to the target coverage problem taking into account the intersection points among disks of sensors nodes or between disk of sensor nodes and boundaries. They propose two mechanisms for the converted target coverage problems to produce cover sets covering the sensing +Cho et al.~\cite{ref145} propose a distributed node scheduling protocol, which can retain sensing coverage needed by applications and increases network lifetime via putting in sleep mode some redundant nodes. In this work, the Effective Sensing Area (ESA) concept of a sensor node is used, which refers to the sensing area that is not overlapping with another sensor's sensing area. A sensor node can determine whether it will be active or turned off by computing its ESA. The suggested work permits sensor nodes to be in sleep mode opportunistically whilst fulfilling the needed sensing coverage. +J. M. Bahi et al. \cite{ref236,ref237} propose a distributed approach which consists of two steps: nodes localization and coverage scheduling. They suggest a mobile beacon to divide the area of interest into unit squares using Hilbert space filling curve method. They exploit the localization phase to construct sets of active nodes. They provide a local activity scheduling approach for the sensor nodes to ensure the area coverage and to prolong the network lifetime. The experiment results show an improvement in the network lifetime using their proposed distributed approach. +The authors in~\cite{ref146} define a Maximum Sensing Coverage Region problem (MSCR) in WSNs and then propose a distributed algorithm to solve it. The maximum observed area is fully covered by a minimum active sensors. In this work, the major property is to get rid of the redundant sensors in high-density WSNs and putting them in sleep mode. In addition, a smaller number of active sensors is chosen so as to ensure the full area is k-covered, and all events appearing in that area can be precisely and timely detected. This algorithm minimizes the total energy consumption and increases the network lifetime. The Distributed Adaptive Sleep Scheduling Algorithm (DASSA) \cite{ref127} does not require location information of sensors while maintaining connectivity and satisfying a user-defined coverage target. In DASSA, nodes use residual energy levels and feedback from the sink for scheduling the activity of their neighbors. This feedback mechanism reduces the randomness in scheduling that would otherwise occur due to the absence of location information. +A graph theoretical framework for connectivity-based area coverage with configurable coverage granularity is proposed~\cite{ref149}. A new coverage criterion and scheduling approach is proposed based on cycle partition. This method is able to build a sparse coverage set in distributed way by means of only connectivity information. This work only considers that the communication range of the sensor is two times smaller than the sensing one. Shibo et al.~\cite{ref137} express the area coverage problem as a minimum weight submodular set cover problem and propose a Distributed Truncated Greedy Algorithm (DTGA) to solve it. 
They take advantage from both temporal and spatial correlations between data sensed by different sensors, and leverage prediction, to improve the lifetime. The authors in \cite{ref160} design an energy-efficient approach to area coverage problems by transforming the area coverage problem into the target coverage problem, taking into account the intersection points among disks of sensor nodes or between disks of sensor nodes and boundaries. They propose two mechanisms for the converted target coverage problems to produce cover sets covering the sensing field completely. Simulation results show that this approach can prolong the lifetime of the network compared with other works. The works presented in~\cite{ref134,ref135,ref136} focus on coverage-aware, distributed energy-efficient, and distributed clustering methods respectively, which aim at extending the network lifetime, while the coverage is ensured. -In this dissertation, we focus in more details on two distributed coverage algorithms, GAF and DESK because we compared our proposed coverage optimization protocols with them during performance evaluation. +In this dissertation, we focus in more detail on two distributed coverage algorithms, GAF and DESK, because we compared our proposed coverage optimization protocols with them during the performance evaluation. GAF is chosen as a competitor because it is well known, easy to implement, and referred to by many authors in the literature. DESK is also selected as a competitor because it works in a round-based fashion (the network lifetime is divided into rounds), similarly to our approaches, and because it is a fully distributed coverage approach. -\subsection{GAF} +\subsection{Geographical Adaptive Fidelity (GAF)} \label{ch2:sec:03:1} -Xu et al. \cite{GAF} develop an algorithm, called Geographical Adaptive Fidelity (GAF). It uses geographic location information to divide the area of interest into fixed square grids. Within each grid, it keeps only one node staying awake to take the responsibility of sensing and communication. Each sensor node uses its GPS to associate itself with a point in the grid.Figure~\ref{gaf1} gives an example of fixed square grid in GAF. +GAF was developed by Xu et al. \cite{GAF}. It uses geographic location information to divide the area of interest into fixed square grids. Within each fixed square grid, it keeps only one node awake to take the responsibility of sensing and communication. Each sensor node uses its GPS to associate itself with a point in the grid. Figure~\ref{gaf1} gives an example of the fixed square grid in GAF. \begin{figure}[h!] \centering @@ -154,13 +164,13 @@ Xu et al. \cite{GAF} develop an algorithm, called Geographical Adaptive Fidelity \label{gaf1} \end{figure} -For two adjacent grids, (for example, A and B in figure~\ref{gaf1}) all sensor nodes inside A can communicate with sensor nodes inside B and vice versa. Therefore, all the sensor nodes are equivalent from the point of view the routing. The size of the fixed grid is based on the radio communication range $R_c$. It is supposed that the fixed grid is square with $r$ units on a side as shown in figure~\ref{gaf1}. The distance between the farthest two possible sensor nodes in two adjacent grids such as, B and C in figure~\ref{gaf1}, should not be greater than the radio communication range $R_c$.
For instance, the sensor node \textbf{2} of grid B can communicate with the sensor node \textbf{5} of grid C So, +For two adjacent square grids (for example, A and B in Figure~\ref{gaf1}), all sensor nodes inside A can communicate with sensor nodes inside B and vice versa. Therefore, all the sensor nodes are equivalent from the routing point of view. The size of the fixed grid is based on the radio communication range $R_c$. It is supposed that the fixed grid is square with $r$ units on a side as shown in Figure~\ref{gaf1}. The distance between the farthest sensor nodes in two adjacent squares, such as B and C in Figure~\ref{gaf1}, should not be greater than the radio communication range $R_c$. For instance, the sensor node \textbf{2} of grid B can communicate with the sensor node \textbf{5} of square grid C. Thus, \begin{eqnarray} Distance(2,5) \leq R_c \end{eqnarray} - +and \begin{eqnarray} r^2 + \left(2r \right)^2 \leq R_c^2 \end{eqnarray} @@ -169,29 +179,34 @@ or r \leq \dfrac{R_c}{\sqrt{5}} \end{eqnarray} -The sensor nodes in GAF can be in one of the three states: active, sleep, or discovery. Figure~\ref{gaf2} shows the state transition diagram. Each sensor node is initiated with discovery state. -In discovery state, the radio of each sensor node is turned on. Thereafter, the discovery messages are exchanged among the sensor nodes within the same grid. The discovery message consists of four fields, node id, grid id, estimated node active time (enat), and node state. The node uses its location and grid size to determine the grid id. +The sensor nodes in GAF can be in one of the following three states: Active, Sleeping, or Discovery. Figure~\ref{gaf2} shows the state transition diagram. Each sensor node is initialized in the Discovery state. +In the Discovery state, the radio of each sensor node is turned on. Thereafter, the discovery messages are exchanged among the sensor nodes within the same grid. The discovery message consists of four fields: node id, grid id, estimated node active time (enat), and node state. The node uses its location and grid size to determine the square grid id. \begin{figure}[h!] \centering \includegraphics[scale=0.4]{Figures/ch2/GAF2.eps} -\caption{ Example of fixed square grid in GAF.} +\caption{ State transitions in GAF.} \label{gaf2} \end{figure} -The sensor node sets a timer to $T_d$ seconds after entering in the discovery state. As soon as the timer fires, the sensor node broadcasts its discovery message and enters the active state. The active sensor node sets a timeout value $T_a$ to define how long it can stay in the active state. After $T_a$, the sensor node will return to the discovery state. Whilst, during its active state, it re-broadcasts its discovery message at intervals $T_d$ periodically. The sensor node with discovery or active state can change its state to sleep when it detects that some other equivalent node will handle routing inside the grid. The sensor nodes in the sleeping state wake up after a sleeping time $T_s$ and go back to the discovery state. In GAF, load balancing is performed by means of periodic election of the leader (i.e., the active node that handle the routing inside the fixed grid). Inside each fixed square grid, sensor nodes collaborate with each other to play different roles. For example, nodes will elect -one sensor node (based on the remaining energy of sensor nodes inside the fixed square grid) to stay awake for a certain period of time, and then the rest go to sleep.
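Returning to the grid geometry above, the following Python sketch gives a small numerical illustration of the bound $r \leq R_c/\sqrt{5}$ and of how a node can associate its GPS position with a fixed square grid. The helper names and the example values are ours and only illustrate the published rule; they are not code from~\cite{GAF}.

\begin{verbatim}
# Illustrative sketch (our helper names, not code from the GAF paper).
import math

def max_grid_side(r_c):
    # largest grid side r that still guarantees that any two nodes in
    # adjacent grids are within communication range: r <= R_c / sqrt(5)
    return r_c / math.sqrt(5)

def grid_id(x, y, r):
    # associate a node position (e.g. obtained from GPS) with its grid
    return (int(x // r), int(y // r))

r_c = 25.0
r = max_grid_side(r_c)
# worst case: opposite corners of two adjacent grids, separated by 2r on
# one axis and r on the other; with r = R_c/sqrt(5) this equals exactly R_c
worst_case = math.hypot(2 * r, r)
print(round(r, 2), worst_case <= r_c + 1e-9, grid_id(30.0, 7.0, r))
\end{verbatim}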
This sensor node is responsible for monitoring and reporting data to the base station on behalf of the nodes -in the square grid. +The sensor node sets a timer to $T_d$ seconds after entering the Discovery state. As soon as the timer fires, the sensor node broadcasts its discovery message and enters the Active state. The active sensor node sets a timeout value $T_a$ to define how long it can stay in the Active state. After $T_a$, the sensor node returns to the Discovery state, which gives other nodes within the same grid a chance to become Active. +%Whilst, during its active state, it re-broadcasts its discovery message at intervals $T_d$ periodically. +A sensor node in the Discovery or Active state can change its state to Sleeping when it detects that some other equivalent node will handle the routing inside the grid. The sensor nodes in the Sleeping state wake up after a sleeping time $T_s$ and go back to the Discovery state. In GAF, load balancing is performed by means of a periodic election of the leader (i.e., the active node that handles the routing inside the fixed grid). Inside each fixed square grid, sensor nodes collaborate with each other to play different roles. For example, nodes will elect +one sensor node (based on the remaining energy of the sensor nodes inside the fixed square grid) to stay awake for a certain period of time, and then the rest go to sleep. This sensor node is responsible for monitoring, routing, and reporting data to the base station on behalf of the nodes in the square grid. For nodes in the same state, GAF gives a higher rank to the nodes with a longer expected lifetime (enat); such nodes are therefore called high-rank nodes. %A rank-based election algorithm has been used to elect the leader. It is based on the remaining energy of sensor nodes inside the fixed square grid so as to extend the network lifetime. -\subsection{DESK} +\subsection{Distributed Energy-efficient Scheduling for K-coverage (DESK)} \label{ch2:sec:03:2} -The authors in~\cite{DESK} design a novel distributed heuristic, called Distributed Energy-efficient Scheduling for K-coverage (DESK), which ensures that the energy consumption among the sensors is balanced and the lifetime maximized while the coverage requirement is satisfied. This heuristic works in rounds, requires only one-hop neighbor information, and each sensor decides its status (active or sleep) based on the perimeter coverage model from~\cite{ref133}. - -DESK is based on the result from \cite{ref133}. In \cite{ref133}, the whole area is K-covered if and only if the perimeters of all sensors are k-covered. The coverage level of perimeter of a sensor $s_i$ is determined by calculating the angle corresponding to the arc that each of its neighbors covers its perimeter. Figure~\ref{figp}~(a) illuminates such arcs whilst figure~\ref{figp}~(b) shows the angles corresponding with those arcs, which were posted into the range [0,2$ \pi $]. According to figure~\ref{figp}~(b), the coverage level of sensor $s_i$ can be calculated via traversing the range from 0 to 2$ \pi $. +% The authors in~\cite{DESK} design a novel distributed heuristic, called Distributed Energy-efficient Scheduling for K-coverage (DESK), which +DESK is a novel distributed heuristic to ensure that the energy consumption among the sensors is balanced and the lifetime is maximized while the coverage requirement is satisfied~\cite{DESK}.
This heuristic works in rounds; it requires only one-hop neighbor information, and each sensor decides its status (Active or Sleep) based on the perimeter coverage model from~\cite{ref133}. +%DESK is based on the result from \cite{ref133}. +In DESK, based on the result of \cite{ref133}, the whole area is K-covered if and only if the perimeters of all sensors are K-covered. The coverage level of a sensor $s_i$ is determined by calculating, for each of its neighbors, the angle corresponding to the arc of $s_i$'s perimeter that this neighbor covers. Figure~\ref{figp}~(a) illustrates such arcs whilst Figure~\ref{figp}~(b) shows the angles corresponding to those arcs in the range $[0, 2\pi]$. According to Figure~\ref{figp}~(a) and (b), the coverage level of sensor $s_i$ can be calculated as follows. +%via traversing the range from 0 to 2$ \pi $. +For each sensor $s_j$ such that $d(s_i,s_j) < 2R_s$, we calculate the arc of $s_i$'s perimeter covered by $s_j$, denoted by $[\alpha_{j,L}, \alpha_{j,R}]$, where $\alpha = \arccos(d(s_i, s_j)/(2R_s))$ is the half-width of this arc and $d(s_i,s_j)$ is the Euclidean distance between $s_i$ and $s_j$. After that, we locate the points $\alpha_{j,L}$ and $\alpha_{j,R}$ of each neighboring sensor $s_j$ of $s_i$ on the line segment $[0, 2\pi]$. These points are sorted in ascending order in a list L. We traverse the line segment from 0 to $2\pi$ by visiting each element in the sorted list L from left to right and determine the perimeter coverage of $s_i$. Whenever an element $\alpha_{j,L}$ is traversed, the level of perimeter coverage is increased by one. Whenever an element $\alpha_{j,R}$ is traversed, the level of perimeter coverage is decreased by one (a small sketch of this sweep is given at the end of this subsection). + \begin{figure}[h!] \centering \begin{tabular}{@{}cr@{}} @@ -211,42 +226,44 @@ DESK is based on the result from \cite{ref133}. In \cite{ref133}, the whole area \label{desk} \end{figure} -Figure~\ref{desk} shows the DESK network time line. DESK works into rounds fashion. The network lifetime is divided into R rounds. Each round consists of two phases: decision phase and sensing phase. The length of round is dRound that means each sensor node executes this algorithm every dRound unit of time. The decision phase at the starting of each round should be taken within W unit of time, where $W<< dRound$ as shown in figure~\ref{desk}. All the sensor nodes should be temporarily awakened in the decision phase so as to decide its status. Every sensor node $s_i$ decides its status to be active or sleep after $w_i$ of waiting time. The waiting time $w_i$ is dynamic and it can be changed at any time based on the status of its sensor neighbors, the remaining energy $e_i$ of $s_i$, and its contribution $c_i$ in the coverage level of the network, where $c_i$ is defined as the number of the neighbors $n_i$ who need $s_i$ to be active. The waiting time is defined as follow +Figure~\ref{desk} shows the DESK network time line. DESK works in rounds: the network lifetime is divided into R rounds. Each round consists of two phases: a decision phase and a sensing phase. The length of a round is dRound, which means that each sensor node executes this algorithm every dRound units of time. The decision should be taken within W units of time, where $W \ll dRound$, as shown in Figure~\ref{desk}. All the sensor nodes should be temporarily awakened in the decision phase so as to decide their status. Every sensor node $s_i$ decides its status, active or sleep, after a waiting time $w_i$. The waiting time $w_i$ of node $s_i$ is dynamic.
It can change at any time based on the status of its neighbors, the remaining energy $e_i$ of $s_i$, and its contribution $c_i$ to the coverage level of the network. The contribution $c_i$ is defined as the number of neighbors which need $s_i$ to be active. The waiting time is defined as follows: \begin{equation} w_{i} = \left \{ \begin{array}{ll} - \dfrac{\eta}{n_i^\alpha l(e_i,r_i)^\beta} * W + z & \mbox{If $e_i \geq e_{threshold}$} \\ - W & \mbox{Otherwise.}\\ + \dfrac{\eta}{n_i^\alpha l(e_i,r_i)^\beta} * W + z & \mbox{if $e_i \geq e_{threshold}$} \\ + W & \mbox{otherwise,}\\ \end{array} \right. -%\label{eq12} \notag \end{equation} -Where $\alpha, \beta,$ and $\eta$ are constant, z is a random number between [0; d], where d is a time slot, to avoid the case where two sensors having the same $w_i$ to be active at the same time. $l(e_i, r_i)$ is the function computing the lifetime of sensor $s_i$ in terms of its current remaining energy $e_i$ and its sensing range $r_i$. -DESK uses two types of messages, mACTIVATE message by which a sensor informs others that it becomes active, and mASK2SLEEP by which a sensor suggests a neighbor to go to sleep due to its uselessness. The concept of uselessness or a redundant neighbor refers to one that does not contribute to the perimeter coverage of the considered sensor. This means that the segment of the perimeter of the considered sensor overlapping with that neighbor is already covered by active sensors. +where $\alpha, \beta,$ and $\eta$ are constants, $z$ is a random number in $[0, d]$, with $d$ a small time duration, used to avoid two sensors with the same waiting time becoming active at the same time. $l(e_i, r_i)$ is the function computing the lifetime of sensor $s_i$ in terms of its current remaining energy $e_i$ and its sensing range $r_i$ (an illustrative computation of $w_i$ is sketched at the end of this subsection). +DESK uses two types of messages: the mACTIVATE message, by which a sensor informs the others that it becomes active, and the mASK2SLEEP message, by which a sensor suggests that a neighbor go to sleep because it is useless. A neighbor is considered useless (or redundant) when it does not contribute to the perimeter coverage of the considered sensor, that is, when the segment of the perimeter of the considered sensor overlapping with that neighbor is already covered by active sensors. -Typically, the algorithm works as follows. At the beginning of each round, no sensors are active. All sensors are in listening mode, i.e. all wait for the time to make a decision while still doing sensing job. All the sensor nodes collect the information (coordinates, current residual energy, and sensing range) from the one-hop neighbors. Each sensor stores this information into a list L in the increasing order of the angle $\alpha $ . Each sensor node set its timer to $w_i$ and initially it is proposed that all of its neighbors need it to join the network. When the sensor node $s_j$ joins the network, it broadcasts a mACTIVATE message to inform all of its 1-hop neighbors about its status change. Its neighbors execute the perimeter coverage model to recalculate its coverage level. If it finds any neighbor u that is useless in covering its perimeter, i.e., the perimeter that u covers is covered by other active neighbors, it will send mASK2SLEEP message to that sensor u. When the sensor node receives mASK2SLEEP message, it updates its counter $n_i$, contribution $c_i$ to coverage level, and recalculate waiting time $w_i$. It then -check if its $n_i$ is decreased to 0 or not.
If $n_i$ of a sensor node is 0 (i.e., it receives mASK2SLEEP message from all of its neighbors), then it will send message mGOSLEEP to all of its neighbors telling them that it is about to go to sleep, and set a timer $R_i$ for waking up in next round and at last go to sleep. If the sensor node receives mGOSLEEP message, it removes the neighbor sending that message out of its list L. All the sensors have to decide its status in the decision phase. After that, the active sensors perform the sensing task during the sensing phase. -The period the average +Typically, the algorithm works as follows. At the beginning of each round, there is no active sensor. All sensors are in listening mode, i.e. they wait for the time to make a decision while still performing the sensing task. All the sensor nodes collect the information (coordinates, current residual energy, and sensing range) from their one-hop neighbors. Each sensor stores this information in a list L, in increasing order of the angle $\alpha$. Each sensor sets its timer to $w_i$, initially assuming that all of its neighbors need it to join the network. When the sensor node $s_j$ joins the network, it broadcasts an mACTIVATE message to inform all of its one-hop neighbors about its status change. Its neighbors execute the perimeter coverage model to recalculate their coverage level. If a node finds any neighbor u that is useless in covering its perimeter, i.e., the part of the perimeter that u covers is already covered by other active neighbors, it sends an mASK2SLEEP message to that sensor u. When a sensor node receives an mASK2SLEEP message, it updates its counter $n_i$, its contribution $c_i$ to the coverage level, and recalculates its waiting time $w_i$. It then +checks whether its $n_i$ has decreased to 0 or not. If $n_i$ of a sensor node reaches 0 (i.e., it has received an mASK2SLEEP message from all of its neighbors), then it sends an mGOSLEEP message to all of its neighbors informing them that it is about to go to sleep, sets a timer $R_i$ for waking up in the next round, and finally goes to sleep. If a sensor node receives an mGOSLEEP message, it removes the neighbor sending that message from its list L. All the sensors have to decide their status in the decision phase. After that, the active sensors perform the sensing task during the sensing phase.
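As announced above, the following Python sketch illustrates the perimeter-coverage sweep on which DESK relies. It assumes a unit-disk model with a common sensing radius $R_s$ and known node coordinates; the function name, the splitting of arcs that wrap around $0/2\pi$, and the tie handling are illustrative choices of ours, not code from~\cite{DESK} or~\cite{ref133}.

\begin{verbatim}
# Illustrative sketch of the perimeter-coverage sweep (our assumptions).
import math

def perimeter_coverage_level(s_i, neighbors, r_s):
    """Smallest number of neighbors covering any point of s_i's perimeter."""
    xi, yi = s_i
    arcs = []
    for xj, yj in neighbors:
        d = math.hypot(xj - xi, yj - yi)
        if 0 < d < 2 * r_s:
            alpha = math.acos(d / (2 * r_s))           # half-width of the arc
            theta = math.atan2(yj - yi, xj - xi) % (2 * math.pi)
            lo, hi = theta - alpha, theta + alpha
            if lo < 0:                                  # arc wraps below 0
                arcs += [(0.0, hi), (lo + 2 * math.pi, 2 * math.pi)]
            elif hi > 2 * math.pi:                      # arc wraps above 2*pi
                arcs += [(lo, 2 * math.pi), (0.0, hi - 2 * math.pi)]
            else:
                arcs.append((lo, hi))
    if not arcs:
        return 0
    # sweep: +1 at each alpha_L (arc start), -1 at each alpha_R (arc end)
    events = sorted([(lo, +1) for lo, _ in arcs] + [(hi, -1) for _, hi in arcs])
    level, min_level, prev = 0, math.inf, 0.0
    for angle, delta in events:
        if angle > prev:                  # a stretch of perimeter at 'level'
            min_level = min(min_level, level)
            prev = angle
        level += delta
    if prev < 2 * math.pi:                # stretch after the last event
        min_level = min(min_level, level)
    return min_level

# Two neighbors at distance R_s on opposite sides of s_i leave part of
# the perimeter uncovered, so s_i's perimeter is not even 1-covered.
print(perimeter_coverage_level((0, 0), [(10, 0), (-10, 0)], r_s=10))
\end{verbatim}

By the result recalled at the beginning of this subsection, evaluating this level at every sensor is enough to decide whether the whole area is K-covered.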
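The waiting time $w_i$ given above can be written down in the same spirit. In the sketch below, the constants $\eta$, $\alpha$, $\beta$, the energy threshold, and the simple lifetime estimate standing in for $l(e_i, r_i)$ are placeholder assumptions of ours; their concrete definitions are not given in this chapter.

\begin{verbatim}
# Illustrative computation of the DESK waiting time w_i; the constants and
# the lifetime function below are placeholders, not values from DESK.
import random

def lifetime_estimate(e_i, r_i):
    # placeholder for l(e_i, r_i): remaining energy scaled by a sensing cost
    return e_i / (r_i ** 2)

def waiting_time(n_i, e_i, r_i, W, eta=1.0, alpha=1.0, beta=1.0,
                 e_threshold=0.1, d=0.01):
    if e_i < e_threshold:
        return W                           # nearly exhausted nodes wait longest
    z = random.uniform(0.0, d)             # breaks ties between sensors
    return eta / (n_i ** alpha * lifetime_estimate(e_i, r_i) ** beta) * W + z

# A node needed by many neighbors (large n_i) and with a long expected
# lifetime obtains a short waiting time, so it tends to become active first.
print(waiting_time(n_i=6, e_i=50.0, r_i=10.0, W=1.0))
\end{verbatim}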
+%The period the average -\begin{table} +\begin{table}[H] \begin{flushleft} \centering -\caption{Main characteristics of some coverage approaches in previous literatures.} +\caption{Main characteristics of some coverage approaches in literature.} +\label{x11} \begin{tabular}{@{} cl*{13}c @{}} + & & \\ & & \multicolumn{10}{c}{Characteristics} \\[2ex] - \multicolumn{2}{c}{\footnotesize Coverage Approach} & \mcrot{1}{l}{50}{\footnotesize Distributed} & \mcrot{1}{l}{50}{\footnotesize Centralized} & \mcrot{1}{l}{50}{ \footnotesize Area coverage} & \mcrot{1}{l}{50}{\footnotesize Target coverage} & \mcrot{1}{l}{50}{\footnotesize k-coverage} & \mcrot{1}{l}{50}{\footnotesize Heterogeneous nodes}& \mcrot{1}{l}{50}{\footnotesize Homogeneous nodes} & \mcrot{1}{l}{50}{\footnotesize Disjoint sets} & \mcrot{1}{l}{50}{\footnotesize Non-Disjoint sets} & \mcrot{1}{l}{50}{\footnotesize SET K-COVER } & \mcrot{1}{l}{50}{\footnotesize Work in Rounds} & \mcrot{1}{l}{50}{\footnotesize Adjustable Radius} \\ + \multicolumn{2}{c}{\footnotesize Coverage Approach} & \mcrot{1}{l}{50}{\footnotesize Distributed} & \mcrot{1}{l}{50}{\footnotesize Centralized} & \mcrot{1}{l}{50}{ \footnotesize Area coverage} & \mcrot{1}{l}{50}{\footnotesize Target coverage} & \mcrot{1}{l}{50}{\footnotesize k-coverage} & \mcrot{1}{l}{50}{\footnotesize Heterogeneous nodes}& \mcrot{1}{l}{50}{\footnotesize Homogeneous nodes} & \mcrot{1}{l}{50}{\footnotesize Disjoint sets} & \mcrot{1}{l}{50}{\footnotesize Non-Disjoint sets} & \mcrot{1}{l}{50}{\footnotesize SET K-COVER } & \mcrot{1}{l}{50}{\footnotesize Work in Rounds or Periods} & \mcrot{1}{l}{50}{\footnotesize Adjustable Radius} \\ \cmidrule[1pt]{2-14} +& \tiny J. M. Bahi et al. (2008)~\cite{ref236,ref237} & \OK & & \OK & & & & \OK & \OK & & & & &\\ -& \tiny Z. Abrams et al. (2004)~\cite{ref114} & \OK &\OK & \OK & & & &\OK & \OK & & \OK & & &\\ +%& \tiny Z. Abrams et al. (2004)~\cite{ref114} & \OK &\OK & \OK & & & &\OK & \OK & & \OK & & &\\ -& \tiny M. Cardei and D. Du (2005)~\cite{ref115} & & \OK & & \OK & & & \OK & \OK & & \OK & & &\\ +%& \tiny M. Cardei and D. Du (2005)~\cite{ref115} & & \OK & & \OK & & & \OK & \OK & & \OK & & &\\ & \tiny S. Slijepcevic and M. Potkonjak (2001)~\cite{ref116} & & \OK & \OK & & & & \OK & \OK & & \OK & & &\\ @@ -328,33 +345,27 @@ The period the average & \tiny X. Deng et al. 
(2005)~\cite{ref133} & \OK & & \OK & & \OK & & \OK & & \OK & & & &\\ -&\textbf{\textcolor{red}{ \tiny DiLCO Protocol (2014)}} & \textbf{\textcolor{red}{\OK}} & & \textbf{\textcolor{red}{\OK}} & & & \textbf{\textcolor{red}{\OK}} & \textbf{\textcolor{red}{\OK}} & & & &\textbf{\textcolor{red}{\OK}} & & \\ +&\textbf{\textcolor{red}{ \tiny DiLCO Protocol (2015)}} & \textbf{\textcolor{red}{\OK}} & & \textbf{\textcolor{red}{\OK}} & & & \textbf{\textcolor{red}{\OK}} & \textbf{\textcolor{red}{\OK}} & & \textbf{\textcolor{red}{\OK}} & &\textbf{\textcolor{red}{\OK}} & & \\ -&\textbf{\textcolor{red}{ \tiny MuDiLCO Protocol (2014)}} & \textbf{\textcolor{red}{\OK}} & & \textbf{\textcolor{red}{\OK}} & & & \textbf{\textcolor{red}{\OK}} & \textbf{\textcolor{red}{\OK}} & & & \textbf{\textcolor{red}{\OK}} &\textbf{\textcolor{red}{\OK}} & & \\ +&\textbf{\textcolor{red}{ \tiny MuDiLCO Protocol (2015)}} & \textbf{\textcolor{red}{\OK}} & & \textbf{\textcolor{red}{\OK}} & & & \textbf{\textcolor{red}{\OK}} & \textbf{\textcolor{red}{\OK}} & & \textbf{\textcolor{red}{\OK}} & \textbf{\textcolor{red}{\OK}} &\textbf{\textcolor{red}{\OK}} & & \\ -&\textbf{\textcolor{red}{ \tiny PeCO Protocol (2015)}} & \textbf{\textcolor{red}{\OK}} & & \textbf{\textcolor{red}{\OK}} & & & \textbf{\textcolor{red}{\OK}} & \textbf{\textcolor{red}{\OK}} & & & &\textbf{\textcolor{red}{\OK}} & & \\ +&\textbf{\textcolor{red}{ \tiny PeCO Protocol (2015)}} & \textbf{\textcolor{red}{\OK}} & & \textbf{\textcolor{red}{\OK}} & & & \textbf{\textcolor{red}{\OK}} & \textbf{\textcolor{red}{\OK}} & & \textbf{\textcolor{red}{\OK}} & &\textbf{\textcolor{red}{\OK}} & & \\ \cmidrule[1pt]{2-14} \end{tabular} \end{flushleft} - -\label{Table1:ch2} -\end{table} +\end{table} \section{Conclusion} \label{ch2:sec:05} -This chapter describes some coverage proposed problems in the literature, with their assumptions and proposed solutions. -The coverage problem is considered as an essential requirement for many applications in WSNs because the better coverage of an area of interest provides better sensing measurements of the physical phenomenon. Therefore, many extensive researches have been focused on that problem. These researches aim at designing mechanisms that efficiently manage or schedule the sensors after deployment, since sensor nodes have a limited battery life. -Many coverage algorithms for maintaining the coverage and improving the network lifetime in WSNs were proposed. On one hand, the full centralized coverage algorithms provide optimal or near optimal solution with low computation power but they deplete the battery power due to the communication overhead, as well as they are not scalable for large size networks. On the other hand, the distributed coverage algorithms provide a lower quality solution in comparison with centralized approaches and consume more power for computation but they are reliable, scalable, and provide low communication overhead in WSNs. -%Whatever the case, this would result in a lower lifetime coverage in WSNs. -As shown in table \ref{Table0:ch2}, each of the two coverage approaches has advantages and disadvantages. Therefore, each approach can be used based on the application requirements. We conclude from this chapter that it is desirable to introduce an hybrid approach to take into account the advantages of both centralized and distributed coverage approaches. This hybrid approaches can provide a good quality coverage and prolong the network lifetime. 
- - +This chapter describes some coverage problems in the literature, with their assumptions and proposed solutions. +Coverage is considered as an essential requirement for many applications in WSNs because the better the coverage of an area of interest is, the better the sensing measurements of the physical phenomenon are. Therefore, extensive research has been focused on that problem. This research aims at designing mechanisms that efficiently manage or schedule the sensors after deployment, since sensor nodes have a limited battery life. +Many coverage algorithms for maintaining the coverage and improving the network lifetime in WSNs have been proposed. On the one hand, the fully centralized coverage algorithms provide optimal or near-optimal solutions with a low computation load for the sensors (except for the base station), but they deplete the battery power due to the communication overhead, so they are not scalable for large networks. On the other hand, the distributed coverage algorithms provide lower quality solutions in comparison with centralized approaches, and the communication overhead between neighbors may be large, especially for dense networks. However, distributed coverage algorithms are reliable and scalable. The two coverage approaches have advantages and disadvantages. Therefore, each approach can be used depending on the application requirements. We conclude from this chapter that it is desirable to introduce a hybrid approach to take into account the advantages of both centralized and distributed coverage approaches. Such a hybrid approach can provide good quality coverage and prolong the network lifetime. @@ -371,11 +382,8 @@ As shown in table \ref{Table0:ch2}, each of the two coverage approaches has adva -%Many coverage algorithms for maintaining the coverage and improving the network lifetime in WSNs were proposed. On one hand, the centralized coverage algorithms provide optimal or near optimal solution with low computation power but they deplete the battery power due to the communication overhead, as well as they are not scalable for large size networks. On the other hand, the distributed coverage algorithms provide a lower quality solution in comparison with centralized approaches and consume more power for computation but they are reliable, scalable, and provide low communication overhead on the WSNs. - %none of them ensure the coverage for the sensing field with optimal minimum number of active sensor nodes, and for a long time as possible. In full centralized algorithms, the optimal solutions can be given by using optimization approaches, but in the same time, a high energy is consumed for the execution time of the algorithm and the communications among the sensors in the sensing field. Therefore, the full centralized approaches are not a good candidate to be used especially in large WSNs. Whilst, a fully distributed algorithms can not give optimal solutions because these algorithms use only local information of the neighboring sensors, but in the same time, the energy consumption during the communications and executing the algorithm is highly lower. Whatever the case, this would result in a shorter lifetime coverage in WSNs -% Several centralized approaches have been demonstrated, where they are concentrated on modeling the coverage problem and provide the maximum cover set so as to extend the network lifetime. The proposed algorithms are executed in a central node and based on global information.
The central node transmits the resulted schedule to other nodes in the network. Even if the centralized algorithms have been produced optimal or near optimal solutions, It seems to be difficult and unpractical to apply the full centralized approaches in WSNs. On the other hand, many distributed algorithms have been described. These approaches seem to be more realistic to be used in WSNs from point of view of designer, but they can not assure optimal or near optimal solutions so as to extend the network lifetime as long time as possible.