X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/UIC2013.git/blobdiff_plain/a96a3c20a51ff741b0b50ea566e9fab35f311ea1..39b48f032a13fc6524febe806cb05a43d808f416:/bare_conf.tex diff --git a/bare_conf.tex b/bare_conf.tex old mode 100755 new mode 100644 index 50c500f..252ea78 --- a/bare_conf.tex +++ b/bare_conf.tex @@ -1,17 +1,19 @@ - \documentclass[conference]{IEEEtran} + \ifCLASSINFOpdf - + \else - + \fi \hyphenation{op-tical net-works semi-conduc-tor} -\usepackage{float} + +\usepackage{etoolbox} +\usepackage{float} \usepackage{epsfig} \usepackage{calc} \usepackage{times,amssymb,amsmath,latexsym} @@ -32,345 +34,215 @@ \usepackage{epsfig} \usepackage{caption} \usepackage{multicol} +\usepackage{times} +\usepackage{graphicx,epstopdf} +\epstopdfsetup{suffix=} +\DeclareGraphicsExtensions{.ps} +\DeclareGraphicsRule{.ps}{pdf}{.pdf}{`ps2pdf -dEPSCrop -dNOSAFER #1 \noexpand\OutputFile} \begin{document} - -\title{Energy-Efficient Activity Scheduling in Heterogeneous Energy Wireless Sensor Networks} - -% author names and affiliations -% use a multiple column layout for up to three different -% affiliations -\author{\IEEEauthorblockN{Ali Kadhum Idrees, Karine Deschinkel, Michel Salomon, and Rapha\"el Couturier } -\IEEEauthorblockA{FEMTO-ST Institute, UMR 6174 CNRS, University of Franche-Comte, Belfort, France \\ -Email: ali.idness@edu.univ-fcomte.fr, $\lbrace$karine.deschinkel, michel.salomon, raphael.couturier$\rbrace$@univ-fcomte.fr} -%\email{\{ali.idness, karine.deschinkel, michel.salomon, raphael.couturier\}@univ-fcomte.fr} -%\and -%\IEEEauthorblockN{Homer Simpson} -%\IEEEauthorblockA{FEMTO-ST Institute, UMR CNRS, University of Franche-Comte, Belfort, France} -%\and -%\IEEEauthorblockN{James Kirk\\ and Montgomery Scott} -%\IEEEauthorblockA{FEMTO-ST Institute, UMR CNRS, University of Franche-Comte, Belfort, France} -} +% +% paper title +% can use linebreaks \\ within to get better formatting as desired +\title{Coverage and Lifetime Optimization \\ +in Heterogeneous Energy Wireless Sensor Networks} + +\author{\IEEEauthorblockN{Ali Kadhum Idrees, Karine Deschinkel, Michel Salomon, +and Rapha\"el Couturier} +\IEEEauthorblockA{FEMTO-ST Institute, UMR 6174 CNRS \\ +University of Franche-Comt\'e \\ +Belfort, France\\ +Email: ali.idness@edu.univ-fcomte.fr, $\lbrace$karine.deschinkel, michel.salomon, +raphael.couturier$\rbrace$@univ-fcomte.fr}} \maketitle \begin{abstract} One of the fundamental challenges in Wireless Sensor Networks (WSNs) -is the coverage preservation and the extension of the network lifetime +is the coverage preservation and the extension of the network lifetime continuously and effectively when monitoring a certain area (or -region) of interest. In this paper a coverage optimization protocol to -improve the lifetime in heterogeneous energy wireless sensor networks -is proposed. The area of interest is first divided into subregions -using a divide-and-conquer method and then the scheduling of sensor node -activity is planned for each subregion. The proposed scheduling -considers rounds during which a small number of nodes, remaining -active for sensing, is selected to ensure coverage. Each round -consists of four phases: (i)~Information Exchange, (ii)~Leader +region) of interest. In this paper, a coverage optimization protocol +to improve the lifetime in heterogeneous energy wireless sensor +networks is proposed. The area of interest is first divided into +subregions using a divide-and-conquer method and then the scheduling +of sensor node activity is planned for each subregion. 
The proposed
+scheduling considers rounds during which a small number of nodes,
+remaining active for sensing, is selected to ensure coverage. Each
+round consists of four phases: (i)~Information Exchange, (ii)~Leader
Election, (iii)~Decision, and (iv)~Sensing. The decision process is
-carried out by a leader node which solves an integer program.
+carried out by a leader node, which solves an integer program.
Simulation results show that the proposed approach can prolong the
network lifetime and improve the coverage performance.
\end{abstract}

-%\keywords{Area Coverage, Wireless Sensor Networks, lifetime Optimization, Distributed Protocol.}
+\begin{IEEEkeywords}
+Wireless Sensor Networks, Area Coverage, Network Lifetime,
+Optimization, Scheduling.
+\end{IEEEkeywords}
+%\keywords{Area Coverage, Network lifetime, Optimization, Distributed Protocol}

\IEEEpeerreviewmaketitle

\section{Introduction}

-\noindent Recent years have witnessed significant advances in wireless
-communications and embedded micro-sensing MEMS technologies which have
-led to the emergence of wireless sensor networks as one of the most promising
-technologies~\cite{asc02}. In fact, they present huge potential in
-several domains ranging from health care applications to military
-applications. A sensor network is composed of a large number of tiny
-sensing devices deployed in a region of interest. Each device has
-processing and wireless communication capabilities, which enable it to
-sense its environment, to compute, to store information and to deliver
-report messages to a base station.
-%These sensor nodes run on batteries with limited capacities. To achieve a long life of the network, it is important to conserve battery power. Therefore, lifetime optimisation is one of the most critical issues in wireless sensor networks.
-One of the main design issues in Wireless Sensor Networks (WSNs) is to
-prolong the network lifetime, while achieving acceptable quality of
-service for applications. Indeed, sensor nodes have limited resources
-in terms of memory, energy and computational power.
-
-Since sensor nodes have limited battery life and without being able to
-replace batteries, especially in remote and hostile environments, it
-is desirable that a WSN should be deployed with high density because
-spatial redundancy can then be exploited to increase the lifetime of
-the network. In such a high density network, if all sensor nodes were
-to be activated at the same time, the lifetime would be reduced. To
-extend the lifetime of the network, the main idea is to take advantage
-of the overlapping sensing regions of some sensor nodes to save
-energy by turning off some of them during the sensing phase.
-Obviously, the deactivation of nodes is only relevant if the coverage
-of the monitored area is not affected. Consequently, future softwares
-may need to adapt appropriately to achieve acceptable quality of
-service for applications. In this paper we concentrate on the area
+\noindent The fast development of low-cost sensor devices and wireless
+communications has allowed the emergence of WSNs. A WSN includes a
+large number of small, limited-power sensors that can sense, process,
+and transmit data over a wireless communication link. They communicate
+with each other through multi-hop wireless communications, cooperate
+to monitor the area of interest, and report the measured data to a
+monitoring center, called a sink, for
+analysis~\cite{Ammari01, Sudip03}.
+WSNs are used in several domains, including health, home,
+environmental, military, and industrial
+applications~\cite{Akyildiz02}. The coverage problem is one of the
+fundamental challenges in WSNs~\cite{Nayak04}; it consists in
+monitoring the area of interest efficiently and continuously. The
+limited energy of sensors represents the main challenge in WSN
+design~\cite{Ammari01}: it is difficult to replace and/or recharge
+their batteries because of the nature of the area of interest (such
+as hostile environments) and of the cost. A WSN should therefore be
+deployed with high density, because spatial redundancy can then be
+exploited to increase the lifetime of the network. However, turning
+on all the sensor nodes that monitor the same region at the same time
+reduces the lifetime of the network. To extend the lifetime of the
+network, the main idea is to take advantage of the overlapping
+sensing regions of some sensor nodes to save energy by turning off
+some of them during the sensing phase~\cite{Misra05}. WSNs thus
+require energy-efficient solutions to improve the network lifetime,
+which is constrained by the limited power of each sensor
+node~\cite{Akyildiz02}. In this paper, we concentrate on the area
coverage problem, with the objective of maximizing the network
lifetime by using an adaptive scheduling. The area of interest is
divided into subregions and an activity scheduling for sensor nodes is
-planned for each subregion.
- In fact, the nodes in a subregion can be seen as a cluster where
- each node sends sensing data to the cluster head or the sink node.
- Furthermore, the activities in a subregion/cluster can continue even
- if another cluster stops due to too many node failures.
-Our scheduling scheme considers rounds, where a round starts with a
-discovery phase to exchange information between sensors of the
-subregion, in order to choose in a suitable manner a sensor node to
-carry out a coverage strategy. This coverage strategy involves the
-solving of an integer program which provides the activation of the
-sensors for the sensing phase of the current round.
+planned for each subregion. In fact, the nodes in a subregion can be
+seen as a cluster where each node sends sensing data to the cluster
+head or the sink node. Furthermore, the activities in a
+subregion/cluster can continue even if another cluster stops due to
+too many node failures. Our scheduling scheme considers rounds, where
+a round starts with a discovery phase to exchange information between
+sensors of the subregion, in order to choose a suitable sensor node
+to carry out a coverage strategy. This coverage strategy involves
+solving an integer program, which provides the activation of the
+sensors for the sensing phase of the current round.

The remainder of the paper is organized as follows. The next section
% Section~\ref{rw}
reviews the related work in the field. Section~\ref{pd} is devoted
to the scheduling strategy for energy-efficient coverage.
-Section~\ref{cp} gives the coverage model formulation which is used to
-schedule the activation of sensors. Section~\ref{exp} shows the
-simulation results obtained using the discrete event simulator on
-OMNET++ \cite{varga}. They fully demonstrate the usefulness of the
-proposed approach. Finally, we give concluding remarks and some
-suggestions for future works in Section~\ref{sec:conclusion}.
+Section~\ref{cp} gives the coverage model formulation, which is used
+to schedule the activation of sensors. Section~\ref{exp} shows the
+simulation results obtained using the discrete event simulator OMNeT++
+\cite{varga}. They fully demonstrate the usefulness of the proposed
+approach. Finally, we give concluding remarks and some suggestions
+for future works in Section~\ref{sec:conclusion}.

\section{Related works}
\label{rw}
-
-\noindent This section is dedicated to the various approaches proposed
-in the literature for the coverage lifetime maximization problem,
-where the objective is to optimally schedule sensors' activities in
-order to extend network lifetime in a randomly deployed network. As
-this problem is subject to a wide range of interpretations, we have chosen
-to recall the main definitions and assumptions related to our work.
-
+\indent In this section, we only review some recent works dealing with
+the coverage lifetime maximization problem, where the objective is to
+optimally schedule sensors' activities in order to extend the network
+lifetime.
+
+In \cite{chin2007}, a novel distributed heuristic, called Distributed
+Energy-efficient Scheduling for k-coverage (DESK), is proposed. It
+ensures that the energy consumption among the sensors is balanced and
+the lifetime is maximized while the coverage requirement is
+maintained. This heuristic works in rounds, requires only 1-hop
+neighbor information, and lets each sensor decide its status (active
+or sleep) based on the perimeter coverage model proposed in
+\cite{Huang:2003:CPW:941350.941367}. More recently, Shibo et
+al. \cite{Shibo} expressed the coverage problem as a minimum weight
+submodular set cover problem and proposed a Distributed Truncated
+Greedy Algorithm (DTGA) to solve it. They take advantage of both
+temporal and spatial correlations between data sensed by different
+sensors, and leverage prediction to improve the lifetime. A
+Coverage-Aware Clustering Protocol (CACP), which uses a computation
+method to find the cluster size minimizing the average energy
+consumption rate per unit area, has been proposed by Bang et al. in
+\cite{Bang}. Their protocol is based on a cost metric that selects the
+redundant sensors with higher power as the best candidates for cluster
+heads and the active sensors that cover the area of interest most
+efficiently.
+
+% TO BE CONTINUED
+
+Zhixin et al. \cite{Zhixin} propose a Distributed Energy-Efficient
+Clustering with Improved Coverage (DEECIC) algorithm, which aims at
+clustering with the least number of cluster heads to cover the whole
+network and at assigning a unique ID to each node based on local
+information. In addition, this protocol periodically updates cluster
+heads according to the joint information of nodes' residual energy
+and distribution. Although DEECIC does not require knowledge of a
+node's geographic location, it guarantees full coverage of the
+network. However, the protocol does not perform any activity
+scheduling to set redundant sensors in passive mode in order to
+conserve energy.
+
+C. Liu and G. Cao \cite{Changlei} studied how to schedule the sensors'
+active time to maximize their coverage during a specified network
+lifetime. Their objective is to maximize the spatial-temporal coverage
+by scheduling the sensors' activity after they have been deployed.
+They proposed both centralized and distributed algorithms. The
+distributed parallel optimization protocol ensures that each sensor
+converges to local optimality without conflicting with the others.
+
+S. Misra et al. \cite{Misra} proposed a localized algorithm for
+coverage in sensor networks.
+The algorithm conserves energy while ensuring the network coverage by
+activating the subset of sensors with the minimum overlap area. The
+proposed method preserves the network connectivity by forming a
+network backbone.
+
+L. Zhang et al. \cite{Zhang} presented a novel distributed clustering
+algorithm called Adaptive Energy Efficient Clustering (AEEC) to
+maximize network lifetime. In this study, they introduced an
+optimization, which includes restricted global re-clustering,
+intra-cluster node sleep scheduling, and adaptive transmission range
+adjustment to conserve energy, while connectivity and coverage are
+ensured.
+
+J. A. Torkestani \cite{Torkestani} proposed a learning automata-based
+energy-efficient coverage protocol, named LAEEC, to construct a
+degree-constrained connected dominating set (DCDS) in WSNs. He shows
+that a correct choice of the degree constraint of the DCDS balances
+the network load on the active nodes and enhances the coverage and
+network lifetime.
+
+The main contribution of our approach is to address three questions
+that must be answered to build a scheduling strategy:
%\begin{itemize}
-%\item Area Coverage: The main objective is to cover an area. The area coverage requires
-%that the sensing range of working Active nodes cover the whole targeting area, which means any
-%point in target area can be covered~\cite{Mihaela02,Raymond03}.
-
-%\item Target Coverage: The objective is to cover a set of targets. Target coverage means that the discrete target points can be covered in any time. The sensing range of working Active nodes only monitors a finite number of discrete points in targeting area~\cite{Mihaela02,Raymond03}.
-
-%\item Barrier Coverage An objective to determine the maximal support/breach paths that traverse a sensor field. Barrier coverage is expressed as finding one or more routes with starting position and ending position when the targets pass through the area deployed with sensor nodes~\cite{Santosh04,Ai05}.
-%\end{itemize}
-{\bf Coverage}
-
-The most discussed coverage problems in literature can be classified
-into two types \cite{ma10}: area coverage (also called full or blanket
-coverage) and target coverage. An area coverage problem is to find a
-minimum number of sensors to work, such that each physical point in the
-area is within the sensing range of at least one working sensor node.
-Target coverage problem is to cover only a finite number of discrete
-points called targets. This type of coverage has mainly military
-applications. Our work will concentrate on the area coverage by design
-and implementation of a strategy which efficiently selects the active
-nodes that must maintain both sensing coverage and network
-connectivity and at the same time improve the lifetime of the wireless
-sensor network. But requiring that all physical points of the
-considered region are covered may be too strict, especially where the
-sensor network is not dense. Our approach represents an area covered
-by a sensor as a set of primary points and tries to maximize the total
-number of primary points that are covered in each round, while
-minimizing overcoverage (points covered by multiple active sensors
-simultaneously).
-
-{\bf Lifetime}
-
-Various definitions exist for the lifetime of a sensor
-network~\cite{die09}. The main definitions proposed in the literature are
-related to the remaining energy of the nodes or to the coverage percentage.
-The lifetime of the network is mainly defined as the amount -of time during which the network can satisfy its coverage objective (the -amount of time that the network can cover a given percentage of its -area or targets of interest). In this work, we assume that the network -is alive until all nodes have been drained of their energy or the -sensor network becomes disconnected, and we measure the coverage ratio -during the WSN lifetime. Network connectivity is important because an -active sensor node without connectivity towards a base station cannot -transmit information on an event in the area that it monitors. - -{\bf Activity scheduling} - -Activitiy scheduling is to schedule the activation and deactivation of -sensor nodes. The basic objective is to decide which sensors are in -what states (active or sleeping mode) and for how long, so that the -application coverage requirement can be guaranteed and the network -lifetime can be prolonged. Various approaches, including centralized, -distributed, and localized algorithms, have been proposed for activity -scheduling. In distributed algorithms, each node in the network -autonomously makes decisions on whether to turn on or turn off itself -only using local neighbor information. In centralized algorithms, a -central controller (a node or base station) informs every sensors of -the time intervals to be activated. - -{\bf Distributed approaches} - -Some distributed algorithms have been developed -in~\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02} to perform the -scheduling. Distributed algorithms typically operate in rounds for -a predetermined duration. At the beginning of each round, a sensor -exchanges information with its neighbors and makes a decision to either -remain turned on or to go to sleep for the round. This decision is -basically made on simple greedy criteria like the largest uncovered -area \cite{Berman05efficientenergy}, maximum uncovered targets -\cite{1240799}. In \cite{Tian02}, the scheduling scheme is divided -into rounds, where each round has a self-scheduling phase followed by -a sensing phase. Each sensor broadcasts a message containing the node ID -and the node location to its neighbors at the beginning of each round. A -sensor determines its status by a rule named off-duty eligible rule -which tells him to turn off if its sensing area is covered by its -neighbors. A back-off scheme is introduced to let each sensor delay -the decision process with a random period of time, in order to avoid -simultaneous conflicting decisions between nodes and lack of coverage on any area. -\cite{Prasad:2007:DAL:1782174.1782218} defines a model for capturing -the dependencies between different cover sets and proposes localized -heuristic based on this dependency. The algorithm consists of two -phases, an initial setup phase during which each sensor computes and -prioritizes the covers and a sensing phase during which each sensor -first decides its on/off status, and then remains on or off for the -rest of the duration. Authors in \cite{chin2007} propose a novel -distributed heuristic named Distributed Energy-efficient Scheduling -for k-coverage (DESK) so that the energy consumption among all the -sensors is balanced, and network lifetime is maximized while the -coverage requirement is being maintained. 
This algorithm works in -round, requires only 1-sensing-hop-neighbor information, and a sensor -decides its status (active/sleep) based on its perimeter coverage -computed through the k-Non-Unit-disk coverage algorithm proposed in -\cite{Huang:2003:CPW:941350.941367}. - -Some other approaches do not consider a synchronized and predetermined -period of time where the sensors are active or not. Indeed, each -sensor maintains its own timer and its wake-up time is randomized -\cite{Ye03} or regulated \cite{cardei05} over time. -%A ecrire \cite{Abrams:2004:SKA:984622.984684}p33 - -%The scheduling information is disseminated throughout the network and only sensors in the active state are responsible -%for monitoring all targets, while all other nodes are in a low-energy sleep mode. The nodes decide cooperatively which of them will remain in sleep mode for a certain -%period of time. - - %one way of increasing lifeteime is by turning off redundant nodes to sleep mode to conserve energy while active nodes provide essential coverage, which improves fault tolerance. - -%In this paper we focus on centralized algorithms because distributed algorithms are outside the scope of our work. Note that centralized coverage algorithms have the advantage of requiring very low processing power from the sensor nodes which have usually limited processing capabilities. Moreover, a recent study conducted in \cite{pc10} concludes that there is a threshold in terms of network size to switch from a localized to a centralized algorithm. Indeed the exchange of messages in large networks may consume a considerable amount of energy in a localized approach compared to a centralized one. - -{\bf Centralized approaches} - -Power efficient centralized schemes differ according to several -criteria \cite{Cardei:2006:ECP:1646656.1646898}, such as the coverage -objective (target coverage or area coverage), the node deployment -method (random or deterministic) and the heterogeneity of sensor nodes -(common sensing range, common battery lifetime). The major approach is -to divide/organize the sensors into a suitable number of set covers -where each set completely covers an interest region and to activate -these set covers successively. - -The first algorithms proposed in the literature consider that the cover -sets are disjoint: a sensor node appears in exactly one of the -generated cover sets. For instance, Slijepcevic and Potkonjak -\cite{Slijepcevic01powerefficient} propose an algorithm which -allocates sensor nodes in mutually independent sets to monitor an area -divided into several fields. Their algorithm builds a cover set by -including in priority the sensor nodes which cover critical fields, -that is to say fields that are covered by the smallest number of -sensors. The time complexity of their heuristic is $O(n^2)$ where $n$ -is the number of sensors. \cite{cardei02}~describes a graph coloring -technique to achieve energy savings by organizing the sensor nodes -into a maximum number of disjoint dominating sets which are activated -successively. The dominating sets do not guarantee the coverage of the -whole region of interest. Abrams et -al.~\cite{Abrams:2004:SKA:984622.984684} design three approximation -algorithms for a variation of the set k-cover problem, where the -objective is to partition the sensors into covers such that the number -of covers that includes an area, summed over all areas, is maximized. 
-Their work builds upon previous work -in~\cite{Slijepcevic01powerefficient} and the generated cover sets do -not provide complete coverage of the monitoring zone. - -%examine the target coverage problem by disjoint cover sets but relax the requirement that every cover set monitor all the targets and try to maximize the number of times the targets are covered by the partition. They propose various algorithms and establish approximation ratio. - -In~\cite{Cardei:2005:IWS:1160086.1160098}, the authors propose a -heuristic to compute the disjoint set covers (DSC). In order to -compute the maximum number of covers, they first transform DSC into a -maximum-flow problem, which is then formulated as a mixed integer -programming problem (MIP). Based on the solution of the MIP, they -design a heuristic to compute the final number of covers. The results -show a slight performance improvement in terms of the number of -produced DSC in comparison to~\cite{Slijepcevic01powerefficient}, but -it incurs higher execution time due to the complexity of the mixed -integer programming solving. %Cardei and Du -\cite{Cardei:2005:IWS:1160086.1160098} propose a method to efficiently -compute the maximum number of disjoint set covers such that each set -can monitor all targets. They first transform the problem into a -maximum flow problem which is formulated as a mixed integer -programming (MIP). Then their heuristic uses the output of the MIP to -compute disjoint set covers. Results show that this heuristic -provides a number of set covers slightly larger compared to -\cite{Slijepcevic01powerefficient} but with a larger execution time -due to the complexity of the mixed integer programming resolution. -Zorbas et al. \cite{Zorbas2007} present B\{GOP\}, a centralized -coverage algorithm introducing sensor candidate categorization -depending on their coverage status and the notion of critical target -to call targets that are associated with a small number of -sensors. The total running time of their heuristic is $0(m n^2)$ where -$n$ is the number of sensors, and $m$ the number of targets. Compared -to algorithm's results of Slijepcevic and Potkonjak -\cite{Slijepcevic01powerefficient}, their heuristic produces more -cover sets with a slight growth rate in execution time. -%More recently Manju and Pujari\cite{Manju2011} - -In the case of non-disjoint algorithms \cite{Manju2011}, sensors may -participate in more than one cover set. In some cases this may -prolong the lifetime of the network in comparison to the disjoint -cover set algorithms, but designing algorithms for non-disjoint cover -sets generally induces a higher order of complexity. Moreover, in -case of a sensor's failure, non-disjoint scheduling policies are less -resilient and less reliable because a sensor may be involved in more -than one cover sets. For instance, Cardei et al.~\cite{cardei05bis} -present a linear programming (LP) solution and a greedy approach to -extend the sensor network lifetime by organizing the sensors into a -maximal number of non-disjoint cover sets. Simulation results show -that by allowing sensors to participate in multiple sets, the network -lifetime increases compared with related -work~\cite{Cardei:2005:IWS:1160086.1160098}. In~\cite{berman04}, the -authors have formulated the lifetime problem and suggested another -(LP) technique to solve this problem. A centralized solution based on the Garg-K\"{o}nemann -algorithm~\cite{garg98}, probably near -the optimal solution, is also proposed. 
- -{\bf Our contribution} - -There are three main questions which should be addressed to build a -scheduling strategy. We give a brief answer to these three questions -to describe our approach before going into details in the subsequent -sections. -\begin{itemize} -\item {\bf How must the phases for information exchange, decision and +%\item +{\bf How must the phases for information exchange, decision and sensing be planned over time?} Our algorithm divides the time line into a number of rounds. Each round contains 4 phases: Information Exchange, Leader Election, Decision, and Sensing. -\item {\bf What are the rules to decide which node has to be turned on +%\item +{\bf What are the rules to decide which node has to be turned on or off?} Our algorithm tends to limit the overcoverage of points of interest to avoid turning on too many sensors covering the same areas at the same time, and tries to prevent undercoverage. The decision is a good compromise between these two conflicting objectives. -\item {\bf Which node should make such a decision?} As mentioned in +%\item +{\bf Which node should make such a decision?} As mentioned in \cite{pc10}, both centralized and distributed algorithms have their own advantages and disadvantages. Centralized coverage algorithms have the advantage of requiring very low processing power from the - sensor nodes which have usually limited processing capabilities. + sensor nodes, which have usually limited processing capabilities. Distributed algorithms are very adaptable to the dynamic and scalable nature of sensors network. Authors in \cite{pc10} conclude that there is a threshold in terms of network size to switch from a - localized to a centralized algorithm. Indeed the exchange of + localized to a centralized algorithm. Indeed, the exchange of messages in large networks may consume a considerable amount of - energy in a localized approach compared to a centralized one. Our + energy in a centralized approach compared to a distributed one. Our work does not consider only one leader to compute and to broadcast - the scheduling decision to all the sensors. When the network size - increases, the network is divided into many subregions and the + the scheduling decision to all the sensors. When the network size + increases, the network is divided into many subregions and the decision is made by a leader in each subregion. -\end{itemize} +%\end{itemize} + + \section{Activity scheduling} \label{pd} @@ -418,7 +290,7 @@ each phase in more details. \subsection{Information exchange phase} Each sensor node $j$ sends its position, remaining energy $RE_j$, and -the number of local neighbors $NBR_j$ to all wireless sensor nodes in +the number of local neighbours $NBR_j$ to all wireless sensor nodes in its subregion by using an INFO packet and then listens to the packets sent from other nodes. After that, each node will have information about all the sensor nodes in the subregion. In our model, the @@ -430,14 +302,14 @@ active mode. %The working phase works in rounding fashion. Each round include 3 steps described as follow : \subsection{Leader election phase} -This step includes choosing the Wireless Sensor Node Leader (WSNL) +This step includes choosing the Wireless Sensor Node Leader (WSNL), which will be responsible for executing the coverage algorithm. Each subregion in the area of interest will select its own WSNL independently for each round. All the sensor nodes cooperate to select WSNL. 
The nodes in the same subregion will select the leader based on the received information from all other nodes in the same subregion. The selection criteria in order of priority are: larger -number of neighbors, larger remaining energy, and then in case of +number of neighbours, larger remaining energy, and then in case of equality, larger index. \subsection{Decision phase} @@ -462,7 +334,7 @@ starting a new round. %\noindent We try to produce an adaptive scheduling which allows sensors to operate alternatively so as to prolong the network lifetime. For convenience, the notations and assumptions are described first. %The wireless sensor node use the binary disk sensing model by which each sensor node will has a certain sensing range is reserved within a circular disk called radius $R_s$. -\noindent We consider a boolean disk coverage model which is the most +\indent We consider a boolean disk coverage model which is the most widely used sensor coverage model in the literature. Each sensor has a constant sensing range $R_s$. All space points within a disk centered at the sensor with the radius of the sensing range is said to be @@ -487,7 +359,7 @@ connectivity among the working nodes in the active mode. %We choose to representEach wireless sensor node will be represented into a selected number of primary points by which we can know if the sensor node is covered or not. % Figure ~\ref{fig:cluster2} shows the selected primary points that represents the area of the sensor node and according to the sensing range of the wireless sensor node. -\noindent Instead of working with the coverage area, we consider for each +\indent Instead of working with the coverage area, we consider for each sensor a set of points called primary points. We also assume that the sensing disk defined by a sensor is covered if all the primary points of this sensor are covered. @@ -507,7 +379,7 @@ increased or decreased if necessary) as references to ensure that the monitored region of interest is covered by the selected set of sensors, instead of using all the points in the area. -\noindent We can calculate the positions of the selected primary +\indent We can calculate the positions of the selected primary points in the circle disk of the sensing range of a wireless sensor node (see figure~\ref{fig2}) as follows:\\ $(p_x,p_y)$ = point center of wireless sensor node\\ @@ -547,13 +419,13 @@ $X_{13}=( p_x + R_s * (0), p_y + R_s * (\frac{-\sqrt{2}}{2})) $. %To satisfy the coverage requirement, the set of the principal points that will represent all the sensor nodes in the monitored region as $PSET= \lbrace P_1,\ldots ,P_p, \ldots , P_{N_P^k} \rbrace $, where $N_P^k = N_{sp} * N^k $ and according to the proposed model in figure ~\ref{fig:cluster2}. These points can be used by the wireless sensor node leader which will be chosen in each region in A to build a new parameter $\alpha_{jp}$ that represents the coverage possibility for each principal point $P_p$ of each wireless sensor node $s_j$ in $A^k$ as in \eqref{eq12}:\\ -\noindent Our model is based on the model proposed by +\indent Our model is based on the model proposed by \cite{pedraza2006} where the objective is to find a maximum number of disjoint cover sets. To accomplish this goal, authors proposed an -integer program which forces undercoverage and overcoverage of targets +integer program, which forces undercoverage and overcoverage of targets to become minimal at the same time. They use binary variables $x_{jl}$ to indicate if sensor $j$ belongs to cover set $l$. 
In our -model, we consider binary variables $X_{j}$ which determine the +model, we consider binary variables $X_{j}$, which determine the activation of sensor $j$ in the sensing phase of the round. We also consider primary points as targets. The set of primary points is denoted by $P$ and the set of sensors by $J$. @@ -623,7 +495,7 @@ X_{j} \in \{0,1\}, &\forall j \in J sensing in the round (1 if yes and 0 if not); \item $\Theta_{p}$ : {\it overcoverage}, the number of sensors minus one that are covering the primary point $p$; -\item $U_{p}$ : {\it undercoverage}, indicates whether or not the principal point +\item $U_{p}$ : {\it undercoverage}, indicates whether or not the primary point $p$ is being covered (1 if not covered and 0 if covered). \end{itemize} @@ -631,7 +503,7 @@ The first group of constraints indicates that some primary point $p$ should be covered by at least one sensor and, if it is not always the case, overcoverage and undercoverage variables help balancing the restriction equations by taking positive values. There are two main -objectives. First we limit the overcoverage of primary points in order to +objectives. First, we limit the overcoverage of primary points in order to activate a minimum number of sensors. Second we prevent the absence of monitoring on some parts of the subregion by minimizing the undercoverage. The weights $w_\theta$ and $w_U$ must be properly chosen so as to @@ -687,12 +559,12 @@ defined by~\cite{HeinzelmanCB02} as energy consumption model for each wireless sensor node when transmitting or receiving packets. The energy of each node in a network is initialized randomly within the range 24-60~joules, and each sensor node will consume 0.2 watts during -the sensing period which will last 60 seconds. Thus, an +the sensing period, which will last 60 seconds. Thus, an active node will consume 12~joules during the sensing phase, while a sleeping node will use 0.002 joules. Each sensor node will not participate in the next round if its remaining energy is less than 12 -joules. In all experiments the parameters are set as follows: -$R_s=5m$, $w_{\Theta}=1$, and $w_{U}=|P^2|$. +joules. In all experiments, the parameters are set as follows: +$R_s=5~m$, $w_{\Theta}=1$, and $w_{U}=|P^2|$. We evaluate the efficiency of our approach by using some performance metrics such as: coverage ratio, number of active nodes ratio, energy @@ -732,7 +604,7 @@ subregion. \parskip 0pt \begin{figure}[h!] \centering -\includegraphics[scale=0.55]{TheCoverageRatio150.eps} %\\~ ~ ~(a) +\includegraphics[scale=0.5]{TheCoverageRatio150g.eps} %\\~ ~ ~(a) \caption{The impact of the number of rounds on the coverage ratio for 150 deployed nodes} \label{fig3} \end{figure} @@ -742,7 +614,7 @@ subregion. It is important to have as few active nodes as possible in each round, in order to minimize the communication overhead and maximize the network lifetime. This point is assessed through the Active Sensors -Ratio, which is defined as follows: +Ratio (ASR), which is defined as follows: \begin{equation*} \scriptsize \mbox{ASR}(\%) = \frac{\mbox{Number of active sensors @@ -754,7 +626,7 @@ for 150 deployed nodes. \begin{figure}[h!] \centering -\includegraphics[scale=0.55]{TheActiveSensorRatio150.eps} %\\~ ~ ~(a) +\includegraphics[scale=0.5]{TheActiveSensorRatio150g.eps} %\\~ ~ ~(a) \caption{The impact of the number of rounds on the active sensors ratio for 150 deployed nodes } \label{fig4} \end{figure} @@ -772,7 +644,7 @@ lifetime of the network. 
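For illustration, the decision phase whose effect is evaluated above, that is, the integer program solved by each subregion leader, can be prototyped with any generic MILP solver. The sketch below is a minimal reading of the formulation described earlier, written in Python with the PuLP library; it is not the toolchain used for the simulations, the helper name schedule_round and the data layout alpha[j][p] (the coverage indicator $\alpha_{jp}$ of primary point $p$ by sensor $j$) are assumptions, and the covering constraint is written as an equality balanced by the overcoverage and undercoverage variables, consistent with the description of the model.

    # Illustrative sketch only -- not the authors' implementation.
    # alpha[j][p] == 1 if primary point p lies in the sensing disk of sensor j.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum

    def schedule_round(sensors, primary_points, alpha, w_theta=1, w_u=None):
        # Default weight w_U = |P|^2, as set in the experiments above.
        if w_u is None:
            w_u = len(primary_points) ** 2
        prob = LpProblem("subregion_decision", LpMinimize)
        # X_j = 1 if sensor j stays active for the sensing phase of this round.
        X = {j: LpVariable(f"X_{j}", cat="Binary") for j in sensors}
        # Theta_p: overcoverage of primary point p (covering sensors minus one).
        Theta = {p: LpVariable(f"Theta_{p}", lowBound=0, cat="Integer")
                 for p in primary_points}
        # U_p = 1 if primary point p is left uncovered.
        U = {p: LpVariable(f"U_{p}", cat="Binary") for p in primary_points}
        # Objective: limit overcoverage and, above all, undercoverage.
        prob += w_theta * lpSum(Theta.values()) + w_u * lpSum(U.values())
        # Each primary point should be covered by at least one active sensor;
        # Theta_p and U_p absorb any over- or undercoverage.
        for p in primary_points:
            prob += lpSum(alpha[j][p] * X[j] for j in sensors) - Theta[p] + U[p] == 1
        prob.solve()
        return [j for j in sensors if X[j].value() == 1]

With $w_{\Theta}=1$ and $w_U$ set to the square of the number of primary points, and since a subregion contains far more primary points than sensors, leaving a single primary point uncovered costs more than any achievable total overcoverage, which is consistent with the priority given above to preventing undercoverage.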
\subsection{The impact of the number of rounds on the energy saving ratio} In this experiment, we consider a performance metric linked to energy. -This metric, called Energy Saving Ratio, is defined by: +This metric, called Energy Saving Ratio (ESR), is defined by: \begin{equation*} \scriptsize \mbox{ESR}(\%) = \frac{\mbox{Number of alive sensors during this round}} @@ -787,7 +659,7 @@ for all three approaches and for 150 deployed nodes. %\centering % \begin{multicols}{6} \centering -\includegraphics[scale=0.55]{TheEnergySavingRatio150.eps} %\\~ ~ ~(a) +\includegraphics[scale=0.5]{TheEnergySavingRatio150g.eps} %\\~ ~ ~(a) \caption{The impact of the number of rounds on the energy saving ratio for 150 deployed nodes} \label{fig5} \end{figure} @@ -802,13 +674,13 @@ number of rounds increases the two leaders' strategy becomes the most performing one, since it takes longer to have the two subregion networks simultaneously disconnected. -\subsection{The number of stopped simulation runs} +\subsection{The percentage of stopped simulation runs} -We will now study the number of simulations which stopped due to +We will now study the percentage of simulations, which stopped due to network disconnections per round for each of the three approaches. -Figure~\ref{fig6} illustrates the average number of stopped simulation +Figure~\ref{fig6} illustrates the percentage of stopped simulation runs per round for 150 deployed nodes. It can be observed that the -simple heuristic is the approach which stops first because the nodes +simple heuristic is the approach, which stops first because the nodes are randomly chosen. Among the two proposed strategies, the centralized one first exhibits network disconnections. Thus, as explained previously, in case of the strategy with several subregions @@ -818,8 +690,8 @@ optimization participates in extending the network lifetime. \begin{figure}[h!] \centering -\includegraphics[scale=0.55]{TheNumberofStoppedSimulationRuns150.eps} -\caption{The number of stopped simulation runs compared to the number of rounds for 150 deployed nodes } +\includegraphics[scale=0.5]{TheNumberofStoppedSimulationRuns150g.eps} +\caption{The percentage of stopped simulation runs compared to the number of rounds for 150 deployed nodes } \label{fig6} \end{figure} @@ -835,7 +707,7 @@ which is obtained for 10~simulation runs, is then divided by the average number of rounds to define a metric allowing a fair comparison between networks having different densities. -Figure~\ref{fig7} illustrates the Energy Consumption for the different +Figure~\ref{fig7} illustrates the energy consumption for the different network sizes and the three approaches. The results show that the strategy with two leaders is the most competitive from the energy consumption point of view. A centralized method, like the strategy @@ -850,7 +722,7 @@ communications have a small impact on the network lifetime. \begin{figure}[h!] \centering -\includegraphics[scale=0.55]{TheEnergyConsumption.eps} +\includegraphics[scale=0.5]{TheEnergyConsumptiong.eps} \caption{The energy consumption} \label{fig7} \end{figure} @@ -866,7 +738,7 @@ on a laptop of the decision phase (solving of the optimization problem) during one round. They are given for the different approaches and various numbers of sensors. The lack of any optimization explains why the heuristic has very low execution times. 
Conversely, the strategy -with one leader which requires to solve an optimization problem +with one leader, which requires to solve an optimization problem considering all the nodes presents redhibitory execution times. Moreover, increasing the network size by 50~nodes multiplies the time by almost a factor of 10. The strategy with two leaders has more @@ -876,7 +748,7 @@ nodes. Overall, to be able to deal with very large networks, a distributed method is clearly required. \begin{table}[ht] -\caption{The execution time(s) vs the number of sensors} +\caption{THE EXECUTION TIME(S) VS THE NUMBER OF SENSORS} % title of Table \centering @@ -925,7 +797,7 @@ with two leaders and the simple heuristic is illustrated. %\centering % \begin{multicols}{6} \centering -\includegraphics[scale=0.5]{TheNetworkLifetime.eps} %\\~ ~ ~(a) +\includegraphics[scale=0.5]{TheNetworkLifetimeg.eps} %\\~ ~ ~(a) \caption{The network lifetime } \label{fig8} \end{figure} @@ -944,7 +816,7 @@ subdividing the sensing field into many subregions, which are managed independently and simultaneously, is the most relevant way to maximize the lifetime of a network. -\section{Conclusion and future forks} +\section{Conclusion and future works} \label{sec:conclusion} In this paper, we have addressed the problem of the coverage and the lifetime @@ -972,22 +844,24 @@ approach like the one we propose allows to reduce the difficulty of a single global optimization problem by partitioning it in many smaller problems, one per subregion, that can be solved more easily. -In future work, we plan to study and propose a coverage protocol which -computes all active sensor schedules in a single round, using +In future work, we plan to study and propose a coverage protocol, which +computes all active sensor schedules in one time, using optimization methods such as swarms optimization or evolutionary -algorithms. This single round will still consists of 4 phases, but the - decision phase will compute the schedules for several sensing phases - which, aggregated together, define a kind of meta-sensing phase. -The computation of all cover sets in one round is far more +algorithms. The round will still consist of 4 phases, but the + decision phase will compute the schedules for several sensing phases, + which aggregated together, define a kind of meta-sensing phase. +The computation of all cover sets in one time is far more difficult, but will reduce the communication overhead. - % use section* for acknowledgement %\section*{Acknowledgment} + + + \bibliographystyle{IEEEtran} \bibliography{bare_conf} -% that's all folks + \end{document}