X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/UIC2013.git/blobdiff_plain/1e354db9744c3493287fe10e62f12f5851bcf3ed..bf13dc4be2df34e44e4cf16752f8804d3e160caa:/bare_conf.tex?ds=sidebyside diff --git a/bare_conf.tex b/bare_conf.tex index 373f8b5..90482fc 100755 --- a/bare_conf.tex +++ b/bare_conf.tex @@ -36,13 +36,12 @@ \title{Energy-Efficient Activity Scheduling in Heterogeneous Energy Wireless Sensor Networks} - % author names and affiliations % use a multiple column layout for up to three different % affiliations -\author{\IEEEauthorblockN{Ali Kadhum Idrees, Karine Deschinkel, Michel Salomon, and Raphael Couturier } -\IEEEauthorblockA{FEMTO-ST Institute, UMR CNRS, University of Franche-Comte, Belfort, France \\ -Email:$\lbrace$ali.idness, karine.deschinkel, michel.salomon,raphael.couturier$\rbrace$@femto-st.fr} +\author{\IEEEauthorblockN{Ali Kadhum Idrees, Karine Deschinkel, Michel Salomon, and Rapha\"el Couturier } +\IEEEauthorblockA{FEMTO-ST Institute, UMR 6174 CNRS, University of Franche-Comte, Belfort, France \\ +Email: ali.idness@edu.univ-fcomte.fr, $\lbrace$karine.deschinkel, michel.salomon, raphael.couturier$\rbrace$@univ-fcomte.fr} %\email{\{ali.idness, karine.deschinkel, michel.salomon, raphael.couturier\}@univ-fcomte.fr} %\and %\IEEEauthorblockN{Homer Simpson} @@ -77,6 +76,7 @@ network lifetime and improve the coverage performance. \IEEEpeerreviewmaketitle \section{Introduction} + \noindent Recent years have witnessed significant advances in wireless communications and embedded micro-sensing MEMS technologies which have made emerge wireless sensor networks as one of the most promising @@ -109,13 +109,17 @@ service for applications. In this paper we concentrate on area coverage problem, with the objective of maximizing the network lifetime by using an adaptive scheduling. The area of interest is divided into subregions and an activity scheduling for sensor nodes is -planned for each subregion. Our scheduling scheme considers rounds, -where a round starts with a discovery phase to exchange information -between sensors of the subregion, in order to choose in suitable -manner a sensor node to carry out a coverage strategy. This coverage -strategy involves the resolution of an integer program which provides -the activation of the sensors for the sensing phase of the current -round. +planned for each subregion. + In fact, the nodes in a subregion can be seen as a cluster where + each node sends sensing data to the cluster head or the sink node. + Furthermore, the activities in a subregion/cluster can continue even + if another cluster stops due to too much node failures. +Our scheduling scheme considers rounds, where a round starts with a +discovery phase to exchange information between sensors of the +subregion, in order to choose in suitable manner a sensor node to +carry out a coverage strategy. This coverage strategy involves the +solving of an integer program which provides the activation of the +sensors for the sensing phase of the current round. The remainder of the paper is organized as follows. The next section % Section~\ref{rw} @@ -128,15 +132,15 @@ OMNET++ \cite{varga}. They fully demonstrate the usefulness of the proposed approach. Finally, we give concluding remarks and some suggestions for future works in Section~\ref{sec:conclusion}. 
-\section{\uppercase{Related work}} +\section{Related Works} \label{rw} -\noindent -This section is dedicated to the various approaches proposed in the -literature for the coverage lifetime maximization problem, where the -objective is to optimally schedule sensors' activities in order to -extend network lifetime in a randomly deployed network. As this -problem is subject to a wide range of interpretations, we suggest to -recall main definitions and assumptions related to our work. + +\noindent This section is dedicated to the various approaches proposed +in the literature for the coverage lifetime maximization problem, +where the objective is to optimally schedule sensors' activities in +order to extend network lifetime in a randomly deployed network. As +this problem is subject to a wide range of interpretations, we suggest +to recall main definitions and assumptions related to our work. %\begin{itemize} %\item Area Coverage: The main objective is to cover an area. The area coverage requires @@ -168,8 +172,6 @@ number of primary points that are covered in each round, while minimizing overcoverage (points covered by multiple active sensors simultaneously). -\newpage - {\bf Lifetime} Various definitions exist for the lifetime of a sensor @@ -183,7 +185,7 @@ is alive until all nodes have been drained of their energy or the sensor network becomes disconnected, and we measure the coverage ratio during the WSN lifetime. Network connectivity is important because an active sensor node without connectivity towards a base station cannot -transmit information on an event in the area that it monitor. +transmit information on an event in the area that it monitors. {\bf Activity scheduling} @@ -196,19 +198,19 @@ distributed, and localized algorithms, have been proposed for activity scheduling. In the distributed algorithms, each node in the network autonomously makes decisions on whether to turn on or turn off itself only using local neighbor information. In centralized algorithms, a -central controller (a node or base station) informs every sensor of +central controller (a node or base station) informs every sensors of the time intervals to be activated. {\bf Distributed approaches} Some distributed algorithms have been developed -in~\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02}. Distributed -algorithms typically operate in rounds for predetermined duration. At -the beginning of each round, a sensor exchange information with its -neighbors and makes a decision to either remain turned on or to go to -sleep for the round. This decision is basically based on simple greedy -criteria like the largest uncovered area -\cite{Berman05efficientenergy}, maximum uncovered targets +in~\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02} to perform the +scheduling. Distributed algorithms typically operate in rounds for +predetermined duration. At the beginning of each round, a sensor +exchange information with its neighbors and makes a decision to either +remain turned on or to go to sleep for the round. This decision is +basically based on simple greedy criteria like the largest uncovered +area \cite{Berman05efficientenergy}, maximum uncovered targets \cite{1240799}. In \cite{Tian02}, the scheduling scheme is divided into rounds, where each round has a self-scheduling phase followed by a sensing phase. Each sensor broadcasts a message containing node ID @@ -222,7 +224,7 @@ of the area is no longer covered. 
\cite{Prasad:2007:DAL:1782174.1782218} defines a model for capturing the dependencies between different cover sets and proposes localized heuristic based on this dependency. The algorithm consists of two -phases, an initial setup phase during which each sensor calculates and +phases, an initial setup phase during which each sensor computes and prioritize the covers and a sensing phase during which each sensor first decides its on/off status, and then remains on or off for the rest of the duration. Authors in \cite{chin2007} propose a novel @@ -262,10 +264,10 @@ these set covers successively. First algorithms proposed in the literature consider that the cover sets are disjoint: a sensor node appears in exactly one of the -generated cover sets. For instance Slijepcevic and Potkonjak +generated cover sets. For instance, Slijepcevic and Potkonjak \cite{Slijepcevic01powerefficient} propose an algorithm which allocates sensor nodes in mutually independent sets to monitor an area -divided into several fields. Their algorithm constructs a cover set by +divided into several fields. Their algorithm builds a cover set by including in priority the sensor nodes which cover critical fields, that is to say fields that are covered by the smallest number of sensors. The time complexity of their heuristic is $O(n^2)$ where $n$ @@ -274,7 +276,7 @@ technique to achieve energy savings by organizing the sensor nodes into a maximum number of disjoint dominating sets which are activated successively. The dominating sets do not guarantee the coverage of the whole region of interest. Abrams et -al.~\cite{Abrams:2004:SKA:984622.984684} design three approximation +al.~\cite{Abrams:2004:SKA:984622.984684} design three approximation algorithms for a variation of the set k-cover problem, where the objective is to partition the sensors into covers such that the number of covers that include an area, summed over all areas, is maximized. @@ -293,7 +295,7 @@ design a heuristic to compute the final number of covers. The results show a slight performance improvement in terms of the number of produced DSC in comparison to~\cite{Slijepcevic01powerefficient}, but it incurs higher execution time due to the complexity of the mixed -integer programming resolution. %Cardei and Du +integer programming solving. %Cardei and Du \cite{Cardei:2005:IWS:1160086.1160098} propose a method to efficiently compute the maximum number of disjoint set covers such that each set can monitor all targets. They first transform the problem into a @@ -340,10 +342,10 @@ scheduling strategy. We give a brief answer to these three questions to describe our approach before going into details in the subsequent sections. \begin{itemize} -\item {\bf How must be planned the phases for information exchange, - decision and sensing over time?} Our algorithm divides the time - line into a number of rounds. Each round contains 4 phases: - Information Exchange, Leader Election, Decision, and Sensing. +\item {\bf How must the phases for information exchange, decision and + sensing be planned over time?} Our algorithm divides the time line + into a number of rounds. Each round contains 4 phases: Information + Exchange, Leader Election, Decision, and Sensing. \item {\bf What are the rules to decide which node has to be turned on or off?} Our algorithm tends to limit the overcoverage of points of @@ -369,7 +371,7 @@ sections. decision is made by a leader in each subregion. 
\end{itemize} -\section{\uppercase{Activity scheduling}} +\section{Activity Scheduling} \label{pd} We consider a randomly and uniformly deployed network consisting of @@ -412,7 +414,7 @@ Election, Decision) are energy consuming for some nodes, even when they do not join the network to monitor the area. Below, we describe each phase in more detail. -\subsection{\textbf INFOrmation Exchange Phase} +\subsection{INFOrmation Exchange Phase} Each sensor node $j$ sends its position, remaining energy $RE_j$, and the number of local neighbors $NBR_j$ to all wireless sensor nodes in @@ -426,18 +428,18 @@ active mode. %The working phase works in rounding fashion. Each round include 3 steps described as follow : -\subsection{\textbf Leader Election Phase} +\subsection{Leader Election Phase} This step includes choosing the Wireless Sensor Node Leader (WSNL) which will be responsible of executing coverage algorithm. Each subregion in the area of interest will select its own WSNL -independently for each round. All the sensor nodes cooperates to +independently for each round. All the sensor nodes cooperate to select WSNL. The nodes in the same subregion will select the leader based on the received information from all other nodes in the same subregion. The selection criteria in order of priority are: larger number of neighbors, larger remaining energy, and then in case of equality, larger index. -\subsection{\textbf Decision Phase} +\subsection{Decision Phase} The WSNL will solve an integer program (see section~\ref{cp}) to select which sensors will be activated in the following sensing phase to cover the subregion. WSNL will send Active-Sleep packet to each @@ -445,12 +447,12 @@ sensor in the subregion based on algorithm's results. %The main goal in this step after choosing the WSNL is to produce the best representative active nodes set that will take the responsibility of covering the whole region $A^k$ with minimum number of sensor nodes to prolong the lifetime in the wireless sensor network. For our problem, in each round we need to select the minimum set of sensor nodes to improve the lifetime of the network and in the same time taking into account covering the region $A^k$ . We need an optimal solution with tradeoff between our two conflicting objectives. %The above region coverage problem can be formulated as a Multi-objective optimization problem and we can use the Binary Particle Swarm Optimization technique to solve it. -\subsection{\textbf Sensing Phase} +\subsection{Sensing Phase} Active sensors in the round will execute their sensing task to preserve maximal coverage in the region of interest. We will assume that the cost of keeping a node awake (or sleep) for sensing task is the same for all wireless sensor nodes in the network. Each sensor -will receive an Active-Sleep packet from WSNL telling him to stay +will receive an Active-Sleep packet from WSNL informing it to stay awake or go sleep for a time equal to the period of sensing until starting a new round. @@ -537,7 +539,7 @@ $X_{13}=( p_x + R_s * (0), p_y + R_s * (\frac{-\sqrt{2}}{2})) $. \label{fig2} \end{figure} -\section{\uppercase{Coverage problem formulation}} +\section{Coverage Problem Formulation} \label{cp} %We can formulate our optimization problem as energy cost minimization by minimize the number of active sensor nodes and maximizing the coverage rate at the same time in each $A^k$ . 
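Before stating the program, it may help to make its inputs concrete. The sketch below shows how a leader could enumerate the primary points of the sensors in its subregion and derive the coverage indicators $\alpha_{jp}$ used in the constraints. It is only an editorial illustration, not the authors' implementation: the paper shows explicitly only the offset of $X_{13}$, so the full list of offsets (and identifiers such as \texttt{primary\_points}) are assumptions.

\begin{verbatim}
# Illustrative sketch, not the authors' implementation.
# Only X_13 = (p_x, p_y - Rs*sqrt(2)/2) is given explicitly in the text;
# the 13 offsets below (centre, four axial points at distance Rs, four
# diagonal and four axial points at distance Rs*sqrt(2)/2) are an
# assumption consistent with that single visible point.
from math import sqrt, hypot

S2 = sqrt(2) / 2
OFFSETS = [(0, 0),
           (1, 0), (-1, 0), (0, 1), (0, -1),
           (S2, S2), (S2, -S2), (-S2, S2), (-S2, -S2),
           (S2, 0), (-S2, 0), (0, S2), (0, -S2)]   # X_1 ... X_13

def primary_points(px, py, rs):
    """The 13 primary points attached to a sensor located at (px, py)."""
    return [(px + rs * dx, py + rs * dy) for dx, dy in OFFSETS]

def alpha(sensors, points, rs):
    """alpha[j][p] = 1 if sensor j covers primary point p, else 0."""
    # small tolerance so points lying exactly on the sensing border count
    return [[1 if hypot(sx - x, sy - y) <= rs + 1e-9 else 0 for (x, y) in points]
            for (sx, sy) in sensors]

if __name__ == "__main__":
    rs = 5.0                                  # sensing range used in the simulations
    sensors = [(10.0, 10.0), (14.0, 10.0)]    # toy positions, for illustration
    points = [p for s in sensors for p in primary_points(*s, rs)]
    cov = alpha(sensors, points, rs)
    print(sum(cov[0]), sum(cov[1]))           # points covered by each sensor
\end{verbatim}

With $R_s=5$~m, the two toy sensors cover several primary points in common; this shared coverage is precisely the overcoverage penalized by the program formulated next.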
This optimization problem can be formulated as follows. Since we use a
homogeneous wireless sensor network, we will assume that the cost of
keeping a node awake is the same for all wireless sensor nodes in the
network. We can define the decision parameter $X_j$ as in \eqref{eq11}:\\
@@ -566,8 +568,8 @@ indicator function of whether the point $p$ is covered, that is:
\end{array} \right.
%\label{eq12}
\end{equation}
-The number of sensors that are covering point $p$ is equal to
-$\sum_{j \in J} \alpha_{jp} * X_{j}$ where:
+The number of active sensors that cover the primary point $p$ is equal
+to $\sum_{j \in J} \alpha_{jp} * X_{j}$ where:
\begin{equation}
X_{j} = \left \{
\begin{array}{l l}
@@ -580,7 +582,7 @@ We define the Overcoverage variable $\Theta_{p}$ as:
\begin{equation}
 \Theta_{p} = \left \{
\begin{array}{l l}
-  0 & \mbox{if point $p$ is not covered,}\\
+  0 & \mbox{if point $p$ is not covered,}\\
 \left( \sum_{j \in J} \alpha_{jp} * X_{j} \right)- 1 & \mbox{otherwise.}\\
\end{array} \right.
\label{eq13}
@@ -615,23 +617,24 @@ X_{j} \in \{0,1\}, &\forall j \in J
\right.
\end{equation}
\begin{itemize}
-\item $X_{j}$ : indicates whether or not sensor $j$ is actively
+\item $X_{j}$ : indicates whether or not the sensor $j$ is actively
  sensing in the round (1 if yes and 0 if not);
\item $\Theta_{p}$ : {\it overcoverage}, the number of sensors minus
-  one that are covering point $p$;
+  one that are covering the primary point $p$;
\item $U_{p}$ : {\it undercoverage}, indicates whether or not point
  $p$ is being covered (1 if not covered and 0 if covered).
\end{itemize}
-The first group of constraints indicates that some point $p$ should be
-covered by at least one sensor and, if it is not always the case,
-overcoverage and undercoverage variables help balance the restriction
-equation by taking positive values. There are two main objectives.
-First we limit overcoverage of primary points in order to activate a
-minimum number of sensors. Second we prevent that parts of the
-subregion are not monitored by minimizing undercoverage. The weights
-$w_\theta$ and $w_U$ must be properly chosen so as to guarantee that
-the maximum number of points are covered during each round.
+The first group of constraints indicates that some primary point $p$
+should be covered by at least one sensor and, if it is not always the
+case, overcoverage and undercoverage variables help balance the
+restriction equation by taking positive values. There are two main
+objectives. First, we limit the overcoverage of primary points in
+order to activate a minimum number of sensors. Second, we prevent
+parts of the subregion from being left unmonitored by minimizing the
+undercoverage. The weights $w_\theta$ and $w_U$ must be properly
+chosen so as to guarantee that the maximum number of points are
+covered during each round.
%In equation \eqref{eq15}, there are two main objectives: the first one using the Overcoverage parameter to minimize the number of active sensor nodes in the produced final solution vector $X$ which leads to improve the life time of wireless sensor network. The second goal by using the Undercoverage parameter to maximize the coverage in the region by means of covering each primary point in $SSET^k$.The two objectives are achieved at the same time. The constraint which represented in equation \eqref{eq16} refer to the coverage function for each primary point $P_p$ in $SSET^k$ , where each $P_p$ should be covered by
%at least one sensor node in $A^k$.
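As a purely editorial illustration of the decision phase, the fragment below shows how a WSNL could assemble and solve this program. The paper does not name the solver it embeds, so the PuLP modelling library is used only as a stand-in; the weight $w_U=|P^2|$ is interpreted here as $|P|^2$; and the equality constraint is our reconstruction from the definitions of $\Theta_p$ and $U_p$ above (a point covered by $k\geq 1$ sensors yields $\Theta_p=k-1$ and $U_p=0$, an uncovered point yields $\Theta_p=0$ and $U_p=1$).

\begin{verbatim}
# Hedged sketch of the per-round decision problem, not the authors' code.
# alpha[j][p] would come from the primary-point computation sketched above.
import pulp

def decide_active_sensors(alpha, w_theta=1.0, w_u=None):
    """Return the indices of the sensors to activate for the sensing phase."""
    J = range(len(alpha))            # sensors of the subregion
    P = range(len(alpha[0]))         # primary points of the subregion
    if w_u is None:
        w_u = len(alpha[0]) ** 2     # w_U = |P|^2, as in the experiments

    prob = pulp.LpProblem("coverage_round", pulp.LpMinimize)
    X = [pulp.LpVariable(f"X_{j}", cat="Binary") for j in J]
    Theta = [pulp.LpVariable(f"Theta_{p}", lowBound=0) for p in P]
    U = [pulp.LpVariable(f"U_{p}", cat="Binary") for p in P]

    # Objective: penalise overcoverage and, much more heavily, undercoverage.
    prob += pulp.lpSum(w_theta * Theta[p] + w_u * U[p] for p in P)

    # Reconstructed coverage constraint for every primary point:
    # covered   -> Theta_p = (number of covering active sensors) - 1, U_p = 0
    # uncovered -> Theta_p = 0, U_p = 1
    for p in P:
        prob += pulp.lpSum(alpha[j][p] * X[j] for j in J) - Theta[p] + U[p] == 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [j for j in J if X[j].value() == 1]

if __name__ == "__main__":
    # Toy instance: 3 sensors, 4 primary points.
    toy_alpha = [[1, 1, 0, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 1]]
    print(decide_active_sensors(toy_alpha))   # -> [0, 2]: full coverage, no overlap
\end{verbatim}

In the protocol described above, the WSNL would then simply encode the returned index set in the Active-Sleep packets sent at the end of the decision phase.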
The objective function in \eqref{eq15} involves two main objectives to
be optimized simultaneously, and optimal decisions need to be taken in
the presence of trade-offs between these two conflicting objectives; in
this sense our coverage optimization problem is a multi-objective
optimization problem, and we can use BPSO to solve it. The concepts of
overcoverage and undercoverage are inspired from~\cite{Fernan12}, but
we use them with our model as stated in
subsection~\ref{Sensing Coverage Model}, with some modifications, to be
applied later by BPSO.
@@ -658,22 +661,24 @@ the maximum number of points are covered during each round.
%\end{itemize}

-\section{\uppercase{Simulation Results}}
+\section{Simulation Results}
\label{exp}

-In this section, we conducted a series of simulations, to evaluate the
+In this section, we conducted a series of simulations to evaluate the
efficiency and relevance of our approach, using the discrete event
simulator OMNeT++ \cite{varga}. We performed simulations for five
different densities varying from 50 to 250~nodes. Experimental results
were obtained from randomly generated networks in which nodes are
-deployed over a $(50 \times 25)~m^2 $ sensing field. For each network
-deployment, we assume that the deployed nodes can fully cover the
-sensing field with the given sensing range. 10 simulation runs are
-performed with different network topologies for each node density.
-The results presented hereafter are the average of these 10 runs. A
-simulation ends when all the nodes are dead or the sensor network
-becomes disconnected (some nodes may not be able to sent to a base
-station an event they sense).
+deployed over a $(50 \times 25)~m^2 $ sensing field.
+ More precisely, the deployment is controlled at a coarse scale in
+ order to ensure that the deployed nodes can fully cover the sensing
+ field with the given sensing range.
+10~simulation runs are performed with
+different network topologies for each node density. The results
+presented hereafter are the average of these 10 runs. A simulation
+ends when all the nodes are dead or the sensor network becomes
+disconnected (some nodes may no longer be able to send the events they
+sense to a base station).

Our proposed coverage protocol uses the radio energy dissipation model
defined by~\cite{HeinzelmanCB02} as the energy consumption model for each
@@ -683,7 +688,7 @@ range 24-60~joules, and each sensor node will consume 0.2 watts during
the sensing period which will have a duration of 60 seconds. Thus, an
active node will consume 12~joules during the sensing phase, while a
sleeping node will use 0.002 joules. Each sensor node will not
-participate in the next round if it's remaining energy is less than 12
+participate in the next round if its remaining energy is less than 12
joules. In all experiments the parameters are set as follows:
$R_s=5m$, $w_{\Theta}=1$, and $w_{U}=|P^2|$.
@@ -711,7 +716,7 @@ number of rounds on the average coverage ratio for 150 deployed nodes
for the three approaches. It can be seen that the three approaches
give similar coverage ratios during the first rounds. From the
9th~round the coverage ratio decreases continuously with the simple
-heuristic, while the other two strategies provide superior coverage to
+heuristic, while the other two strategies maintain a coverage ratio above
$90\%$ for five more rounds. The coverage ratio decreases when the number
of rounds increases, due to dead nodes.
Although some nodes are dead, thanks to strategy~1 or~2, other nodes
are preserved to ensure the
@@ -791,40 +796,31 @@ expected, the Strategy with One Leader is usually slightly better than
the second strategy, because the global optimization permits turning
off more sensors. Indeed, when there are two subregions, more nodes
remain awake near the border shared by them. Note that again as the
-number of rounds increase the two leader strategy becomes the most
+number of rounds increases, the two leader strategy becomes the best
performing, since it takes longer to have the two subregion networks
simultaneously disconnected.

-\subsection{The Network Lifetime}
+\subsection{The Number of Stopped Simulation Runs}

-We have defined the network lifetime as the time until all nodes have
-been drained of their energy or each sensor network monitoring a area
-becomes disconnected. In figure~\ref{fig6}, the network lifetime for
-different network sizes and for the three approaches is illustrated.
+We will now study the number of simulations which stopped due to
+network disconnection, per round, for each of the three approaches.
+Figure~\ref{fig6} illustrates the average number of stopped simulation
+runs per round for 150 deployed nodes. It can be observed that the
+heuristic is the approach which stops the earliest because the nodes
+are chosen randomly. Among the two proposed strategies, the
+centralized one first exhibits network disconnection. Thus, as
+explained previously, in the case of the strategy with several
+subregions, the optimization effectively continues as long as the
+network in a subregion is still connected. This longer partial
+coverage optimization participates in extending the lifetime.

\begin{figure}[h!]
-%\centering
-% \begin{multicols}{6}
\centering
-\includegraphics[scale=0.5]{TheNetworkLifetime.eps} %\\~ ~ ~(a)
-\caption{The Network Lifetime }
+\includegraphics[scale=0.55]{TheNumberofStoppedSimulationRuns150.eps}
+\caption{The Number of Stopped Simulation Runs against Rounds for 150 deployed nodes}
\label{fig6}
\end{figure}

-As highlighted by figure~\ref{fig6}, the network lifetime obviously
-increases when the size of the network increase, with our approaches
-that lead to the larger lifetime improvement. By choosing for each
-round the well suited nodes to cover the region of interest and by
-leaving sleep the other ones to be used later in next rounds, both
-proposed strategies efficiently prolong the lifetime. Comparison shows
-that the larger the sensor number, the more our strategies outperform
-the heuristic. Strategy~2, which uses two leaders, is the best one
-because it is robust to network disconnection in one subregion. It
-also means that distributing the algorithm in each node and
-subdividing the sensing field into many subregions, which are managed
-independently and simultaneously, is the most relevant way to maximize
-the lifetime of a network.
-
\subsection{The Energy Consumption}

In this experiment, we study the effect of the multi-hop communication
@@ -863,21 +859,22 @@ A sensor node has limited energy resources and computing power,
therefore it is important that the proposed algorithm has the shortest
possible execution time. The energy of a sensor node must be mainly
used for the sensing phase, not for the pre-sensing ones.
-Table~\ref{table1} gives the average execution times on a laptop of
-the decision phase during one round. They are given for the different
-approaches and various numbers of sensors. The lack of any
-optimization explains why the heuristic has very low execution times.
-Conversely, the Strategy with One Leader which requires to solve an
-optimization problem considering all the nodes presents redhibitory
-execution times. Moreover, increasing of 50~nodes the network size
-multiplies the time by almost a factor of 10. The Strategy with Two
-Leaders has more suitable times. We think that in distributed fashion
-the solving of the optimization problem in a subregion can be tackled
-by sensor nodes. Overall, to be able deal with very large networks a
+Table~\ref{table1} gives the average execution times in seconds of the
+decision phase (solving of the optimization problem) during one round,
+measured on a laptop. They are given for the different approaches and
+various numbers of sensors. The lack of any optimization explains why
+the heuristic has very low execution times. Conversely, the Strategy
+with One Leader, which requires solving an optimization problem
+considering all the nodes, presents prohibitive execution times.
+Moreover, increasing the network size by 50~nodes multiplies the time
+by almost a factor of 10. The Strategy with Two Leaders has more
+suitable times. We think that, in a distributed fashion, the solving of
+the optimization problem in a subregion can be tackled by sensor
+nodes. Overall, to be able to deal with very large networks, a
distributed method is clearly required.

\begin{table}[ht]
-\caption{The Execution Time(s) vs The Number of Sensors }
+\caption{The Execution Time(s) vs The Number of Sensors}

% title of Table
\centering
% used for centering table
\begin{tabular}{|c|c|c|c|}
% centered columns (4 columns)
\hline
%inserts double horizontal lines
-Sensors Number & Strategy & Strategy & Simple Heuristic \\ [0.5ex]
+Sensors Number & Strategy~2 & Strategy~1 & Simple Heuristic \\ [0.5ex]
& (with Two Leaders) & (with One Leader) & \\ [0.5ex]
%Case & Strategy (with Two Leaders) & Strategy (with One Leader) & Simple Heuristic \\ [0.5ex]
% inserts table
%heading
@@ -911,28 +908,41 @@ Sensors Number & Strategy & Strategy & Simple Heuristic \\ [0.5ex]
% is used to refer this table in the text
\end{table}

-\subsection{The Number of Stopped Simulation Runs}
+\subsection{The Network Lifetime}

-Finally, we will study the number of simulation which stopped due to
-network disconnection, per round for each of the three approaches.
-Figure~\ref{fig8} illustrates the number of stopped simulation runs
-per round for 150 deployed nodes. It can be observed that the
-heuristic is the approach which stops the earlier because the nodes
-are chosen randomly. Among the two proposed strategies, the
-centralized one first exhibits network disconnection. Thus, as
-explained previously, in case of the strategy with several subregions
-the optimization effectively continues as long as a network in a
-subregion is still connected. This longer partial coverage
-optimization participates in extending the lifetime.
+Finally, we have defined the network lifetime as the time until all
+nodes have been drained of their energy or each sensor network
+monitoring an area becomes disconnected. In figure~\ref{fig8}, the
+network lifetime is illustrated for different network sizes, for both
+the Strategy with Two Leaders and the Simple Heuristic.
+ We no longer consider the centralized Strategy with One Leader
+ because, as shown above, this strategy results in execution times
+ that quickly become unsuitable for a sensor network.

\begin{figure}[h!]
+%\centering
+% \begin{multicols}{6}
\centering
-\includegraphics[scale=0.55]{TheNumberofStoppedSimulationRuns150.eps}
-\caption{The Number of Stopped Simulation Runs against Rounds for 150 deployed nodes }
+\includegraphics[scale=0.5]{TheNetworkLifetime.eps} %\\~ ~ ~(a)
+\caption{The Network Lifetime}
\label{fig8}
\end{figure}

-\section{\uppercase{Conclusions and Future Works}}
+As highlighted by figure~\ref{fig8}, the network lifetime obviously
+increases when the size of the network increases, with our approach
+leading to the largest lifetime improvement. By choosing for each
+round the well-suited nodes to cover the region of interest, and by
+letting the other ones sleep so that they can be used later in the
+next rounds, our strategy efficiently prolongs the lifetime.
+Comparison shows that the larger the sensor number is, the more our
+strategy outperforms the Simple Heuristic. Strategy~2, which uses two
+leaders, is the best approach because it is robust to network
+disconnection in one subregion. It also means that distributing the
+algorithm in each node and subdividing the sensing field into many
+subregions, which are managed independently and simultaneously, is
+the most relevant way to maximize the lifetime of a network.
+
+\section{Conclusions and Future Works}
\label{sec:conclusion}

In this paper, we have addressed the problem of coverage and lifetime
@@ -951,18 +961,22 @@ subregion. The network lifetime in each subregion is divided into
rounds, each round consists of four phases: (i) Information Exchange,
(ii) Leader Election, (iii) an optimization-based Decision in order to
select the nodes remaining active for the last phase, and (iv)
-Sensing. The simulations results show the relevance of the proposed
+Sensing. The simulation results show the relevance of the proposed
protocol in terms of lifetime, coverage ratio, active sensors ratio,
energy saving, energy consumption, execution time, and the number of
stopped simulation runs due to network disconnection. Indeed, when
dealing with large and dense wireless sensor networks, a distributed
approach like the one we propose allows reducing the difficulty of a
-single global optimization problem by partitioning it in many smaller
-problems, one per subregion, that can be solved more easily. In
-future, we plan to study and propose a coverage protocol which
+single global optimization problem by partitioning it into many smaller
+problems, one per subregion, that can be solved more easily.
+
+In the future, we plan to study and propose a coverage protocol which
computes all active sensor schedules in a single round, using
optimization methods such as swarm optimization or evolutionary
-algorithms. The computation of all cover sets in one round is far more
+algorithms. This single round will still consist of 4 phases, but the
+ decision phase will compute the schedules for several sensing phases
+ which, aggregated together, define a kind of meta-sensing phase.
+The computation of all cover sets in one round is far more
difficult, but will reduce the communication overhead.

% use section* for acknowledgement
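One possible way to make the meta-sensing idea of the future work above more concrete, kept deliberately close to the per-round program of Section~\ref{cp}, is sketched below. This formulation is an editorial illustration only, not one given in the paper: $T$ denotes the number of sensing phases planned in a single decision, $E_R$ the energy consumed by an active node during one sensing phase, and $RE_j$ the remaining energy of sensor $j$.

\begin{equation*}
\begin{array}{ll}
\min & \displaystyle \sum_{t=1}^{T} \sum_{p \in P} \left( w_{\Theta}\,\Theta_{p,t} + w_{U}\,U_{p,t} \right)\\[2ex]
\mbox{subject to} & \displaystyle \sum_{j \in J} \alpha_{jp}\,X_{j,t} - \Theta_{p,t} + U_{p,t} = 1, \quad \forall p \in P,\ \forall t \in \{1,\dots,T\},\\[2ex]
 & \displaystyle \sum_{t=1}^{T} E_{R}\,X_{j,t} \leq RE_{j}, \quad \forall j \in J,\\[2ex]
 & X_{j,t} \in \{0,1\},\quad \Theta_{p,t} \geq 0,\quad U_{p,t} \in \{0,1\}.
\end{array}
\end{equation*}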