X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/JournalMultiRounds.git/blobdiff_plain/3dd98144e8d6f95ea84f80f0c173013c27308ffb..c48058fb972ae360e7cd3b759862db6055e16cb5:/elsarticle-template-num.tex
diff --git a/elsarticle-template-num.tex b/elsarticle-template-num.tex
index 6f0b38f..63e8018 100644
--- a/elsarticle-template-num.tex
+++ b/elsarticle-template-num.tex
@@ -119,20 +119,19 @@ Optimization, Scheduling.
 \end{frontmatter}
 \section{Introduction}
-
-\indent The fast developments in the low-cost sensor devices and
-wireless communications have allowed the emergence the WSNs. WSN
+\indent In recent years, there has been increasing development in wireless networking,
+Micro-Electro-Mechanical Systems (MEMS), and embedded computing technologies, which have led to the construction of low-cost, small-sized and low-power sensor nodes that can perform detection, computation and data communication in the surrounding environment. A WSN
includes a large number of small, limited-power sensors that can
sense, process and transmit data via wireless communication. They
communicate with each other by using multi-hop wireless communications, cooperate together to monitor the area of interest,
and the measured data can be reported to a monitoring center called a sink
for analysis~\cite{Sudip03}. There are several applications that use WSNs, including health, home, environmental, military, and industrial
-applications~\cite{Akyildiz02}. One of the major scientific research challenges in WSNs, which are addressed by a large number of literature during the last few years is to design energy efficient approches for coverage and connectivity in WSNs~\cite{conti2014mobile}. The coverage problem is one of the
+applications~\cite{Akyildiz02}. 
One of the major scientific research challenges in WSNs, which has been addressed by a large body of literature during the last few years, is to design energy-efficient approaches for coverage and connectivity in WSNs~\cite{conti2014mobile}. The coverage problem is one of the
fundamental challenges in WSNs~\cite{Nayak04} that consists in monitoring efficiently and continuously
-the area of interest. Thelimited energy of sensors represents the main challenge in the WSNs
-design~\cite{Sudip03}, where it is difficult to replace and/or recharge their batteries because the the area of interest nature (such
-as hostile environments) and the cost. So, it is necessary that a WSN
+the area of interest. The limited energy of sensors represents the main challenge in WSN
+design~\cite{Sudip03}, where it is impossible or inconvenient to replace and/or recharge their batteries because of the nature of the area of interest (such
+as remote, hostile or impractical environments) and the cost. So, it is necessary that a WSN be
deployed with high density because spatial redundancy can
then be exploited to increase the lifetime of the network. However,
turning on all the sensor nodes, which monitor the same region at the same time
@@ -157,7 +156,7 @@ activation of the sensors for the sensing phase of the current round.
 The remainder of the paper is organized as follows. The next section
 % Section~\ref{rw}
-reviews the related work in the field. In section~\ref{Pr}, the problem definition and some background are described. Section~\ref{pd} is devoted to
+reviews the related work in the field. In Section~\ref{prel}, the problem definition and some background are described. Section~\ref{pd} is devoted to
the DiLCO Protocol Description. Section~\ref{cp} gives the coverage model formulation, which is used to schedule the activation of sensors. 
Section~\ref{exp} shows the simulation results obtained using the discrete event simulator OMNeT++
@@ -169,127 +168,25 @@ for future works in Section~\ref{sec:conclusion}.
 \section{Related works}
 \label{rw}
-\indent This section is dedicated to the various approaches proposed
-in the literature for the coverage lifetime maximization problem,
-where the objective is to optimally schedule sensors' activities in
-order to extend network lifetime in WSNs. Cardei and Wu \cite{cardei2006energy} provide a taxonomy for coverage algorithms in WSNs according to several design choices:
-\begin{itemize}
-\item Sensors scheduling Algorithms, i.e. centralized or distributed/localized algorithms.
-\item The objective of sensor coverage, i.e. to maximize the network lifetime
-or to minimize the number of sensors during the sensing period.
-\item The homogeneous or heterogeneous nature of the
-nodes, in terms of sensing or communication capabilities.
-\item The node deployment method, which may be random or deterministic.
-\item Additional requirements for energy-efficient
-coverage and connected coverage.
-\end{itemize}
+In this section, we summarize the related works regarding coverage lifetime maximization and scheduling, and distinguish our DiLCO protocol from the works presented in the literature. Many centralized algorithms~\cite{Slijepcevic01powerefficient, abrams2004set, cardei2005improving, zorbas2010solving, pujari2011high, cardei2005energy, berman04} and distributed algorithms~\cite{Gallais06,Tian02,Ye03, Zhang05,HeinzelmanCB02, yardibi2010distributed, ChinhVu} for activity scheduling have been proposed in the literature, based on different assumptions and objectives.
+In centralized algorithms, a central controller makes all decisions and distributes the results to the sensor nodes. In distributed algorithms, the decision process is localized in each individual sensor node, and only information from neighboring nodes is used for the activity decision. 
- The independency in the cover set (i.e. whether the cover sets are disjoint or non-disjoint) \cite{zorbas2010solving} is another design choice that can be added to the above -list. - -\subsection{Centralized Approaches} -%{\bf Centralized approaches} -The major approach is -to divide/organize the sensors into a suitable number of set covers -where each set completely covers an interest region and to activate -these set covers successively. The centralized algorithms always provide nearly or close to optimal solution since the algorithm has global view of the whole network. However, its advantage of -this type of algorithms is that it requires very low processing power from the sensor nodes, which usually have -limited processing capabilities where the schdule of selected sensor nodes will be computed on the base stations and then sent it to the sensor nodes to apply it to monitor the area of interest. - -The first algorithms proposed in the literature consider that the cover -sets are disjoint: a sensor node appears in exactly one of the -generated cover sets. For instance, Slijepcevic and Potkonjak -\cite{Slijepcevic01powerefficient} propose an algorithm, which -allocates sensor nodes in mutually independent sets to monitor an area -divided into several fields. Their algorithm builds a cover set by -including in priority the sensor nodes, which cover critical fields, -that is to say fields that are covered by the smallest number of -sensors. The time complexity of their heuristic is $O(n^2)$ where $n$ -is the number of sensors. Abrams et al.~\cite{abrams2004set} design three approximation -algorithms for a variation of the set k-cover problem, where the -objective is to partition the sensors into covers such that the number -of covers that includes an area, summed over all areas, is maximized. -Their work builds upon previous work -in~\cite{Slijepcevic01powerefficient} and the generated cover sets do -not provide complete coverage of the monitoring zone. 
-\cite{cardei2005improving} propose a method to efficiently -compute the maximum number of disjoint set covers such that each set -can monitor all targets. They first transform the problem into a -maximum flow problem, which is formulated as a mixed integer -programming (MIP). Then their heuristic uses the output of the MIP to -compute disjoint set covers. Results show that this heuristic -provides a number of set covers slightly larger compared to -\cite{Slijepcevic01powerefficient} but with a larger execution time -due to the complexity of the mixed integer programming resolution. - -Zorbas et al. \cite{zorbas2010solving} presented a centralised greedy +Zorbas et al. \cite{zorbas2010solving} presented a centralised greedy algorithm for the efficient production of both node disjoint -and non-disjoint cover sets. Compared to algorithm's results of Slijepcevic and Potkonjak -\cite{Slijepcevic01powerefficient}, their heuristic produces more -disjoint cover sets with a slight growth rate in execution time. When producing non-disjoint cover sets, both Static-CCF and Dynamic-CCF provide cover sets offering longer network lifetime than those produced by -\cite{cardei2005energy}. Also, they require a smaller number of node participations in order to -achieve these results. - -In the case of non-disjoint algorithms \cite{pujari2011high}, sensors may -participate in more than one cover set. In some cases, this may -prolong the lifetime of the network in comparison to the disjoint -cover set algorithms, but designing algorithms for non-disjoint cover -sets generally induces a higher order of complexity. Moreover, in -case of a sensor's failure, non-disjoint scheduling policies are less -resilient and less reliable because a sensor may be involved in more -than one cover sets. For instance, Cardei et al.~\cite{cardei2005energy} -present a linear programming (LP) solution and a greedy approach to +and non-disjoint cover sets. 
Their algorithm produces more
+disjoint cover sets, with a slight increase in execution time. When producing non-disjoint cover sets, both Static-CCF and Dynamic-CCF provide cover sets offering a longer network lifetime, and they require a smaller number of node participations to achieve these results.
+
+Cardei et al.~\cite{cardei2005energy} presented a linear programming (LP) solution and a greedy approach to
extend the sensor network lifetime by organizing the sensors into a
maximal number of non-disjoint cover sets. Simulation results show that by
allowing sensors to participate in multiple sets, the network
-lifetime increases compared with related
-work~\cite{cardei2005improving}. In~\cite{berman04}, the
-authors have formulated the lifetime problem and suggested another
-(LP) technique to solve this problem. A centralized solution based on the Garg-K\"{o}nemann
-algorithm~\cite{garg98}, provably near
-the optimal solution, is also proposed.
-
-\subsection{Distributed approaches}
-%{\bf Distributed approaches}
-In distributed $\&$ localized coverage algorithms, the required computation to schedule the activity of sensor nodes will be done by the cooperation among the neighbours nodes. These algorithms may require more computation power for the processing by the cooperated sensor nodes but they are more scaleable for large WSNs. Normally, the localized and distributed algorithms result in non-disjoint set covers.
-
-Some distributed algorithms have been developed
-in~\cite{Gallais06,Tian02,Ye03,Zhang05,HeinzelmanCB02, yardibi2010distributed} to perform the
-scheduling so as to coverage preservation. Distributed algorithms typically operate in rounds for
-a predetermined duration. At the beginning of each round, a sensor
-exchanges information with its neighbors and makes a decision to either
-remain turned on or to go to sleep for the round. 
This decision is
-basically made on simple greedy criteria like the largest uncovered
-area \cite{Berman05efficientenergy}, maximum uncovered targets
-\cite{lu2003coverage}. In \cite{Tian02}, the scheduling scheme is divided
-into rounds, where each round has a self-scheduling phase followed by
-a sensing phase. Each sensor broadcasts a message containing the node ID
-and the node location to its neighbors at the beginning of each round. A
-sensor determines its status by a rule named off-duty eligible rule,
-which tells him to turn off if its sensing area is covered by its
-neighbors. A back-off scheme is introduced to let each sensor delay
-the decision process with a random period of time, in order to avoid
-simultaneous conflicting decisions between nodes and lack of coverage on any area.
-\cite{prasad2007distributed} defines a model for capturing
-the dependencies between different cover sets and proposes localized
-heuristic based on this dependency. The algorithm consists of two
-phases, an initial setup phase during which each sensor computes and
-prioritizes the covers and a sensing phase during which each sensor
-first decides its on/off status, and then remains on or off for the
-rest of the duration.
-
The authors in \cite{yardibi2010distributed} developed a distributed adaptive sleep scheduling algorithm (DASSA) for WSNs with partial coverage. DASSA does not require location information of sensors while maintaining connectivity and satisfying a user-defined coverage target. In DASSA, nodes use the residual energy levels and feedback from the sink for scheduling the activity of their neighbors. This feedback mechanism reduces the randomness in scheduling that would otherwise occur due to the absence of location information. 
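The cover-set construction used by several of the centralized approaches surveyed here can be made concrete with a short sketch. This is a hypothetical toy implementation of the general greedy idea (giving priority to "critical" fields, i.e. those monitored by the fewest remaining sensors), not the exact procedure of any cited algorithm; all identifiers are ours:

```python
# Illustrative sketch only: build disjoint cover sets, always serving
# first the "critical" field covered by the fewest remaining sensors.

def disjoint_covers(coverage):
    """coverage: dict mapping a sensor id to the set of fields it monitors.
    Returns a list of disjoint cover sets, each covering every field."""
    all_fields = set().union(*coverage.values())
    remaining = dict(coverage)                 # sensors not yet used
    covers = []
    while True:
        pool, cover, covered = dict(remaining), [], set()
        while covered != all_fields:
            uncovered = all_fields - covered
            # critical field: the uncovered field with the fewest candidates
            critical = min(uncovered,
                           key=lambda f: sum(1 for s in pool if f in pool[s]))
            candidates = [s for s in pool if critical in pool[s]]
            if not candidates:                 # cannot complete another cover
                return covers
            # among candidates, pick the one covering most uncovered fields
            best = max(candidates, key=lambda s: len(pool[s] & uncovered))
            cover.append(best)
            covered |= pool.pop(best)
        for s in cover:                        # disjointness: use each sensor once
            del remaining[s]
        covers.append(cover)
```

Non-disjoint variants relax the last step (a sensor may join several covers), which is where the higher complexity and the lower fault resilience mentioned above come from.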
-
-In \cite{ChinhVu}, the author proposed a novel distributed heuristic, called
-Distributed Energy-efficient Scheduling for k-coverage (DESK), which
-ensures that the energy consumption among the sensors is balanced and
-the lifetime maximized while the coverage requirement is maintained.
-This heuristic works in rounds, requires only 1-hop neighbor
-information, and each sensor decides its status (active or sleep)
-based on the perimeter coverage model proposed in
-\cite{Huang:2003:CPW:941350.941367}.
-Our Work, which is presented in~\cite{idrees2014coverage} proposed a coverage optimization protocol to improve the lifetime in
-heterogeneous energy wireless sensor networks. In this work, the coverage protocol distributed in each sensor node in the subregion but the optimization take place over the the whole subregion. We consider only distributing the coverage protocol over two subregions.
+lifetime increases.
+
+In \cite{he2012leveraging}, the authors proposed efficient centralized and distributed truncated greedy algorithms to improve the coverage and lifetime in WSNs by exploiting temporal-spatial correlations among sensory data. The basic idea is that a sensor node can be safely turned off when its sensory information can be inferred through prediction methods such as Bayesian inference.
+
+Zhou et al. \cite{zhou2009variable} have presented centralized and distributed algorithms to conserve energy by exploiting redundancy in the network. In particular, they addressed the problem of constructing a connected sensor cover in a sensor network model wherein each sensor can adjust its sensing and transmission range.
+ Wang et al. \cite{wang2009parallel} focused on the energy-efficient coverage optimization problem of WSNs. 
Based on the models of coverage and energy, stationary nodes are partitioned into clusters by entropy clustering and then a parallel particle swarm optimization is implemented by the cluster heads to maximize the coverage area and minimize the communication energy in each cluster. They combined maximum entropy clustering and parallel optimization, in which the stationary and mobile nodes can be organized to achieve the energy efficiency of WSNs.
+In \cite{yan2008design}, the authors have proposed a monitoring service for sensor networks based on a distributed energy-efficient sensing coverage protocol. Each node is able to dynamically decide its schedule to guarantee a certain degree of coverage with average energy consumption inversely proportional to the node density.
The works presented in \cite{Bang, Zhixin, Zhang} focus on Coverage-Aware, Distributed Energy-Efficient, and distributed clustering methods respectively, which aim to extend the network lifetime while the coverage is ensured.
S. Misra et al. \cite{Misra} proposed a localized algorithm for
@@ -304,50 +201,66 @@ Greedy Algorithm (DTGA) to solve it. They take advantage of both
temporal and spatial correlations between data sensed by different
sensors, and leverage prediction, to improve the lifetime.
-In \cite{xu2001geography}, Xu et al. proposed an algorithm, called Geographical Adaptive Fidelity (GAF), which uses geographic location information to divide the area of interest into fixed square grids. Within each grid, it keeps only one node staying awake to take the responsibility of sensing and communication.
+In \cite{ChinhVu}, the authors proposed a novel distributed heuristic, called
+Distributed Energy-efficient Scheduling for k-coverage (DESK), which
+ensures that the energy consumption among the sensors is balanced and
+the lifetime maximized while the coverage requirement is maintained. 
+This heuristic works in rounds, requires only 1-hop neighbor
+information, and each sensor decides its status (active or sleep)
+based on the perimeter coverage model proposed in
+\cite{Huang:2003:CPW:941350.941367}. Our previous work, presented in~\cite{idrees2014coverage}, proposed a coverage optimization protocol to improve the lifetime in heterogeneous energy wireless sensor networks. In that work, the coverage protocol is distributed among the sensor nodes of each subregion, but the optimization takes place over the whole subregion; only two subregions were considered.
+
+ In \cite{xu2001geography}, Xu et al. proposed an algorithm, called Geographical Adaptive Fidelity (GAF), which uses geographic location information to divide the area of interest into fixed square grids. Within each grid, only one node stays awake to take responsibility for sensing and communication.
+
+ The work in~\cite{esnaashari2010learning} proposed SALA, a scheduling algorithm based on learning automata, to deal with the problem of dynamic point coverage. In SALA each node in the network is equipped with a set of learning automata. The learning automata residing in each node try to learn the maximum sleep duration for the node in such a way that the detection rate of target points by the node does not degrade dramatically.
+
+ In~\cite{misra2011connectivity}, the authors addressed the problem of network coverage and connectivity and proposed an efficient solution to maintain coverage while preserving the connectivity of the network. The proposed solution aims to cover the area of interest, while minimizing the number of active nodes. The overlap region between two nodes varies according to the distance between them. If the distance between two nodes
+is maximized, the total coverage area of these nodes will also be maximized. 
Also, to preserve the connectivity of the network, each node should be in the communication range of at least one other node.
+
+Rizvi et al.~\cite{rizvi2012a1} have investigated the problem of constructing a Connected Dominating Set (CDS), which provides better sensing coverage in an energy-efficient manner. They have presented a CDS-based topology control algorithm, A1, which forms an energy-efficient
+virtual backbone. They proved that a single-phase topology construction with a smaller number of messages leads to an efficient algorithm.
-Some other approaches do not consider a synchronized and predetermined
-period of time where the sensors are active or not. Indeed, each
-sensor maintains its own timer and its wake-up time is randomized
-\cite{Ye03} or regulated \cite{cardei2005maximum} over time.
+In~\cite{tran2009novel}, the authors defined a maximum sensing coverage region (MSCR) problem and presented a novel gossip-based sensing-coverage-aware algorithm to solve it. In this approach, nodes gossip with their neighbors about their sensing coverage regions and decide locally to be an active or a sleeping node. In this method, a redundant node can reduce its activities whenever its sensing region is covered by enough neighbors. 
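The geometric observation in~\cite{misra2011connectivity} — that the total area covered by two nodes grows with the distance between them until their sensing disks no longer overlap — can be checked numerically. A hedged sketch (the function name is ours; equal sensing radii are assumed):

```python
import math

def union_area(d, r):
    """Area covered by two sensing disks of radius r whose centers are d apart."""
    if d >= 2 * r:                       # disjoint disks: no overlap at all
        return 2 * math.pi * r * r
    # lens (intersection) area of two equal circles at center distance d
    lens = (2 * r * r * math.acos(d / (2 * r))
            - (d / 2) * math.sqrt(4 * r * r - d * d))
    return 2 * math.pi * r * r - lens

# coincident disks cover pi*r^2; disjoint disks cover 2*pi*r^2,
# and the covered area grows monotonically in between
```

For example, `union_area(0, 1)` equals $\pi$ and `union_area(2, 1)` equals $2\pi$, matching the claim that maximizing the inter-node distance maximizes the total covered area.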
The main contributions of our DiLCO Protocol can be summarized as follows:
-(1) The high coverage ratio, (2) The reduced number of active nodes, (3) The distributed optimization over the subregions in the area of interest, (4) The distributed dynamic leader election at each round based on some priority factors that led to energy consumption balancing among the nodes in the same subregion, (5) The primary point coverage model to represent each sensor node in the network, (6) The activity scheduling based optimization on the subregion, which are based on the primary point coverage model to activate as less number as possible of sensor nodes to take the mission of the coverage in each subregion, (7) The very low energy consumption, (8) The higher network lifetime.
-\section{Preliminaries}
-\label{Pr}
+(1) The high coverage ratio, (2) The reduced number of active nodes, (3) The distributed optimization over the subregions in the area of interest, (4) The distributed dynamic leader election at each round, (5) The primary point coverage model to represent each sensor node in the network, (6) The activity scheduling based on optimization in each subregion, which relies on the primary point coverage model to activate as few sensor nodes as possible to take on the coverage mission in each subregion, (7) The energy consumption model, (8) The very low energy consumption, (9) The higher network lifetime.
+
+
+
+\section{Preliminaries}
+\label{prel}
+
+Several design issues should be taken into consideration for the coverage problem, such as coverage type, deployment method, coverage degree, coverage ratio, activity scheduling, network connectivity and network lifetime~\cite{wang2011coverage}.
 \subsection{Coverage Problem}
-The most discussed coverage problems in literature can be classified
+Coverage, which reflects how well a sensor field is monitored, is one of
+the most important performance metrics for WSNs. 
The most discussed coverage problems in literature can be classified
into three types \cite{ghosh2008coverage}\cite{wang2011coverage}: area coverage \cite{mulligan2010coverage}(also called full or blanket
-coverage), target coverage \cite{yang2014novel}, and barrier coverage \cite{HeShibo}. An area coverage problem is to find a minimum number of sensors to work, such that each physical point in the area is within the sensing range of at least one working sensor node.
+coverage), target coverage \cite{yang2014novel}, and barrier coverage \cite{HeShibo}. An area coverage problem is to find a minimum number of sensors to work, such that each physical point in the area is within the sensing range of at least one working sensor node. The target coverage problem is to cover only a finite number of
discrete points called targets. This type of coverage has mainly military
-applications. The problem of preventing an intruder from entering a region of interest is referred to as the barrier coverage .
+applications. The problem of preventing an intruder from entering a region of interest is referred to as barrier coverage.
+Our work will concentrate on area coverage through the design
and implementation of a strategy, which efficiently selects the active
-nodes that must maintain both sensing coverage and network
+nodes that must maintain both sensing coverage and network
connectivity and at the same time improve the lifetime of the wireless
-sensor network. But, requiring that all physical points of the
+sensor network. However, requiring that all physical points of the
considered region are covered may be too strict, especially where the
-sensor network is not dense. 
Our approach represents an area covered
by a sensor as a set of primary points and tries to maximize the total
-number of primary points that are covered in each round, while
+number of primary points that are covered in each round, while
minimizing overcoverage (points covered by multiple active sensors
simultaneously).
+\subsection{Deployment Method}
+Deployment reflects how a sensor network is constructed over the sensing field. There are two ways to deploy the sensor nodes over the sensing field: fixed and random. Fixed sensor placement can be used in a small sensing field, while for a large sensor network in a remote or hostile environment, random sensor placement is
+recommended. The deployment of a wireless sensor network can be dense or sparse. A dense deployment has a larger number of sensor nodes over the area of interest, while a sparse deployment has a lower number of sensor nodes over the sensing field. The dense deployment method is used in situations where it is very important for every event to be detected or when it is important to have multiple sensors cover an area. Sparse deployment might be used when the cost of the sensors makes a dense deployment very expensive, or to achieve maximum coverage using the minimum number of sensor nodes.
+
+\subsection{Coverage Degree}
+Coverage degree refers to the number of sensor nodes that cover a point in the sensing disk model. As the number of sensor nodes covering a point increases, the robustness of the coverage increases. Coverage degree represents one of the QoS requirements in WSNs.
-\subsection{Network Lifetime}
-Various definitions exist for the lifetime of a sensor
-network~\cite{die09}. The main definitions proposed in the literature are
-related to the remaining energy of the nodes or to the coverage percentage. 
-The lifetime of the network is mainly defined as the amount
-of time during which the network can satisfy its coverage objective (the
-amount of time that the network can cover a given percentage of its
-area or targets of interest). In this work, we assume that the network
-is alive until all nodes have been drained of their energy or the
-sensor network becomes disconnected, and we measure the coverage ratio
-during the WSN lifetime. Network connectivity is important because an
-active sensor node without connectivity towards a base station cannot
-transmit information on an event in the area that it monitors.
+\subsection{Coverage Ratio}
+Coverage ratio refers to how much of the total area of interest, or how many of the points in the sensing field, satisfy the QoS requirement of coverage degree. Coverage ratio can be seen as one of the QoS requirements in WSNs.
 \subsection{Activity Scheduling}
 Activity scheduling is to schedule the activation and deac-
@@ -365,7 +278,23 @@ of the time intervals to be activated. There are many sensor node scheduling met
algorithm during the initialization of each round and group-based sensor node scheduling in which, each node performs the scheduling algorithm only once after its deployment, and after the execution of the scheduling algorithm, all nodes will be allocated into different groups.
+\subsection{Network Connectivity}
+Network connectivity refers to ensuring that the WSN is connected with the sink. A connected WSN guarantees that every sensor node can send the sensed data to other sensor nodes and to the sink using multihop communication. So, when using the sensing disk coverage model, each sensor node can communicate with the others within its communication range.
+
+\subsection{Network Lifetime}
+Various definitions exist for the lifetime of a sensor
+network~\cite{die09}. 
The main definitions proposed in the literature are
+related to the remaining energy of the nodes or to the coverage percentage.
+The lifetime of the network is mainly defined as the amount
+of time during which the network can satisfy its coverage objective (the
+amount of time that the network can cover a given percentage of its
+area or targets of interest). In this work, we assume that the network
+is alive until all nodes have been drained of their energy or the
+sensor network becomes disconnected, and we measure the coverage ratio
+during the WSN lifetime. Network connectivity is important because an
+active sensor node without connectivity towards a base station cannot
+transmit information on an event in the area that it monitors.
 \section{ The DiLCO Protocol Description}
 \label{pd}
@@ -375,7 +304,6 @@ leader election and sensor activity scheduling for coverage preservation and ene
The main features of our DiLCO protocol are: i) it divides the area of interest into subregions by using a divide-and-conquer concept, ii) it requires only the information of the nodes within the subregion, iii) it divides the network lifetime into rounds, iv) it is based on an autonomous distributed decision by the nodes in the subregion to elect the leader, v) it applies activity scheduling based on optimization in the subregion, vi) it achieves energy consumption balancing among the nodes in the subregion by selecting different nodes as leader during the network lifetime, vii) it uses optimization to select the best representative set of sensors in the subregion by optimizing the coverage and the lifetime over the area of interest, viii) it uses our proposed primary point coverage model, which represents the sensing range of the sensor as a set of points used by our optimization algorithm, and ix) it uses a simple energy model that takes communication, sensing and computation energy consumption into account to evaluate the performance of our protocol. 
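The optimization step mentioned in features v) and vii) is formalized in Section~\ref{cp}. As a hedged illustration only — the notation $\alpha_{jp}$, $X_{j}$, $\Theta_{p}$, $U_{p}$ is ours and the authoritative model is the one of Section~\ref{cp} — an integer program of this shape would be consistent with the weights $w_{\Theta}$ and $w_{U}$ used later in Table~\ref{table3}:

```latex
% Illustrative sketch (our notation, not necessarily the exact model):
% X_j = 1 iff sensor j is activated; alpha_{jp} = 1 iff sensor j covers
% primary point p; Theta_p counts overcoverage, U_p flags undercoverage.
\begin{equation*}
\min \sum_{p \in P} \left( w_{\Theta}\,\Theta_{p} + w_{U}\,U_{p} \right)
\quad \text{s.t.} \quad
\sum_{j \in J} \alpha_{jp} X_{j} - \Theta_{p} + U_{p} = 1, \;\; \forall p \in P,
\end{equation*}
% with X_j in {0,1}, Theta_p a nonnegative integer, U_p in {0,1}.
```

With this shape, a point covered by $k \geq 1$ active sensors yields $\Theta_{p} = k - 1$ and $U_{p} = 0$, while an uncovered point forces $U_{p} = 1$; a large $w_{U}$ relative to $w_{\Theta}$ then makes undercoverage far costlier than overcoverage.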
-
 \subsection{ Assumptions and Models}
 We consider a randomly and uniformly deployed network consisting of
 static wireless sensors. The wireless sensors are deployed in high
@@ -454,7 +382,7 @@ $X_{13}=( p_x + R_s * (0), p_y + R_s * (\frac{-\sqrt{2}}{2})) $.
 \centering
 \includegraphics[scale=0.20]{fig21.pdf}\\~ ~ ~ ~ ~(a)
 \includegraphics[scale=0.20]{fig22.pdf}\\~ ~ ~ ~ ~(b)
-\includegraphics[scale=0.20]{principles13.eps}\\~ ~ ~ ~ ~(c)
+\includegraphics[scale=0.20]{principles13.pdf}\\~ ~ ~ ~ ~(c)
%\includegraphics[scale=0.10]{fig25.pdf}\\~ ~ ~(d)
%\includegraphics[scale=0.10]{fig26.pdf}\\~ ~ ~(e)
%\includegraphics[scale=0.10]{fig27.pdf}\\~ ~ ~(f)
@@ -470,7 +398,7 @@ then our coverage protocol will be implemented in each subregion
simultaneously. Our DiLCO protocol works in a round fashion, as shown in Figure~\ref{fig2}.
 \begin{figure}[ht!]
 \centering
-\includegraphics[width=95mm]{FirstModel.eps} % 70mm
+\includegraphics[width=95mm]{FirstModel.pdf} % 70mm
 \caption{DiLCO protocol}
 \label{fig2}
 \end{figure}
@@ -499,12 +427,12 @@ We define two types of packets to be used by our DiLCO protocol.
There are five possible statuses for each sensor node in the network:
 \begin{enumerate}[(a)]
-\item LISTENING: Sensor has not yet decided.
-\item ACTIVE: Sensor is active.
-\item SLEEP: Sensor decides to turn off.
-\item COMMUNICATION: Sensor is Transmitting or Receiving packet.
+\item LISTENING: Sensor is waiting for a decision (to be active or not)
+\item COMPUTATION: Sensor applies the optimization process as leader
+\item ACTIVE: Sensor is active
+\item SLEEP: Sensor is turned off
+\item COMMUNICATION: Sensor is transmitting or receiving a packet
 \end{enumerate}
-
 Below, we describe each phase in more detail.
 \subsubsection{Information Exchange Phase}
@@ -740,8 +668,9 @@ round. 
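The primary point test behind the coverage model is a plain disk-membership check. The sketch below is illustrative only: the protocol uses 13 primary points whose full layout is given earlier (e.g. $X_{13}$); here we generate a reduced, hypothetical subset of the same form, and all names are ours:

```python
import math

def primary_points(px, py, r_s):
    """A reduced, hypothetical subset of primary points for a sensor at
    (px, py): the center, the four axis points at distance r_s, and the
    four axis points at distance r_s*sqrt(2)/2 (the form of X_13 above).
    The actual protocol uses 13 points."""
    k = r_s * math.sqrt(2) / 2
    return [(px, py),
            (px + r_s, py), (px - r_s, py), (px, py + r_s), (px, py - r_s),
            (px + k, py), (px - k, py), (px, py + k), (px, py - k)]

def covered(sensor, point, r_s, eps=1e-9):
    """True iff `point` lies inside the sensing disk of `sensor` (radius r_s)."""
    return math.dist(sensor, point) <= r_s + eps
```

Every primary point of a sensor is, by construction, covered by that sensor itself; the optimization then counts, for each primary point, how many *active* sensors cover it.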
\section{Simulation Results and Analysis}
\label{exp}
-In this section, we conducted a series of simulations to evaluate the
-efficiency and the relevance of our approach, using the discrete event
+\subsection{Simulation framework, energy consumption model and performance metrics}
+In this subsection, we describe the series of simulations conducted to evaluate the
+efficiency and the relevance of our DiLCO protocol, using the discrete event
 simulator OMNeT++ \cite{varga}. The simulation parameters are summarized in Table~\ref{table3}.
@@ -766,10 +695,10 @@ Sensing Field & $(50 \times 25)~m^2 $ \\
%\hline
Nodes Number & 50, 100, 150, 200 and 250~nodes \\
%\hline
-Initial Energy & 50-75~joules \\
+Initial Energy & 500-700~joules \\
%\hline
-Sensing Period & 20 Minutes \\
-$E_{thr}$ & 12.2472 Joules\\
+Sensing Period & 60 Minutes \\
+$E_{thr}$ & 36 Joules\\
$R_s$ & 5~m \\
%\hline
$w_{\Theta}$ & 1 \\
@@ -782,17 +711,19 @@ $w_{U}$ & $|P^2|$
% is used to refer this table in the text
\end{table}
-A simulation
-ends when all the nodes are dead or the sensor network becomes
-disconnected (some nodes may not be able to send, to a base station, an
-event they sense).
-Our proposed coverage protocol uses a simple energy model defined by~\cite{ChinhVu} that based on ~\cite{raghunathan2002energy} with some modification as energy consumption model for each wireless sensor node in the network and for all the simulations.
+25 simulation runs are performed with different network topologies. The results presented hereafter are the average of these 25 runs.
+We performed simulations for five different densities varying from 50 to 250~nodes. Experimental results are obtained from randomly generated networks in which nodes are deployed over a $(50 \times 25)~m^2 $ sensing field. 
More precisely, the deployment is controlled at a coarse scale in order to ensure that the deployed nodes can cover the sensing field with the given sensing range.\\
+
+Our DiLCO protocol comes in five versions: DiLCO-2, DiLCO-4, DiLCO-8, DiLCO-16, and DiLCO-32, corresponding to $2$, $4$, $8$, $16$ or $32$ subregions (leaders).
-The modification is to add the energy consumption for receiving the packets as well as we ignore the part that related to the sensing range because we used fixed sensing range. The new energy consumption model will take inro account the energy consumption for communication (packet transmission/reception), data sensing and computational energy.
+We use the energy consumption model proposed by~\cite{ChinhVu} and based on~\cite{raghunathan2002energy}, with slight modifications.
+The energy consumption for sending/receiving the packets is added, whereas the part related to the sensing range is removed because we consider a fixed sensing range.
+% We took into account the energy consumption needed for the heavy computation performed when the algorithm is executed on the sensor node.
+%The new energy consumption model will take into account the energy consumption for communication (packet transmission/reception), the radio of the sensor node, data sensing, computational energy of the Micro-Controller Unit (MCU) and high computation energy of the MCU.
-There are four subsystems in each sensor node that consume energy: the micro-controller
unit (MCU) subsystem which is capable of computation, communication subsystem which is responsible for
transmitting/receiving messages, sensing subsystem that collects data, and the powe suply which supplies power to the complete sensor node ~\cite{raghunathan2002energy}. In our model, we will concentrate on first three main subsystems and each subsystem can be turned on or off depending on the current status of the sensor which is summarized in Table~\ref{table4}. 
+For our energy consumption model, we refer to the sensor node (Medusa II) which uses an Atmel AVR ATmega103L microcontroller~\cite{raghunathan2002energy}. The typical architecture of a sensor is composed of four subsystems: the MCU subsystem which is capable of computation, the communication subsystem (radio) which is responsible for
transmitting/receiving messages, the sensing subsystem that collects data, and the power supply which powers the complete sensor node~\cite{raghunathan2002energy}. Each of the first three subsystems can be turned on or off depending on the current status of the sensor. The power consumption (expressed in milliwatts) for the different statuses of the sensor is summarized in Table~\ref{table4}. The energy needed to send or receive one bit is equal to $0.2575$~mWs.
\begin{table}[ht]
\caption{The Energy Consumption Model}
@@ -803,7 +734,7 @@ transmitting/receiving messages, sensing subsystem that collects data, and the p
% centered columns (4 columns)
\hline
%inserts double horizontal lines
-Sensor mode & MCU & Radio & Sensing & Power (mW) \\ [0.5ex]
+Sensor mode & MCU & Radio & Sensing & Power (mW) \\ [0.5ex]
\hline
% inserts single horizontal line
Listening & ON & ON & ON & 20.05 \\
@@ -813,7 +744,9 @@ Active & ON & OFF & ON & 9.72 \\
\hline
Sleep & OFF & OFF & OFF & 0.02 \\
\hline
- \multicolumn{4}{|c|}{Energy needed to send/receive a 1-bit} & 0.2575\\
+Computation & ON & ON & ON & 26.83 \\
+%\hline
+%\multicolumn{4}{|c|}{Energy needed to send/receive a 1-bit} & 0.2575\\
\hline
\end{tabular}
@@ -821,255 +754,215 @@ Sleep & OFF & OFF & OFF & 0.02 \\
% is used to refer this table in the text
\end{table}
-For the simplicity, we ignore the energy needed to turn on the
radio, to start up the sensor node, the transition from mode to another, etc. We also do not consider the need of collecting sensing data. Thus, when a sensor becomes active (i.e., it already decides it status), it can turn its radio off to save battery. 
Since our couverage optimization protocol uses two types of the packets, the size of the INFO-Packet and Status-Packet are 112 bits and 16 bits respectively. The value of energy spent to send a message shown in Table~\ref{table4} is obtained by using the equation in ~\cite{raghunathan2002energy} to calculate the energy cost for transmitting messages and we propose the same value for receiving the packets.
+For the sake of simplicity, we ignore the energy needed to turn on the
radio, to start up the sensor node, the transition from one mode to another, etc.
+%We also do not consider the need of collecting sensing data.
+Thus, when a sensor becomes active (i.e., it has already decided its status), it can turn its radio off to save battery. The DiLCO protocol uses two types of packets for communication. The sizes of the INFO-Packet and the Status-Packet are 112 bits and 24 bits, respectively.
+The value of the energy spent to send a 1-bit message is obtained by using the equation in~\cite{raghunathan2002energy} to calculate the energy cost for transmitting messages, and we assume the same value for receiving packets.
+The initial energy of each node is randomly set in the interval $[500-700]$~joules. A sensor node will not participate in the next round if its remaining energy is less than $E_{th}=36$~joules, the minimum energy needed for the node to stay alive during one round. This value has been computed by multiplying the power consumed in the active state (9.72~mW) by the duration of one round (3600 seconds). According to the interval of initial energy, a sensor may be alive during at most 20 rounds.\\
-We performed simulations for five different densities varying from 50 to 250~nodes. Experimental results
-were obtained from randomly generated networks in which nodes are
-deployed over a $(50 \times 25)~m^2 $ sensing field. 
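As a quick sanity check, the threshold $E_{th}$ and the resulting bound on a sensor's lifetime can be recomputed from the power values of the energy consumption model; the sketch below is ours (constant and variable names are illustrative, not part of the simulator):

```python
# Cross-check of the round-survival threshold E_th and the bound on the
# number of rounds, using values from the energy consumption model table.
ACTIVE_POWER_W = 9.72e-3       # active mode draws 9.72 mW
ROUND_DURATION_S = 3600        # one round lasts 60 minutes

# Minimum energy (joules) needed to stay alive during one active round.
e_th = ACTIVE_POWER_W * ROUND_DURATION_S   # = 34.992 J (the text rounds this to 36 J)

# Upper bound on lifetime for the maximum initial energy of 700 J.
max_rounds = int(700 // e_th)              # at most 20 rounds

print(e_th, max_rounds)
```

The computed 34.992~J is consistent with the stated $E_{th}=36$~joules up to rounding, and 700~J of initial energy indeed allows at most 20 such rounds.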
More precisely, the deployment is controlled at a coarse scale in order to ensure that the deployed nodes can fully cover the sensing - field with the given sensing range. -The energy of each node in a network is initialized randomly within the -range 50-75~joules. Each sensor node will not participate in the next round if its remaining energy is less than $E_{thr}$, the minimum energy needed for the node to stay alive during one round. - -In the simulations, we introduce the following performance metrics to -evaluate the efficiency of our approach: +In the simulations, we introduce the following performance metrics to evaluate the efficiency of our approach: \begin{enumerate}[i)] -\item {Coverage Ratio (CR):} the coverage ratio measures how much the area of a sensor field is covered. In our case, we treated the sensing fields as a grid, and used each grid point as a sample point +\item {{\bf Coverage Ratio (CR)}:} the coverage ratio measures how much the area of a sensor field is covered. In our case, we treated the sensing fields as a grid, and used each grid point as a sample point for calculating the coverage. The coverage ratio can be calculated by: \begin{equation*} \scriptsize -\mbox{CR}(\%) = \frac{\mbox{$n$}}{\mbox{$N$}} \times 100. +\mbox{CR}(\%) = \frac{\mbox{$n^t$}}{\mbox{$N$}} \times 100. \end{equation*} -Where: $n$ is the Number of Covered Grid points by the Active Sensors of the all subregions of the network during the current sensing phase and $N$ is total number of grid points in the sensing field of the network. -The accuracy of this method depends on the distance between grids. In our -simulations, the sensing field has been divided into 50 by 25 grid points, which means -there are $51 \times 26~ = ~ 1326$ points in total. Therefore, for our simulations, the error in the coverage calculation is less than ~ 1 $\% $. 
+Where: $n^t$ is the number of grid points covered by the active sensors of all subregions during round $t$ in the current sensing phase and $N$ is the total number of grid points in the sensing field of the network.
+%The accuracy of this method depends on the distance between grids. In our
+%simulations, the sensing field has been divided into 50 by 25 grid points, which means
+%there are $51 \times 26~ = ~ 1326$ points in total.
+% Therefore, for our simulations, the error in the coverage calculation is less than ~ 1 $\% $.

-\item{ Number of Active Sensors Ratio(ASR):} It is important to have as few active nodes as possible in each round,
+\item{{\bf Number of Active Sensors Ratio (ASR)}:} It is important to have as few active nodes as possible in each round,
in order to minimize the communication overhead and maximize the
-network lifetime.The Active Sensors Ratio is defined as follows:
+network lifetime. The Active Sensors Ratio is defined as follows:
\begin{equation*}
\scriptsize
-\mbox{ASR}(\%) = \sum\limits_{r=1}^R \left( \frac{\mbox{$A_r$}}{\mbox{$S$}} \times 100 \right) .
+\mbox{ASR}(\%) = \frac{\sum\limits_{r=1}^R \mbox{$A_r^t$}}{\mbox{$S$}} \times 100 .
\end{equation*}
-Where: $A_r$ is the number of active sensors in the subregion $r$ during the current sensing phase, $S$ is the total number of sensors in the network, and $R$ is the total number of the subregions in the network.
+Where: $A_r^t$ is the number of active sensors in the subregion $r$ during round $t$ in the current sensing phase, $S$ is the total number of sensors in the network, and $R$ is the total number of subregions in the network.

-\item {Energy Saving Ratio(ESR):} is defined by:
-\begin{equation*}
-\scriptsize
-\mbox{ESR}(\%) = \sum\limits_{r=1}^R \left( \frac{\mbox{${ES}_r$}}{\mbox{$S$}} \times 100 \right) .
-\end{equation*}
-Where: ${ES}_r$ is the number of alive sensors in subregion $r$ during this round. 
The longer the ratio is, the more redundant sensor nodes are switched off, and consequently the longer the network may live.
+\item {{\bf Network Lifetime}:} we define the network lifetime as the time until the coverage ratio drops below a predefined threshold. We denote by $Lifetime95$ (respectively $Lifetime50$) the amount of time during which the network can satisfy an area coverage greater than $95\%$ (respectively $50\%$). We assume that the network
+is alive until all nodes have been drained of their energy or the
+sensor network becomes disconnected. Network connectivity is important because an
+active sensor node without connectivity towards a base station cannot
+transmit information on an event in the area that it monitors.
+

-\item {Energy Consumption:}
+\item {{\bf Energy Consumption}:}
- Energy Consumption (EC) can be seen as the total energy consumed by the sensors during the lifetime of the network divided by the total number of rounds. The EC can be computed as follow: \\
+ Energy Consumption (EC) can be seen as the total energy consumed by the sensors during $Lifetime95$ or $Lifetime50$ divided by the number of rounds. The EC can be computed as follows: \\
\begin{equation*}
\scriptsize
-\mbox{EC} = \frac{\mbox{$\sum\limits_{d=1}^D \left( E^c_d + E^l_d + E^a_d + E^s_d \right)$ }}{\mbox{$D$}} .
+\mbox{EC} = \frac{\mbox{$\sum\limits_{d=1}^D \left( E^c_d + E^l_d + E^a_d + E^s_d + E^p_d \right)$ }}{\mbox{$D$}} .
\end{equation*}
+
%\begin{equation*}
%\scriptsize
%\mbox{EC} = \frac{\mbox{$\sum\limits_{d=1}^D E^c_d$}}{\mbox{$D$}} + \frac{\mbox{$\sum\limits_{d=1}^D
%E^l_d$}}{\mbox{$D$}} + \frac{\mbox{$\sum\limits_{d=1}^D E^a_d$}}{\mbox{$D$}} +
%\frac{\mbox{$\sum\limits_{d=1}^D E^s_d$}}{\mbox{$D$}}.
%\end{equation*}
-Where: D is the total number of rounds.
-The total energy consumed by the sensors (EC) comes through taking into consideration four main energy factors, which are $E^c_d$, $E^l_d$, $E^a_d$, and $E^s_d$. 
-The factor $E^c_d$ represents the energy consumption resulting from wireless communications is calculated by taking into account the energy spent by all the nodes when transmitting and
-receiving packets during round $d$. The $E^l_d$ represents the energy consumed by all the sensors during the listening mode before taking the decision to go Active or Sleep in round $d$. The $E^a_d$ and $E^s_d$ are refered to enegy consumed by the turned on and turned off sensors in the period of sensing during the round $d$.
+Where: $D$ is the number of rounds during $Lifetime95$ or $Lifetime50$.
+The total energy consumed by the sensors (EC) takes into consideration five main energy factors, namely $E^c_d$, $E^l_d$, $E^a_d$, $E^s_d$ and $E^p_d$.
+The energy consumption $E^c_d$ for wireless communications is calculated by taking into account the energy spent by all the nodes while transmitting and
+receiving packets during round $d$. $E^l_d$ represents the energy consumed by all the sensors during the listening mode, before taking the decision to go Active or Sleep in round $d$. $E^a_d$ and $E^s_d$ refer to the energy consumed in the active mode and in the sleeping mode, respectively. $E^p_d$ refers to the energy consumed by the computation (processing) needed to solve the integer program.
+
+%\item {Network Lifetime:} we have defined the network lifetime as the time until all
%nodes have been drained of their energy or each sensor network monitoring an area has become disconnected.

-\item {Network Lifetime:} we have defined the network lifetime as the time until all
-nodes have been drained of their energy or each sensor network monitoring an area has become disconnected.
-\item {Execution Time:} a sensor node has limited energy resources and computing power,
+\item {{\bf Execution Time}:} a sensor node has limited energy resources and computing power,
therefore it is important that the proposed algorithm has the shortest possible execution time. 
The energy of a sensor node must be mainly used for the sensing phase, not for the pre-sensing ones.
-\item {The number of stopped simulation runs:} we will study the percentage of simulations, which are stopped due to network disconnections per round.
+\item {{\bf Stopped simulation runs}:} A simulation
ends when the sensor network becomes
disconnected (some nodes are dead and are not able to send information to the base station). We report the number of simulations that are stopped due to network disconnections, together with the round at which each disconnection occurs.
\end{enumerate}
+
+
\subsection{Performance Comparison for different subregions}
\label{sub1}
-In this subsection, we will study the performance of our approach for a different number of subregions (Leaders).
-10~simulation runs are performed with different network topologies for each node density. The results presented hereafter are the average of these 10 runs.
-Our approach are called strategy 1 ( With 1 Leader), strategy 2 ( With 2 Leaders), strategy 3 ( With 4 Leaders), and strategy 4 ( With 8 Leaders), strategy 5 ( With 16 Leaders) and strategy 6 ( With 32 Leaders). The strategy 1 ( With 1 Leader) is a centralized approach on all the area of the interest, while strategy 2 ( With 2 Leaders), strategy 3 ( With 4 Leaders), strategy 4 ( With 8 Leaders), strategy 5 ( With 16 Leaders) and strategy 6 ( With 32 Leaders) are distributed on two, four, eight, sixteen, and thirty-two subregions respectively.
-\subsubsection{The impact of the number of rounds on the coverage ratio}
-In this experiment, Figure~\ref{fig3} shows the impact of the
-number of rounds on the average coverage ratio for 150 deployed nodes
-for the four strategies.
+In this subsection, we study the performance of our DiLCO protocol for different numbers of subregions (leaders). 
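Before turning to the comparison, the coverage ratio and lifetime metrics defined in the enumeration above can be recomputed from per-round simulation logs; the following sketch is ours (not the simulator's actual code), sampling the $(50 \times 25)~m^2$ field on integer-spaced grid points with $R_s = 5$~m:

```python
import math

# Illustrative recomputation of the CR and Lifetime metrics from per-round
# logs. The 51 x 26 = 1326 sample points correspond to integer-spaced grid
# points of the (50 x 25) m^2 field; R_s = 5 m as in the parameter table.
def coverage_ratio(active_positions, width=50, height=25, r_s=5.0):
    total = covered = 0
    for x in range(width + 1):
        for y in range(height + 1):
            total += 1
            if any(math.hypot(x - sx, y - sy) <= r_s
                   for sx, sy in active_positions):
                covered += 1
    return 100.0 * covered / total          # CR(%) = n^t / N * 100

def lifetime(cr_per_round, threshold):
    # Lifetime95 / Lifetime50: number of rounds from the start during which
    # the coverage ratio stays at or above the given threshold.
    rounds = 0
    for cr in cr_per_round:
        if cr < threshold:
            break
        rounds += 1
    return rounds
```

For example, `lifetime([99, 97, 96, 94, 60], 95)` returns 3 rounds, while the same log evaluated at the $50\%$ threshold returns 5.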
+The DiLCO-1 protocol is a centralized approach applied to the whole area of interest, while DiLCO-2, DiLCO-4, DiLCO-8, DiLCO-16 and DiLCO-32 are distributed over two, four, eight, sixteen, and thirty-two subregions respectively. We do not report simulation results for the DiLCO-1 protocol because it requires such a high execution time to compute the decision that a node consumes all its energy before producing the solution of the optimization problem.
+
+\subsubsection{Coverage Ratio}
+In this experiment, Figure~\ref{fig3} shows the average coverage ratio for 150 deployed nodes.
\parskip 0pt
\begin{figure}[h!]
\centering
- \includegraphics[scale=0.43] {R1/CR.eps}
+ \includegraphics[scale=0.5] {R1/CR.pdf}
\caption{The impact of the number of rounds on the coverage ratio for 150 deployed nodes}
\label{fig3}
\end{figure}
-It can be seen that the six strategies
-give nearly similar coverage ratios during the first three rounds.
-As shown in the figure ~\ref{fig3}, when we increase the number of sub-regions, It will leads to cover the area of interest for a larger number of rounds. Coverage ratio decreases when the number of rounds increases due to dead nodes. Although some nodes are dead,
-thanks to strategy~5 and strategy~6, other nodes are preserved to ensure the
-coverage. Moreover, when we have a dense sensor network, it leads to
-maintain the full coverage for a larger number of rounds. Strategy~5 and strategy~6 are
-slightly more efficient than other strategies, because they subdivides
-the area of interest into 16~subregions and 32~subregions if one of the subregions becomes
-disconnected, the coverage may be still ensured in the remaining subregions.
-
-\subsubsection{The impact of the number of rounds on the active sensors ratio}
- Figure~\ref{fig4} shows the average active nodes ratio versus the number of rounds
-for 150 deployed nodes.
+It can be seen that the DiLCO protocol (with 4, 8, 16 and 32 subregions) gives nearly similar coverage ratios during the first thirty rounds. 
+The DiLCO-2 protocol gives a coverage ratio similar to the other versions for the first 10 rounds; the ratio then decreases until the death of the network at round 18, because DiLCO-2 consumes more energy and is more sensitive to network disconnection.
+As shown in figure~\ref{fig3}, as the number of subregions increases, the coverage of the area of interest is preserved for a larger number of rounds. The coverage ratio decreases when the number of rounds increases, due to dead nodes. Although some nodes are dead,
+thanks to the DiLCO-8, DiLCO-16 and DiLCO-32 protocols, other nodes are preserved to ensure the coverage. Moreover, a dense sensor network helps to maintain the coverage for a larger number of rounds. The DiLCO-8, DiLCO-16 and DiLCO-32 protocols are
+slightly more efficient than the other protocols because they subdivide the area of interest into 8, 16 and 32~subregions: if one of the subregions becomes disconnected, the coverage may still be ensured in the remaining subregions.
+
+\subsubsection{Active Sensors Ratio}
+ Figure~\ref{fig4} shows the average active nodes ratio for 150 deployed nodes.
\begin{figure}[h!]
\centering
-\includegraphics[scale=0.5]{R1/ASR.eps}
+\includegraphics[scale=0.5]{R1/ASR.pdf}
\caption{The impact of the number of rounds on the active sensors ratio for 150 deployed nodes }
\label{fig4}
\end{figure}
-
-The results presented in figure~\ref{fig4} show the superiority of
-the proposed strategy~5 and strategy~6, in comparison with the other strategies. The
-strategy with less number of leaders uses less active nodes than the other strategies, which uses a more number of leaders until the last rounds, because it uses central control on
-the larger area of the sensing field. The advantage of the strategy~5 and strategy~6 are
-that even if a network is disconnected in one subregion, the other ones
-usually continues the optimization process, and this extends the lifetime of the network. 
-
-\subsubsection{The impact of the number of rounds on the energy saving ratio}
-In this experiment, we consider a performance metric linked to energy. Figure~\ref{fig5} shows the average energy saving ratio versus number of rounds for all six strategies and for 150 deployed nodes.
-\begin{figure}[h!]
-\centering
-\includegraphics[scale=0.5]{R1/ESR.eps}
-\caption{The impact of the number of rounds on the energy saving ratio for 150 deployed nodes}
-\label{fig5}
-\end{figure}
-
-The simulation results show that our strategies allow to efficiently
-save energy by turning off some sensors during the sensing phase. As
-expected, the strategy~5 and strategy~6 are usually slightly better than
-the other strategies, because the distributed optimization on larger number of subregions permits to minimize the energy needed for communication and It led to save more energy obviously. Indeed, when there are more than one subregion more nodes remain awake near the border shared by them but the energy consumed by these nodes have no effect in comparison with the energy consumed by the communication. Note that again as the number of rounds increases the strategy~5 and strategy~6 becomes the most performing one, since it takes longer to have the Sixteen or Thirty-two subregion networks simultaneously disconnected.
+The results presented in figure~\ref{fig4} show that increasing the number of subregions leads to an increase in the number of active nodes. The DiLCO-16 and DiLCO-32 protocols use a larger number of active nodes, but they preserve the coverage for a larger number of rounds. The advantage of the DiLCO-16 and DiLCO-32 protocols is that even if the network is disconnected in one subregion, the other ones usually continue the optimization process, and this extends the lifetime of the network.

\subsubsection{The percentage of stopped simulation runs}
-Figure~\ref{fig6} illustrates the percentage of stopped simulation
-runs per round for 150 deployed nodes. 
+Figure~\ref{fig6} illustrates the percentage of stopped simulation runs per round for 150 deployed nodes.
\begin{figure}[h!]
\centering
-\includegraphics[scale=0.43]{R1/SR.eps}
+\includegraphics[scale=0.43]{R1/SR.pdf}
\caption{The percentage of stopped simulation runs compared to the number of rounds for 150 deployed nodes }
\label{fig6}
\end{figure}
-It can be observed that the strategy~1 is the approach which stops first because it apply the centralized control on all the area of interest that is why it is first exhibits network disconnections. Thus, as explained previously, in case of the strategy~5 and strategy~6 with several subregions the optimization effectively continues as long as a network in a subregion is still connected. This longer partial coverage optimization participates in extending the network lifetime.
+
+It can be observed that DiLCO-2 is the approach that stops first: since it applies the optimization on only two subregions of the area of interest, it is the first to exhibit network disconnections.
+Thus, as explained previously, in the case of DiLCO-16 and DiLCO-32, with several subregions the optimization effectively continues as long as the network in a subregion is still connected. This longer partial coverage optimization contributes to extending the network lifetime.
\subsubsection{The Energy Consumption}
-In this experiment, we study the effect of the energy consumed by the sensors during the communication, listening, active, and sleep modes for different network densities. Figure~\ref{fig7} illustrates the energy consumption for the different
-network sizes and for the four proposed stratgies.
+We measure the energy consumed by the sensors during the communication, listening, computation, active, and sleep modes for different network densities, and compare it for the different numbers of subregions. Figures~\ref{fig95} and~\ref{fig50} illustrate the energy consumption for different network sizes for $Lifetime95$ and $Lifetime50$. 
+ \begin{figure}[h!] \centering -\includegraphics[scale=0.5]{R1/EC.eps} -\caption{The Energy Consumption} -\label{fig7} +\includegraphics[scale=0.5]{R1/EC95.pdf} +\caption{The Energy Consumption for Lifetime95} +\label{fig95} \end{figure} -The results show that the strategy with eight leaders is the most competitive from the energy -consumption point of view. The other strategies have a high energy consumption due to many -communications as well as the energy consumed during the listening before taking the decision. In fact, a distributed method on the subregions greatly reduces the number of communications and the time of listening so thanks to the partitioning of the initial -network in several independent subnetworks. - -\subsubsection{The impact of the number of sensors on execution time} -In this experiment, we study the the impact of the size of the network on the excution time of the our distributed optimization approach. Table~\ref{table1} gives the average execution times in seconds for the decision phase (solving of the optimization problem) during one round. They are given for the different approaches and various numbers of sensors. We can see from Table~\ref{table1}, that the strategy~6 has very low execution times in comparison with other strategies, because it distributed on larger number of small subregions. Conversely, the strategy~1 which requires to solve an optimization problem considering all the nodes presents high execution times. -%Moreover, increasing the network size by 50~nodes multiplies the time by almost a factor of 10. -The strategy~6 has more suitable times. We think that in distributed fashion the solving of the optimization problem in a subregion can be tackled by sensor nodes. Overall, to be able to deal with very large networks, a distributed method is clearly required. 
-\begin{table}[ht]
-\caption{The Execution Time(s) vs The Number of Sensors}
-% title of Table
+The results show that DiLCO-16 and DiLCO-32 are the most competitive from the energy consumption point of view, although their energy consumption grows with the network size faster than that of DiLCO-2, DiLCO-4 and DiLCO-8. The other approaches have a high energy consumption due to the energy consumed during the different modes of the sensor node.\\
+
+As shown in Figures~\ref{fig95} and~\ref{fig50}, DiLCO-2 consumes more energy than the other versions of DiLCO, especially for large network sizes. This is easy to understand: the bigger the number of sensors involved in the integer program, the larger the computation time to solve the optimization problem, and the higher the energy consumed during the communication.
+\begin{figure}[h!]
\centering
-% used for centering table
-\begin{tabular}{|c|c|c|c|c|c|}
-%\begin{tcolorbox}[tab2,tabularx={X|Y|Y|Y|Y|Y|Y}]
-% centered columns (4 columns)
- \hline
-%inserts double horizontal lines
-\cellcolor[gray]{0.8} Strategy & \multicolumn{5}{|c|}{\cellcolor[gray]{0.8} The Number of Sensors } \\
- \cellcolor[gray]{0.8} Name &\cellcolor[gray]{0.8} 50 & \cellcolor[gray]{0.8} 100 & \cellcolor[gray]{0.8} 150 & \cellcolor[gray]{0.8} 200 & \cellcolor[gray]{0.8} 250 \\ [0.5ex]
-\hline\hline
-% inserts single horizontal line
-\cellcolor[gray]{0.8} Strategy~1 & 0.1848 & 1.8957 & 12.2119 & 152.2581 & 1542.5396 \\
-\hline
-\cellcolor[gray]{0.8} Strategy~2 & 0.0466 & 0.2190 & 0.6323 & 2.2853 & 5.6561 \\
-\hline
+\includegraphics[scale=0.5]{R1/EC50.pdf}
+\caption{The Energy Consumption for Lifetime50}
+\label{fig50}
+\end{figure}
+In fact, a distributed method over several subregions greatly reduces the number of communications and the time spent in listening and computation, thanks to the partitioning of the initial network into several independent subnetworks. 
-\cellcolor[gray]{0.8} Strategy~3 & 0.0118 & 0.0445 & 0.0952 & 0.1849 & 0.3148 \\
-\hline
+\subsubsection{Execution Time}
+In this experiment, we study the impact of the size of the network on the execution time of our distributed optimization approach. Figure~\ref{fig8} gives the average execution times in seconds for the decision phase (solving of the optimization problem) during one round. They are given for the different approaches and various numbers of sensors.
+The original execution time is computed on a DELL laptop with an Intel Core i3 2370M (2.4 GHz) processor (2 cores) and a MIPS (Million Instructions Per Second) rate equal to 35330. To be consistent with the use of a sensor node based on the Atmel AVR ATmega103L microcontroller (6 MHz), with a MIPS rate equal to 6, to run the optimization resolution, this time is multiplied by 2944.2 $\left( \frac{35330}{2 \times 6}\right)$ and reported on Figure~\ref{fig8} for different network sizes.
-\cellcolor[gray]{0.8} Strategy~4 & 0.0041 & 0.0127 & 0.0271 & 0.0484 & 0.0723 \\
-\hline
+\begin{figure}[h!]
+\centering
+\includegraphics[scale=0.5]{R1/T.pdf}
+\caption{Execution Time (in seconds)}
+\label{fig8}
+\end{figure}
-\cellcolor[gray]{0.8} Strategy~5 & 0.0025 & 0.0037 & 0.0061 & 0.0083 & 0.0126 \\
-\hline
-\cellcolor[gray]{0.8} Strategy~6 & 0.0008 & 0.0022 & 0.0022 & 0.0032 & 0.0035 \\
-\hline
-%inserts single line
-\end{tabular}
-%\end{tcolorbox}
-\label{table1}
-% is used to refer this table in the text
-\end{table}
+We can see from figure~\ref{fig8} that DiLCO-32 has very low execution times in comparison with the other DiLCO versions, because the optimization is distributed over a larger number of small subregions. Conversely, DiLCO-2, which requires solving an optimization problem involving half of the nodes in each subregion, presents high execution times.
+
+DiLCO-32 achieves more suitable execution times, but at the same time it turns on more redundant nodes. 
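The time rescaling just described can be reproduced in a few lines (constants taken from the text; the function name is ours):

```python
# Rescaling of the decision-phase execution time from the laptop used for the
# simulations to an ATmega103L-class sensor node (constants from the text).
LAPTOP_MIPS = 35330        # Intel Core i3 2370M, total over 2 cores
LAPTOP_CORES = 2
SENSOR_MIPS = 6            # Atmel AVR ATmega103L at 6 MHz

# Ratio between per-core laptop speed and sensor speed.
scale = (LAPTOP_MIPS / LAPTOP_CORES) / SENSOR_MIPS   # ~ 2944.2

def sensor_execution_time(laptop_seconds):
    """Estimated time (s) for the sensor node to solve the same instance."""
    return laptop_seconds * scale
```

A 10~ms laptop solve time thus corresponds to roughly 29~s on the sensor node, which explains why the variants with few, large subregions quickly become impractical.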
We think that in a distributed fashion the solving of the optimization problem in a subregion can be tackled by sensor nodes. Overall, to be able to deal with very large networks, a distributed method is clearly required.

\subsubsection{The Network Lifetime}
-Finally, in figure~\ref{fig8}, the
-network lifetime for different network sizes and for the four strategies is illustrated.
+In figures~\ref{figLT95} and \ref{figLT50}, the network lifetimes $Lifetime95$ and $Lifetime50$, respectively, are illustrated for different network sizes.
+
\begin{figure}[h!]
\centering
-\includegraphics[scale=0.5]{R1/LT.eps}
-\caption{The Network Lifetime }
-\label{fig8}
+\includegraphics[scale=0.5]{R1/LT95.pdf}
+\caption{The Network Lifetime for $Lifetime95$}
+\label{figLT95}
\end{figure}

-We see that the strategy 1 results in execution times that quickly become unsuitable for a sensor network as well as the energy consumed during the communication seems to be huge because it used a centralised control on the all the area of interest.
+We see that DiLCO-2 results in execution times that quickly become unsuitable for a sensor network, and in a high energy consumption during the communication, because it is distributed over only two subregions.

-As highlighted by figure~\ref{fig8}, the network lifetime obviously
-increases when the size of the network increases, with our approach strategy~6
+As highlighted by figures~\ref{figLT95} and \ref{figLT50}, the network lifetime obviously
+increases when the size of the network increases, with our DiLCO-16 protocol
leading to the largest lifetime improvement. By choosing, for each round, the best suited nodes to cover the area of interest and by letting
the other ones sleep so that they can be used in subsequent rounds,
-our strategy~6 efficiently prolonges the network lifetime. Comparison shows that
-the Strategy~6, which uses 32 leaders, is the best one because it is robust to network disconnection during the network lifetime. 
It also means that distributing the protocol in each node and
-subdividing the sensing field into many subregions, which are managed independently and simultaneously, is the most relevant way to maximize the lifetime of a network.
+our DiLCO-16 protocol efficiently extends the network lifetime, because the benefit of the optimization with 16 subregions is greater than with the 32 subregions of DiLCO-32. The DiLCO-32 protocol puts a larger number of sensor nodes in active mode, especially near the borders of the subdivisions.
+Comparison shows that the DiLCO-16 protocol, which uses 16 leaders, is the best one because it uses a smaller number of active nodes during the network lifetime compared with DiLCO-32. It also means that distributing the protocol in each node and subdividing the sensing field into many subregions, which are managed independently and simultaneously, is the most relevant way to maximize the lifetime of a network.
+\begin{figure}[h!]
+\centering
+\includegraphics[scale=0.5]{R1/LT50.pdf}
+\caption{The Network Lifetime for $Lifetime50$}
+\label{figLT50}
+\end{figure}

-\subsection{Performance Comparison for Different Primary Point Models}
+\subsection{Performance Study for Primary Point Models}
\label{sub2}
-Based on the results, which are conducted in subsection~\ref{sub1}, we will study the performance of the Strategy~4 approach for a different primary point models. The objective of this comparison is to select the suitable primary point model to be used by our DiLCO protocol.
-50~simulation runs are performed with different network topologies for each node density. The results presented hereafter are the average of these 50 runs.
-In this comparisons, our approaches are called Model~1( With 5 Primary Points), Model~2 ( With 9 Primary Points), Model~3 ( With 13 Primary Points), Model~4 ( With 17 Primary Points), and Model~5 ( With 21 Primary Points). 
In this subsection, we study the performance of the DiLCO-16 approach for different primary point models. The objective of this comparison is to select the most suitable primary point model to be used by our DiLCO protocol.

In this comparison, our DiLCO-16 protocol is used with five models: Model~1 (5 primary points), Model~2 (9 primary points), Model~3 (13 primary points), Model~4 (17 primary points), and Model~5 (21 primary points).

\subsubsection{Coverage Ratio}
Figure~\ref{fig33} shows the average coverage ratio for 150 deployed nodes.
\parskip 0pt
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{R2/CR.pdf}
\caption{The impact of the number of rounds on the coverage ratio for 150 deployed nodes}
\label{fig33}
\end{figure}

All models provide very similar coverage ratios during the network lifetime, with a very small superiority for the models with a higher number of primary points. Moreover, when the number of rounds increases, the coverage ratio produced by Model~3, Model~4, and Model~5 decreases in comparison with Model~1 and Model~2, because more energy is consumed while listening for the decision when the optimization has to handle a larger number of primary points.
As shown in figure~\ref{fig33}, the coverage ratio decreases when the number of rounds increases, due to dead nodes. Even so, Model~2 is slightly more efficient than the other models, because it offers the best balance between the number of rounds and the coverage ratio.

\subsubsection{Active Sensors Ratio}
Figure~\ref{fig44} shows the average active nodes ratio for 150 deployed nodes.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{R2/ASR.pdf}
\caption{The impact of the number of rounds on the active sensors ratio for 150 deployed nodes}
\label{fig44}
\end{figure}

The results presented in figure~\ref{fig44} show the superiority of Model~1 in comparison with the other models: the model with the smallest number of primary points uses fewer active nodes than the models that use more primary points to represent the sensing area. According to the results presented in figure~\ref{fig33}, although Model~1 continues for a larger number of rounds, it has a lower coverage ratio than the other models. The advantage of Model~2 is to use fewer active nodes in each round than Model~3, Model~4, and Model~5, which lets it continue for a larger number of rounds and extends the network lifetime, while keeping a better coverage ratio than Model~1 and an acceptable number of rounds.
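The five models above differ only in how many primary points represent each sensor's sensing disk. The sketch below generates such a point set; the layout chosen here (the disk center plus two concentric rings) is an illustrative assumption, not the paper's exact placement, and the function name is hypothetical.

```python
import math

def primary_points(cx, cy, rs, k):
    """Place k primary points inside a sensing disk of radius rs centered
    at (cx, cy): the disk center plus points evenly spread on two
    concentric rings (an illustrative layout, not the paper's exact one)."""
    pts = [(cx, cy)]
    inner = (k - 1) // 2      # points on the ring of radius rs/2
    outer = (k - 1) - inner   # points on the ring of radius rs
    for n, r in ((inner, 0.5 * rs), (outer, rs)):
        for i in range(n):
            a = 2.0 * math.pi * i / n
            pts.append((cx + r * math.cos(a), cy + r * math.sin(a)))
    return pts

# Model 1 .. Model 5 use 5, 9, 13, 17 and 21 primary points per sensor.
for k in (5, 9, 13, 17, 21):
    assert len(primary_points(0.0, 0.0, 5.0, k)) == k
```

With more primary points, the optimization checks coverage at more locations per sensor, which explains the trade-off between coverage precision and decision cost discussed above.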
\subsubsection{The percentage of stopped simulation runs}
In this study, we want to show the effect of increasing the number of primary points on the number of stopped simulation runs per round. Figure~\ref{fig66} illustrates the percentage of stopped simulation runs per round for 150 deployed nodes.

\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{R2/SR.pdf}
\caption{The percentage of stopped simulation runs compared to the number of rounds for 150 deployed nodes}
\label{fig66}
\end{figure}

As shown in figure~\ref{fig66}, when the number of primary points increases, the percentage of stopped simulation runs per round increases as well. The reason is that more sensors die when the number of primary points increases. We observe that Model~1 is better than the other models because it conserves more energy by turning on fewer sensors during the sensing phase, but at the same time it preserves coverage with a lower coverage ratio than the other models. Model~2 again seems to be the most suitable for wireless sensor networks.

\subsubsection{The Energy Consumption}
In this experiment, we study the effect of increasing the number of primary points on the energy consumed by the wireless sensor network for different network densities. Figures~\ref{fig2EC95} and \ref{fig2EC50} illustrate the energy consumption for different network sizes, for $Lifetime95$ and $Lifetime50$ respectively.

\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{R2/EC95.pdf}
\caption{The Energy Consumption with $Lifetime95$}
\label{fig2EC95}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{R2/EC50.pdf}
\caption{The Energy Consumption with $Lifetime50$}
\label{fig2EC50}
\end{figure}

The results presented in figures~\ref{fig2EC95} and \ref{fig2EC50} show that the energy consumed by the network per round increases when the number of primary points increases, because the optimization-based decision takes more time, which leads to more energy being consumed during the listening mode.
The results show that Model~1 is the most competitive from the energy consumption point of view, but the worst one from the coverage ratio point of view. The other models have a higher energy consumption due to their larger number of primary points, which increases the energy consumed during the listening mode before the optimization produces its solution. In fact, Model~2 is a good candidate to be used in a wireless sensor network because it preserves a good coverage ratio with a suitable energy consumption in comparison with the other models.

\subsubsection{Execution Time}
In this experiment, we study the impact of the increase in primary points on the execution time of our DiLCO protocol. Figure~\ref{figt} gives the average execution time in seconds for the decision phase (solving of the optimization problem) during one round, for various numbers of sensors.

\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{R2/T.pdf}
\caption{The Execution Time (s) vs The Number of Sensors}
\label{figt}
\end{figure}

We can see from figure~\ref{figt} that Model~1 has the lowest execution times. Moreover, Model~2 combines suitable execution times, coverage ratio, and energy savings, which lets it continue for a larger number of rounds and extend the network lifetime. We think that a good primary point model is one that balances the coverage ratio against the number of rounds achieved during the lifetime of the network.

\subsubsection{The Network Lifetime}
Finally, we study the effect of increasing the number of primary points on the lifetime of the network. In figures~\ref{fig2LT95} and \ref{fig2LT50}, the network lifetimes $Lifetime95$ and $Lifetime50$ are illustrated for different network sizes.

\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{R2/LT95.pdf}
\caption{The Network Lifetime for $Lifetime95$}
\label{fig2LT95}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{R2/LT50.pdf}
\caption{The Network Lifetime for $Lifetime50$}
\label{fig2LT50}
\end{figure}

As highlighted by figures~\ref{fig2LT95} and \ref{fig2LT50}, the network lifetime obviously increases with the size of the network, and Model~1 leads to the largest lifetime improvement. Model~1, which uses the smallest number of primary points, is the best one from the energy consumption point of view, but it is also the worst one from the coverage ratio point of view. Our proposed Model~2 efficiently prolongs the network lifetime with a good coverage ratio in comparison with the other models.
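The lifetime metrics used in these comparisons can be computed from a per-round coverage trace: $Lifetime95$ (resp. $Lifetime50$) counts the rounds during which the coverage ratio stays at or above 95\% (resp. 50\%). A minimal sketch, where the helper function and the sample trace are hypothetical:

```python
def network_lifetime(coverage_per_round, threshold):
    """Number of rounds during which the coverage ratio (in percent) stays
    at or above the threshold (hypothetical helper, not the paper's code)."""
    for rnd, ratio in enumerate(coverage_per_round):
        if ratio < threshold:
            return rnd
    return len(coverage_per_round)

# Hypothetical per-round coverage trace (percent) for one simulation run.
coverage = [99.8, 99.5, 99.1, 97.0, 94.2, 80.0, 60.3, 45.1]

print(network_lifetime(coverage, 95.0))  # Lifetime95 -> 4 rounds
print(network_lifetime(coverage, 50.0))  # Lifetime50 -> 7 rounds
```

The same trace yields both metrics, which is why a model can rank first for $Lifetime50$ while ranking lower for $Lifetime95$.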
\subsection{Performance Comparison for Different Approaches}
Based on the results obtained in the previous two subsections,~\ref{sub1} and \ref{sub2}, we found that our DiLCO-16 and DiLCO-32 protocols with Model~2 are the best candidates to be compared with two other approaches. The first approach, called DESK~\cite{ChinhVu}, is a fully distributed coverage algorithm. The second approach, called GAF~\cite{xu2001geography}, consists in dividing the region into fixed squares; during the decision phase, in each square, one sensor is chosen to remain on during the sensing phase.

\subsubsection{Coverage Ratio}
Figure~\ref{fig333} shows the average coverage ratio for 150 deployed nodes.

\parskip 0pt
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{R3/CR.pdf}
\caption{The coverage ratio for 150 deployed nodes}
\label{fig333}
\end{figure}

DESK and GAF provide a slightly better coverage ratio, 99.99\% and 99.91\% respectively, against the 99.1\% and 99.2\% produced by DiLCO-16 and DiLCO-32 for the lowest number of rounds. This is due to the fact that our DiLCO protocol versions put redundant sensors in sleep mode using optimization (which slightly decreases the coverage ratio), while more nodes remain active with DESK and GAF.
Moreover, when the number of rounds increases, the coverage ratio produced by DESK and GAF decreases, due to dead nodes. However, our DiLCO-16 and DiLCO-32 protocols maintain a good coverage, because they optimize both the coverage and the lifetime of the wireless sensor network by selecting the best representative sensor nodes to take the responsibility of coverage during the sensing phase; this lets the network continue for a larger number of rounds and prolongs its lifetime. Although some nodes die, the sensor activity scheduling of our protocol chooses other nodes to ensure the coverage of the area of interest.
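GAF's square-based decision, one sensor kept awake per fixed virtual square, can be sketched as follows. This is a simplified sketch: the energy-based selection and the sample deployment are assumptions, and real GAF additionally ties the square size to the radio range and rotates the active node over time.

```python
def gaf_active_nodes(nodes, square_size):
    """Keep awake, in each fixed virtual square, the node with the highest
    residual energy (simplified sketch of GAF's square-based decision)."""
    best = {}
    for node_id, x, y, energy in nodes:
        cell = (int(x // square_size), int(y // square_size))
        if cell not in best or energy > best[cell][1]:
            best[cell] = (node_id, energy)
    return sorted(node_id for node_id, _ in best.values())

# Hypothetical deployment: (id, x, y, residual energy).
nodes = [(1, 2.0, 3.0, 80.0),   # same square as node 2
         (2, 4.0, 4.5, 95.0),
         (3, 12.0, 1.0, 60.0)]  # a different square
print(gaf_active_nodes(nodes, 10.0))  # -> [2, 3]: one active node per square
```

Because every occupied square always keeps one node on, GAF activates more nodes overall than an optimization that exploits sensing overlap across square boundaries, which matches the active-nodes comparison below.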
\subsubsection{Active Sensors Ratio}
It is important to have as few active nodes as possible in each round, in order to minimize the energy consumption and maximize the network lifetime. Figure~\ref{fig444} shows the average active nodes ratio for 150 deployed nodes.

\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{R3/ASR.pdf}
\caption{The active sensors ratio for 150 deployed nodes}
\label{fig444}
\end{figure}
The results presented in figure~\ref{fig444} show the superiority of the proposed DiLCO-16 and DiLCO-32 protocols in comparison with the other approaches. We can observe that DESK and GAF have 37.5\% and 44.5\% active nodes, whereas our DiLCO-16 and DiLCO-32 protocols compete perfectly with only 17.4\%, 24.8\%, and 26.8\% active nodes during the first 14 rounds. Then, as the number of rounds increases, our DiLCO-16 and DiLCO-32 protocols have a larger number of active nodes than DESK and GAF, especially from round~35, because they give a better coverage ratio than the other approaches.
We see that DESK and GAF have fewer active nodes from rounds~35 and~32 respectively, because many of their nodes have died due to the high energy consumption of the redundant nodes during the sensing phase.

\subsubsection{The percentage of stopped simulation runs}
This experiment compares our DiLCO-16 and DiLCO-32 protocols with the two other approaches from the point of view of the stopped simulation runs per round. Figure~\ref{fig666} illustrates the percentage of stopped simulation runs per round for 150 deployed nodes.

\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{R3/SR.pdf}
\caption{The percentage of stopped simulation runs compared to the number of rounds for 150 deployed nodes}
\label{fig666}
\end{figure}

It can be observed that DESK is the approach that stops first, because it consumes more energy for communication and turns on a large number of redundant nodes during the sensing phase.
Our DiLCO-16 and DiLCO-32 protocols have fewer stopped simulation runs than DESK and GAF, because they distribute the optimization over several subregions in order to optimize the coverage and the lifetime of the network, activating a smaller number of nodes during the sensing phase; this extends the network lifetime while preserving coverage. The optimization effectively continues as long as the network in a subregion is still connected.

\subsubsection{The Energy Consumption}
In this experiment, we study the energy consumed by the wireless sensor network during the communication, computation, listening, active, and sleep modes for different network densities, and compare it with the other approaches. Figures~\ref{fig3EC95} and \ref{fig3EC50} illustrate the energy consumption for different network sizes, for $Lifetime95$ and $Lifetime50$ respectively.

\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{R3/EC95.pdf}
\caption{The Energy Consumption with $Lifetime95$}
\label{fig3EC95}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{R3/EC50.pdf}
\caption{The Energy Consumption with $Lifetime50$}
\label{fig3EC50}
\end{figure}
The results show that our DiLCO-16 and DiLCO-32 protocols are the most competitive from the energy consumption point of view. The other approaches have a higher energy consumption due to the activation of a larger number of redundant nodes, as well as the energy consumed during the different modes of the sensor nodes. In fact, a distributed method over the subregions greatly reduces the amount of communication and the listening time, thanks to the partitioning of the initial network into several independent subnetworks.

\subsubsection{The Network Lifetime}
In this experiment, we observe the superiority of our DiLCO-16 and DiLCO-32 protocols over the two other approaches in prolonging the network lifetime. In figures~\ref{fig3LT95} and \ref{fig3LT50}, the network lifetimes $Lifetime95$ and $Lifetime50$ are illustrated for different network sizes.

\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{R3/LT95.pdf}
\caption{The Network Lifetime for $Lifetime95$}
\label{fig3LT95}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{R3/LT50.pdf}
\caption{The Network Lifetime for $Lifetime50$}
\label{fig3LT50}
\end{figure}

As highlighted by figures~\ref{fig3LT95} and \ref{fig3LT50}, the network lifetime obviously increases with the size of the network, and our DiLCO-16 and DiLCO-32 protocols maximize the lifetime of the network compared with the other approaches. By choosing, for each round, the nodes best suited to cover the area of interest, while optimizing the coverage and the lifetime of the network, and by letting the other nodes sleep so that they can be used in later rounds, our DiLCO-16 and DiLCO-32 protocols efficiently prolong the network lifetime. The comparison shows that our DiLCO-16 and DiLCO-32 protocols, which use distributed optimization over the subregions, are the best ones because they are robust to network disconnection during the network lifetime and consume less energy than the other approaches.
It also means that distributing the algorithm in each node and subdividing the sensing field into many subregions, which are managed independently and simultaneously, is the most relevant way to maximize the lifetime of a network.

\section{Conclusion and Future Works}
In this paper, we have addressed the problem of the coverage and the lifetime optimization in wireless sensor networks. This is a key issue, as sensor nodes have limited resources in terms of memory, energy, and computational power. To cope with this problem, the field of sensing is divided into smaller subregions using the concept of the divide-and-conquer method, and then the DiLCO protocol optimizes the coverage and lifetime performances in each subregion. The proposed protocol combines two efficient techniques: network leader election and sensor activity scheduling, where the challenges include how to select the most efficient leader in each subregion and the best representative active nodes that will optimize the network lifetime while taking the responsibility of covering the corresponding subregion. The network lifetime is divided into rounds, and each round consists of four phases: (i) Information Exchange, (ii) Leader Election, (iii) an optimization-based Decision in order to select the nodes remaining active for the last phase, and (iv) Sensing. The simulations show the relevance of the proposed DiLCO
protocol in terms of lifetime, coverage ratio, active sensors ratio, energy consumption, execution time, and the number of stopped simulation runs due to network disconnection. Indeed, when dealing with large and dense wireless sensor networks, a distributed approach like the one we propose allows to reduce the difficulty of a single global optimization problem by partitioning it into many smaller problems, one per subregion, that can be solved more easily.

In future work, we plan to study and propose a coverage optimization protocol which computes all active sensor schedules at once, using optimization methods. The round will still consist of four phases, but the decision phase will compute the schedules for several sensing phases at once; this will make the optimization problem more difficult, but will reduce the communication overhead.

\end{document}