X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/Sensornets15.git/blobdiff_plain/5442cc8a9128527983d0c1f0f7259531eca9330b..refs/heads/master:/Example.tex

diff --git a/Example.tex b/Example.tex
index b2b473f..a314c77 100644
--- a/Example.tex
+++ b/Example.tex
@@ -56,19 +56,8 @@ email: ali.idness@edu.univ-fcomte.fr,\\ $\lbrace$karine.deschinkel, michel.salom
scheduling performed by each elected leader. This two-step process takes place periodically, in order to choose a small set of nodes remaining active for sensing during a time slot. Each set is built to ensure coverage at a low
- energy cost, allowing to optimize the network lifetime.
-%More precisely, a
- %period consists of four phases: (i)~Information Exchange, (ii)~Leader
- %Election, (iii)~Decision, and (iv)~Sensing. The decision process, which
-% results in an activity scheduling vector, is carried out by a leader node
-% through the solving of an integer program.
-% MODIF - BEGIN
- Simulations are conducted using the discret event simulator
- OMNET++. We refer to the characterictics of a Medusa II sensor for
- the energy consumption and the computation time. In comparison with
- two other existing methods, our approach is able to increase the WSN
- lifetime and provides improved coverage performance. }
-% MODIF - END
+ energy cost, which optimizes the network lifetime. Simulations are conducted using the discrete event simulator OMNET++. We refer to the characteristics of a Medusa II sensor for the energy consumption and the computation time. In comparison with two other existing methods, our approach increases the WSN lifetime and provides improved coverage performance. }
+
%\onecolumn
@@ -86,15 +75,15 @@ challenges in WSNs, which has been addressed by a large amount of literature
during the last few years, is the design of energy efficient approaches for coverage and connectivity~\cite{conti2014mobile}. Coverage reflects how well a sensor field is monitored. On the one hand we want to monitor the area of
-interest in the most efficient way~\cite{Nayak04}, \textcolor{blue}{which means
-  that we want to maintain the best coverage as long as possible}. On the other
+interest in the most efficient way~\cite{Nayak04}, which means
+  that we want to maintain the best coverage as long as possible. On the other
hand we want to use as little energy as possible. Sensor nodes are battery-powered with no means of recharging or replacing, usually due to environmental (hostile or unpractical environments) or cost reasons. Therefore, it is desired that the WSNs are deployed with high densities so as to exploit the overlapping sensing regions of some sensor nodes to save energy by turning off some of them during the sensing phase to prolong the network
-lifetime. \textcolor{blue}{A WSN can use various types of sensors such as
+lifetime. A WSN can use various types of sensors such as
\cite{ref17,ref19}: thermal, seismic, magnetic, visual, infrared, acoustic, and radar. These sensors are capable of observing different physical conditions such as: temperature, humidity, pressure, speed, direction,
@@ -102,7 +91,7 @@ lifetime. \textcolor{blue}{A WSN can use various types of sensors such as
kinds of objects, and mechanical stress levels on attached objects. Consequently, there is a wide range of WSN applications such as~\cite{ref22}: health-care, environment, agriculture, public safety, military, transportation
-  systems, and industry applications.}
+  systems, and industry applications.
In this paper we design a protocol that focuses on the area coverage problem with the objective of maximizing the network lifetime. Our proposition, the
@@ -127,13 +116,13 @@ framework of the DiLCO approach and the coverage problem formulation.
In this paper we made more realistic simulations by taking into account the characteristics of a Medusa II sensor ~\cite{raghunathan2002energy} to measure the energy consumption and the computation time. We have implemented two other
-existing \textcolor{blue}{and distributed approaches} (DESK ~\cite{ChinhVu}, and
+existing and distributed approaches (DESK ~\cite{ChinhVu}, and
GAF ~\cite{xu2001geography}) in order to compare their performances with our
-approach. \textcolor{blue}{We focus on DESK and GAF protocols for two reasons.
+approach. We focus on DESK and GAF protocols for two reasons.
First our protocol is inspired by both of them: DiLCO uses a regular division of the area of interest as in GAF and a temporal division in rounds as in DESK. Second, DESK and GAF are well-known protocols, easy to implement, and
- often used as references for comparison}. We also focus on performance
+ often used as references for comparison. We also focus on performance
analysis based on the number of subregions.
% MODIF - END
@@ -174,7 +163,7 @@ and the set of active sensor nodes is decided at the beginning of each period
requirements (e.g. area monitoring, connectivity, power efficiency). For instance, Jaggi et al. \cite{jaggi2006} address the problem of maximizing network lifetime by dividing sensors into the maximum number of disjoint subsets
-such that each subset can ensure both coverage and connectivity. A greedy
+so that each subset can ensure both coverage and connectivity. A greedy
algorithm is applied once to solve this problem and the computed sets are activated in succession to achieve the desired network lifetime. Vu \cite{chin2007}, Padmatvathy et al. \cite{pc10}, propose algorithms working in a
@@ -201,13 +190,13 @@ of information can be huge. {\it In order to be suitable for large-scale
selecting the active sensors for the current period.}
% MODIF - BEGIN
-\textcolor{blue}{ Our approach to select the leader node in a subregion is quite
+ Our approach to select the leader node in a subregion is quite
different from cluster head selection methods used in LEACH \cite{DBLP:conf/hicss/HeinzelmanCB00} or its variants \cite{ijcses11}. Contrary to LEACH, the division of the area of interest is supposed to be performed before the leader election. Moreover, we assume that the sensors are deployed almost uniformly and with high density over the area
- of interest, such that the division is fixed and regular. As in LEACH, our
+ of interest, so that the division is fixed and regular. As in LEACH, our
protocol works in round fashion. In each round, during the pre-sensing phase, nodes make autonomous decisions. In LEACH, each sensor elects itself to be a cluster head, and each non-cluster head will determine its cluster for the
@@ -216,7 +205,7 @@ of information can be huge. {\it In order to be suitable for large-scale
account to promote the nodes that have the most energy to become leader. Contrary to the LEACH protocol where all sensors will be active during the sensing-phase, our protocol allows to deactivate a subset of sensors through
- an optimization process which significantly reduces the energy consumption.
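To make the leader election rule concrete, here is a minimal sketch in plain C++ (illustrative only, not taken from the DiLCO or OMNET++ implementation; the struct and function names are invented for this example). It ranks the candidates of a subregion by the criteria given in the protocol description below: larger number of one-hop live neighbors first, then larger remaining energy, then larger index. Because every node applies the same deterministic rule to the same exchanged information, all nodes of a subregion agree on the leader without additional messages.

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <tuple>
#include <vector>

// Fields exchanged during the information phase (sketch only).
struct NodeInfo {
    std::uint8_t id;         // sensor index (8-bit ID field)
    double x, y;             // position coordinates
    double remainingEnergy;  // RE_j
    std::uint8_t neighbors;  // number of one-hop live neighbors
};

// Returns true when candidate a should be preferred to b as leader:
// more neighbors first, then more remaining energy, then larger index.
bool betterLeader(const NodeInfo& a, const NodeInfo& b) {
    return std::make_tuple(a.neighbors, a.remainingEnergy, a.id) >
           std::make_tuple(b.neighbors, b.remainingEnergy, b.id);
}

int main() {
    std::vector<NodeInfo> subregion = {
        {3, 10.0, 12.0, 0.80, 5},
        {7, 11.5,  9.0, 0.95, 5},  // same neighbor count, more energy
        {1,  8.0, 14.0, 0.70, 4},
    };
    // Every node evaluates the same rule, so the elected leader is unique.
    const NodeInfo leader = *std::max_element(
        subregion.begin(), subregion.end(),
        [](const NodeInfo& a, const NodeInfo& b) { return betterLeader(b, a); });
    std::cout << "elected leader: node " << static_cast<int>(leader.id) << "\n";
    return 0;
}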
% MODIF - END
A large variety of coverage scheduling algorithms has been developed. Many of
@@ -277,9 +266,9 @@ less accurate according to the number of primary points.
\label{main_idea}
\noindent We start by applying a divide-and-conquer algorithm to partition the area of interest into smaller areas called subregions and then our protocol is
-executed simultaneously in each subregion. \textcolor{blue}{Sensor nodes are
+executed simultaneously in each subregion. Sensor nodes are
assumed to be deployed almost uniformly over the region and the subdivision of
- the area of interest is regular.}
+ the area of interest is regular.
\begin{figure}[ht!]
\centering
@@ -328,20 +317,20 @@ and each sensor node will have five possible status in the network:
An outline of the protocol implementation is given by Algorithm~\ref{alg:DiLCO} which describes the execution of a period by a node (denoted by $s_j$ for a sensor node indexed by $j$). At the beginning a node checks whether it has
-enough energy \textcolor{blue}{(its energy should be greater than a fixed
- treshold $E_{th}$)} to stay active during the next sensing phase. If yes, it
+enough energy (its energy should be greater than a fixed
+ threshold $E_{th}$) to stay active during the next sensing phase. If yes, it
exchanges information with all the other nodes belonging to the same subregion: it collects from each node its position coordinates, remaining energy ($RE_j$),
-ID, and the number of one-hop neighbors still alive. \textcolor{blue}{INFO
- packet contains two parts: header and data payload. The sensor ID is included
+ID, and the number of one-hop neighbors still alive. The INFO
+ packet contains two parts: a header and a data payload. The sensor ID is included
in the header, where the header size is 8 bits. The data part includes position coordinates (64 bits), remaining energy (32 bits), and the number of one-hop live neighbors (8 bits). Therefore the size of the INFO packet is 112
- bits.} Once the first phase is completed, the nodes of a subregion choose a
+ bits. Once the first phase is completed, the nodes of a subregion choose a
leader to take the decision based on the following criteria with decreasing importance: larger number of neighbors, larger remaining energy, and then in case of equality, larger index. After that, if the sensor node is leader, it
-will solve an integer program (see Section~\ref{cp}). \textcolor{blue}{This
+will solve an integer program (see Section~\ref{cp}). This
integer program contains boolean variables $X_j$ where ($X_j=1$) means that sensor $j$ will be active in the next sensing phase. Only sensors with enough remaining energy are involved in the integer program ($J$ is the set of all
@@ -353,7 +342,7 @@ will solve an integer program (see Section~\ref{cp}). \textcolor{blue}{This
send an ActiveSleep packet to each sensor in the same subregion to indicate it if it has to be active or not. Otherwise, if the sensor is not the leader, it will wait for the ActiveSleep packet to know its state for the coming sensing
- phase.}
+ phase.
%which provides a set of sensors planned to be
%active in the next sensing phase.
@@ -447,11 +436,11 @@ X_{j} \in \{0,1\}, &\forall j \in J
\end{array}
\right.
\end{equation}
-The objective function is a weighted sum of overcoverage and undercoverage.
The goal is to limit the overcoverage in order to activate a minimal number of sensors while simultaneously preventing undercoverage. By choosing $w_{U}$ much larger than $w_{\theta}$, the coverage of a maximum of primary points is ensured. Then for the same number of covered primary points, the solution with a minimal number of active sensors is
- preferred. }
+ preferred.
%Both weights $w_\theta$ and $w_U$ must be carefully chosen in
%order to guarantee that the maximum number of points are covered during each
%period.
@@ -897,7 +886,7 @@ the Labex ACTION program (contract ANR-11-LABX-01-01).
%\vfill
\bibliographystyle{plain}
{\small
-\bibliography{Example}}
+\bibliography{biblio}}
%\vfill
\end{document}
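As a complement to the weighted objective discussed in the coverage formulation above (a weighted sum of overcoverage and undercoverage with $w_U$ chosen much larger than $w_\theta$), the following plain C++ sketch evaluates that sum for two hypothetical candidate activation vectors. The weights (1 and 100) and the per-primary-point values are invented for illustration and are not the values used in the paper; the sketch only shows why a large $w_U$ makes any uncovered primary point dominate the objective, so that full coverage is preferred first and, among fully covering solutions, the one with less overcoverage (and hence fewer active sensors) wins.

#include <iostream>
#include <numeric>
#include <vector>

// Weighted objective of the coverage integer program (sketch):
//   minimize  w_theta * sum_p Theta_p  +  w_U * sum_p U_p
// where Theta_p is the overcoverage and U_p the undercoverage of primary point p.
double objective(const std::vector<int>& theta, const std::vector<int>& u,
                 double wTheta, double wU) {
    const double sumTheta = std::accumulate(theta.begin(), theta.end(), 0.0);
    const double sumU     = std::accumulate(u.begin(), u.end(), 0.0);
    return wTheta * sumTheta + wU * sumU;
}

int main() {
    const double wTheta = 1.0, wU = 100.0;  // hypothetical weights with wU >> wTheta
    // Candidate A: all four primary points covered, some with redundant coverage.
    const double a = objective({1, 0, 2, 1}, {0, 0, 0, 0}, wTheta, wU);
    // Candidate B: no overcoverage (fewer active sensors) but one point uncovered.
    const double b = objective({0, 0, 0, 0}, {0, 1, 0, 0}, wTheta, wU);
    // A = 4, B = 100: the single uncovered point dominates, so A is preferred;
    // among candidates covering every point, lower overcoverage would then win.
    std::cout << "objective(A) = " << a << ", objective(B) = " << b << "\n";
    return 0;
}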