X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/Sensornets15.git/blobdiff_plain/1e8d9ca3bf21ff3289e0673fd5371073cc80fe24..b8c6a5f1e74fdd26f663b1c0dd6454e15e84df73:/Example.tex?ds=inline

diff --git a/Example.tex b/Example.tex
index 28aa4d4..41f84a1 100644
--- a/Example.tex
+++ b/Example.tex
@@ -12,7 +12,7 @@
 \usepackage{apalike}
 \usepackage{SCITEPRESS}
 \usepackage[small]{caption}
-
+\usepackage{color}
 \usepackage[linesnumbered,ruled,vlined,commentsnumbered]{algorithm2e}
 \usepackage{mathtools}
@@ -50,14 +50,18 @@ Optimization, Scheduling.}
 scheduling performed by each elected leader. This two-step process takes
 place periodically, in order to choose a small set of nodes remaining active
 for sensing during a time slot. Each set is built to ensure coverage at a low
-  energy cost, allowing to optimize the network lifetime. More precisely, a
-  period consists of four phases: (i)~Information Exchange, (ii)~Leader
-  Election, (iii)~Decision, and (iv)~Sensing. The decision process, which
+  energy cost, allowing the network lifetime to be optimized.
+  %More precisely, a
+  %period consists of four phases: (i)~Information Exchange, (ii)~Leader
+  %Election, (iii)~Decision, and (iv)~Sensing.
+  The decision process, which
   results in an activity scheduling vector, is carried out by a leader node
-  through the solving of an integer program. In comparison with some other
-  protocols, the simulations done using the discrete event simulator OMNeT++
-  show that our approach is able to increase the WSN lifetime and provides
-  improved coverage performance. }
+  through the solving of an integer program.
+  {\color{red} Simulations are conducted using the discrete event simulator OMNeT++.
+  We refer to the characteristics of a Medusa II sensor for the energy consumption and the computation time.
+  In comparison with two other existing methods, our approach increases the WSN lifetime and provides
+  improved coverage performance. }}
+
 \onecolumn \maketitle \normalsize \vfill
@@ -95,7 +99,13 @@ same subregion, in order to choose in a suitable manner a sensor node (the
 leader) to carry out the coverage strategy. In each subregion the activation of
 the sensors for the sensing phase of the current period is obtained by solving
 an integer program. The resulting activation vector is broadcast by a leader
-to every node of its subregion.
+to every node of its subregion.
+
+{\color{red} The framework of the DiLCO approach and the coverage problem formulation rely almost exclusively on our previous paper~\cite{idrees2014coverage}.
+In this paper we strengthen our simulations by taking into account the characteristics of a Medusa II sensor~\cite{raghunathan2002energy} to measure the energy consumption and the computation time.
+We have implemented two other existing approaches (a distributed one, DESK~\cite{ChinhVu}, and a centralized one, GAF~\cite{xu2001geography}) in order to compare their performance with that of our approach.
+We also analyze how the number of subregions influences the performance. }
+
 The remainder of the paper continues with Section~\ref{sec:Literature Review}
 where a review of some related works is presented. The next section describes
@@ -323,6 +333,68 @@ Active-Sleep packet to know its state for the coming sensing phase.
 
 \section{\uppercase{Coverage problem formulation}}
 \label{cp}
 
+{\color{red}
+We formulate the coverage optimization problem with an integer program.
+The objective function consists in minimizing both the undercoverage and the overcoverage of the area, as suggested in \cite{pedraza2006}.
+The area coverage problem is transformed into the coverage of a finite set of points called primary points.
+Details on the choice and the number of primary points can be found in \cite{idrees2014coverage}. The set of primary points is denoted by $P$
+and the set of sensors by $J$. As we consider a Boolean disk coverage model, we use the Boolean indicator $\alpha_{jp}$, which is equal to 1 if the primary point $p$ is in the sensing range of the sensor $j$, and 0 otherwise. The binary variable $X_j$ indicates whether the sensor $j$ is active. The number of active sensors covering the primary point $p$ can thus be expressed as $\sum_{j \in J} \alpha_{jp} X_{j}$. From it we deduce the overcoverage $\Theta_p$ of the primary point $p$:
+\begin{equation}
+ \Theta_{p} = \left \{
+\begin{array}{l l}
+  0 & \mbox{if the primary point}\\
+    & \mbox{$p$ is not covered,}\\
+  \left( \sum_{j \in J} \alpha_{jp} X_{j} \right)- 1 & \mbox{otherwise.}\\
+\end{array} \right.
+\label{eq13}
+\end{equation}
+More precisely, $\Theta_{p}$ is the number of active sensor nodes covering
+the primary point~$p$, minus one.
+In the same way, we define the undercoverage variable
+$U_{p}$ of the primary point $p$ as:
+\begin{equation}
+U_{p} = \left \{
+\begin{array}{l l}
+  1 &\mbox{if the primary point $p$ is not covered,} \\
+  0 & \mbox{otherwise.}\\
+\end{array} \right.
+\label{eq14}
+\end{equation}
+There is, of course, a relationship between the three variables $X_j$, $\Theta_p$, and $U_p$, which can be formulated as follows:
+\begin{equation}
+\sum_{j \in J} \alpha_{jp} X_{j} - \Theta_{p} + U_{p} = 1, \quad \forall p \in P
+\end{equation}
+If the point $p$ is not covered, then $U_p=1$, $\sum_{j \in J} \alpha_{jp} X_{j}=0$, and $\Theta_{p}=0$ by definition, so the equality is satisfied.
+Conversely, if the point $p$ is covered, then $U_p=0$ and $\Theta_{p}=\left( \sum_{j \in J} \alpha_{jp} X_{j} \right)- 1$, so the equality holds as well.
+\noindent Our coverage optimization problem can then be formulated as follows:
+\begin{equation} \label{eq:ip2r}
+\left \{
+\begin{array}{ll}
+\min \sum_{p \in P} (w_{\theta} \Theta_{p} + w_{U} U_{p})&\\
+\textrm{subject to:}&\\
+\sum_{j \in J} \alpha_{jp} X_{j} - \Theta_{p}+ U_{p} =1, &\forall p \in P\\
+%\label{c1}
+%\sum_{t \in T} X_{j,t} \leq \frac{RE_j}{e_t} &\forall j \in J \\
+%\label{c2}
+\Theta_{p}\in \mathbb{N}, &\forall p \in P\\
+U_{p} \in \{0,1\}, &\forall p \in P \\
+X_{j} \in \{0,1\}, &\forall j \in J
+\end{array}
+\right.
+\end{equation}
+The objective function is a weighted sum of overcoverage and undercoverage. The goal is to limit the overcoverage in order to activate a minimal number of sensors while simultaneously preventing undercoverage. Both weights $w_\theta$ and $w_U$ must be carefully chosen in
+order to guarantee that the maximum number of points are covered during each
+period.
+}
+
+
+
+
+
+
+
+\iffalse
+
 \indent Our model is based on the model proposed by \cite{pedraza2006} where
 the objective is to find a maximum number of disjoint cover sets. To accomplish
 this goal, the authors proposed an integer program which forces undercoverage
@@ -411,6 +483,8 @@ undercoverage. Both weights $w_\theta$ and $w_U$ must be carefully chosen in
 order to guarantee that the maximum number of points are covered during each
 period.
+\fi
+
 \section{\uppercase{Protocol evaluation}}
 \label{sec:Simulation Results and Analysis}
 \noindent \subsection{Simulation framework}
@@ -654,14 +728,16 @@ The results depict the good performance of the different versions of our
 protocol. Indeed, the protocols DiLCO-16/50, DiLCO-32/50, DiLCO-16/95, and
 DiLCO-32/95 consume less energy than their DESK and GAF counterparts for a
 similar level of area coverage. This observation reflects the larger number of
-nodes set active by DESK and GAF. Now, if we consider a same protocol, we can
-notice that the average consumption per period increases slightly for our
-protocol when increasing the level of coverage, whereas it increases more
-largely for DESK and GAF. In case of DiLCO It means that even if a larger
-network allows to improve the number of periods with a minimum coverage level
-value, this improvement has a higher energy cost per period due to
-communications and a more difficult optimization. However, in comparison with
-DSK and GAF, our approach has a reasonable energy overhead.
+nodes set active by DESK and GAF.
+
+Now, if we consider the same protocol, we notice that the average consumption
+per period increases slightly for our protocol when the level of coverage and
+the number of nodes increase, whereas it increases much more for DESK and GAF.
+In the case of DiLCO, this means that even if a larger network allows the
+number of periods with a minimum coverage level to be improved, the improvement
+comes at a higher energy cost per period, due to communication overhead and a
+more difficult optimization problem. However, in comparison with DESK and GAF,
+our approach has a reasonable energy overhead.
 
 \subsubsection{Execution time}
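
As an illustration of the integer program labelled eq:ip2r in the coverage problem formulation above, the sketch below models and solves a small instance with the PuLP modelling library and its bundled CBC solver. This is not the authors' implementation (the diff does not specify a solver interface), and the instance data (sensor set J, primary points P, coverage indicators alpha, and the weights) are invented purely for illustration.

# Illustrative sketch (not the authors' code): the coverage integer program
# eq:ip2r modelled with PuLP; the instance below is a made-up toy example.
from pulp import (LpProblem, LpMinimize, LpVariable, lpSum,
                  LpBinary, LpInteger, PULP_CBC_CMD)

J = [0, 1, 2]            # sensors
P = [0, 1, 2, 3]         # primary points
alpha = {                # alpha[j][p] = 1 if sensor j covers primary point p
    0: [1, 1, 0, 0],
    1: [0, 1, 1, 0],
    2: [0, 0, 1, 1],
}
w_theta, w_U = 1, 10     # assumed weights: undercoverage penalised more heavily

prob = LpProblem("coverage_scheduling", LpMinimize)

# Decision variables: X_j (activation), Theta_p (overcoverage), U_p (undercoverage).
X = {j: LpVariable(f"X_{j}", cat=LpBinary) for j in J}
Theta = {p: LpVariable(f"Theta_{p}", lowBound=0, cat=LpInteger) for p in P}
U = {p: LpVariable(f"U_{p}", cat=LpBinary) for p in P}

# Objective: weighted sum of overcoverage and undercoverage over all primary points.
prob += lpSum(w_theta * Theta[p] + w_U * U[p] for p in P)

# Constraint: sum_j alpha_jp * X_j - Theta_p + U_p = 1 for every primary point p.
for p in P:
    prob += lpSum(alpha[j][p] * X[j] for j in J) - Theta[p] + U[p] == 1

prob.solve(PULP_CBC_CMD(msg=False))
print("activation vector:", {j: int(X[j].value()) for j in J})

On this toy instance the solver activates sensors 0 and 2 only, which covers every primary point exactly once (zero overcoverage, zero undercoverage); the resulting activation vector plays the role of the schedule broadcast by the leader in the protocol.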