X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/LiCO.git/blobdiff_plain/5175c89ff4c745da83f234c9fcad6bdd08fc8eb1..00eccf4c375faace404369698b8488f4cab301d4:/PeCO-EO/reponse.tex

diff --git a/PeCO-EO/reponse.tex b/PeCO-EO/reponse.tex
index ff5a80c..e99e712 100644
--- a/PeCO-EO/reponse.tex
+++ b/PeCO-EO/reponse.tex
@@ -23,7 +23,7 @@
\hspace{-2cm}University of Franche-Comt\'e
-\hspace{-2cm}IUT Belfort-Montbéliard, BP 527, 90016 Belfort Cedex, France.
+\hspace{-2cm}IUT Belfort-Montb\'eliard, BP 527, 90016 Belfort Cedex, France.
\bigskip
@@ -75,14 +75,14 @@ decisions was not discussed. \\

\noindent {\bf 4.} The definitions of the undercoverage and overcoverage variables are not clear. I suggest adding some information about these values, since without it, you cannot understand how M and V are computed for the optimization problem. \\

-\textcolor{blue}{\textbf{\textsc{Answer:} The perimeter of each sensor may be cut in parts called coverage intervals (CI). The level of coverage of one CI is defined as the number of active sensors neighbours covering this part of the perimeter. If a given level of coverage $l$ is required for one sensor, the sensor is said to be undercovered (respectively overcovered) if the level of coverage of one of its CI is less (respectively greater) than $l$. In other terms, we define undercoverage and overcoverage through the use of variables $M_{i}^{j}$ and $V_{i}^{j}$ for one sensor $j$ and its coverage interval $i$. If the sensor $j$ is undercovered, there exists at least one of its CI (say $i$) for which the number of active sensors (denoted by $l^{i}$) covering this part of the perimeter is less than $l$ and in this case : $M_{i}^{j}=l-l^{i}$, $V_{i}^{j}=0$. In the contrary, if the sensor $j$ is overcovered, there exists at least one of its CI (say $i$) for which the number of active sensors (denoted by $l^{i}$) covering this part of the perimeter is greater than $l$ and in this case : $M_{i}^{j}=0$, $V_{i}^{j}=l^{i}-l$. }}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} The perimeter of each sensor may be cut into parts called coverage intervals (CI). The level of coverage of a CI is defined as the number of active neighbouring sensors covering this part of the perimeter. If a given level of coverage $l$ is required for one sensor, the sensor is said to be undercovered (respectively overcovered) if the level of coverage of one of its CIs is less (respectively greater) than $l$. In other words, we define undercoverage and overcoverage through the variables $M_{i}^{j}$ and $V_{i}^{j}$ for one sensor $j$ and its coverage interval $i$. If the sensor $j$ is undercovered, there exists at least one of its CIs (say $i$) for which the number of active sensors (denoted by $l^{i}$) covering this part of the perimeter is less than $l$, and in this case: $M_{i}^{j}=l-l^{i}$, $V_{i}^{j}=0$. On the contrary, if the sensor $j$ is overcovered, there exists at least one of its CIs (say $i$) for which the number of active sensors (denoted by $l^{i}$) covering this part of the perimeter is greater than $l$, and in this case: $M_{i}^{j}=0$, $V_{i}^{j}=l^{i}-l$. This explanation has been added at the end of section~4. }}\\
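(For quick reference, the two cases described in this answer collapse into the compact form
\[
M_{i}^{j} = \max\left(0,\; l - l^{i}\right), \qquad V_{i}^{j} = \max\left(0,\; l^{i} - l\right),
\]
which is our restatement of the definitions given above, not notation taken from the paper.)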
\noindent {\bf 5.} Can you mathematically justify how you chose the values of alpha and beta? This is not very clear. I would suggest possibly adding more results showing how the algorithm performs with different alphas and betas. \\

-\textcolor{blue}{\textbf{\textsc{Answer:} The choice of alpha and beta should be made according to the needs of the application. Alpha should be enough large to prevent undercoverage and so to reach the highest possible coverage ratio. Beta should be enough large to prevent overcoverage and so to activate a minimum number of sensors. The values of $\alpha_{i}^{j}$ can be identical for all coverage intervals $i$ of one sensor $j$ in order to express that the perimeter of each sensor should be uniformly covered, but $\alpha_{i}^{j}$ values can be differenciated between sensors to force some regions to be better covered than others. The choice of $\beta \gg \alpha$ prevents the overcoverage, and so limit the activation of a large number of sensors, but as $\alpha$ is low, some areas may be poorly covered. This explains the results obtained for {\it Lifetime50} with $\beta \gg \alpha$: a large number of periods with low coverage ratio. With $\alpha \gg \beta$, we priviligie the coverage even if some areas may be overcovered, so high coverage ratio is reached, but a large number of sensors are activated to achieve this goal. Therefore network lifetime is reduced. The choice $\alpha=0.6$ and $\beta=0.4$ seems to achieve the best compromise between lifetime and coverage ratio. }}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} The choice of alpha and beta should be made according to the needs of the application. Alpha should be large enough to prevent undercoverage and thus to reach the highest possible coverage ratio. Beta should be large enough to prevent overcoverage and thus to activate a minimum number of sensors. The values of $\alpha_{i}^{j}$ can be identical for all coverage intervals $i$ of one sensor $j$ in order to express that the perimeter of each sensor should be uniformly covered, but $\alpha_{i}^{j}$ values can be differentiated between sensors to force some regions to be better covered than others. The choice of $\beta \gg \alpha$ prevents overcoverage, and so limits the activation of a large number of sensors, but as $\alpha$ is low, some areas may be poorly covered. This explains the results obtained for {\it Lifetime50} with $\beta \gg \alpha$: a large number of periods with low coverage ratio. With $\alpha \gg \beta$, we favor coverage even if some areas may be overcovered, so a high coverage ratio is reached, but a large number of sensors are activated to achieve this goal. Therefore network lifetime is reduced. The choice $\alpha=0.6$ and $\beta=0.4$ seems to achieve the best compromise between lifetime and coverage ratio. This discussion about the impact of the values of alpha and beta on the protocol performance has been added as subsection 5.2.5. }}\\
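(To make the role of the weights concrete, the objective discussed in this answer is a weighted sum of the undercoverage and overcoverage variables, schematically
\[
\min \sum_{j} \sum_{i \in \mathcal{I}_{j}} \left( \alpha_{i}^{j} M_{i}^{j} + \beta_{i}^{j} V_{i}^{j} \right),
\]
where $\mathcal{I}_{j}$ denotes the set of coverage intervals of sensor $j$; this is a sketch of program (2) as we understand it, and $\mathcal{I}_{j}$ is our notation, not the paper's.)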
@@ -90,7 +90,7 @@ very clear. I would suggest possibly adding more results showing how the algorit

However, the clarity in the literature review is a little off. Some of the descriptions of the methods used are very vague and do not bring out their key contributions. Some references are not consistent and I suggest using the journals template to adjust them for overall consistency. \\

-\textcolor{blue}{\textbf{\textsc{Answer:} }}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} References have been carefully checked and seem to be consistent with the journal template. In section "Related Works" we refer to papers dealing with coverage and lifetime in WSN. Each paragraph of this section discusses the literature related to a particular aspect of the problem: 1. type of coverage, 2. type of scheme, 3. centralized versus distributed, 4. optimization method. At the end of each paragraph we position our method.}}\\
@@ -101,9 +101,9 @@ s used are very vague and do not bring out their key contributions. Some referen

\noindent {\bf 8.} Since this paper is attacking the coverage problem, I would like to see more information on the amount of coverage the algorithm is achieving. It seems that there is a tradeoff in this algorithm that allows the network to increase its lifetime but does not improve the coverage ratio. This may be an issue if this approach is used in an application that requires high coverage ratio. \\

-\textcolor{blue}{\textbf{\textsc{Answer:} Your remark is interesting. Indeed, figures 8(a) and (b) highlight this result. PeCO methods allows to achieve a coverage ratio greater than $50\%$ for many more periods than the others three methods, but for applications requiring an high level of coverage (greater than $95\%$), DilCO method is more efficient. }}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} Your remark is interesting. Indeed, Figures 8(a) and (b) highlight this result. The PeCO method makes it possible to achieve a coverage ratio greater than $50\%$ for many more periods than the other three methods, but for applications requiring a high level of coverage (greater than $95\%$), the DilCO method is more efficient. This is explained at the end of section 5.2.4. }}\\

-%%%%%%%%%%%%%%%%%%%%%% ENGLISH and GRAMMER %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+%%%%%%%%%%%%%%%%%%%%%% ENGLISH and GRAMMAR %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\noindent\textcolor{black}{\textbf{\Large English and Grammar:}} \\
@@ -146,7 +146,7 @@

\noindent { \ding{90} Lots of English and grammar mistakes. I recommend rereading the paper line by line and adjusting the sentences that do not make sense.} \\

-\textcolor{blue}{\textbf{\textsc{Answer:} ?????? proofreading by Ingrid }}.\\
+\textcolor{blue}{\textbf{\textsc{Answer:} The paper has been carefully reread, and the readability of its English has been improved. }}\\
@@ -159,56 +159,57 @@ The paper entitled "Perimeter-based Coverage Optimization to Improve Lifetime in

\noindent {\bf 1.} The protocol framework is not described in details. In particular, the spatial and temporal subdivision (page 2, line 11) that is at the core of PeCO, is not described nor justified in much detail. How to implement an efficient spatial subdivision? On page 10, line 11, the number of subdivisions is said to be equal to 16, but the clustering algorithm used is not mentioned. Is this number dependent of the size of the sensing area? Of the number of sensors? Of the sensing range? The proposed protocol cannot be adopted by practitioners if such an important step is not documented. Temporal subdivision suffers from the same lack of description and justification: why should time intervals have the same duration? If they have the same duration, how should this common duration be chosen? \\

-\textcolor{blue}{\textbf{\textsc{Answer:} Spatial and temporal choices of subdivision are not the topics of the paper. In the study, we assume that the deployment of sensors is almost uniformly over the region. So we only need to fix a regular division of the region into subregions to make the problem tractable. The subdivision is made such that the number of hops between any pairs of sensors inside a subregion is less than or equal to 3.
Concerning the choice of the sensing period duration, it is correlated with the types of applications, with the amount of initial energy in sensors batteries and also with the duration of the exchange phase. All applications do not have the same requirements of Quality of Service. Here information exchange is executed every hour but the length of the sensing period could be reduced and adapted dynamically. On the one hand a small sensing period would allow to be more reliable but would have higher communication costs. On the other hand the choice of a long duration may cause problems in case of nodes failure during the sensing period.}}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} Spatial and temporal subdivision choices are not the topic of the paper. In the study, we assume that sensors are deployed almost uniformly over the region, so we only need to fix a regular division of the region into subregions to make the problem tractable. The subdivision is made such that the number of hops between any pair of sensors inside a subregion is less than or equal to 3. Concerning the choice of the sensing period duration, it is correlated with the type of application, with the amount of initial energy in sensor batteries, and also with the duration of the exchange phase. Not all applications have the same Quality of Service requirements. Here information exchange is executed every hour, but the length of the sensing period could be reduced and adapted dynamically. On the one hand, a small sensing period would make the protocol more reliable but would incur higher communication costs. On the other hand, a long duration may cause problems in case of node failures during the sensing period. Explanations have been added throughout the paper; in particular, the sensing period duration is discussed in section 5.1.}}\\
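(A minimal sketch of how such a subdivision could be computed, assuming a square region, a known communication range, and the at-most-3-hops rule stated above; all values and the hop-to-distance approximation are illustrative assumptions, not taken from the paper.)

\begin{verbatim}
/* Hypothetical sketch: pick a regular grid of square subregions so
 * that any two sensors of a subregion are at most max_hops
 * communication hops apart, under a roughly uniform deployment.  */
#include <math.h>
#include <stdio.h>

int main(void) {
    double area_side = 50.0;  /* side of the square region (m), assumed */
    double rc        = 10.0;  /* communication range (m), assumed       */
    int    max_hops  = 3;

    /* If one hop spans at most rc meters, keeping the subregion
     * diagonal below max_hops * rc keeps every pair of nodes
     * within max_hops hops.                                      */
    double max_side = (max_hops * rc) / sqrt(2.0);
    int per_axis    = (int)ceil(area_side / max_side);

    printf("%d x %d = %d subregions\n",
           per_axis, per_axis, per_axis * per_axis);
    return 0;
}
\end{verbatim}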
\noindent {\bf 2.} Page 9, Section 4, is the Perimeter-based coverage problem NP-hard? This question is important for justifying the use of a Mixed Integer Linear Programming model. \\

\textcolor{blue}{\textbf{\textsc{Answer:} The perimeter scheduling coverage problem is NP-hard in general, as has been proved
-in the paper entitled "Perimeter Coverage Scheduling in Wireless Sensor Networks Using Sensors with a Single Continuous Cover Range" from Ka-Shun Hung and King-Shan Lui (EURASIP Journal on Wireless Communications and Networking 2010, 2010:926075 doi:10.1155/2010/926075). In this paper, authors study the coverage of the perimeter of a large object requiring to be monitored. In our study, the large object to be monitored is the sensor itself (or more precisely its sensing area). }}\\
+in the paper entitled "Perimeter Coverage Scheduling in Wireless Sensor Networks Using Sensors with a Single Continuous Cover Range" by Ka-Shun Hung and King-Shan Lui (EURASIP Journal on Wireless Communications and Networking 2010, 2010:926075, doi:10.1155/2010/926075). In that paper, the authors study the coverage of the perimeter of a large object that needs to be monitored. In our study, the large object to be monitored is the sensor itself (or more precisely its sensing area). This point has been highlighted at the beginning of section 4. }}\\

\noindent {\bf 3.} Page 9, the major problem with the present paper is, in my opinion, the objective function of the Mixed Integer Linear Program (2). It is not described in the paper, and looks like an attempt to address a multiobjective problem (like minimizing overcoverage and undercoverage). However, using a weighted sum is well known not to be an efficient way to address biobjective problems. The introduction of various performance metrics in Section 5.1 also suggests that the authors have not decided exactly which objective function to use, and compare their protocols against competitors without mentioning the exact purpose of each of them. If the performance metrics list given in Section 5.1 is exhaustive, then the authors should mention at the beginning of the paper what are the aims of the protocol, and explain how the protocol is built to optimize these objectives. \\

-\textcolor{blue}{\textbf{\textsc{Answer:} As far as we know, representing the objective function as a weighted sum of criteria to be minimized in case of multicriteria optimization is a classical method. }}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} Right. The Mixed Integer Linear Program addresses a multiobjective problem, where the goal is to minimize both overcoverage and undercoverage for each coverage interval of each sensor. As far as we know, representing the objective function as a weighted sum of the criteria to be minimized is a classical method in multicriteria optimization. In section 5, the comparison of protocols over a large variety of performance metrics makes it possible to select the most appropriate method according to the QoS requirements of the application. }}\\

\noindent {\bf 4.} Page 11 Section 5.2, the sensor nodes are said to be based on Atmels AVR ATmega103L microcontroller. If I am not mistaken, these devices have 128 KBytes of memory, and I didn't find any clue that they can run an operating system like Linux. This point is of primary importance for the proposed protocol, since GLPK (a C API) is supposed to be executed by the cluster leader. In addition to that, GLPK requires a non negligible amount of memory to run properly, and the Atmels AVR ATmega103L microcontroller might be insufficient for that purpose. The authors are urged to provide references of previous works showing that these technical constraints are not preventing their protocol to be implemented on the aforementioned microcontroller. Then, on page 13, in Section "5.2.3 Energy Consumption", the estimation of $E_p^{com}$ for the considered microcontroller seems quite challenging and should be carefully documented. Indeed, this is a key point in providing a fair comparison of PeCO with its competitors. \\

-\textcolor{blue}{\textbf{\textsc{Answer:}
+\textcolor{blue}{\textbf{\textsc{Answer 1:}
To implement PeCO on real sensor nodes with limited memory capacities, we can act on:
\begin{itemize}
-\item the solver : GLPK is memory consuming for the resolution of integer programming (IP) compared with other commercial solvers like CPLEX\texttrademark. Commercial solvers generally outperform open source solvers (See "Analysis of commercial and free and open source solvers for linear optimization problems" by B. Meindl and M. Templ from Vienna University of Technology). Memory use depends on the number of variables and number of constraints. For linear programs (LP), a reasonable estimate of memory use with CPLEX\texttrademark is to allow one megabyte per thousand constraints. For integer programs, no simple formula exists since memory use depends so heavily on the size of the branch and bound tree. But, the estimate for linear programs still provides a lower bound.
In our case, the characteristics of the integer programming (2) are the following:\\
+\item the solver: GLPK is memory consuming for the resolution of integer programs (IP) compared with commercial solvers like CPLEX\textregistered. Commercial solvers generally outperform open source solvers (see the report "Analysis of commercial and free and open source solvers for linear optimization problems" by B. Meindl and M. Templ, Vienna University of Technology). Memory use depends on the number of variables and the number of constraints. For linear programs (LP), a reasonable estimate of memory use with CPLEX\textregistered is to allow one megabyte per thousand constraints. For integer programs, no simple formula exists since memory use depends so heavily on the size of the branch-and-bound tree (B\&B tree). But the estimate for linear programs still provides a lower bound. In our case, the characteristics of the integer program (2) are the following:
\begin{itemize}
\item number of variables: $S(2I+1)$
\item number of constraints: $2IS$
\item number of non-zero coefficients: $2ISB$
\item number of parameters (in the objective function): $2IS$
\end{itemize}
-where $S$ denotes the number of sensors in the subregion, $I$ the average number of cover intervals per sensor, $B$ the average number of sensors involved in a cover interval. The following table gives the memory used with GLPK to solve the integer program (column 3) and its LP-relaxation (column 4) for different problem sizes. The sixth column gives an estimate of the memory used with CPLEX to solves the LP-relaxation according to the number of constraints.
+where $S$ denotes the number of sensors in the subregion, $I$ the average number of coverage intervals per sensor, and $B$ the average number of sensors involved in a coverage interval. The following table gives the memory use with GLPK to solve the integer program (column 4) and its LP-relaxation (column 5) for different problem sizes. The last column gives an estimate of the memory use with CPLEX\textregistered to solve the LP-relaxation according to the number of constraints.
\\
-\begin{tabular}{|c|c|c|c|c|c|}
+\begin{tabular}{|c|c|c|c|c|c|r|}
\hline
-Total number of nodes& S & I & GLPK IP & GLPK LP & number of nodes in the &CPLEX\\
-of nodes &&&&relaxation &branch-and-bound tree &\\
+Total number & S & I & GLPK IP & GLPK LP & nodes in & CPLEX\\
+of nodes & & & & relaxation & B\&B tree & \\
\hline
-100 & 6.25& 5&0.2 Mb & 0.2 Mb &1 & 64 Kb\\
+100 & 6.25 & 5 & 0.2 MB & 0.2 MB & 1 & 64 kB\\
\hline
-200 & 12.5& 11&1.7 Mb & 1.6 Mb &1 & 281 Kb\\
+200 & 12.5 & 11 & 1.7 MB & 1.6 MB & 1 & 281 kB\\
\hline
-300 &18.5 & 17&3.6 Mb & 3.5 Mb & 3 &644 Kb\\
+300 & 18.5 & 17 & 3.6 MB & 3.5 MB & 3 & 644 kB\\
\hline
\end{tabular}
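(As a sanity check on the table above, a minimal sketch that approximately reproduces the problem-size formulas and the one-megabyte-per-thousand-constraints rule of thumb; the value of $B$ is an assumption, since the table does not report it.)

\begin{verbatim}
/* Hypothetical sketch: size of integer program (2) and the CPLEX
 * memory rule of thumb (one megabyte per thousand constraints for
 * the LP-relaxation, hence a lower bound for the IP).            */
#include <stdio.h>

int main(void) {
    double S = 12.5;   /* sensors per subregion (200-node row)        */
    double I = 11.0;   /* avg coverage intervals per sensor           */
    double B = 5.0;    /* avg sensors per coverage interval (assumed) */

    double variables   = S * (2.0 * I + 1.0);
    double constraints = 2.0 * I * S;
    double nonzeros    = 2.0 * I * S * B;
    double est_kb      = constraints;   /* 1 MB / 1000 constraints */

    printf("vars=%.0f cons=%.0f nnz=%.0f, about %.0f kB with CPLEX\n",
           variables, constraints, nonzeros, est_kb);
    return 0;
}
\end{verbatim}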
-It is noteworthy that the difference of memory used with GLPK between the resolution of the IP and its LP-relaxation is very weak (not more than 0.1 Mb). The size of the branch and bound tree dos not exceed 3 nodes. This result leads one to believe the memory used with CPLEX for solving the IP would be very close to that for the LP-relaxation, that is to say around 100 Kb for a subregion containing $S=10$ sensors. Moreover the IP seems to have some specifities that encourage us to develop our own solver (coefficents matrix is very sparse) or to use an existing heuristic to find good approximate solution ().
-
-
-
-\item the subdivision of the region of interest. To make the resolution of integer programming tractable by a leader sensor, we need to limit the number of nodes in each subregion (the number of variables and constraints of the integer programming is directly depending on the number of nodes and neigbors). It is therefore necessary to adapt the subdvision according to the number of sensors deployed in the area and their sensing range (impact on the number of cover intervals).
-\item heuristic
-\end{itemize}}}\\
+\\
+It is noteworthy that the difference in memory use with GLPK between the resolution of the IP and of its LP-relaxation is very small (not more than 0.1 MB). The size of the branch-and-bound tree does not exceed 3 nodes. This result leads one to believe that the memory use with CPLEX\textregistered for solving the IP would be very close to that for the LP-relaxation, that is to say around 100 kB for a subregion containing $S=10$ sensors. Moreover, the IP seems to have some specificities that encourage us to develop our own solver (the coefficient matrix is very sparse) or to use an existing heuristic to find good approximate solutions (see "A feasibility pump heuristic for general mixed-integer problems" by Livio Bertacco, Matteo Fischetti, and Andrea Lodi, Discrete Optimization, ISSN 1572-5286).
+\item the subdivision of the region of interest. To make the resolution of the integer program tractable by a leader sensor, we need to limit the number of nodes in each subregion (the number of variables and constraints of the integer program directly depends on the number of nodes and neighbors). It is therefore necessary to adapt the subdivision to the number of sensors deployed in the area and to their sensing range (which impacts the number of coverage intervals).
+\end{itemize}
+A discussion about memory consumption has been added in section 5.2.}}
+\bigskip
+\textcolor{blue}{\textbf{\textsc{Answer 2:}
+In section 5.2 we give a table with the power consumption values used to compute the energy consumption. These values are based on the energy model of (Vu et al. 2006).
+}}
@@ -217,7 +218,7 @@ It is noteworthy that the difference of memory used with GLPK between the resolu

\noindent {\ding{90} Page 12, lines 7-15, the authors mention that DiLCO protocol is close to PeCO. This should be mentioned earlier in the paper, ideally in Section 2 (Related Literature), along with the detailed description of DESK and GAF, the competitors of the proposed protocol, PeCO. } \\

-\textcolor{blue}{\textbf{\textsc{Answer:} }}.\\
+\textcolor{blue}{\textbf{\textsc{Answer:} Right. This observation has been added at the end of the introduction}}.\\
@@ -266,7 +267,7 @@ It is noteworthy that the difference of memory used with GLPK between the resolu

\noindent {\ding{90} Page 7, line 20 "regular homogeneous subregions" is too vague. } \\

-\textcolor{blue}{\textbf{\textsc{Answer:} As mentioned in the previous remark, the spatial subdivision was not clearly explained in the paper. We added a discussion about this question in the article. Thank you for highlighting it. TO DO }}.\\
+\textcolor{blue}{\textbf{\textsc{Answer:} As mentioned in the previous remark, the spatial subdivision was not clearly explained in the paper. We have added a discussion about this question in the article. Thank you for highlighting it. }}.\\

\noindent {\ding{90} Page 7, line 24, replace "figure 4" with "Figure 4"} \\