-
-\noindent {\bf 3.} The communication and information sharing required to cooperate and make these
-decisions was not discussed. \\
-
-\textcolor{blue}{\textbf{\textsc{Answer:} The communication and information sharing required to cooperate and make these decisions was discussed in page 8, lines 48-49. Position coordinates, remaining energy, sensor node ID and number of one-hop neighbors are exchanged.}}\\
-
-
-
-\noindent {\bf 4.} The definitions of the undercoverage and overcoverage variables are not clear. I suggest
-adding some information about these values, since without it, you cannot understand how M and V are computed for the optimization problem. \\
-
-\textcolor{blue}{\textbf{\textsc{Answer:} The perimeter of each sensor may be cut in parts called coverage intervals (CI). The level of coverage of one CI is defined as the number of active sensors neighbours covering this part of the perimeter. If a given level of coverage $l$ is required for one sensor, the sensor is said to be undercovered (respectively overcovered) if the level of coverage of one of its CI is less (respectively greater) than $l$. In other terms, we define undercoverage and overcoverage through the use of variables $M_{i}^{j}$ and $V_{i}^{j}$ for one sensor $j$ and its coverage interval $i$. If the sensor $j$ is undercovered, there exists at least one of its CI (say $i$) for which the number of active sensors (denoted by $l^{i}$) covering this part of the perimeter is less than $l$ and in this case : $M_{i}^{j}=l-l^{i}$, $V_{i}^{j}=0$. In the contrary, if the sensor $j$ is overcovered, there exists at least one of its CI (say $i$) for which the number of active sensors (denoted by $l^{i}$) covering this part of the perimeter is greater than $l$ and in this case : $M_{i}^{j}=0$, $V_{i}^{j}=l^{i}-l$. }}\\
-
-
-
-\noindent {\bf 5.} Can you mathematically justify how you chose the values of alpha and beta? This is not
-very clear. I would suggest possibly adding more results showing how the algorithm performs with different alphas and betas. \\
-
-\textcolor{blue}{\textbf{\textsc{Answer:} The choice of alpha and beta should be made according to the needs of the application. Alpha should be enough large to prevent undercoverage and so to reach the highest possible coverage ratio. Beta should be enough large to prevent overcoverage and so to activate a minimum number of sensors. The values of $\alpha_{i}^{j}$ can be identical for all coverage intervals $i$ of one sensor $j$ in order to express that the perimeter of each sensor should be uniformly covered, but $\alpha_{i}^{j}$ values can be differenciated between sensors to force some regions to be better covered than others. The choice of $\beta \gg \alpha$ prevents the overcoverage, and so limit the activation of a large number of sensors, but as $\alpha$ is low, some areas may be poorly covered. This explains the results obtained for {\it Lifetime50} with $\beta \gg \alpha$: a large number of periods with low coverage ratio. With $\alpha \gg \beta$, we priviligie the coverage even if some areas may be overcovered, so high coverage ratio is reached, but a large number of sensors are activated to achieve this goal. Therefore network lifetime is reduced. The choice $\alpha=0.6$ and $\beta=0.4$ seems to achieve the best compromise between lifetime and coverage ratio. }}\\
-
-
-
-\noindent {\bf 6.} The authors have performed a thorough review of existing coverage methodologies.
-However, the clarity in the literature review is a little off. Some of the descriptions of the method
-s used are very vague and do not bring out their key contributions. Some references are not consistent and I suggest using the journals template to adjust them for overall consistency. \\
-
-\textcolor{blue}{\textbf{\textsc{Answer:} }}\\
-
-
-
-\noindent {\bf 7.} The methodology is implemented in OMNeT++ (network simulator) and tested against 2 existing algorithms and a previously developed method by the authors. The simulation results are thorough and show that the proposed method improves the coverage and network lifetime compared with the 3 existing methods. The results are similar to previous work done by their team. \\
-
-\textcolor{blue}{\textbf{\textsc{Answer:} Although the study conducted in this paper reuses the same protocol presented in our previous work, we focus in this paper on the mathematical optimization model developed to schedule nodes activities. We deliberately chose to keep the same performance indicators to compare the results obtained with this new formulation with other existing algorithms. }}\\
-
-
-\noindent {\bf 8.} Since this paper is attacking the coverage problem, I would like to see more information on the amount of coverage the algorithm is achieving. It seems that there is a tradeoff in this algorithm that allows the network to increase its lifetime but does not improve the coverage ratio. This may be an issue if this approach is used in an application that requires high coverage ratio. \\
-
-\textcolor{blue}{\textbf{\textsc{Answer:} Your remark is interesting. Indeed, figures 8(a) and (b) highlight this result. PeCO methods allows to achieve a coverage ratio greater than $50\%$ for many more periods than the others three methods, but for applications requiring an high level of coverage (greater than $95\%$), DilCO method is more efficient. It is explained at the end of section 5.2.4. }}\\
+\noindent {\bf 3.} The communication and information sharing required to
+cooperate and make these decisions was not discussed.\\
+
+\textcolor{blue}{\textbf{\textsc{Answer:} The communication and information
+ sharing required to cooperate and make these decisions is discussed at the
+ end of page 8. Position coordinates, remaining energy, sensor node ID and
+ number of one-hop neighbors are exchanged.}}\\
+
+\noindent {\bf 4.} The definitions of the undercoverage and overcoverage
+variables are not clear. I suggest adding some information about these values,
+since without it, you cannot understand how M and V are computed for the
+optimization problem.\\
+
+\textcolor{blue}{\textbf{\textsc{Answer:} The perimeter of each sensor may be
+  divided into parts called coverage intervals (CI). The level of coverage of one CI
+ is defined as the number of active sensors neighbors covering this part of
+ the perimeter. If a given level of coverage $l$ is required for one sensor,
+ the sensor is said to be undercovered (respectively overcovered) if the
+ level of coverage of one of its CI is less (respectively greater) than
+  $l$. In other words, we define undercoverage and overcoverage through the
+ use of variables $M_{i}^{j}$ and $V_{i}^{j}$ for one sensor $j$ and its
+ coverage interval $i$. If the sensor $j$ is undercovered, there exists at
+ least one of its CI (say $i$) for which the number of active sensors
+ (denoted by $l^{i}$) covering this part of the perimeter is less than $l$
+  and in this case: $M_{i}^{j}=l-l^{i}$, $V_{i}^{j}=0$. On the contrary, if
+ the sensor $j$ is overcovered, there exists at least one of its CI (say $i$)
+ for which the number of active sensors (denoted by $l^{i}$) covering this
+  part of the perimeter is greater than $l$ and in this case: $M_{i}^{j}=0$,
+ $V_{i}^{j}=l^{i}-l$. This explanation has been added in the penultimate
+ paragraph of section~4.}}\\
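+
+\textcolor{blue}{These definitions can be restated compactly (an equivalent
+  reformulation, for clarity) as $M_{i}^{j}=\max(0,\,l-l^{i})$ and
+  $V_{i}^{j}=\max(0,\,l^{i}-l)$, so that for any coverage interval at most one
+  of the two variables is nonzero.}\\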
+
+\noindent {\bf 5.} Can you mathematically justify how you chose the values of
+alpha and beta? This is not very clear. I would suggest possibly adding more
+results showing how the algorithm performs with different alphas and betas.\\
+
+\textcolor{blue}{\textbf{\textsc{Answer:} To discuss this point, we added
+  subsection 5.2.5 in which we study the protocol performance, considering
+  the $Lifetime_{50}$ and $Lifetime_{95}$ metrics, for different pairs of
+  values of $\alpha$ and $\beta$. Table 4 presents the results obtained for a
+  WSN of 200~sensor nodes. This explains the values chosen for the simulation
+  settings in Table~2. \\ \indent The choice of $\alpha$ and $\beta$ should be
+  made according to the needs of the application. $\alpha$ should be large
+  enough to prevent undercoverage and thus reach the highest possible
+  coverage ratio. $\beta$ should be large enough to prevent overcoverage and
+  thus activate a minimum number of sensors. The values of $\alpha_{i}^{j}$
+  can be identical for all
+ coverage intervals $i$ of one sensor $j$ in order to express that the
+ perimeter of each sensor should be uniformly covered, but $\alpha_{i}^{j}$
+ values can be differentiated between sensors to force some regions to be
+  better covered than others. The choice of $\beta \gg \alpha$ prevents
+  overcoverage and so limits the activation of a large number of sensors, but
+ as $\alpha$ is low, some areas may be poorly covered. This explains the
+ results obtained for $Lifetime_{50}$ with $\beta \gg \alpha$: a large number
+ of periods with low coverage ratio. With $\alpha \gg \beta$, we favor the
+ coverage even if some areas may be overcovered, so high coverage ratio is
+ reached, but a large number of sensors are activated to achieve this goal.
+  Therefore, the network lifetime is reduced. The choice $\alpha=0.6$ and
+ $\beta=0.4$ seems to achieve the best compromise between lifetime and
+ coverage ratio.}}\\
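+
+\textcolor{blue}{As a sketch of this trade-off (assuming, as an illustrative
+  form, that the objective function linearly weights both terms), minimizing
+  $\sum_{j}\sum_{i}\left(\alpha_{i}^{j}\,M_{i}^{j}+\beta_{i}^{j}\,V_{i}^{j}\right)$
+  shows directly how $\alpha$ and $\beta$ act: increasing $\alpha$ penalizes
+  undercoverage more heavily, while increasing $\beta$ penalizes overcoverage
+  and hence the activation of redundant sensors.}\\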
+
+\noindent {\bf 6.} The authors have performed a thorough review of existing
+coverage methodologies. However, the clarity in the literature review is a
+little off. Some of the descriptions of the methods used are very vague and do
+not bring out their key contributions. Some references are not consistent and I
+suggest using the journal's template to adjust them for overall consistency.\\
+
+\textcolor{blue}{\textbf{\textsc{Answer:} References have been carefully checked
+ and seem to be consistent with the journal template. In section~2, ``Related
+ literature'', we refer to papers dealing with coverage and lifetime in
+ WSN. Each paragraph of this section discusses the literature related to a
+  particular aspect of the problem: 1. types of coverage, 2. types of schemes,
+ 3. centralized versus distributed protocols, 4. optimization method. At the
+ end of each paragraph we position our approach.}}\\
+
+\noindent {\bf 7.} The methodology is implemented in OMNeT++ (network simulator)
+and tested against 2 existing algorithms and a previously developed method by
+the authors. The simulation results are thorough and show that the proposed
+method improves the coverage and network lifetime compared with the 3 existing
+methods. The results are similar to previous work done by their team.\\
+
+\textcolor{blue}{\textbf{\textsc{Answer:} Although the study conducted in this
+  paper reuses the protocol presented in our previous work, here we focus on
+  the mathematical optimization model developed to schedule node activities.
+  We deliberately chose to keep the same performance indicators so as to
+  compare the results obtained with this new formulation against other
+  existing algorithms.}}\\
+
+\noindent {\bf 8.} Since this paper is attacking the coverage problem, I would
+like to see more information on the amount of coverage the algorithm is
+achieving. It seems that there is a tradeoff in this algorithm that allows the
+network to increase its lifetime but does not improve the coverage ratio. This
+may be an issue if this approach is used in an application that requires high
+coverage ratio. \\
+
+\textcolor{blue}{\textbf{\textsc{Answer:} Your remark is interesting. Indeed,
+  Figures 8(a) and (b) highlight this result. The PeCO protocol achieves a
+  coverage ratio greater than $50\%$ for far more periods than the other
+  three methods, but for applications requiring a high level of coverage
+  (greater than $95\%$), the DiLCO method is more efficient. This is
+  explained at the end of Section 5.2.4.}}\\