-The paper addresses the problem of lifetime coverage in wireless sensor networks. The main issue here is the energy to maintain full coverage of the network while achieving sensing, communication, and computation tasks. The author suggest a new protocol, named DiLCO, aiming at solving the aforementioned objective using a discrete optimization approach. The focus of the paper is clear and the basic idea looks attractive. However, from my opinion, number of clarifications are needed in order for me to be able to validate the whole contribution of the authors. Some of them include:
-
-\noindent {\bf 1. The concept of efficiency is not clearly stated, is it the amount of energy used by the protocol or the time it takes to completion ? (line 52 of the introduction "most efficient")} \\
-\textcolor{blue}{\textbf{\textsc{Answer :} The concept of efficiency refers to the ability of maintaining the best coverage as long as possible. As previously explained, the model with the appropriate weights ensures that a maximum number of points are
- covered by the set of still alive sensors. The efficiency is measured through
- the performance metrics "coverage ratio" and "network lifetime". Coverage ratio remains around 100\% as
- long as possible (as long as there are enough alive sensors to cover all
- primary points) and then decreases. Network Lifetime is defined as the time until the coverage ratio drops below a predefined threshold. }}\\
-
-\noindent {\bf 2. The topology of the graph is not considered in the paper. Isn't it important ? In which class of graphs the author think they will perform better ? are there some disadvantageous topologies ?} \\
-\textcolor{blue}{\textbf{\textsc{Answer :} The study of the topology of the graph is out of the scope of our paper. We do not focus on specific patterns of sensors' deployment. We consider a highly dense network of sensors uniformly deployed in the area of interest. }}\\
-%Uniform graph partition is used by subdividing the sensing field into smaller subgraphs (subregion) using divide-and-conquer concept. The subgraph consists of sensor nodes which are previously deployed over the sensing field uniformly with high density to ensure that any primary point on the sensing field is covered by at least one sensor node. The graph partition problem has gained importance due to its application for clustering. The topology of the graph has important impact on the protocol performance. Random graph has negative effect on our DiLCO protocol because we suppose that the sensing field is subdivided uniformly. }}
-
-\noindent {\bf 3. In line 42 of section 3, why do we need $R_c \geq 2R_s$ ? Isn't it sufficient to have $Rc > Rs$ ? What is the implication of a stronger hypothesis ? How realistic is it ? Again, this raised the question of the topology.}\\
-\textcolor{blue}{\textbf{\textsc{Answer :} We assume that the communication range $R_c$ satisfies the condition $Rc \geq 2R_s$. In fact, Zhang and Hou ("Maintaining Sensing Coverage and. Connectivity in Large Sensor Networks",2005) proved that if the transmission range fulfills the previous hypothesis, the complete coverage of a convex area implies connectivity among active nodes. In this paper, communication ranges and sensing ranges of real sensors are given. Communication range is comprised between 30 and 300 meters. And the sensing range does not exceed 30m. In the case of MEDUSA II sensor node,...........}}\\
-
-\noindent {\bf 4. In line 63 of subsection 3.2, it is not clear why the periodic scheduling is in favor of a more robust network. Please, explain.} \\
-\textcolor{blue}{\textbf{\textsc{Answer :} We explain it in the subsection 3.2. : " A periodic scheduling is
-interesting because it enhances the robustness of the network against node failures. First, a node that has not enough energy to complete a period, or
-which fails before the decision is taken, will be excluded from the scheduling
-process. Second, if a node fails later, whereas it was supposed to sense the
-region of interest, it will only affect the quality of the coverage until the
-definition of a new cover set in the next period. " }}\\
-
-\noindent {\bf 5. The next sentence mention "enough energy to complete a period". This is another point where the author could be more rigorous. Indeed, how accurate is the evaluation of the required energy for a period ?} \\
-\textcolor{blue}{\textbf{\textsc{Answer :} The evaluation of the required energy to complete a period takes into account the energy consumed for information exchange with neigbors inside a subregion and the energy needed to stay active during the sensing period. Here, the sensing period duration is equal to one hour but may adapted dynamically according to the QoS requirements. The threshold value $E_{th}$ has been fixed to 36 Joules. This value has been computed by multiplying the energy consumed in the active state (9.72 mW) by the time in second for one period (3600 seconds), and adding the energy for the pre-sensing phases. We explain that in subsection 5.1. In our simulation, the time computation required by a leader to solve the integer program does not exceed 1000 seconds regardless the size of the network and the number of subregions (see figure 4), except the case with two subregions (DilCO-2) where the times computation become much too long as the network size increases. So the energy required for computation $E^{comp}$, estimated to 26.83 mW per second, will never exceed 26.83 Joules. All sensors whose remaining energy is greater than $E_{th}=36$ Joules are potential leaders. Once a leader is selected, it will be itself included in the coverage problem formulation only if its remaining energy before computation is greater than $E_{th}+E^{comp}$. We added a sentence in section 3.2. before the description of the algorithm to clarify this point. Recall that $E^{comp}>E_{th}$ makes no sense. In such a case, the energy required for the decision phase would be greater than the energy required for the sensing phase.}}\\
-
-\noindent {\bf 6. About the information collected (line 36-38) , what are they used for ?} \\
-\textcolor{blue}{\textbf{\textsc{Answer :} The information collected is used for leader election and decision phases. Details on the INFO packet have been added at the end of section~3.2. After
- the information exchange among the sensor nodes in the subregion, each node
- will have all the information needed to decide if it will be the leader or
- not. The decision is based on selecting the sensor node that has the larger
- number of one-hop neighbors. If this value is the same for many sensors, the
- node that has the largest remaining energy will be selected as a leader. If
- there exists sensors with the same number of neighbors and the same value
- for the remaining energy, the sensor node that has the largest index will be
- finally selected as a leader. }}\\
-
-\noindent {\bf 7. The way the leader is elected could emphasize first on the remaining energy. Is it sure that the remaining energy will be sufficient to solve the integer program algorithm ?} \\
-\textcolor{blue}{\textbf{\textsc{Answer :} You are right. We have answered this question in previous comments. Remaining energy for DiLCO-4, DiLCO-8, DiLCO-16, and DiLCO-32 protocol versions will be sufficient to solve the integer program algorithm (see Figure 4: Execution time in seconds) in so far as the time computation does not exceed 1000 seconds. Therefore the energy required for computation $E^{comp}$, estimated to 26.83 mW per second, will never exceed 26.83 Joules. However only sensors able to be alive during one sensing period will be included in the coverage problem formulation. To sum up, a sensor may be elected as a leader only if its remaining energy is greater than $E^{comp}$, a leader may participate in the sensing phase only if its remaining energy is greater than $E_{th}+E^{comp}$. Recall that $E_{th}>E^{comp}$.}}\\
-
-\noindent {\bf 8. Regarding the MIP formulation at the end of section 4, the first constraint does not appear as a constraint for me as it is an invariant (as shown on top)} \\
-\textcolor{blue}{\textbf{\textsc{Answer :} This constraint is essential to make the integer program consistent. Whithout this constraint, one optimal solution may be $\theta_p=0 \quad \forall p \in P$, and $U_p=0 \quad \forall p \in P$, whatever the values of $X_j$. And no real optimization is performed. }}\\
-
-\noindent {\bf 9. How $ w_\theta $ and $ w_U $ are chosen ? (end of section 4). How dependent if the method toward these parameters ?} \\
-\textcolor{blue}{\textbf{\textsc{Answer :} Both weights $ w_\theta $ and $ w_U $ must be carefully chosen in order to guarantee that the maximum number of points are covered during each period. In fact, $ w_U $ should be large enough compared to $w_{\Theta}$ to prevent overcoverage and so to activate a minimum number of sensors. We discuss this point in our answer for question 4 of reviewer 3.}}\\
-
-\noindent {\bf 10. In table 2, the "listening" and the "computation" status are both (ON, ON, ON), is that correct ?} \\
-\textcolor{blue}{\textbf{\textsc{Answer :} Yes, in both cases, sensors continue their processing, communication, and sensing tasks. }}\\
-
-\noindent {\bf 11. In line 60-61, you choose active energy as reference, is that sufficient for the computation ?} \\
-\textcolor{blue}{\textbf{\textsc{Answer :} We discuss this point in our answers for question 5 and 7.}}\\
-
-\noindent {\bf 12. The equation of EC has the communication energy duplicated} \\
-\textcolor{blue}{\textbf{\textsc{Answer :} In fact, there is no duplication. The first one, denoted $E^{\scriptsize \mbox{com}}_m$, represents the energy consumption spent by all the nodes for wireless
-communications during period $m$. The second, $E^{\scriptsize \mbox{comp}}_m$ refers to the energy needed by all the leader nodes to solve the integer program during a period. }}\\
-
-\noindent {\bf 13. Figure 2 should be discussed including the initial energy and the topology of the graph} \\
-\textcolor{blue}{\textbf{\textsc{Answer :} Each node has an initial energy level, in Joules, which is randomly drawn in $[500-700]$. If its energy provision reaches a value below the threshold $E_{th}$ = 36 Joules, the minimum energy
-needed for a node to stay active during one period, it will no longer take part in the coverage task. As previously explained in answer 2, we consider a highly dense network of sensors uniformly deployed in the area of interest.}}\\
-
-\noindent {\bf 14. You mention a DELL laptop. How this could be assimilated to a sensor ?} \\
-\textcolor{blue}{\textbf{\textsc{Answer :} In fact, simulations are performed on a laptop DELL. But to be consistent with the use of real sensors in practice, we multiply the execution times obtained with the DELL laptop by a constant. This is explained in subsection 5.2.3.}}\\
-
-\noindent {\bf 15. In figure 4, what makes the execution times different ?} \\
-\textcolor{blue}{\textbf{\textsc{Answer :} The execution times are different according to the size of the integer problem to solve. The size of the problem depends on the number of variables and
- constraints. The number of variables is linked to the number of alive sensors
- $J$, and the number of primary points
- $P$. Thus the integer program contains $J$ variables of type $X_j$,
- $P$ overcoverage variables and $P$ undercoverage variables. The number of
- constraints is equal to $P$.}}\\
-
-\noindent {\bf 16. Why is it important to mention a divide-and-conquer approach (conclusion)} \\
-\textcolor{blue}{\textbf{\textsc{Answer :} it is important to mention a divide-and-conquer approach because of the subdivision of the sensing field is based on this concept. }}\\
-
-\noindent {\bf 17. The connectivity among subregion should be studied too.} \\
-\textcolor{blue}{\textbf{\textsc{Answer :} Yes you are right, we will investigated in future. }}\\\\
-
-
-
-
-
+The paper addresses the problem of lifetime coverage in wireless sensor networks. The main issue here is the energy needed to maintain full coverage of the network while achieving sensing, communication, and computation tasks. The authors suggest a new protocol, named DiLCO, aiming at solving the aforementioned objective using a discrete optimization approach. The focus of the paper is clear and the basic idea looks attractive. However, in my opinion, a number of clarifications are needed in order for me to be able to validate the whole contribution of the authors. Some of them include:
+
+\noindent {\bf 1. The concept of efficiency is not clearly stated, is it the amount of energy used by the protocol or the time it takes to completion ? (line 52 of the introduction ``most efficient'')}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} The concept of efficiency refers to the ability to maintain the best coverage as long as possible. As previously explained, the model with the appropriate weights ensures that a maximum number of points are covered by the set of still alive sensors. The efficiency is measured through the performance metrics ``coverage ratio'' and ``network lifetime''. The coverage ratio remains around 100\% as long as possible (as long as there are enough alive sensors to cover all primary points) and then decreases. The network lifetime is defined as the time until the coverage ratio drops below a predefined threshold.}}\\
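+\textcolor{blue}{\textbf{Denoting by $\mbox{CR}(t)$ the coverage ratio at time $t$ and by $\tau$ the predefined threshold (notations introduced only for this answer), the network lifetime can be written compactly as
+\[ \mbox{Lifetime} = \max\{\, t : \mbox{CR}(t) \geq \tau \,\}. \]}}\\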
+
+\noindent {\bf 2. The topology of the graph is not considered in the paper. Isn't it important ? In which class of graphs do the authors think it will perform better ? Are there some disadvantageous topologies ?}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} The study of the topology of the graph is out of the scope of our paper. We do not focus on specific patterns of sensors' deployment. We consider a highly dense network of sensors uniformly deployed in the area of interest. }}\\
+%Uniform graph partition is used by subdividing the sensing field into smaller subgraphs (subregion) using divide-and-conquer concept. The subgraph consists of sensor nodes which are previously deployed over the sensing field uniformly with high density to ensure that any primary point on the sensing field is covered by at least one sensor node. The graph partition problem has gained importance due to its application for clustering. The topology of the graph has important impact on the protocol performance. Random graph has negative effect on our DiLCO protocol because we suppose that the sensing field is subdivided uniformly. }}
+
+\noindent {\bf 3. In line 42 of section 3, why do we need $R_c \geq 2R_s$ ? Isn't it sufficient to have $R_c > R_s$ ? What is the implication of a stronger hypothesis ? How realistic is it ? Again, this raises the question of the topology.}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} We assume that the communication range $R_c$ satisfies the condition $R_c \geq 2R_s$. In fact, Zhang and Hou (``Maintaining Sensing Coverage and Connectivity in Large Sensor Networks'', 2005) proved that if the transmission range fulfills this hypothesis, the complete coverage of a convex area implies connectivity among active nodes. In our paper, the communication ranges and sensing ranges of real sensors are given. Usually, the communication range goes from several tens of meters up to several hundreds (typically between 30 and 300 meters) and the sensing range does not exceed 30~m. In the case of the MEDUSA II sensor node, communications are performed by a TR1000 RFM radio transceiver, which has a maximum transmission power of 0.75~mW and an approximate transmission range of 20~m.}}\\
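+\textcolor{blue}{\textbf{The intuition behind this result can be sketched with the triangle inequality (the full proof of Zhang and Hou handles the general convex-area case): if two active sensors $i$ and $j$ both cover a common point $q$, then
+\[ d(i,j) \leq d(i,q) + d(q,j) \leq R_s + R_s = 2R_s \leq R_c, \]
+so $i$ and $j$ can communicate directly.}}\\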
+
+\noindent {\bf 4. In line 63 of subsection~3.2, it is not clear why the periodic scheduling is in favor of a more robust network. Please, explain.}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} We explain it in subsection~3.2.: ``A periodic scheduling is interesting because it enhances the robustness of the network against node failures. First, a node that has not enough energy to complete a period, or which fails before the decision is taken, will be excluded from the scheduling process. Then, if a node fails later, while it was supposed to sense the region of interest, it will only affect the quality of the coverage until the definition of a new cover set in the next period.''}}\\
+
+\noindent {\bf 5. The next sentence mentions ``enough energy to complete a period''. This is another point where the authors could be more rigorous. Indeed, how accurate is the evaluation of the required energy for a period ?}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} The evaluation of the required energy to complete a period takes into account the energy consumed for information exchange with neighbors inside a subregion and the energy needed to stay active during the sensing period. In our case, the sensing period duration is equal to one hour but might be adapted dynamically according to the QoS requirements. Thus, the threshold value $E_{th}$, which has been fixed to 36~Joules, has been computed by multiplying the power consumed in the active state (9.72~mW) by the duration in seconds of one period (3,600 seconds), and adding the energy for the pre-sensing phases. We explain that in subsection~5.1. In our simulations, the computation time required by a leader node to solve the integer program does not exceed 1,000~seconds regardless of the size of the network and the number of subregions (see Figure~4), except in the case with two subregions (DiLCO-2), where the computation time becomes much too long as the network size increases. So the energy required for computation, $E^{comp}$, computed from a power consumption of 26.83~mW, will never exceed 26.83~Joules. All sensors whose remaining energy is greater than $E_{th}=36$~Joules are potential leaders. Once a leader is selected, it will be itself included in the coverage problem formulation only if its remaining energy before computation is greater than $E_{th}+E^{comp}$. We added a sentence in subsection~3.2, before the description of the algorithm, to clarify this point. Recall that $E^{comp}>E_{th}$ makes no sense: in such a case, the energy required for the decision phase would be greater than the energy required for the sensing phase.}}\\
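+\textcolor{blue}{\textbf{For clarity, the computation of the threshold can be written as
+\[ E_{th} \approx \underbrace{9.72\,\mbox{mW} \times 3600\,\mbox{s}}_{\simeq\, 34.99\,\mbox{J}} \;+\; E_{pre} \approx 36\,\mbox{Joules}, \]
+where $E_{pre}$ (a notation introduced only for this answer) denotes the energy spent in the pre-sensing phases, which accounts for the remaining $\simeq 1$~Joule.}}\\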
+
+\noindent {\bf 6. About the information collected (lines 36-38), what is it used for ?}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} This information is used for the leader election and decision phases. Details on the INFO packet have been added at the end of subsection~3.2. After the information exchange among the sensor nodes in the subregion, each node will have all the information needed to decide if it will be the leader or not. The decision is based on selecting the sensor node that has the largest number of one-hop neighbors. If this value is the same for several sensors, the node that has the largest remaining energy will be selected as a leader. If there are sensors with the same number of neighbors and the same value for the remaining energy, the sensor node that has the largest index will be finally selected as a leader.}}\\
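+\textcolor{blue}{\textbf{Formally, the election rule amounts to picking the node with the lexicographically largest triple (number of one-hop neighbors, remaining energy, index). Denoting by $N_j$ the number of one-hop neighbors of node $j$ and by $RE_j$ its remaining energy (notations introduced only for this answer), the leader is
+\[ j^{*} = \arg\max_{j} \; (N_j,\, RE_j,\, j) \quad \mbox{in lexicographic order.} \]
+Since the index $j$ is unique, this rule always designates a single leader.}}\\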
+
+\noindent {\bf 7. The way the leader is elected could emphasize the remaining energy first. Is it certain that the remaining energy will be sufficient to solve the integer program algorithm ?}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} You are right. We have answered this question in previous comments. The remaining energy for the DiLCO-4, DiLCO-8, DiLCO-16, and DiLCO-32 protocol versions will be sufficient to solve the integer program (see Figure 4: execution time in seconds) insofar as the computation time does not exceed 1,000~seconds. Therefore the energy required for computation, $E^{comp}$, computed from a power consumption of 26.83~mW, will never exceed 26.83~Joules. However, only sensors able to stay alive during one sensing period will be included in the coverage problem formulation. To sum up, a sensor may be elected as a leader only if its remaining energy is greater than $E^{comp}$, and a leader may participate in the sensing phase only if its remaining energy is greater than $E_{th}+E^{comp}$. Recall that $E_{th}>E^{comp}$.}}\\
+
+\noindent {\bf 8. Regarding the MIP formulation at the end of section 4, the first constraint does not appear as a constraint for me as it is an invariant (as shown on top)}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} This constraint is essential to make the integer program consistent. Without this constraint, one optimal solution might be $\theta_p=0 \quad \forall p \in P$ and $U_p=0 \quad \forall p \in P$, whatever the values of $X_j$, and no real optimization would then be performed.}}\\
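+\textcolor{blue}{\textbf{To make this explicit (with the notations of section~4, and up to minor notational differences), the constraint ties the activation variables to the overcoverage and undercoverage variables:
+\[ \sum_{j \in J} \alpha_{jp}\, X_j - \theta_p + U_p = 1 \qquad \forall p \in P, \]
+where $\alpha_{jp}=1$ if sensor $j$ covers primary point $p$. It forces $\theta_p$ to grow when $p$ is covered by several sensors and $U_p$ to equal $1$ when $p$ is not covered at all, so the objective really trades coverage against the number of active sensors.}}\\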
+
+\noindent {\bf 9. How are $w_\theta$ and $w_U$ chosen ? (end of section 4). How dependent is the method on these parameters ?}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} Both weights $w_\theta$ and $w_U$ must be carefully chosen in order to guarantee that the maximum number of points are covered during each period. In fact, $w_U$ should be large enough compared to $w_\theta$ to prevent overcoverage and thus to activate a minimum number of sensors. We discuss this point in our answer to question~4 of reviewer~3.}}\\
+
+\noindent {\bf 10. In Table 2, the ``listening'' and the ``computation'' status are both (ON, ON, ON), is that correct ?}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} Yes, in both cases, sensors continue their processing, communication, and sensing tasks.}}\\
+
+\noindent {\bf 11. In lines 60-61, you choose the active energy as reference, is that sufficient for the computation ?}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} We discuss this point in our answers for questions~5 and 7.}}\\
+
+\noindent {\bf 12. The equation of EC has the communication energy duplicated}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} In fact, there is no duplication. The first one, denoted $E^{\scriptsize \mbox{com}}_m$, represents the energy consumption spent by all the nodes for wireless communications during period $m$. The second, $E^{\scriptsize \mbox{comp}}_m$, refers to the energy needed by all the leader nodes to solve the integer program during a period.}}\\
+
+\noindent {\bf 13. Figure 2 should be discussed including the initial energy and the topology of the graph}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} Each node has an initial energy level, in Joules, which is randomly drawn in the interval $[500, 700]$. If its energy provision reaches a value below the threshold $E_{th} = 36$~Joules, the minimum energy needed for a node to stay active during one period, it will no longer take part in the coverage task. As explained previously in answer 2, we consider a highly dense network of sensors uniformly deployed in the area of interest.}}\\
+
+\noindent {\bf 14. You mention a DELL laptop. How could this be assimilated to a sensor ?}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} In fact, simulations are
+ performed on a DELL laptop. But to be consistent with the use of real sensors in practice, we multiply the execution times obtained with the DELL laptop by a constant. This is explained in subsection~5.2.3.}}\\
+
+\noindent {\bf 15. In Figure 4, what makes the execution times different ?}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} The execution times are different according to the size of the integer problem to solve. The size of the problem depends on the number of variables and constraints. The number of variables is linked to the number of alive sensors $J$, and the number of primary points $P$. Thus the integer program contains $J$ variables of type $X_j$, $P$ overcoverage variables, and $P$ undercoverage variables. The number of constraints is equal to $P$.}}\\
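+\textcolor{blue}{\textbf{In short, the size of the integer program is
+\[ \#\mbox{variables} = J + 2P, \qquad \#\mbox{constraints} = P, \]
+so the solving time grows with both the number of alive sensors and the number of primary points; with more subregions, each leader handles a smaller $J$, which is consistent with the shorter execution times observed for the protocol versions with many subregions.}}\\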
+
+\noindent {\bf 16. Why is it important to mention a divide-and-conquer approach ? (conclusion)}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} It is important to mention a divide-and-conquer approach because the subdivision of the sensing field is based on this concept.}}\\
+
+\noindent {\bf 17. The connectivity among subregions should be studied too.}\\
+\textcolor{blue}{\textbf{\textsc{Answer:} Yes, you are right, we will investigate it more precisely in the future. At the moment, we make the assumption that the communication range $R_c$ satisfies the condition $R_c \geq 2R_s$. In fact, Zhang and Hou (``Maintaining Sensing Coverage and Connectivity in Large Sensor Networks'', 2005) proved that if the transmission range fulfills this hypothesis, the complete coverage of a convex area implies connectivity among active nodes. Therefore, as long as the coverage ratio is greater than $95\%$, we can assume that the connectivity is maintained. We have checked this hypothesis by simulations with OMNeT++.}}\\