set of points called primary points~\cite{idrees2014coverage}. We assume that
the sensing disk defined by a sensor is covered if all the primary points of
this sensor are covered. By knowing the position of a wireless sensor node
(centered at the position $\left(p_x,p_y\right)$) and its sensing range
$R_s$, we define up to 25 primary points $X_1$ to $X_{25}$ as described in
Figure~\ref{fig1}. The optimal number of primary points is investigated in
Section~\ref{ch4:sec:04:06}.
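For illustration, the following sketch generates one plausible layout of 25
primary points (the disk center, an inner ring at $R_s/2$, and an outer ring at
$R_s$); the exact coordinates used in this work are those of Figure~\ref{fig1},
so this particular layout is only an assumption.
\begin{verbatim}
import math

def primary_points(px, py, rs):
    # Illustrative layout only: center (X_1), 8 points on an inner
    # ring of radius rs/2 (X_2..X_9), and 16 points on the sensing
    # border of radius rs (X_10..X_25). The actual placement is the
    # one shown in Figure fig1.
    points = [(px, py)]
    for k in range(8):
        a = 2 * math.pi * k / 8
        points.append((px + 0.5 * rs * math.cos(a),
                       py + 0.5 * rs * math.sin(a)))
    for k in range(16):
        a = 2 * math.pi * k / 16
        points.append((px + rs * math.cos(a),
                       py + rs * math.sin(a)))
    return points

def covers(sx, sy, rs, point):
    # A primary point is covered by a sensor at (sx, sy) iff it lies
    # within the sensing disk of radius rs.
    return math.hypot(point[0] - sx, point[1] - sy) <= rs
\end{verbatim}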
solver returns the best solution found, which is not necessarily the optimal
one. In practice, we only set time limit values for $T=5$ and $T=7$. In fact,
for $T=5$ we limited the time for 250~nodes, whereas for $T=7$ it was for the
three largest network sizes. Therefore we used the following values (in
seconds): 0.03 for 250~nodes when $T=5$, while for $T=7$ we chose 0.03, 0.06,
and 0.08 for 150, 200, and 250~nodes, respectively. These time limit
thresholds have been set empirically. The basic idea is to consider the
average execution time needed to solve the integer programs to optimality for
100 nodes and then to scale this time linearly with the network size. After
that, the threshold value is increased if necessary so that the solver is able
to deliver a feasible solution within the time limit. Selecting the optimal
values for the time limits will be investigated in future work.}
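A minimal sketch of this empirical rule is given below; the value
\texttt{avg\_time\_100} (average resolution time measured for 100 nodes) and
the feasibility adjustment are hypothetical placeholders, not values taken from
our experiments.
\begin{verbatim}
def time_limit(n_nodes, avg_time_100, adjustment=1.0):
    # Empirical time-limit threshold (in seconds): start from the
    # average time needed to solve the integer program to optimality
    # for 100 nodes, scale it linearly with the network size, and
    # enlarge it (adjustment > 1) until the solver returns a feasible
    # solution within the limit.
    return adjustment * avg_time_100 * (n_nodes / 100.0)

# Hypothetical usage: with avg_time_100 = 0.02 s the linear rule gives
# 0.03, 0.04, and 0.05 s for 150, 200, and 250 nodes; the thresholds
# finally retained for T=7 are 0.03, 0.06, and 0.08 s after the
# feasibility adjustment.
\end{verbatim}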
In the following, we will make comparisons with two other methods. The first
method, called DESK and proposed by \cite{ChinhVu}, is a fully distributed
phase time.
Some preliminary experiments were performed to study the choice of the number of
subregions into which the sensing field is divided, considering different
network sizes. They show that as the number of subregions increases, so does the
network lifetime. Moreover, it makes the MuDiLCO protocol more robust against
random network disconnections due to node failures. However, too many
subdivisions reduce the advantage of the optimization. In fact, there is a
trade-off between the benefit of the optimization and the execution time needed
to solve it. In the following we have set the number of subregions to 16.
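As a simple illustration, and assuming the sensing field is a rectangle divided
into a regular $4\times 4$ grid of 16 subregions (the grid shape is our
assumption here), each sensor can determine its subregion from its position:
\begin{verbatim}
def subregion_index(x, y, field_w, field_h, grid=4):
    # Map a position (x, y) inside a field of size field_w x field_h
    # to one of grid*grid subregions (0..15 for a 4x4 grid). The
    # regular rectangular subdivision is an illustrative assumption.
    col = min(int(x / (field_w / grid)), grid - 1)
    row = min(int(y / (field_h / grid)), grid - 1)
    return row * grid + col
\end{verbatim}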
\subsection{Energy model}
the area of interest for a larger number of rounds. It also means that MuDiLCO
saves more energy, with fewer dead nodes, at least for several rounds, and thus
should extend the network lifetime. \textcolor{blue}{MuDiLCO-7 seems to achieve
most of the time the best coverage ratio up to round~80; after that, MuDiLCO-5
is slightly better.}
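For reference, and assuming the coverage ratio of a round is measured as the
percentage of sample points of the area of interest covered by at least one
active sensor, it can be written as
\begin{equation*}
  \text{Coverage Ratio} = \frac{n}{N} \times 100,
\end{equation*}
where $n$ is the number of covered sample points during the current round and
$N$ is the total number of sample points in the area of interest.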
\begin{figure}[ht!]
\textcolor{blue}{Energy consumption increases with the network size and the
  number of rounds. The curve Unlimited-MuDiLCO-7 shows that the energy
  consumption due to the time spent to optimally solve the integer program
  increases drastically with the size of the network. When the resolution time
  is limited for large network sizes, the energy consumption remains of the same
  order of magnitude whatever the MuDiLCO version, as can be seen with
  MuDiLCO-7.}
become quickly unsuitable for a sensor node, especially when the sensor
network size increases, as demonstrated by Unlimited-MuDiLCO-7. Notice that
for 250 nodes, we also limited the execution time for $T=5$; otherwise the
execution time, denoted by Unlimited-MuDiLCO-5, is also above that of
MuDiLCO-7. On the one hand, a large value for $T$ makes it possible to reduce
the energy overhead due to the three pre-sensing phases; on the other hand, a
leader node may waste a considerable amount of energy to solve the
optimization problem. Thus, limiting the time
deactivated and thus save energy. Compared to the other approaches, our MuDiLCO
protocol maximizes the lifetime of the network. In particular, the gain in
lifetime for a coverage over 95\% and a network of 250~nodes is greater than
43\% when switching from GAF to MuDiLCO-5.
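Assuming the gain is measured as the relative improvement of the
$Lifetime_{95}$ metric, this corresponds to
\begin{equation*}
  \text{gain} = \left(\frac{Lifetime_{95}(\text{MuDiLCO-5})}
  {Lifetime_{95}(\text{GAF})} - 1\right) \times 100 > 43\%.
\end{equation*}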
%The lower performance that can be observed for MuDiLCO-7 in case
%of $Lifetime_{95}$ with large wireless sensor networks results from the
%difficulty of the optimization problem to be solved by the integer program.
rather than limiting the execution time, similar results might be obtained by
  replacing the computation of the exact solution with a suboptimal one
  obtained using a heuristic approach. For our simulation setup and considering
  the different metrics, MuDiLCO-5 seems to be the best suited method compared
  to MuDiLCO-7.}
\begin{figure}[t!]
\centering