An example of a sensor network of size 10.
All nodes are video sensors (depicted as small discs)
except the 9th one, which is the sink (depicted as a rectangle).
-Large lircles represent the maximum
+Large circles represent the maximum
transmission range, which is set to $20 m$, in a square region of
$50 m \times 50 m$.
\end{scriptsize}
In this one,
the nodes, gathered in a set $N$, are the video sensors and the sink.
Furthermore, there is an edge from $i$ to $j$ if $i$ can
-send a message to $j$, \textit{i. e.}, the distance betwween
+send a message to $j$, \textit{i. e.}, the distance between
$i$ and $j$ is less than a given maximum
transmission range.
All the possible edges, or links, are stored into a sequence $L$.
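The coefficients $a_{il}$, $a_{il}^{+}$, and $a_{il}^{-}$ used in the sequel are not
recalled in this excerpt; we assume here the usual incidence convention, which is
consistent with the flow conservation and energy terms written below:
$$
a_{il} =
\left\{
\begin{array}{rl}
+1 & \mbox{if link } l \mbox{ leaves node } i,\\
-1 & \mbox{if link } l \mbox{ enters node } i,\\
0 & \mbox{otherwise,}
\end{array}
\right.
\qquad
a_{il}^{+} = \max\{a_{il},0\},
\qquad
a_{il}^{-} = \max\{-a_{il},0\}.
$$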
Let thus $R_h$, $R_h \geq 0$,
be the encoding rate of video sensor $h$, $h \in V$.
Let $\eta_{hi}$ be the rate inside the node $i$
-of the production that has beeninitiated by $h$. More precisely, we have
+of the production that has been initiated by $h$. More precisely, we have
$\eta_{hi}$ is equal to $R_h$ if $i$ is $h$,
to $-R_h$ if $i$ is the sink, and to $0$ otherwise.
The encoding power of node $i$ is $P_{si}$, $P_{si} > 0$.
-The emmission distortion of the $i$ node is
+The emission distortion of node $i$ is
$\sigma^2 e^{-\gamma . R_i.P_{si}^{2/3}}$
where $\sigma^2$ is the average input variance and
$\gamma$ is the encoding efficiency coefficient.
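For instance, bounding the distortion of sensor $h$ by a target value $D_h$
amounts to a lower bound on its encoding rate; this is the expression that
appears in the dual function below:
$$
\sigma^2 e^{-\gamma . R_h.P_{sh}^{2/3}} \leq D_h
\quad \Longleftrightarrow \quad
R_h \geq \dfrac{\ln(\sigma^2/D_h)}{\gamma P_{sh}^{2/3}}.
$$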
The transmission consumed power of node $i$ is
$P_{ti} = c_l^s.y_l$ where $c_l^s$ is the transmission energy
consumption cost of link $l$, $l\in L$. This cost is defined
-as foolows: $c_l^s = \alpha +\beta.d_l^{n_p} $ where
+as follows: $c_l^s = \alpha +\beta.d_l^{n_p} $ where
$d_l$ represents the length of link $l$, and
$\alpha$, $\beta$, and $n_p$ are constants.
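With the incidence coefficients above, and writing $y_l = \sum_{h \in V} x_{hl}$
for the total amount of data carried by link $l$ (a reading consistent with the
dual updates given below), the transmission power of node $i$ sums over its
outgoing links:
$$
P_{ti} = \sum_{l \in L} a_{il}^{+}.c^s_l.y_l .
$$
The reception power, introduced next, follows the same pattern with $a_{il}^{-}$
and the constant reception cost $c^r$.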
The reception consumed power of node $i$ is
\end{array}
$$
and where the following constraint is added
-$$ $q_i > 0, \forall i \in N $$
+$q_i > 0, \forall i \in N$.
to solve such a problem~\cite{}.
They first introduce dual variables
$u_{hi}$, $v_{h}$, $\lambda_{i}$, and $w_l$ for any
-$h \in V$,$ i \in N$, and $l \in L$.
+$h \in V$, $ i \in N$, and $l \in L$.
\begin{equation}
\begin{array}{l}
L(R,x,P_{s},q,u,v,\lambda,w)=\\
-\sum_{i \in N} q_i^2 + q_i. \left(
-\sum_{l \in L } a_{il}w_l^{(k)}-
+\sum_{i \in N} \left( q_i^2 + q_i. \left(
+\sum_{l \in L } a_{il}w_l-
\lambda_iB_i
-\right)\\
-+ \sum_{h \in V}
-v_h.\dfrac{\ln(\sigma^2/D_h)}{\gamma P_{sh} ^{2/3}} + \lambda_h P_{sh}\\
+\right)\right) \\
++ \sum_{h \in V} \left(
+v_h.\dfrac{\ln(\sigma^2/D_h)}{\gamma P_{sh} ^{2/3}} + \lambda_h P_{sh} \right)\\
+ \sum_{h \in V} \sum_{l\in L}
\left(
\delta.x_{hl}^2
+ x_{hl}.\sum_{i \in N}
\left(
\lambda_{i}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-} ) + u_{hi} a_{il}
\right)
\right)\\
- + \sum_{h \in V}
+ + \sum_{h \in V} \left(
\delta R_{h}^2
--v_h.R_{h} - \sum_{i \in N} u_{hi}\eta_{hi}
+-v_h.R_{h} - \sum_{i \in N} u_{hi}\eta_{hi}\right)
\end{array}
+\label{eq:dualFunction}
\end{equation}
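Although the primal problem is not restated in this excerpt, the terms of this
Lagrangian can be grouped by multiplier; the following reading, which is an
interpretation consistent with the updates listed below, indicates which relaxed
constraint each dual variable is attached to:
$$
\begin{array}{ll}
u_{hi}: & \sum_{l \in L} a_{il}x_{hl} = \eta_{hi} \quad \mbox{(flow conservation of the production of } h \mbox{)},\\
v_{h}: & R_h \geq \dfrac{\ln(\sigma^2/D_h)}{\gamma P_{sh}^{2/3}} \quad \mbox{(target distortion } D_h \mbox{)},\\
\lambda_{i}: & P_{si} + \sum_{l \in L}\left(a_{il}^{+}c^s_l + a_{il}^{-}c^r\right)\sum_{h \in V} x_{hl} \leq q_i B_i \quad \mbox{(energy budget of node } i \mbox{)},\\
w_{l}: & \sum_{i \in N} a_{il}q_i = 0 \quad \mbox{(equal } q_i \mbox{ at both ends of link } l \mbox{)}.
\end{array}
$$
The quadratic terms in $q_i$, $x_{hl}$, and $R_{h}$ correspond to the
regularisation factors discussed later.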
The proposed algorithm iteratively computes the following variables
\item
$\begin{array}{l}
\lambda_{i}^{(k+1)} = \max\left\{0, \lambda_{i}^{(k)} - \theta^{(k)}.\left(
- q^{(k)}.B_i \right. \right.\\
- \qquad\qquad\qquad -\sum_{l \in L}a_{il}^{+}.c^s_l.\left( \sum_{h \in V}x_{hl}^{(k)} \right) \\
- \qquad\qquad\qquad - \left.\left. \sum_{l \in L} a_{il}^{-}.c^r.\left( \sum_{h \in V}x_{hl}^{(k)} \right) - P_{si}^{(k)} \right) \right\}
+ q_i^{(k)}.B_i - P_{si}^{(k)} \right. \right.\\
+ \qquad\qquad\qquad -\sum_{l \in L}a_{il}^{+}.c^s_l. \sum_{h \in V}x_{hl}^{(k)} \\
+ \qquad\qquad\qquad - \left.\left. c^r.\sum_{l \in L} a_{il}^{-}. \sum_{h \in V}x_{hl}^{(k)} \right) \right\}
\end{array}
$
\item
-$w_l^{(k+1)} = w_l^{(k+1)} + \theta^{(k)}. \left( \sum_{i \in N} a_{il}.q_i^{(k)} \right)$
+$w_l^{(k+1)} = w_l^{(k)} + \theta^{(k)}. \sum_{i \in N} a_{il}.q_i^{(k)} $
\item
-$\theta^{(k)} = \omega / t^{1/2}$
+$\theta^{(k)} = \omega / k^{1/2}$
\item
-$q_i^{(k)} = \arg\min_{q_i>0}
+$q_i^{(k+1)} = \arg\min_{q_i>0}
\left(
q_i^2 + q_i.
\left(
\item \label{item:psh}
$
-P_{sh}^{(k)}
+P_{sh}^{(k+1)}
=
\arg \min_{p > 0}
\left(
\item
$
-R_h^{(k)}
+R_h^{(k+1)}
=
\arg \min_{r \geq 0 }
\left(
$
\item
$
-x_{hl}^{(k)} =
+x_{hl}^{(k+1)} =
\begin{array}{l}
\arg \min_{x \geq 0}
\left(
\end{array}
$
\end{enumerate}
-where the first four elements are dual variable and the last four ones are
-primal ones.
\ No newline at end of file
+for any $h \in V$, $i \in N$, and $l \in L$.
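These updates can be read as a projected dual subgradient scheme: each dual
variable moves along the partial derivative of the Lagrangian evaluated at the
current primal iterates, the nonnegative multipliers (such as $\lambda_i$) being
projected back onto the nonnegative reals. This interpretation is ours, not a
statement of the original text; it is consistent with the step size
$\theta^{(k)} = \omega / k^{1/2}$, which is a classical diminishing step:
$$
\theta^{(k)} \rightarrow 0
\quad \mbox{and} \quad
\sum_{k \geq 1} \theta^{(k)} = +\infty .
$$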
the truncated Newton algorithm,
the Constrained Optimization BY Linear Approximation (COBYLA) method~\cite{ANU:1770520}.
However, all these methods suffer from being iterative approaches
-and need many steps of computation to obtain an approximation
-of the minimal value. This approach is dramatic whilst the objective is to
-reduce all the computation steps.
+and, at each iteration of the global algorithm, they need many computation steps
+to obtain an approximation of the minimal value.
+This is detrimental since the objective is precisely to
+reduce the number of computation steps in order to increase the network lifetime.
A closer look at each function that has to be minimized shows that it is
differentiable and that its minimum can be computed in only one step.
Table~\ref{table:min} presents the corresponding closed form for each primal
variable.
+Thanks to these closed forms, computing the
+new iterate of each primal variable only requires
+one computation step.
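As an example of such a one-step computation, consider the $R_h$ row of
Table~\ref{table:min}: the function to be minimized is a parabola in $r$ (up to
an additive constant), so cancelling its derivative and projecting the result
onto $r \geq 0$ directly gives the new iterate:
$$
\dfrac{d}{dr}\left( \delta_r r^2 -v_h^{(k)}.r - \sum_{i \in N} u_{hi}^{(k)} \eta_{hi}\right)
= 2\delta_r r - v_h^{(k)} = 0
\quad \Longrightarrow \quad
R_h^{(k+1)} = \max\left\{0,\dfrac{v_h^{(k)}}{2\delta_r}\right\}.
$$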
\begin{table*}[t]
$$
\begin{array}{|l|l|l|}
\hline
-q_i^{(k)} &
- \arg\min_{q_i>0}
+q_i^{(k+1)} &
+ \arg\min_{q>0}
\left(
q^2 + q.
\left(
\sum_{l \in L } a_{il}w_l^{(k)}-
\lambda_i^{(k)}B_i
\right)
\right)
&
-\max \left(\epsilon,\dfrac{\sum_{l \in L } a_{il}w_l^{(k)}-
-\lambda_i^{(k)}B_i}{2}\right) \\
+\max \left\{\epsilon,\dfrac{\lambda_i^{(k)}B_i-
+\sum_{l \in L } a_{il}w_l^{(k)}}{2}\right\} \\
\hline
-P_{sh}^{(k)}&
+P_{sh}^{(k+1)}&
\arg \min_{p > 0}
\left(
v_h^{(k)}.\dfrac{\ln(\sigma^2/D_h)}{\gamma p^{2/3}} + \lambda_h^{(k)}p
+ \delta_p p^{8/3}
\right)
&
-\max \left(\epsilon,
+\max \left\{\epsilon,
\left(
\dfrac{
-\lambda_h^{(k)} + \sqrt{(\lambda_h^{(k)})^2 + \dfrac{64}{9}\alpha}
}{\frac{16}{3}\delta_p}
\right)^{\frac{3}{5}}
-\right) \\
+\right\} \\
\hline
-R_h^{(k)}
+R_h^{(k+1)}
&
\arg \min_{r \geq 0 }
\left(
\delta_r r^2
-v_h^{(k)}.r - \sum_{i \in N} u_{hi}^{(k)} \eta_{hi}
\right) &
-\max\left(0,\dfrac{v_h^{(k)}}{2\delta_r}\right)
+\max\left\{0,\dfrac{v_h^{(k)}}{2\delta_r}\right\}
\\
\hline
-x_{hl}^{(k)} &
+x_{hl}^{(k+1)} &
\begin{array}{l}
\arg \min_{x \geq 0}
\left(
\delta_x x^2 + x.\sum_{i \in N} \left(
\lambda_{i}^{(k)}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-} ) + u_{hi}^{(k)} a_{il}
\right)
\right)
\end{array}
&
-\max\left(0,\dfrac{-\sum_{i \in N} \left(
+\max\left\{0,\dfrac{-\sum_{i \in N} \left(
\lambda_{i}^{(k)}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-} ) + u_{hi}^{(k)} a_{il}
-\right)}{2\delta_x}\right)
+\right)}{2\delta_x}\right\}
\\
\hline
\end{array}
$$
-\caption{Expression of each optimized primal variable}
+\caption{Primal Variables: Argmin and Direct Calculus}\label{table:min}
\end{table*}
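To make the one-step nature of these updates concrete, the following sketch
(an illustration added to this text, not code from the original work) implements
the four closed forms of Table~\ref{table:min} in Python. The variable names
mirror the notation of the paper, $\alpha$ in the $P_{sh}$ row is read as
$\delta_p v_h^{(k)} \ln(\sigma^2/D_h) / \gamma$, and any numerical value fed to
these functions is up to the caller.
\begin{verbatim}
import math

# Illustrative sketch (added here): one-step primal updates of the table;
# the incidence and cost conventions are the assumptions stated in the text.

EPS = 1e-9  # small positive value used instead of 0 for the strict constraints

def update_q(i, w, lam, a, B):
    # q_i <- argmin_{q > 0} q^2 + q.(sum_l a_il.w_l - lam_i.B_i)
    s = sum(a[i][l] * w[l] for l in range(len(w)))
    return max(EPS, (lam[i] * B[i] - s) / 2.0)

def update_Ps(h, v, lam, sigma2, D, gamma, delta_p):
    # P_sh <- positive root of the quadratic in P = p^(5/3), then p = P^(3/5)
    alpha = delta_p * v[h] * math.log(sigma2 / D[h]) / gamma
    P = (-lam[h] + math.sqrt(lam[h] ** 2 + 64.0 / 9.0 * alpha)) \
        / (16.0 / 3.0 * delta_p)
    return max(EPS, P) ** (3.0 / 5.0)

def update_R(h, v, delta_r):
    # R_h <- argmin_{r >= 0} delta_r.r^2 - v_h.r  (the constant term is irrelevant)
    return max(0.0, v[h] / (2.0 * delta_r))

def update_x(h, l, lam, u, a, a_plus, a_minus, c_s, c_r, delta_x):
    # x_hl <- argmin_{x >= 0} delta_x.x^2
    #         + x.sum_i(lam_i.(c_s_l.a_il+ + c_r.a_il-) + u_hi.a_il)
    s = sum(lam[i] * (c_s[l] * a_plus[i][l] + c_r * a_minus[i][l])
            + u[h][i] * a[i][l] for i in range(len(lam)))
    return max(0.0, -s / (2.0 * delta_x))
\end{verbatim}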
-
\ No newline at end of file
+
+This improvement
\ No newline at end of file
Let us first discuss the stopping criterion of the cited algorithm.
-We claim that even if the variation of the dual function is less than a given
+We claim that even if the variation of the dual function
+(recalled in equation (\ref{eq:dualFunction}))
+is less than a given
threshold, this does not ensure that the lifetime has been maximized.
Minimizing a function over a multidimensional domain (such as the dual function)
may indeed give a false impression of convergence, because some of the introduced
variables may leave its output almost unchanged while they have not converged yet.
\begin{figure}
- to be continued
- \caption{Relations between dual function threshold and $q_i$ convergence}
+ \begin{center}
+ \includegraphics[scale=0.5]{amplrate.png}
+ \end{center}
+ \caption{Relations between dual function variation and convergence of all the $q_i$}
\label{fig:convergence:scatterplot}
\end{figure}
-Experiments have indeed shown that even if the dual
-function seems to be constant
-(variations between two evaluations of this one is less than $10^{-5}$)
-not all the $q_i$ have the same value.
-For instance, the Figure~\ref{fig:convergence:scatterplot} presents
-a scatter plot.
-
-The maximum amplitude rate of the sequence of $q_i$ --which is
-$\frac{\max_{i \in N} q_i} {\min_{i \in N}q_i}-1$--
+To explain this, we introduce the maximum amplitude rate $\zeta$
+of the sequence of the $q_i$, which is defined as
+$$
+\dfrac{\max_{i \in N} \{q_i\}}
+{\min_{i \in N} \{q_i\}}-1.
+$$
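For instance (the figures here are purely illustrative), if the largest $q_i$ is
$45\%$ above the smallest one, then
$$
\max_{i \in N} \{q_i\} = 1.45 . \min_{i \in N} \{q_i\}
\quad \mbox{and} \quad
\zeta = 1.45 - 1 = 0.45,
$$
that is, a spread of $45\%$, which is the order of magnitude observed below.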
+Figure~\ref{fig:convergence:scatterplot} presents
+a scatter plot of $\zeta$, which
is represented on the $y$-axis,
- with respect to the
+with respect to the
value of the threshold on the dual function variation, which is represented
on the $x$-axis.
-This figure shows that a very small threshold is a necessary condition, but not
-a sufficient criteria to observe convergence of $q_i$.
+
+Experiments have shown that even if the dual
+function seems to be constant
+(the variation between two successive evaluations is less than $10^{-5}$),
+not all the $q_i$ have the same value, \textit{i. e.}, $\zeta$ is still large.
+For instance, even with a threshold set to $10^{-5}$, there can still be a
+difference of more than 45\% between two $q_i$.
+To summarize, a very small threshold is a necessary condition, but not
+a sufficient criterion, to observe the convergence of the $q_i$.
In the following, we consider that the system is $\epsilon$-stable if both
the maximum amplitude rate and the variation of the dual function are less
than a threshold $\epsilon$.
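Formally, writing $g^{(k)}$ for the value of the dual function of
equation~(\ref{eq:dualFunction}) at iteration $k$ and $\zeta^{(k)}$ for the
corresponding maximum amplitude rate, this stop criterion (restated here as a
test, with our notations) reads: stop at the first $k$ such that
$$
\left| g^{(k)} - g^{(k-1)} \right| \leq \epsilon
\quad \mbox{and} \quad
\zeta^{(k)} \leq \epsilon .
$$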
$.
The function inside the $\arg \min$ is strictly convex if and only if
$\lambda_h$ is not null. This asymptotic configuration, where $\lambda_h$ is null, may arise due to
-the definition of $\lambda_i$. Worth, in this case, the function is
+the definition of $\lambda_h$. Worse, in this case, the function is
strictly decreasing and its infimum is only approached as $p$ tends to infinity.
To prevent this configuration, we replace the objective function given
In this equation we have first introduced new regularisation factors
(namely $\delta_x$, $\delta_r$, and $\delta_p$)
instead of the sole $\delta$.
-This allows to further study the influence of each modification separately.
+This allows the influence of each factor to be studied separately.
Next, the introduction of the rational exponent is motivated by the goal of
providing a strictly convex function.
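For readability, the regularised objective that is differentiated below is the
one appearing in the $P_{sh}$ row of Table~\ref{table:min}, with the iteration
superscripts dropped:
$$
f(p) = v_h.\dfrac{\ln(\sigma^2/D_h)}{\gamma p^{2/3}} + \lambda_h p + \delta_p p^{8/3},
\qquad p > 0.
$$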
$$
\begin{array}{rcl}
f'(p) &=& -2/3.v_h.\dfrac{\ln(\sigma^2/D_h)}{\gamma p^{5/3}} + \lambda_h +
8/3.\delta_p p^{5/3} \\
-&& \dfrac {8/3.\delta_p p^{10/3} + \lambda_h p^{5/3} -2/3.v_h\ln(\sigma^2/D_h) }{p^{5/3}}
+& = & \dfrac {8/3.\gamma.\delta_p p^{10/3} + \gamma.\lambda_h p^{5/3} -2/3.v_h\ln(\sigma^2/D_h) }{\gamma . p^{5/3}}
\end{array}
$$
which is positive if and only if the numerator is.
-Provided $p^{5/3}$ is replaced by $P$, we have a quadratic function which is strictly convex, for any value of $\lambda_h$.
+Provided $p^{5/3}$ is replaced by $P$, the numerator becomes a quadratic function of $P$
+which is strictly convex for any value of $\lambda_h$, and whose discriminant
+is positive, so that it has a single positive root.
\ No newline at end of file
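Cancelling this numerator with $P = p^{5/3}$ and keeping the positive root of the
resulting quadratic (a short derivation added here for completeness) recovers the
closed form of Table~\ref{table:min}:
$$
\dfrac{8}{3}\gamma\delta_p P^2 + \gamma\lambda_h P - \dfrac{2}{3}v_h\ln(\sigma^2/D_h) = 0
\quad \Longrightarrow \quad
P = \dfrac{-\lambda_h + \sqrt{\lambda_h^2 + \dfrac{64}{9}.\dfrac{\delta_p v_h \ln(\sigma^2/D_h)}{\gamma}}}{\frac{16}{3}\delta_p},
$$
so that $P_{sh} = P^{3/5}$; in the table, $\alpha$ is thus read as a shorthand
for $\delta_p v_h \ln(\sigma^2/D_h)/\gamma$.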
\newcommand{\CG}[1]{\begin{color}{blue}\textit{}\end{color}}
+\newcommand{\R}[0]{\ensuremath{\mathbb{R}}}
\author{
\subsection{A closer look to convexity}
\input{convexity}
-\JFC{On doit parler du probleme pour Ps}
\subsection{Directly retrieving the argmin}
\input{argmin}
%\bibliographystyle{compj}
\bibliographystyle{IEEEtran}
-\bibliography{forensicsVer4}
+\bibliography{abbrev,biblioand}