the truncated Newton algorithm,
the Constrained Optimization BY Linear Approximation (COBYLA) method~\cite{ANU:1770520}.
However, all these methods suffer from being iterative approaches,
each iteration requiring many computation steps to obtain an approximation
of the minimal value.
This is a severe drawback since the objective is to
reduce the number of computation steps in order to increase the network lifetime.
A closer look at each function that has to be minimized shows that it is
differentiable and that its minimal value can be computed in a single step.
Table~\ref{table:min} presents this minimal value for each primal
variable.
Thanks to this formal calculus, computing the
new iterate of each primal variable requires
only one computation step.
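For instance, the function minimized for $R_h$ is a convex parabola in $r$
(the term $\sum_{i \in N} u_{hi}^{(k)} \eta_{hi}$ being constant in $r$),
so cancelling its derivative directly provides the new iterate:
$$
\dfrac{\partial}{\partial r}
\left(\delta_r r^2 - v_h^{(k)}.r\right)
= 2\delta_r r - v_h^{(k)} = 0
\quad \Longleftrightarrow \quad
r = \dfrac{v_h^{(k)}}{2\delta_r},
$$
and the projection onto the feasible set yields the $\max$ expressions of
Table~\ref{table:min}. Similarly, for $P_{sh}$ the substitution $y = p^{5/3}$
reduces the stationarity condition to a quadratic equation in $y$, which
explains the exponent $3/5$ in the corresponding closed form.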
\begin{table*}[t]
$$
\begin{array}{|l|l|l|}
\hline
\textrm{Variable} & \textrm{Argmin formulation} & \textrm{Direct calculus} \\
\hline
q_i^{(k+1)} &
 \arg\min_{q>0}
\left(
q^2 + q.
\left(
\lambda_i^{(k)}B_i - \sum_{l \in L } a_{il}w_l^{(k)}
\right)
\right)
&
\max \left\{\epsilon,\dfrac{\sum_{l \in L } a_{il}w_l^{(k)}-
\lambda_i^{(k)}B_i}{2}\right\} \\
\hline
P_{sh}^{(k+1)}&
\arg \min_{p > 0}
\left(
v_h^{(k)}.\dfrac{\ln(\sigma^2/D_h)}{\gamma p^{2/3}} + \lambda_h^{(k)}p
+ \delta_p p^{8/3}
\right)
&
\max \left\{\epsilon,
\left(
\dfrac{
-3\lambda_h^{(k)} + \sqrt{(3\lambda_h^{(k)})^2 + 64\delta_p v_h^{(k)}\ln(\sigma^2/D_h)/\gamma}
}{16\delta_p}
\right)^{\frac{3}{5}}
\right\} \\
\hline
R_h^{(k+1)}
&
\arg \min_{r \geq 0 }
\left(
\delta_r r^2
-v_h^{(k)}.r - \sum_{i \in N} u_{hi}^{(k)} \eta_{hi}
\right) &
\max\left\{0,\dfrac{v_h^{(k)}}{2\delta_r}\right\}
\\
\hline
x_{hl}^{(k+1)} &
\begin{array}{l}
\arg \min_{x \geq 0}
\left(
\delta_x x^2 + x.\sum_{i \in N} \left(
\lambda_{i}^{(k)}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-} ) + u_{hi}^{(k)} a_{il}
\right)
\right)
\end{array}
&
\max\left\{0,\dfrac{-\sum_{i \in N} \left(
\lambda_{i}^{(k)}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-} ) + u_{hi}^{(k)} a_{il}
\right)}{2\delta_x}\right\}
\\
\hline
\end{array}
$$
\caption{Primal Variables: Argmin and Direct Calculus}\label{table:min}
\end{table*}
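For illustration, the following Python sketch transcribes the direct
calculus column of Table~\ref{table:min} into four one-step update
functions. It is only a sketch under assumptions: the plain-list data
structures, the function names and the value chosen for $\epsilon$ are
ours and do not come from the evaluated implementation.
\begin{verbatim}
import math

EPS = 1e-12  # projection constant epsilon (illustrative value)

def update_q(w, a_i, lam_i, B_i):
    # argmin_{q>0} of the quadratic in q, projected on [EPS, +inf)
    s = sum(a_il * w_l for a_il, w_l in zip(a_i, w))
    return max(EPS, (s - lam_i * B_i) / 2.0)

def update_P(v_h, lam_h, sigma2, D_h, gamma, delta_p):
    # the substitution y = p^(5/3) turns the stationarity
    # condition into a quadratic equation in y
    A = v_h * math.log(sigma2 / D_h) / gamma
    y = (-3.0 * lam_h
         + math.sqrt((3.0 * lam_h) ** 2 + 64.0 * delta_p * A)) \
        / (16.0 * delta_p)
    return max(EPS, y ** 0.6)   # p = y^(3/5)

def update_R(v_h, delta_r):
    # argmin_{r>=0} of delta_r*r^2 - v_h*r
    return max(0.0, v_h / (2.0 * delta_r))

def update_x(lam, u_h, a_l, a_plus, a_minus, c_s_l, c_r, delta_x):
    # argmin_{x>=0} of delta_x*x^2 + x * sum_i(...)
    s = sum(l_i * (c_s_l * ap + c_r * am) + u_hi * a_il
            for l_i, u_hi, a_il, ap, am
            in zip(lam, u_h, a_l, a_plus, a_minus))
    return max(0.0, -s / (2.0 * delta_x))
\end{verbatim}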
This improvement has been evaluated on a set of experiments.
For 10 convergence thresholds $t$ such that $10^{-5} \le t \le 10^{-3}$,
we have executed 10 times the approach detailed before, either with the new
gradient calculus or with the original argmin method.
Table~\ref{Table:argmin:time} summarizes the averages of these
execution times, given in seconds. We observe that the time spent with the
gradient calculus is about 37 times smaller than the time spent with the
argmin method.
Among the implementations of argmin methods, we have retained
COBYLA since it does not require any gradient computation.
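The spirit of this comparison can be reproduced with the sketch below,
which computes a single $P_{sh}$ update both with the direct calculus and
with the COBYLA implementation shipped in SciPy. All parameter values are
hypothetical, so the measured speed-up is only indicative and will not
match Table~\ref{Table:argmin:time}.
\begin{verbatim}
import math, time
from scipy.optimize import minimize

# Hypothetical parameter values, chosen only for illustration.
v_h, lam_h, gamma, delta_p = 1.0, 0.5, 1.0, 1e-2
sigma2, D_h = 10.0, 1.0
A = v_h * math.log(sigma2 / D_h) / gamma

def objective(p):
    # P_sh objective of Table 1; guard against the infeasible
    # points that COBYLA may probe during its iterations
    q = max(p[0], 1e-9)
    return A / q ** (2.0 / 3.0) + lam_h * q \
           + delta_p * q ** (8.0 / 3.0)

# Direct calculus: one computation step.
t0 = time.perf_counter()
y = (-3 * lam_h + math.sqrt((3 * lam_h) ** 2
                            + 64 * delta_p * A)) / (16 * delta_p)
p_direct = y ** 0.6
t_direct = time.perf_counter() - t0

# Iterative argmin with COBYLA (gradient-free).
t0 = time.perf_counter()
res = minimize(objective, x0=[1.0], method='COBYLA', tol=1e-5,
               constraints=[{'type': 'ineq',
                             'fun': lambda p: p[0] - 1e-9}])
t_cobyla = time.perf_counter() - t0

print(p_direct, res.x[0])   # both close to the same minimizer
print(t_cobyla / t_direct)  # overhead of the iterative approach
\end{verbatim}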

\begin{table*}
\begin{scriptsize}
$$
\begin{array}{|l|l|l|l|l|l|l|l|l|l|l|}
\hline
\textrm{Convergence Threshold} &
10^{-5} &
1.67 \times 10^{-5} &
2.78 \times 10^{-5} &
4.64 \times 10^{-5} &
7.74 \times 10^{-5} &
1.29 \times 10^{-4} &
2.15 \times 10^{-4} &
3.59 \times 10^{-4} &
5.99 \times 10^{-4} &
10^{-3} \\
\hline
\textrm{Gradient Calculus (s)} &
56.29 &
29.17 &
37.05 &
6.05 &
5.47 &
3.82 &
1.91 &
2.37 &
0.87 &
0.65 \\
\hline
\textrm{Argmin Method (s)} &
2224.27 &
1158.37 &
1125.21 &
216.82 &
162.26 &
126.99 &
58.04 &
74.20 &
23.99 &
15.85 \\
\hline
\end{array}
$$
\caption{Convergence Times (in seconds) for the Gradient and Argmin Methods}\label{Table:argmin:time}
\end{scriptsize}
\end{table*}