The approach detailed previously requires computing the value that
minimizes four convex functions over constrained domains, one
for each primal variable $q_i$, $P_{sh}$, $R_h$, and $x_{hl}$.
State-of-the-art methods for this optimization step include
the L-BFGS-B algorithm~\cite{byrd1995limited},
the truncated Newton algorithm, and
the Constrained Optimization BY Linear Approximation (COBYLA) method~\cite{ANU:1770520}.
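To fix ideas, the following Python sketch, which is not the code used in
this work, shows how such a one-dimensional constrained subproblem, here a
hypothetical $f(x)=\delta x^2 + cx$ on $x \ge 0$ with placeholder
coefficients \texttt{delta} and \texttt{c}, would be solved by two of these
iterative methods through SciPy:
\begin{verbatim}
# Minimal sketch with placeholder coefficients delta and c;
# both solvers approximate the minimizer iteratively.
from scipy.optimize import minimize

delta, c = 2.0, -3.0
f = lambda x: delta * x[0]**2 + c * x[0]

# L-BFGS-B handles the bound x >= 0 natively.
r1 = minimize(f, x0=[1.0], method="L-BFGS-B", bounds=[(0.0, None)])

# COBYLA is derivative-free; the bound is an inequality constraint.
r2 = minimize(f, x0=[1.0], method="COBYLA",
              constraints=[{"type": "ineq", "fun": lambda x: x[0]}])

print(r1.x, r2.x)  # both close to the exact minimizer 0.75
\end{verbatim}
Each call performs several function (and, for L-BFGS-B, gradient)
evaluations before returning an approximate minimizer.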
However, all these methods are iterative, each iteration including many
computation steps that only yield an approximation of the minimum.
Such a cost is unacceptable here, since the objective is precisely to
reduce the number of computation steps so as to increase the network lifetime.
A closer look at each function to be minimized shows that it is
differentiable, so that its minimum can be computed in a single step.
Table~\ref{table:min} presents, for each primal variable, the function to
minimize and the closed-form expression of its minimum.
Thanks to this formal calculus, computing the
new iterate of each primal variable only requires
the evaluation of one such expression.
\begin{table}
\[
\begin{array}{|l|l|l|}
\hline
\textrm{Variable} & \textrm{Function to minimize} & \textrm{Minimum (direct calculus)} \\
\hline
q_i \ge \epsilon &
q_i^2 - q_i\left(\sum_{l \in L } a_{il}w_l^{(k)}-
\lambda_i^{(k)}B_i\right) &
\max \left\{\epsilon,\dfrac{\sum_{l \in L } a_{il}w_l^{(k)}-
\lambda_i^{(k)}B_i}{2}\right\} \\
\hline
P_{sh}=p \ge 0 &
v_h^{(k)}.\dfrac{\ln(\sigma^2/D_h)}{\gamma p^{2/3}} + \lambda_h^{(k)}p
+ \delta_p p^{8/3} &
\left(\dfrac{-3\lambda_h^{(k)} + \sqrt{(3\lambda_h^{(k)})^2 + 64\delta_p v_h^{(k)}\ln(\sigma^2/D_h)/\gamma}}{16\delta_p}\right)^{3/5} \\
\hline
R_h=r \ge 0 &
-v_h^{(k)}.r + \delta_r r^2 - \sum_{i \in N} u_{hi}^{(k)} \eta_{hi} &
\max\left\{0,\dfrac{v_h^{(k)}}{2\delta_r}\right\} \\
\hline
x_{hl}=x \ge 0 &
\delta_x.x^2 +
x\sum_{i \in N} \left(
\lambda_{i}^{(k)}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-} ) +
u_{hi}^{(k)} a_{il} \right) &
\max\left\{0,\dfrac{-\sum_{i \in N} \left(
\lambda_{i}^{(k)}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-} ) + u_{hi}^{(k)} a_{il}
\right)}{2\delta_x}\right\} \\
\hline
\end{array}
\]
\caption{Primal Variables: Argmin and Direct Calculus}\label{table:min}
\end{table}
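Each entry of the last column in Table~\ref{table:min} is obtained by
cancelling the derivative of the corresponding convex function and by
projecting the stationary point onto the constraint domain. For instance,
for $R_h$, the constant term $-\sum_{i \in N} u_{hi}^{(k)} \eta_{hi}$ plays
no role in the minimization and
\[
\frac{d}{dr}\left(-v_h^{(k)}.r + \delta_r r^2\right)
= -v_h^{(k)} + 2\delta_r r = 0
\Longleftrightarrow r = \frac{v_h^{(k)}}{2\delta_r},
\]
which, once projected onto the domain $r \ge 0$, yields
$\max\left\{0,v_h^{(k)}/(2\delta_r)\right\}$.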
This improvement has been evaluated on a set of experiments.
For 10 thresholds $t$, such that $10^{-5} \le t \le 10^{-3}$, we have
executed 10 times the approach detailed before, either with the new
gradient calculus or with the original argmin one.
Table~\ref{Table:argmin:time} summarizes the averages of these
execution times, given in seconds. We observe that the time spent with the
gradient approach is about 37 times smaller than the one of the argmin
approach. Among the implementations of argmin approaches, we have retained
the COBYLA one since it does not require any gradient to be computed.
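For illustration, the following Python sketch, which is not the
experimental code and uses hypothetical coefficients, contrasts the two
ways of computing a new iterate on the $x_{hl}$ subproblem, namely the
iterative COBYLA argmin and the direct formula of Table~\ref{table:min}:
\begin{verbatim}
# Placeholder coefficients: delta_x is the regularization weight,
# C the dual-dependent linear coefficient of the x_{hl} subproblem.
import time
from scipy.optimize import minimize

delta_x, C = 1.0, -4.0

def argmin_cobyla():
    # Iterative, derivative-free minimization (the baseline).
    return minimize(lambda x: delta_x * x[0]**2 + C * x[0],
                    x0=[1.0], method="COBYLA",
                    constraints=[{"type": "ineq",
                                  "fun": lambda x: x[0]}]).x[0]

def direct_calculus():
    # One-step closed-form update (projected stationary point).
    return max(0.0, -C / (2.0 * delta_x))

for name, fn in (("COBYLA", argmin_cobyla), ("direct", direct_calculus)):
    start = time.perf_counter()
    for _ in range(10000):
        x = fn()
    print(name, x, time.perf_counter() - start)
\end{verbatim}
The direct calculus replaces an inner iterative solve by a constant number
of arithmetic operations, which explains the order of magnitude of the
observed speed-up.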
\begin{table}
\[
\begin{array}{|l|l|l|l|l|l|l|l|l|l|l|}
\hline
\textrm{Convergence Threshold} & & & & & & & & & & \\
\hline
\textrm{Gradient Calculus} & & & & & & & & & & \\
\hline
\textrm{Argmin Method} & & & & & & & & & & \\
\hline
\end{array}
\]
\caption{Convergence Times for Gradient and Argmin Methods}\label{Table:argmin:time}
\end{table}