In the algorithm presented in the previous section,
the encoding power consumption is iteratively updated with
\[
P_{sh}^{(k+1)} = \arg \min_{p \in \R^{+*}} \left( v_h^{(k)}\dfrac{\ln(\sigma^2/D_h)}{\gamma p^{2/3}} + \lambda_h^{(k)} p \right).
\]
The function inside the $\arg \min$ admits a finite minimizer if and only if
$\lambda_h$ is not null, and the case $\lambda_h = 0$ may arise due to
the definition of $\lambda_h$. Worse, in this case, the function is
strictly decreasing and its minimum is only attained as $p$ tends to infinity.
The method then continues its iterative computation
with an arbitrarily large value of $P_{sh}^{(k)}$, which
dramatically slows down convergence.
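Indeed, when $\lambda_h^{(k)} > 0$, setting the derivative of this function to
zero yields the closed-form update
\[
P_{sh}^{(k+1)} = \left( \dfrac{2\, v_h^{(k)} \ln(\sigma^2/D_h)}{3\gamma \lambda_h^{(k)}} \right)^{3/5},
\]
which diverges as $\lambda_h^{(k)}$ approaches zero.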
To prevent this configuration, we replace the objective function given
in equation~(\ref{eq:obj2}) by
\[
\sum_{i \in N} q_i^2 +
\delta_x \sum_{h \in V, l \in L} x_{hl}^2
+ \delta_r \sum_{h \in V} R_{h}^2
+ \delta_p \sum_{h \in V} P_{sh}^{\frac{8}{3}}.
\]
In this equation, we have first introduced new regularization factors
(namely $\delta_x$, $\delta_r$, and $\delta_p$)
instead of the sole $\delta$.
This allows the influence of each factor to be studied separately.
Next, the rational exponent $\frac{8}{3}$ is introduced with the goal of
obtaining a strictly convex function.
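With this modified objective, the power update of the previous section becomes
\[
P_{sh}^{(k+1)} = \arg \min_{p \in \R^{+*}} \left( v_h^{(k)}\dfrac{\ln(\sigma^2/D_h)}{\gamma p^{2/3}} + \lambda_h^{(k)} p + \delta_p\, p^{8/3} \right).
\]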
Let us now verify that the induced function is convex.
Let $f: \R^{+*} \rightarrow \R$ be such that $
f(p) = v_h\dfrac{\ln(\sigma^2/D_h)}{\gamma p^{2/3}} + \lambda_h p
+ \delta_p p^{8/3}$. This function is differentiable and,
for any $p \in \R^{+*}$, we have
\begin{eqnarray*}
f'(p) &=& -\dfrac{2}{3} v_h \dfrac{\ln(\sigma^2/D_h)}{\gamma p^{5/3}} + \lambda_h +
\dfrac{8}{3}\delta_p p^{5/3} \\
& = & \dfrac{\frac{8}{3}\delta_p p^{10/3} + \lambda_h p^{5/3} - \frac{2}{3}\dfrac{v_h \ln(\sigma^2/D_h)}{\gamma}}{p^{5/3}},
\end{eqnarray*}
which is positive if and only if its numerator is.
Provided $p^{5/3}$ is replaced by $P$, we obtain a quadratic function of $P$,
which is strictly convex for any value of $\lambda_h$ since its leading
coefficient $\frac{8}{3}\delta_p$ is positive. Moreover, its discriminant
$\lambda_h^2 + \frac{64}{9}\,\delta_p \dfrac{v_h \ln(\sigma^2/D_h)}{\gamma}$ is
positive, so the numerator has a single positive root: $f'$ is negative below
this root and positive above it, and $f$ thus admits a unique finite minimizer.
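For completeness, this minimizer can be written in closed form from the
positive root of the quadratic:
\[
P^{\star} = \dfrac{-\lambda_h + \sqrt{\lambda_h^2 + \frac{64}{9}\,\delta_p \dfrac{v_h \ln(\sigma^2/D_h)}{\gamma}}}{\frac{16}{3}\,\delta_p},
\qquad
P_{sh}^{(k+1)} = \left( P^{\star} \right)^{3/5},
\]
so the update remains finite for any value of $\lambda_h$, including $\lambda_h = 0$.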
This proposed enhancement has been evaluated as follows:
10 thresholds $t$, such that $10^{-5} \le t \le 10^{-3}$, have
been selected and, for each of them,
10 random configurations have been generated.
For each configuration, we record the
number of iterations required to make the dual
function variation smaller than the given threshold with
the two approaches: either the original one or the
convexity-guaranteed one.
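The stopping criterion can be sketched as follows; this is a minimal
illustration, where \texttt{dual\_value} is a hypothetical callback that
performs one iteration of either approach and returns the current dual
function value (not the actual experimental code):
\begin{verbatim}
# Minimal sketch of the stopping rule used in the comparison.
# dual_value is a hypothetical callback: it runs one iteration of
# the chosen approach and returns the current dual function value.
def iterations_until_convergence(dual_value, t, max_iter=10000):
    prev = dual_value()
    for k in range(1, max_iter + 1):
        cur = dual_value()           # one more iteration
        if abs(cur - prev) < t:      # dual function variation below t
            return k
        prev = cur
    return max_iter                  # budget exhausted
\end{verbatim}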
Figure~\ref{Fig:convex} summarizes the average number of iterations
until convergence for each threshold value. As we can see, even if this
enhanced method introduces additional computations,
it speeds up the algorithm and guarantees the convexity,
and thus the convergence.
\includegraphics[scale=0.5]{convex.png}
\caption{Original vs.\ Convexity-Guaranteed Approaches}\label{Fig:convex}