In the algorithm presented in the previous section,
the encoding power consumption is iteratively updated with
\[
\arg \min_{p > 0}\; \left( v_h^{(k)}\,\dfrac{\ln(\sigma^2/D_h)}{\gamma\, p^{2/3}} + \lambda_h^{(k)}\, p \right).
\]
The function inside the $\arg \min$ admits a minimizer if and only if
$\lambda_h$ is not null. The degenerate configuration $\lambda_h = 0$ may arise
due to the definition of $\lambda_h$. Worse, in this case, the function is
strictly decreasing and its infimum is only approached as $p$ tends to infinity.
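This degeneracy can be illustrated numerically with the following sketch, where the constant \texttt{c} stands for $v_h \ln(\sigma^2/D_h)/\gamma$ and all numeric values are hypothetical: for $\lambda_h > 0$ the subproblem has a closed-form minimizer, whereas for $\lambda_h = 0$ a grid search simply escapes to the edge of the search interval.

```python
# Per-iteration subproblem: minimize g(p) = c / p**(2/3) + lam * p over p > 0.
# Here c stands for v_h * ln(sigma^2 / D_h) / gamma; values are hypothetical.

def g(p, c, lam):
    return c * p ** (-2 / 3) + lam * p

def closed_form_minimizer(c, lam):
    """Root of g'(p) = 0, i.e. p* = (2c / (3 lam))**(3/5); valid only for lam > 0."""
    return (2 * c / (3 * lam)) ** (3 / 5)

c, lam = 2.0, 0.3
p_star = closed_form_minimizer(c, lam)

# Grid-search sanity check on (0, 100): the grid minimizer sits next to p_star.
grid = [0.01 * k for k in range(1, 10000)]
p_grid = min(grid, key=lambda p: g(p, c, lam))

# With lam = 0, g is strictly decreasing: the "minimizer" is the grid edge,
# i.e. the infimum is only approached as p tends to infinity.
p_degenerate = min(grid, key=lambda p: g(p, c, 0.0))
```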
To prevent this configuration, we replace the objective function given
in equation~(\ref{eq:obj2}) by
\[
\sum_{i \in N} q_i^2 +
\delta_x \sum_{h \in V,\, l \in L} x_{hl}^2
+ \delta_r \sum_{h \in V} R_{h}^2
+ \delta_p \sum_{h \in V} P_{sh}^{\frac{8}{3}}.
\]
In this equation, we have first introduced separate regularisation factors
(namely $\delta_x$, $\delta_r$, and $\delta_p$)
instead of the single $\delta$.
This allows the influence of each factor to be studied separately.
Next, the rational exponent $8/3$ is introduced in order to
obtain a strictly convex function.
Let us now verify that the induced function is convex.
Let $f: \R^{+*} \rightarrow \R$ be such that $
f(p)= v_h\,\dfrac{\ln(\sigma^2/D_h)}{\gamma p^{2/3}} + \lambda_h p
+ \delta_p p^{8/3}$. This function is differentiable and,
for any $p \in \R^{+*}$, we have
\begin{eqnarray*}
f'(p) &=& -\frac{2}{3}\, v_h\,\dfrac{\ln(\sigma^2/D_h)}{\gamma\, p^{5/3}} + \lambda_h +
\frac{8}{3}\,\delta_p\, p^{5/3} \\
& = & \dfrac{\frac{8}{3}\,\delta_p\,\gamma\, p^{10/3} + \lambda_h\,\gamma\, p^{5/3} - \frac{2}{3}\, v_h \ln(\sigma^2/D_h)}{\gamma\, p^{5/3}}
\end{eqnarray*}
which is positive if and only if its numerator is.
If $p^{5/3}$ is replaced by $P$, the numerator becomes a quadratic function of $P$,
which is strictly convex for any value of $\lambda_h$ since its leading
coefficient $\frac{8}{3}\delta_p\gamma$ is positive. Moreover, its discriminant
is positive (provided $\sigma^2 > D_h$, so that $\ln(\sigma^2/D_h) > 0$), hence the
numerator has a unique positive root: $f'$ is negative before this root and
positive after it, so $f$ admits a unique minimum on $\R^{+*}$.
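To make the argument fully explicit (still assuming $\sigma^2 > D_h$, so that
$\ln(\sigma^2/D_h) > 0$), one can note that the second derivative of $f$ is
positive everywhere on $\R^{+*}$,
\[
f''(p) = \frac{10}{9}\, v_h\, \frac{\ln(\sigma^2/D_h)}{\gamma\, p^{8/3}}
+ \frac{40}{9}\, \delta_p\, p^{2/3} > 0,
\]
and that applying the quadratic formula to the numerator of $f'$, with
$P = p^{5/3}$, even yields the minimizer in closed form:
\[
P^* = \frac{-\lambda_h \gamma
+ \sqrt{\lambda_h^2 \gamma^2
+ \frac{64}{9}\, \delta_p\, \gamma\, v_h \ln(\sigma^2/D_h)}}
{\frac{16}{3}\, \delta_p\, \gamma},
\qquad p^{*} = (P^*)^{3/5}.
\]
This closed form is not stated in the text above; it follows directly from
the derivative computed there.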
This proposed enhancement has been evaluated as follows:
10 thresholds $t$, such that $10^{-5} \le t \le 10^{-3}$, have
been selected and, for each of them,
10 random configurations have been generated.
For each configuration, we record the
number of iterations required to make the dual
function variation fall below the given threshold with
the two approaches: either the original one or the
one with the convexity guarantee.
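The evaluation protocol above can be sketched as follows. Since the full dual iteration is not reproduced here, plain gradient descent on the strictly convex per-node function $f$ serves as a stand-in for the update, and all numeric ranges are hypothetical; only the stopping rule (variation below the threshold $t$) and the threshold/configuration loop mirror the text.

```python
import random

def iterations_until(threshold, c, lam, delta_p, p0=5.0, step=0.05):
    """Count iterations until the variation of f drops below `threshold`.

    f(p) = c/p**(2/3) + lam*p + delta_p*p**(8/3) is the strictly convex
    per-node function; gradient descent stands in for the dual update.
    """
    f = lambda p: c * p ** (-2 / 3) + lam * p + delta_p * p ** (8 / 3)
    df = lambda p: -(2 / 3) * c * p ** (-5 / 3) + lam + (8 / 3) * delta_p * p ** (5 / 3)
    p, prev = p0, f(p0)
    for k in range(1, 10 ** 6):
        p = max(p - step * df(p), 1e-6)   # stay inside the positive orthant
        cur = f(p)
        if abs(cur - prev) < threshold:
            return k
        prev = cur
    return None

random.seed(0)
thresholds = [10 ** (-3 - 2 * i / 9) for i in range(10)]  # 10 values in [1e-5, 1e-3]
averages = []
for t in thresholds:
    counts = [iterations_until(t,
                               c=random.uniform(0.5, 2.0),
                               lam=random.uniform(0.1, 1.0),
                               delta_p=random.uniform(0.05, 0.5))
              for _ in range(10)]          # 10 random configurations per threshold
    averages.append(sum(counts) / len(counts))
```

For a fixed configuration the trajectory does not depend on $t$, so the iteration count can only grow as the threshold shrinks, which is the trend the experiment measures.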
Figure~\ref{Fig:convex} summarizes the average number of iterations needed
to converge for each threshold value. As we can see, even though this
enhanced method introduces additional computations,
it only slightly slows down the algorithm while guaranteeing convexity,
and thus convergence.
\begin{figure}
\centering
\includegraphics[scale=0.5]{convex.png}
\caption{Original vs.\ convexity-guaranteed approaches}\label{Fig:convex}
\end{figure}