$P_{sh}^{(k)} = \arg \min_{p > 0} \left( v_h^{(k)}\dfrac{\ln(\sigma^2/D_h)}{\gamma p^{2/3}} + \lambda_h^{(k)} p \right)$.

The function inside the $\arg \min$ admits a finite minimizer if and only if
$\lambda_h$ is not null. Such a configuration may arise due to
the definition of $\lambda_h$. Worse, in this case, the function is
strictly decreasing and its infimum is only approached as $p$ tends to infinity.
The method then carries on its iterative computation
with an arbitrarily large value of $P_{sh}^{(k)}$, which dramatically
slows down the convergence.

To prevent this configuration, we replace the objective function given
in equation~(\ref{eq:obj2}) by
\begin{equation}
\sum_{i \in N} q_i^2 +
\delta_x \sum_{h \in V, l \in L} x_{hl}^2
+ \delta_r \sum_{h \in V} R_{h}^2
+ \delta_p \sum_{h \in V} P_{sh}^{\frac{8}{3}}.
\label{eq:obj2p}
\end{equation}
In this equation, we first introduce separate regularization factors
(namely $\delta_x$, $\delta_r$, and $\delta_p$) instead of the single $\delta$.
This allows the influence of each factor to be studied separately.
Next, the rational exponent $8/3$ is chosen so that the function to be
minimized remains strictly convex while now admitting a finite minimizer.

Let us verify this claim.
Let $f: \R^{+*} \rightarrow \R$ be defined by
$f(p) = v_h\dfrac{\ln(\sigma^2/D_h)}{\gamma p^{2/3}} + \lambda_h p
+ \delta_p p^{8/3}$. This function is differentiable and,
for any $p \in \R^{+*}$, we have
$$
\begin{array}{rcl}
f'(p) &=& -\dfrac{2}{3}\, v_h\dfrac{\ln(\sigma^2/D_h)}{\gamma p^{5/3}} + \lambda_h
+ \dfrac{8}{3}\,\delta_p\, p^{5/3} \\
& = & \dfrac{\frac{8}{3}\delta_p\, p^{10/3} + \lambda_h\, p^{5/3}
- \frac{2}{3}\frac{v_h}{\gamma}\ln(\sigma^2/D_h)}{p^{5/3}},
\end{array}
$$
which is positive if and only if its numerator is.
Substituting $P$ for $p^{5/3}$, the numerator becomes a quadratic function of $P$
with a positive leading coefficient and a negative constant term; its discriminant
is therefore positive for any value of $\lambda_h$, and it has exactly one positive
root. Consequently, $f'$ is negative before this root and positive after it, so
$f$ admits a unique minimizer on $\R^{+*}$, even when $\lambda_h$ is null.
Moreover, $f$ is strictly convex on $\R^{+*}$ as the sum of the strictly convex
terms in $p^{-2/3}$ and $p^{8/3}$ and a linear term. The update of
$P_{sh}^{(k)}$ is thus always well defined.
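The argument above can also be illustrated numerically. The following Python
sketch checks on a grid that $f''$ is positive and locates the unique minimizer
of $f$, both for $\lambda_h = 0$ and for a positive $\lambda_h$; the values used
for $v_h$, $\ln(\sigma^2/D_h)$, $\gamma$, $\delta_p$, and $\lambda_h$ are purely
illustrative and are not taken from our experiments.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative constants (not the values used in the experiments):
v_h      = 1.0            # v_h^{(k)}
log_term = np.log(4.0)    # ln(sigma^2 / D_h), assumed positive
gamma    = 2.0
delta_p  = 0.01           # regularization factor of the P_sh^{8/3} term

def f(p, lambda_h):
    # f(p) = v_h ln(sigma^2/D_h) / (gamma p^{2/3}) + lambda_h p + delta_p p^{8/3}
    return v_h * log_term / (gamma * p**(2/3)) + lambda_h * p + delta_p * p**(8/3)

def f_second(p):
    # Second derivative of f: positive for every p > 0, hence f is strictly convex
    return (10/9) * v_h * log_term / (gamma * p**(8/3)) + (40/9) * delta_p * p**(2/3)

p_grid = np.linspace(1e-3, 50.0, 10000)
assert np.all(f_second(p_grid) > 0)   # strict convexity (independent of lambda_h)

for lambda_h in (0.0, 0.5):           # lambda_h = 0 is the problematic case
    res = minimize_scalar(f, bounds=(1e-6, 1e3), args=(lambda_h,),
                          method="bounded")
    print("lambda_h = %.1f: minimizer at p = %.4f" % (lambda_h, res.x))
\end{verbatim}
With $\lambda_h = 0$ and without the $\delta_p\, p^{8/3}$ term, the same bounded
search would push $p$ toward its upper bound; with it, the minimizer stays finite.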
This enhancement has been evaluated as follows.
Ten thresholds $t$, with $10^{-5} \le t \le 10^{-3}$, have been selected,
and ten random configurations have been generated for each of them.
For each configuration, we record the number of iterations needed to bring
the variation of the dual function below the given threshold, for both
approaches: the original one and the one with the convexity guarantee.

Figure~\ref{Fig:convex} summarizes the average number of iterations
required to converge for each threshold value. As we can see, even though
the enhanced method introduces additional computations, it speeds up the
algorithm and guarantees both the convexity of the subproblem and the
existence of its minimizer, and thus the convergence.

\begin{figure*}
\begin{center}
\includegraphics[scale=0.5]{convex.png}
\end{center}
\caption{Original vs.\ Convexity-Guaranteed Approach}\label{Fig:convex}
\end{figure*}
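For completeness, the stopping criterion used in this comparison can be sketched
as follows. The sequence of dual values below is a toy, geometrically converging
stand-in (it is not produced by the actual algorithm); the helper only counts the
iterations needed before the variation of the dual function falls below a
threshold, as described above.
\begin{verbatim}
import numpy as np

def iterations_until(dual_values, threshold):
    # Index of the first iteration whose dual variation |q(k) - q(k-1)|
    # falls below the threshold.
    for k in range(1, len(dual_values)):
        if abs(dual_values[k] - dual_values[k - 1]) < threshold:
            return k
    return len(dual_values)

# Toy sequence standing in for the dual values of one run on a random
# configuration (NOT the actual method):
rng = np.random.default_rng(0)
noise = 1.0 + 0.01 * rng.standard_normal(2000)
toy_dual_values = 1.0 + 0.5 * (0.9 ** np.arange(2000)) * noise

for t in np.logspace(-5, -3, 10):     # 10 thresholds between 1e-5 and 1e-3
    print("threshold %.1e: %d iterations"
          % (t, iterations_until(toy_dual_values, t)))
\end{verbatim}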