X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/desynchronisation-controle.git/blobdiff_plain/e6cd9df3d469f7916d513610d3bc22ab055f790a..HEAD:/IWCMC14/convexity.tex?ds=sidebyside

diff --git a/IWCMC14/convexity.tex b/IWCMC14/convexity.tex
index 0f7bf7e..d88481c 100644
--- a/IWCMC14/convexity.tex
+++ b/IWCMC14/convexity.tex
@@ -12,6 +12,10 @@ The function inside the $\arg \min$ is strictly convex if and only if
 $\lambda_h$ is nonzero. The configuration $\lambda_h = 0$ may however arise
 due to the definition of $\lambda_h$. Worse, in this case, the function is
 strictly decreasing, and its infimum is only approached as $p$ tends to infinity.
+Thus, the method continues its iterative computation
+with an arbitrarily large value for $P_{sh}^{(k)}$. This leads to
+a dramatically slower convergence.
+
 To prevent this configuration, we replace the objective function given
 in equation~(\ref{eq:obj2}) by
 
@@ -20,9 +24,9 @@ in equation~(\ref{eq:obj2}) by
 \delta_x \sum_{h \in V, l \in L } x_{hl}^2 +
 \delta_r\sum_{h \in V }R_{h}^2 +
 \delta_p\sum_{h \in V }P_{sh}^{\frac{8}{3}}.
-\label{eq:obj2}
+\label{eq:obj2p}
 \end{equation}
-In this equation we have first introduced new regularisation factors
+In this equation, we have first introduced new regularization factors
 (namely $\delta_x$, $\delta_r$, and $\delta_p$)
 instead of the sole $\delta$.
 This further allows us to study the influence of each factor separately.
@@ -38,7 +42,7 @@ $$
 \begin{array}{rcl}
 f'(p) &=& -2/3.v_h.\dfrac{\ln(\sigma^2/D_h)}{\gamma p^{5/3}} + \lambda_h + 8/3.\delta_p p^{5/3} \\
-& = & \dfrac {8/3\gamma.\delta_p p^{10/3} + \lambda_h p^{5/3} -2/3.v_h\ln(\sigma^2/D_h) }{p^{5/3}}
+& = & \dfrac {8/3.\delta_p p^{10/3} + \lambda_h p^{5/3} -2/3\gamma.v_h\ln(\sigma^2/D_h) }{p^{5/3}}
 \end{array}
 $$
 which is positive if and only if the numerator is.
 
@@ -46,4 +50,26 @@ Provided $p^{5/3}$ is replaced by $P$, we have a quadratic function
 which is strictly convex, for any value of $\lambda_h$, since the
 discriminant is positive.
 
- 
\ No newline at end of file
+The proposed enhancement has been evaluated as follows:
+10 thresholds $t$, such that $10^{-5} \le t \le 10^{-3}$, have
+been selected, and for each of them,
+10 random configurations have been generated.
+For each configuration, we store the
+number of iterations needed to make the dual
+function variation smaller than the given threshold with
+the two approaches: the original one and the
+one with the convexity guarantee.
+
+Figure~\ref{Fig:convex} summarizes the average number of
+iterations required to converge for each threshold value. As we can
+see, even if this enhanced method introduces additional computations,
+it speeds up the algorithm and guarantees the convexity,
+and thus the convergence.
+\begin{figure*}
+\begin{center}
+\includegraphics[scale=0.5]{convex.png}
+\end{center}
+\caption{Original vs. Convexity-Guaranteed Approaches}\label{Fig:convex}
+\end{figure*}
+
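+For completeness, the quadratic function obtained in the convexity
+argument above can be made explicit.
+% Illustrative derivation only: it rewrites the numerator of $f'(p)$,
+% assuming $\delta_p > 0$ and $\sigma^2 > D_h$, as in the strictly
+% decreasing configuration discussed at the beginning of this section.
+With the substitution $P = p^{5/3}$, the numerator of $f'(p)$ becomes
+$$
+8/3.\delta_p\, P^{2} + \lambda_h P - \dfrac{2\, v_h \ln(\sigma^2/D_h)}{3\gamma},
+$$
+whose leading coefficient $8/3.\delta_p$ is positive and whose
+discriminant
+$\lambda_h^2 + \dfrac{64\, \delta_p v_h \ln(\sigma^2/D_h)}{9\gamma}$
+is positive, whatever the value of $\lambda_h$, since
+$\ln(\sigma^2/D_h) > 0$ in this setting.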