\end{table*}
-This improvment has been evaluated on a set of experiments.
-For 10 tresholds $t$, such that $1E-5 \le t \le 1E-3$, we have
-executed 10 times the aproach detailled before either with the new
+This improvement has been evaluated on a set of experiments.
+For 10 thresholds $t$ such that $10^{-5} \le t \le 10^{-3}$, we have
+executed the approach detailed before 10 times, either with the new
gradient calculus or with the original argmin one.
Table~\ref{Table:argmin:time} summarizes the averages of these
-excution times, given in seconds. We remark time spent with the gradient
+execution times, given in seconds. We remark that the time spent with the gradient
approach is about 37 times smaller than that of the argmin one.
-Among implementations of argmin aproaches, we have retained
+Among implementations of argmin approaches, we have retained
the COBYLA one, since it does not require any gradient computation.
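To make the comparison above concrete, the sketch below contrasts a gradient-based solver with the derivative-free COBYLA implementation from SciPy. The objective `f` and its gradient are stand-ins for illustration only; the paper's actual objective is not reproduced here, and measured speed-ups will differ.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in strictly convex objective (NOT the paper's objective):
# f(x) = sum(x_i^2), with gradient 2*x.
def f(x):
    return np.sum(x ** 2)

def grad_f(x):
    return 2.0 * x

x0 = np.full(5, 3.0)

# Gradient-based solver: exploits the analytic gradient.
res_grad = minimize(f, x0, jac=grad_f, method="BFGS")

# COBYLA: derivative-free, so no gradient needs to be supplied.
res_cobyla = minimize(f, x0, method="COBYLA")

print(res_grad.nfev, res_cobyla.nfev)
```

On such a smooth problem the gradient-based solver typically needs far fewer function evaluations, which is consistent with the timing gap reported above.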
\begin{table*}
$$
\begin{array}{|l|l|l|l|l|l|l|l|l|l|l|}
\hline
-\textrm{Convergence Treshold} &
+\textrm{Convergence Threshold} &
10^{-5} &
1.67 \cdot 10^{-5} &
2.78 \cdot 10^{-5} &
+ \delta_p\sum_{h \in V }P_{sh}^{\frac{8}{3}}.
\label{eq:obj2p}
\end{equation}
-In this equation we have first introduced new regularisation factors
+In this equation we have first introduced new regularization factors
(namely $\delta_x$, $\delta_r$, and $\delta_p$)
instead of the sole $\delta$.
This allows us to further study the influence of each factor separately.
which is strictly convex for any value of $\lambda_h$, since the discriminant
is positive.
-This proposed enhacement has been evaluated as follows:
-10 tresholds $t$, such that $1E-5 \le t \le 1E-3$, have
+This proposed enhancement has been evaluated as follows:
+10 thresholds $t$, such that $1E-5 \le t \le 1E-3$, have
been selected and for each of them,
10 random configurations have been generated.
For each one, we store the
number of iterations required to make the dual
-function variation smaller than this given treshold with
+function variation smaller than this given threshold with
the two approaches: either the original one or the
-one which is convex garantee.
+one with the convexity guarantee.
Figure~\ref{Fig:convex} summarizes the average number of convergence
-iterations for each tresholdvalue. As we can see, even if this new
+iterations for each threshold value. As we can see, even if this new
enhanced method introduces additional computations,
-it only slows few down the algorithm and garantee the convexity,
+it only slightly slows down the algorithm while guaranteeing convexity,
and thus convergence.
-
+Notice that the encoding power has been arbitrarily limited to 10 W.
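The stopping rule used in these experiments (iterate until the dual-function variation drops below the threshold $t$) can be sketched as follows. The update `step` is a hypothetical placeholder, since the paper's actual dual update is not reproduced here.

```python
def iterations_to_converge(step, q0, threshold, max_iter=100000):
    """Count iterations until |q_{k+1} - q_k| < threshold.

    `step` maps the current dual value to the next one; it is a
    stand-in for the actual dual-function update.
    """
    q = q0
    for k in range(1, max_iter + 1):
        q_next = step(q)
        if abs(q_next - q) < threshold:
            return k
        q = q_next
    return max_iter

# Toy geometrically contracting update (assumption, for illustration):
# q_{k+1} = 0.5 * q_k, so the variation halves at each iteration.
n = iterations_to_converge(lambda q: 0.5 * q, q0=1.0, threshold=1e-5)
print(n)  # → 17
```

As expected, lowering the threshold increases the iteration count, which is the trade-off plotted in Figure~\ref{Fig:convex}.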
\begin{figure*}
\begin{center}
\includegraphics[scale=0.5]{convex.png}
\end{center}
-\caption{Original Vs Convex Garantee Approaches}\label{Fig:convex}
+\caption{Original vs.\ Convex Guarantee Approaches}\label{Fig:convex}
\end{figure*}