+ \delta_p\sum_{h \in V}P_{sh}^{\frac{8}{3}}.
\label{eq:obj2p}
\end{equation}
-In this equation we have first introduced new regularisation factors
+In this equation we have first introduced new regularization factors
(namely $\delta_x$, $\delta_r$, and $\delta_p$)
instead of the sole $\delta$.
-This allows to further separately study the influence of each factor.
+This makes it possible to study the influence of each factor separately.
-which is strictly convex, for any value of $\lambda_h$ since the discriminant
-is positive.
+which is strictly convex for any value of $\lambda_h$, since the discriminant
+is positive.
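The strict convexity claim can be sanity-checked for the power term itself: since $\frac{d^2}{dP^2} P^{8/3} = \frac{40}{9} P^{2/3} > 0$ for $P > 0$, the regularization term added to the objective is strictly convex on the positive orthant. A minimal numerical check (illustrative only, not part of the paper's evaluation) could be:

```python
# Illustrative check (not from the paper): the regularization term
# f(P) = P^(8/3) added to the objective is strictly convex on P > 0,
# since f''(P) = (8/3)(5/3) P^(2/3) = (40/9) P^(2/3) > 0.

def f(p: float) -> float:
    """Power regularization term P^(8/3)."""
    return p ** (8.0 / 3.0)

def second_difference(p: float, h: float = 1e-4) -> float:
    """Central second difference, a numerical estimate of f''(p)."""
    return (f(p + h) - 2.0 * f(p) + f(p - h)) / (h * h)

# f'' stays positive over the feasible power range (e.g. up to 10 W).
for p in [0.1, 0.5, 1.0, 5.0, 10.0]:
    estimate = second_difference(p)
    exact = (40.0 / 9.0) * p ** (2.0 / 3.0)
    print(f"P = {p:5.1f}  f'' ~ {estimate:.4f}  (exact {exact:.4f})")
```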
-This proposed enhacement has been evaluated as follows:
-10 tresholds $t$, such that $1E-5 \le t \le 1E-3$, have
+This proposed enhancement has been evaluated as follows:
+10 thresholds $t$, such that $10^{-5} \le t \le 10^{-3}$, have
been selected and for each of them,
10 random configurations have been generated.
-For each one, we store the
-number of iterations which is sufficient to make the dual
-function variation smaller than this given treshold with
+For each one, we store the
+number of iterations required to make the dual
+function variation smaller than the given threshold with
-the two approaches: either the original one ore the
-one which is convex garantee.
+the two approaches: either the original one or the
+one with the convexity guarantee.
-The Figure~\ref{Fig:convex} summarizes the average number of convergence
-iterations for each tresholdvalue. As we can see, even if this new
+Figure~\ref{Fig:convex} summarizes the average number of convergence
+iterations for each threshold value. As we can see, even if this new
-enhanced method introduces new calculus,
-it only slows few down the algorithm and garantee the convexity,
-and thus the convergence.
+enhanced method introduces additional computations,
+it only slightly slows down the algorithm while guaranteeing convexity,
+and thus convergence.
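The evaluation protocol above can be sketched as follows. The two solvers are hypothetical placeholders, not the paper's implementations: each configuration's dual-function variation is mocked as a geometrically shrinking sequence (an assumption), purely to illustrate how the per-threshold iteration counts are collected and averaged.

```python
import random

def iterations_until(threshold, step_ratio):
    """Iterations needed for the dual-function variation to drop below threshold.

    The variation is mocked (assumption) as variation_k = step_ratio ** k.
    """
    variation, count = 1.0, 0
    while variation >= threshold:
        variation *= step_ratio
        count += 1
    return count

def evaluate(thresholds, n_configs=10, seed=0):
    """Average convergence iterations per threshold, for both approaches."""
    rng = random.Random(seed)
    results = {}
    for t in thresholds:
        original, convex = [], []
        for _ in range(n_configs):  # 10 random configurations per threshold
            # Hypothetical per-configuration convergence speeds.
            original.append(iterations_until(t, rng.uniform(0.5, 0.9)))
            convex.append(iterations_until(t, rng.uniform(0.5, 0.9)))
        results[t] = (sum(original) / n_configs, sum(convex) / n_configs)
    return results

# 10 thresholds spanning 1e-5 <= t <= 1e-3, as in the experiment.
thresholds = [1e-5 * 100 ** (i / 9) for i in range(10)]
for t, (avg_orig, avg_convex) in evaluate(thresholds).items():
    print(f"t = {t:.1e}: original ~ {avg_orig:.1f} iters, convex ~ {avg_convex:.1f} iters")
```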
-
+Note that the encoding power has been arbitrarily limited to 10\,W.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.5]{convex.png}
\end{center}
-\caption{Original Vs Convex Garantee Approaches}\label{Fig:convex}
+\caption{Original vs.\ Convex Guarantee Approaches}\label{Fig:convex}
\end{figure*}