\item $P_{sh} > 0,\forall h \in V$
\end{enumerate}
To achieve this optimisation goal through
a local optimisation, the problem is translated into an
equivalent one: find $R$, $x$, and $P_s$ which minimize
\end{array}
$$

The authors first replace the objective of reducing
$\sum_{i \in N }q_i^2$
by the objective of reducing
\begin{equation}
\sum_{i \in N }q_i^2 +
\sum_{h \in V, l \in L } \delta.x_{hl}^2
+ \sum_{h \in V }\delta.R_{h}^2
\label{eq:obj2}
\end{equation}
where $\delta$ is a regularisation factor.
This indeed introduces quadratic functions on the variables $x_{hl}$ and
$R_{h}$ and makes some of the minimized functions strictly convex.
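For instance, in each of the per-variable minimisations derived below, a given
$x_{hl}$ only appears through a term of the form
$\delta.x_{hl}^2 + c.x_{hl}$, where $c$ gathers the terms that do not depend on
$x_{hl}$; since
$$
\dfrac{\partial^2}{\partial x_{hl}^2}\left(\delta.x_{hl}^2 + c.x_{hl}\right) = 2\delta > 0,
$$
this term is strictly convex and admits a unique minimizer over $x_{hl} \geq 0$.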

The authors then apply a classical dual-based approach with Lagrange multipliers
to solve such a problem~\cite{}.
They first introduce the dual variables
$u_{hi}$, $v_{h}$, $\lambda_{i}$, and $w_l$ for any
$h \in V$, $i \in N$, and $l \in L$.
The Lagrangian associated with this problem is
\begin{equation}
\begin{array}{l}
L(R,x,P_{s},q,u,v,\lambda,w)=\\
\sum_{i \in N} \left( q_i^2 + q_i. \left(
\sum_{l \in L } a_{il}w_l-
\lambda_i B_i
\right)\right)\\
+ \sum_{h \in V} \left(
v_h.\dfrac{\ln(\sigma^2/D_h)}{\gamma P_{sh} ^{2/3}} + \lambda_h P_{sh}\right)\\
+ \sum_{h \in V} \sum_{l\in L}
\left(
\delta.x_{hl}^2 \right.\\
\qquad \qquad + x_{hl}.
\sum_{i \in N} \left(
\lambda_{i}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-} ) \right.\\
\qquad \qquad\qquad \qquad +
\left.\left. u_{hi} a_{il}
\right)
\right)\\
 + \sum_{h \in V} \left(
\delta R_{h}^2
-v_h.R_{h} - \sum_{i \in N} u_{hi}\eta_{hi} \right)
\end{array}
\end{equation}
The proposed algorithm iteratively computes the following variables
until the variation of the dual function is less than a given threshold.
\begin{enumerate}
\item $ u_{hi}^{(k+1)} = u_{hi}^{(k)} - \theta^{(k)}. \left(
\eta_{hi}^{(k)} - \sum_{l \in L }a_{il}x_{hl}^{(k)} \right) $
\item
$q_i^{(k)} = \arg\min_{q_i>0}
\left(
q_i^2 + q_i.
\left(
\sum_{l \in L } a_{il}w_l^{(k)}-
\lambda_i^{(k)}B_i
\right)
\right)
$
\end{enumerate}
where the first four elements are dual variables and the last four are
primal ones.
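To fix ideas, the skeleton of such an iterative scheme (closed-form primal
minimisation, dual gradient step, and a stop test on the variation of the
dual function) can be illustrated on a toy problem; the following Python
sketch only illustrates this structure, not the algorithm applied to the
full problem above.
\begin{verbatim}
# Toy illustration of the iteration skeleton: minimise q^2 subject to
# q >= B by dual ascent, stopping when the dual function variation is
# below a given threshold (B, theta and threshold are illustrative).
def dual_ascent(B=2.0, theta=0.2, threshold=1e-5, max_iter=10000):
    lam = 0.0                                  # dual variable
    previous_dual = float("-inf")
    for k in range(max_iter):
        q = lam / 2.0                          # argmin_q of q^2 + lam*(B - q)
        dual = q**2 + lam * (B - q)            # dual function value
        lam = max(0.0, lam + theta * (B - q))  # projected gradient ascent step
        if abs(dual - previous_dual) < threshold:
            break                              # stop criterion of the scheme
        previous_dual = dual
    return q, lam

print(dual_ascent())   # q tends to B = 2 and lam to 2*B = 4
\end{verbatim}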
The approach detailed previously requires computing, for each primal variable
$q_i$, $P_{sh}$, $R_h$, and $x_{hl}$, the value that minimizes a convex
function over a constrained domain.
Among the state-of-the-art methods for this optimization step are
the L-BFGS-B algorithm~\cite{byrd1995limited},
the truncated Newton algorithm, and
the Constrained Optimization BY Linear Approximation (COBYLA) method~\cite{ANU:1770520}.
However, all these methods are iterative approaches
and need many computation steps to obtain an approximation
of the minimal value. This is problematic when the objective is to
reduce the overall number of computation steps.

A closer look at each function that has to be minimized shows that it is
differentiable and that its minimizer can be computed in a single step.
Table~\ref{table:min} presents this closed-form minimizer for each primal
variable, where $\delta_p$, $\delta_r$, and $\delta_x$ denote the
regularisation factors attached to the corresponding variables.

\begin{table*}[t]
$$
\begin{array}{|l|l|l|}
\hline
q_i^{(k)} &
 \arg\min_{q>0}
\left(
q^2 + q.
\left(
\sum_{l \in L } a_{il}w_l^{(k)}-
\lambda_i^{(k)}B_i
\right)
\right)
 &
\max \left(\epsilon,\dfrac{\lambda_i^{(k)}B_i -
\sum_{l \in L } a_{il}w_l^{(k)}}{2}\right) \\
\hline
P_{sh}^{(k)}&
\arg \min_{p > 0}
\left(
v_h^{(k)}.\dfrac{\ln(\sigma^2/D_h)}{\gamma p^{2/3}} + \lambda_h^{(k)}p
+ \delta_p p^{8/3}
\right)
&
\max \left(\epsilon,
\left(
\dfrac{
-\lambda_h^{(k)} + \sqrt{(\lambda_h^{(k)})^2 + \dfrac{64}{9}\alpha}
}{\frac{16}{3}\delta_p}
\right)^{\frac{3}{5}}
\right) \\
\hline
R_h^{(k)}
&
\arg \min_{r \geq 0 }
\left(
\delta_r r^2
-v_h^{(k)}.r - \sum_{i \in N} u_{hi}^{(k)} \eta_{hi}
\right) &
\max\left(0,\dfrac{v_h^{(k)}}{2\delta_r}\right)
\\
\hline
x_{hl}^{(k)} &
\begin{array}{l}
\arg \min_{x \geq 0}
\left(
\delta_x.x^2 \right.\\
\qquad \qquad + x.
\sum_{i \in N} \left(
\lambda_{i}^{(k)}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-} ) \right.\\
\qquad \qquad\qquad \qquad +
\left.\left. u_{hi}^{(k)} a_{il}
\right)
\right)
\end{array}
&
\max\left(0,\dfrac{-\sum_{i \in N} \left(
\lambda_{i}^{(k)}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-} ) + u_{hi}^{(k)} a_{il}
\right)}{2\delta_x}\right)
\\
\hline
\end{array}
$$
\caption{Expression of each optimized primal variable}
\label{table:min}
\end{table*}
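As a quick numerical sanity check of Table~\ref{table:min}, the closed-form
minimizers can be compared with a generic bounded iterative solver. The
following Python sketch (which assumes SciPy is available, uses arbitrary
coefficient values, and re-derives the $P_{sh}$ expression from its
stationarity condition with $\alpha$ standing for
$v_h^{(k)}\ln(\sigma^2/D_h)/\gamma$) illustrates the comparison for the
$q_i$ and $P_{sh}$ rows.
\begin{verbatim}
# Compare two closed-form minimisers with a generic bounded 1-D solver
# (coefficient values below are arbitrary, for illustration only).
import math
from scipy.optimize import minimize_scalar

def q_star(c, eps=1e-9):
    # argmin over q > 0 of  q^2 + c*q  :  -c/2, kept strictly positive
    return max(eps, -c / 2.0)

def p_star(alpha, lam, delta_p, eps=1e-9):
    # argmin over p > 0 of  alpha/p^(2/3) + lam*p + delta_p*p^(8/3),
    # obtained by solving the stationarity condition in t = p^(5/3)
    t = (-lam + math.sqrt(lam**2 + (64.0 / 9.0) * delta_p * alpha)) \
        / ((16.0 / 3.0) * delta_p)
    return max(eps, t ** (3.0 / 5.0))

c, alpha, lam, delta_p = -3.0, 2.0, 0.5, 0.1
ref_q = minimize_scalar(lambda q: q**2 + c * q,
                        bounds=(1e-9, 1e3), method="bounded")
ref_p = minimize_scalar(lambda p: alpha * p**(-2.0/3.0) + lam * p
                        + delta_p * p**(8.0/3.0),
                        bounds=(1e-9, 1e3), method="bounded")
print(q_star(c), ref_q.x)                    # both close to 1.5
print(p_star(alpha, lam, delta_p), ref_p.x)  # both close to 1.27
\end{verbatim}
The closed forms return the same minimizers as the iterative solver while
avoiding its inner iterations, which is precisely the interest of
Table~\ref{table:min}.
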
Let us first discuss the stop criterion of the cited algorithm.
We claim that even if the variation of the dual function is less than a given
threshold, this does not ensure that the lifetime has been maximized.
Minimizing a function over a domain with many variables (as is the case for
the dual function) may indeed easily fall into a local trap, because some of
the introduced variables may lead to a seemingly uniform output.

\begin{figure}
 to be continued
 \caption{Relations between dual function threshold and $q_i$ convergence}
 \label{fig:convergence:scatterplot}
\end{figure}

Experiments have indeed shown that even if the dual
function seems to be constant
(the variation between two successive evaluations is less than $10^{-5}$),
not all the $q_i$ have the same value.
For instance, Figure~\ref{fig:convergence:scatterplot} presents
a scatter plot.

The maximum amplitude rate of the sequence of $q_i$, which is
$\frac{\max_{i \in N} q_i} {\min_{i \in N}q_i}-1$,
is represented on the $y$-axis
with respect to the
value of the threshold used for the dual function, which is represented on the
$x$-axis.
This figure shows that a very small threshold is a necessary condition, but not
a sufficient one, to observe convergence of the $q_i$.

In the following, we consider that the system is $\epsilon$-stable if both
the maximum amplitude rate and the variation of the dual function are less
than a given threshold $\epsilon$.
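In practice this test is straightforward to implement; the following Python
sketch (illustrative, with arbitrary input values) checks $\epsilon$-stability
from the current $q_i$ values and the last two evaluations of the dual
function.
\begin{verbatim}
# Illustrative epsilon-stability test: true when both the maximum
# amplitude rate of the q_i and the variation of the dual function
# are below the threshold epsilon.
def max_amplitude_rate(q_values):
    return max(q_values) / min(q_values) - 1.0

def is_epsilon_stable(q_values, dual_values, epsilon=1e-5):
    dual_variation = abs(dual_values[-1] - dual_values[-2])
    return max_amplitude_rate(q_values) < epsilon and dual_variation < epsilon

print(is_epsilon_stable([1.0, 1.000004], [12.3456, 12.345601]))  # True
print(is_epsilon_stable([1.0, 1.2], [12.3456, 12.345601]))       # False
\end{verbatim}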
\section{Basics of the Existing Scheme}
\input{HLG}
\section{Improving the Existing Scheme}


\subsection{Convergence Detection}
\input{convergence}


\subsection{A closer look at convexity}
\input{convexity}
\JFC{We must discuss the problem for $P_s$}
\subsection{Directly retrieving the argmin}
\input{argmin}


\subsection{Fault-Tolerance Considerations}
\