\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{reseau.png}

\begin{scriptsize}
An example of a sensor network of size 10. All nodes are video sensors
except nodes 5 and 9, the latter being the sink.
\JFC{redo the figure, find another title}
\end{scriptsize}

\caption{A sensor network with 10 nodes}\label{fig:sn}
\end{center}
\end{figure}
Let us first recall the basics of the~\cite{HLG09} article.
The video sensor network is modelled as a connected oriented
labelled graph.
In this one,
the nodes, in a set $N$, are sensors, relay nodes, or the sink.
Furthermore, there is an edge from $i$ to $j$ if $i$ can
send a message to $j$. The set of all edges is further denoted as
$L$.
Figure~\ref{fig:sn} gives an example of such a network.

This link information is stored as a
matrix $A=(a_{il})_{i \in N, l \in L}$,
where
$a_{il}$ is $1$ if $l$ starts at $i$, is $-1$ if $l$ ends at $i$,
and is $0$ otherwise.
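For instance, consider a hypothetical three-node network (smaller than the
one of Figure~\ref{fig:sn} and only meant as an illustration) where node 1
is a video sensor, node 2 is a relay node, node 3 is the sink, and the two
links are $l_1$ from 1 to 2 and $l_2$ from 2 to 3. Its matrix, with rows
indexed by the nodes $1,2,3$ and columns by $l_1,l_2$, is
$$
A = \left(
\begin{array}{rr}
1 & 0 \\
-1 & 1 \\
0 & -1
\end{array}
\right).
$$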
Let $V \subset N $ be the set of the video sensors of $N$.
Let $R_h$, $R_h \geq 0$,
be the encoding rate of video sensor $h$, $h \in V$.
Let $\eta_{hi}$ be the production rate of node $i$
for the session initiated by $h$. More precisely,
$\eta_{hi}$ is equal to $R_h$ if $i$ is $h$,
to $-R_h$ if $i$ is the sink, and to $0$ otherwise.
We are then left to focus on the flows in this network.
Let $x_{hl}$, $x_{hl}\geq 0$, be the flow inside the edge $l$ that
is issued from the session $h$, and
let $y_l = \sum_{h \in V}x_{hl} $ be the sum of all the flows inside $l$.
Thus, the conservation of the flow of session $h$ at node $i$ writes
$ \eta_{hi} = \sum_{l \in L }a_{il}x_{hl} $.
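With the hypothetical three-node network introduced above, the session
issued by node 1 gives $\eta_{11}=R_1$, $\eta_{12}=0$, and $\eta_{13}=-R_1$,
so that the conservation constraints write $R_1 = x_{1l_1}$,
$0 = -x_{1l_1}+x_{1l_2}$, and $-R_1 = -x_{1l_2}$: the whole encoded flow
travels along $l_1$ and then $l_2$ to reach the sink.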
The initial energy of node $i$ is $B_i$.
The overall consumed power of node $i$ is
$P_{si}+ P_{ti} + P_{ri}=
P_{si}+ \sum_{l \in L}a_{il}^{+}.c^s_l.y_l +
\sum_{l \in L} a_{il}^{-}.c^r.y_l$.
The constraints of the optimisation problem notably include:
\begin{enumerate}
\item \label{itm:q}
$P_{si}+ P_{ti} + P_{ri} \leq q_i.B_i, \forall i \in N$,
\item $P_{sh} > 0,\forall h \in V$.
\end{enumerate}
To achieve this optimisation goal in a local manner, the problem is
translated into an equivalent one: find $R$, $x$, $P_s$ which minimize
$\sum_{i \in N }q_i^2$ under the same set of constraints, except
item~\ref{itm:q}, which is replaced by:
$$
\begin{array}{l}
P_{si}+ \sum_{l \in L}a_{il}^{+}.c^s_l.\left( \sum_{h \in V}x_{hl} \right) \\
\qquad +
 \sum_{l \in L} a_{il}^{-}.c^r.\left( \sum_{h \in V}x_{hl} \right) \leq q_i.B_i, \forall i \in N.
\end{array}
$$

The authors thus replace the objective of minimizing
$\sum_{i \in N }q_i^2$
by the objective of minimizing
\begin{equation}
\sum_{i \in N }q_i^2 +
\sum_{h \in V, l \in L } \delta.x_{hl}^2
+ \sum_{h \in V }\delta.R_{h}^2
\label{eq:obj2}
\end{equation}
where $\delta$ is a regularisation factor.
This indeed introduces quadratic functions of the variables $x_{hl}$ and
$R_{h}$ and makes the objective strictly convex with respect to them.

The authors then apply a classical dual-based approach with Lagrange
multipliers to solve such a problem~\cite{}.
They first introduce dual variables
$u_{hi}$, $v_{h}$, $\lambda_{i}$, and $w_l$ for any
$h \in V$, $i \in N$, and $l \in L$.
The Lagrangian of the problem then writes:
\begin{equation}
\begin{array}{l}
L(R,x,P_{s},q,u,v,\lambda,w)=\\
\sum_{i \in N} \left( q_i^2 + q_i. \left(
\sum_{l \in L } a_{il}w_l-
\lambda_i B_i
\right)\right)\\
+ \sum_{h \in V} \left(
v_h.\dfrac{\ln(\sigma^2/D_h)}{\gamma P_{sh} ^{2/3}} + \lambda_h P_{sh}\right)\\
+ \sum_{h \in V} \sum_{l\in L}
\left(
\delta.x_{hl}^2 \right.\\
\qquad \qquad + x_{hl}.
\sum_{i \in N} \left(
\lambda_{i}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-} ) \right.\\
\qquad \qquad\qquad \qquad +
\left.\left. u_{hi} a_{il}
\right)
\right)\\
 + \sum_{h \in V} \left(
\delta R_{h}^2
-v_h.R_{h} - \sum_{i \in N} u_{hi}\eta_{hi} \right)
\end{array}
\end{equation}
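The algorithm below can then be read as the classical maximization of the
dual function
$$
g(u,v,\lambda,w) = \min_{q > 0,\; P_{s} > 0,\; R \geq 0,\; x \geq 0}
L(R,x,P_{s},q,u,v,\lambda,w)
$$
over $u$, $w$, $v \geq 0$, and $\lambda \geq 0$: the primal variables are
obtained by minimizing $L$, and the dual variables are updated through a
projected subgradient step.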

The proposed algorithm iteratively computes the following variables
until the variation of the dual function is less than a given threshold.
\begin{enumerate}
\item $ u_{hi}^{(k+1)} = u_{hi}^{(k)} - \theta^{(k)}. \left(
  \eta_{hi}^{(k)} - \sum_{l \in L }a_{il}x_{hl}^{(k)} \right) $
\item
$v_{h}^{(k+1)}= \max\left\{0,v_{h}^{(k)} - \theta^{(k)}.\left( R_h^{(k)} - \dfrac{\ln(\sigma^2/D_h)}{\gamma.(P_{sh}^{(k)})^{2/3}} \right)\right\}$
\item
 $\begin{array}{l}
 \lambda_{i}^{(k+1)} = \max\left\{0, \lambda_{i}^{(k)} - \theta^{(k)}.\left(
 q_i^{(k)}.B_i \right.\right.\\
 \qquad\qquad\qquad -\sum_{l \in L}a_{il}^{+}.c^s_l.\left( \sum_{h \in V}x_{hl}^{(k)} \right) \\
 \qquad\qquad\qquad - \left.\left. \sum_{l \in L} a_{il}^{-}.c^r.\left( \sum_{h \in V}x_{hl}^{(k)} \right) - P_{si}^{(k)} \right) \right\}
\end{array}
$
\item
$w_l^{(k+1)} = w_l^{(k)} + \theta^{(k)}. \left( \sum_{i \in N} a_{il}.q_i^{(k)} \right)$
\item
$\theta^{(k)} = \omega / k^{1/2}$, where $\omega$ is a positive constant
 \item
$q_i^{(k)} = \arg\min_{q_i>0}
\left(
q_i^2 + q_i.
\left(
\sum_{l \in L } a_{il}w_l^{(k)}-
\lambda_i^{(k)}B_i
\right)
\right)$
\item \label{item:psh}
$
P_{sh}^{(k)}
=
\arg \min_{p > 0}
\left(
v_h^{(k)}.\dfrac{\ln(\sigma^2/D_h)}{\gamma p ^{2/3}} + \lambda_h^{(k)}p
\right)
$
\item
$
R_h^{(k)}
=
\arg \min_{r \geq 0 }
\left(
\delta r^2
-v_h^{(k)}.r - \sum_{i \in N} u_{hi}^{(k)} \eta_{hi}
\right)
$, where $\eta_{hi}$ is evaluated with $R_h$ replaced by $r$
\item
$
x_{hl}^{(k)} =
\begin{array}{l}
\arg \min_{x \geq 0}
\left(
\delta.x^2 \right.\\
\qquad \qquad + x.
\sum_{i \in N} \left(
\lambda_{i}^{(k)}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-} ) \right.\\
\qquad \qquad\qquad \qquad +
\left.\left. u_{hi}^{(k)} a_{il}
\right)
\right)
\end{array}
$
\end{enumerate}
where the first four variables are dual ones, $\theta^{(k)}$ is the step
size, and the last four ones are primal ones.
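
Each of the four primal updates (items 6 to 9) is a one-dimensional convex
minimization and thus admits a closed-form solution. The expressions below
are not taken from~\cite{HLG09}: they are a direct derivation from the
sub-problems as recalled here (with $s$ denoting the sink) and should be
checked against the original article.
$$
\begin{array}{rcl}
q_i^{(k)} & = & \max\left\{0,
\dfrac{\lambda_i^{(k)}.B_i - \sum_{l \in L} a_{il}w_l^{(k)}}{2}\right\},\\[3mm]
P_{sh}^{(k)} & = & \left(
\dfrac{2.v_h^{(k)}.\ln(\sigma^2/D_h)}{3.\gamma.\lambda_h^{(k)}}
\right)^{3/5} \quad \textrm{(provided $\lambda_h^{(k)} > 0$)},\\[3mm]
R_h^{(k)} & = & \max\left\{0,
\dfrac{v_h^{(k)} + u_{hh}^{(k)} - u_{hs}^{(k)}}{2.\delta}\right\},\\[3mm]
x_{hl}^{(k)} & = & \max\left\{0,
-\dfrac{\sum_{i \in N}\left(\lambda_{i}^{(k)}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-}) + u_{hi}^{(k)} a_{il}\right)}{2.\delta}\right\}.
\end{array}
$$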
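
To fix ideas, the following sketch implements the loop numerically with
such closed forms. It is only a toy illustration and not the implementation
of~\cite{HLG09}: the data layout, the initial values, and the stopping rule
based on the drift of the dual variables are all our own assumptions.
\begin{verbatim}
import numpy as np

def hlg09_sketch(A, B, cs, cr, sigma2, D, gamma, video, sink,
                 delta=0.1, omega=1.0, eps=1e-4, max_iter=5000):
    """Toy sketch of the dual iteration recalled above (our reading).
    A     : incidence matrix a_{il}, shape (n, m), n nodes and m links
    B     : initial energies B_i, shape (n,)
    cs    : sending costs c^s_l, shape (m,);  cr: receiving cost c^r
    D     : distortion bounds D_h, shape (n,), only video entries used
    video : indices of the video sensors (the set V); sink: sink index
    """
    n, m = A.shape
    V = np.asarray(video)
    Ap, Am = np.maximum(A, 0.0), np.maximum(-A, 0.0)  # a_il^+, a_il^-
    kappa = np.log(sigma2 / D[V]) / gamma             # ln(s^2/D_h)/gamma

    # dual variables u_{hi}, v_h, lambda_i, w_l (v and lambda kept >= 0)
    u = np.zeros((len(V), n)); v = np.ones(len(V))
    lam = np.ones(n); w = np.zeros(m)
    Ps = np.zeros(n)                                  # sensing powers P_si

    for k in range(1, max_iter + 1):
        theta = omega / np.sqrt(k)                    # step size theta^(k)

        # primal updates: closed forms of the one-dimensional sub-problems
        q = np.maximum(0.0, (lam * B - A @ w) / 2.0)
        R = np.zeros(len(V)); x = np.zeros((len(V), m))
        for j, h in enumerate(V):
            Ps[h] = (2.0 * v[j] * kappa[j] / (3.0 * max(lam[h], 1e-12))) ** 0.6
            R[j] = max(0.0, (v[j] + u[j, h] - u[j, sink]) / (2.0 * delta))
            coeff = lam @ (cs * Ap + cr * Am) + u[j] @ A
            x[j] = np.maximum(0.0, -coeff / (2.0 * delta))

        # dual subgradient updates (items 1 to 4 of the enumeration)
        eta = np.zeros((len(V), n))
        eta[np.arange(len(V)), V] = R
        eta[:, sink] = -R
        u_new = u - theta * (eta - x @ A.T)
        v_new = np.maximum(0.0, v - theta * (
            R - kappa / np.maximum(Ps[V], 1e-12) ** (2.0 / 3.0)))
        y = x.sum(axis=0)                             # y_l = sum_h x_hl
        lam_new = np.maximum(0.0, lam - theta * (
            q * B - Ap @ (cs * y) - cr * (Am @ y) - Ps))
        w_new = w + theta * (A.T @ q)

        # crude stopping rule: the dual variables barely move any more
        drift = max(np.abs(u_new - u).max(), np.abs(v_new - v).max(),
                    np.abs(lam_new - lam).max(), np.abs(w_new - w).max())
        u, v, lam, w = u_new, v_new, lam_new, w_new
        if drift < eps:
            break
    return R, x, Ps, q
\end{verbatim}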