\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{reseau.png}

\begin{scriptsize}
An example of a sensor network of size 10. All nodes are video sensors
except nodes 5 and 9, the latter being the sink.
\JFC{redo the figure, find another title}
\end{scriptsize}

\caption{A sensor network with 10 nodes}\label{fig:sn}
\end{center}
\end{figure}

Let us first recall the basics of the~\cite{HLG09} article.
The video sensor network is represented as a connected, oriented,
labelled graph.
In this graph,
the nodes, which form a set $N$, are sensors, links, or the sink.
Furthermore, there is an edge from $i$ to $j$ if $i$ can
send a message to $j$.
The set of all edges is further denoted by $L$.
Figure~\ref{fig:sn} gives an example of such a network.

This link information is stored as a
matrix $A=(a_{il})_{i \in N, l \in L}$, where
$a_{il}$ is $1$ if $l$ starts at $i$, is $-1$ if $l$ ends at $i$,
and is $0$ otherwise.

Let $V \subset N$ be the set of the video sensors of $N$.
Then let $R_h$, $R_h \geq 0$,
be the encoding rate of video sensor $h$, $h \in V$.
Let $\eta_{hi}$ be the production rate of the node $i$
for the session initiated by $h$. More precisely,
$\eta_{hi}$ is equal to $R_h$ if $i$ is $h$,
is equal to $-R_h$ if $i$ is the sink, and is $0$ otherwise.
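As a toy illustration (this small instance is ours, not taken
from~\cite{HLG09}), consider a network with $N=\{1,2,3\}$, where node~1 is
the only video sensor and node~3 is the sink, and with $L=\{l_1,l_2\}$,
$l_1$ going from node~1 to node~2 and $l_2$ from node~2 to node~3.
The matrix $A$ and the production rates of the session initiated by $h=1$
are then
$$
A=\left(
\begin{array}{rr}
1 & 0 \\
-1 & 1 \\
0 & -1
\end{array}
\right),
\qquad
\eta_{1,1}=R_1,\quad \eta_{1,2}=0,\quad \eta_{1,3}=-R_1 .
$$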
We are then left to focus on the flows in this network.
Let $x_{hl}$, $x_{hl}\geq 0$, be the flow on the edge $l$ that
results from the session initiated by $h$.

[\dots]

The objective is thus to find $R$, $x$, $P_s$ which minimize [\dots]
under a set of constraints, among which $P_{sh} > 0$, $\forall h \in V$.

To achieve this goal through local optimisation, the problem is translated
into an equivalent one: find $R$, $x$, $P_s$ which minimize
$\sum_{i \in N }q_i^2$
with the same set of constraints, except
item~\ref{itm:q}, which is replaced by:
$$
\begin{array}{l}
P_{si}+ \sum_{l \in L}a_{il}^{+}.c^s_l.\left( \sum_{h \in V}x_{hl} \right) \\
\qquad +
\sum_{l \in L} a_{il}^{-}.c^r.\left( \sum_{h \in V}x_{hl} \right) \leq q.B_i,
\quad \forall i \in N.
\end{array}
$$

The authors of~\cite{HLG09} furthermore replace the objective of reducing
$\sum_{i \in N }q_i^2$
by the objective of reducing
\begin{equation}
\sum_{i \in N }q_i^2 +
\sum_{h \in V, l \in L } \delta.x_{hl}^2
+ \sum_{h \in V }\delta.R_{h}^2,
\label{eq:obj2}
\end{equation}
where $\delta$ is a regularisation factor.
This indeed introduces quadratic functions of the variables $x_{hl}$ and
$R_{h}$ and makes some of the involved functions strictly convex.

The authors then apply a classical dual-based approach with Lagrange
multipliers to solve such a problem~\cite{}.
They first introduce dual variables
$u_{hi}$, $v_{h}$, $\lambda_{i}$, and $w_l$ for any
$h \in V$, $i \in N$, and $l \in L$,
and consider the following Lagrangian:
\begin{equation}
\begin{array}{l}
L(R,x,P_{s},q,u,v,\lambda,w)=\\
\sum_{i \in N} \left( q_i^2 + q_i. \left(
\sum_{l \in L } a_{il}w_l-
\lambda_iB_i
\right)\right)\\
+ \sum_{h \in V} \left(
v_h.\dfrac{\ln(\sigma^2/D_h)}{\gamma P_{sh}^{2/3}} + \lambda_h P_{sh}\right)\\
+ \sum_{h \in V} \sum_{l\in L}
\left(
\delta.x_{hl}^2 \right.\\
\qquad \qquad + x_{hl}.
\sum_{i \in N} \left(
\lambda_{i}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-} ) \right.\\
\qquad \qquad\qquad \qquad +
\left.\left. u_{hi} a_{il}
\right)
\right)\\
+ \sum_{h \in V} \left(
\delta R_{h}^2
-v_h.R_{h} - \sum_{i \in N} u_{hi}\eta_{hi}\right).
\end{array}
\end{equation}
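The dual updates below can be read as (sub)gradient ascent steps on the dual
function: for instance,
$$
\frac{\partial L}{\partial u_{hi}} = \sum_{l \in L} a_{il}x_{hl} - \eta_{hi},
$$
so that the update of $u_{hi}$ in the first item below is simply
$u_{hi}^{(k+1)} = u_{hi}^{(k)} + \theta^{(k)}.\partial L/\partial u_{hi}$
evaluated at the current primal values; the $\max\{0,\cdot\}$ in the updates
of $v_h$ and $\lambda_i$ projects these multipliers back onto nonnegative
values, since they are associated with inequality constraints.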
The proposed algorithm iteratively computes the following variables,
with a decreasing step size $\theta^{(k)} = \omega / t^{1/2}$,
until the variation of the dual function is less than a given threshold.
\begin{enumerate}
\item
$
u_{hi}^{(k+1)} = u_{hi}^{(k)} - \theta^{(k)}. \left( \eta_{hi}^{(k)} - \sum_{l \in L }a_{il}x_{hl}^{(k)} \right)
$
\item
$v_{h}^{(k+1)}= \max\left\{0,v_{h}^{(k)} - \theta^{(k)}.\left( R_h^{(k)} - \dfrac{\ln(\sigma^2/D_h)}{\gamma.(P_{sh}^{(k)})^{2/3}} \right)\right\}$
\item
$\begin{array}{l}
\lambda_{i}^{(k+1)} = \max\left\{0, \lambda_{i}^{(k)} - \theta^{(k)}.\left(
q^{(k)}.B_i \right.\right. \\
\qquad\qquad\qquad \left. - \sum_{l \in L}a_{il}^{+}.c^s_l.\left( \sum_{h \in V}x_{hl}^{(k)} \right) \right. \\
\qquad\qquad\qquad \left.\left. - \sum_{l \in L} a_{il}^{-}.c^r.\left( \sum_{h \in V}x_{hl}^{(k)} \right) - P_{si}^{(k)} \right) \right\}
\end{array}$
\item $w_{l}^{(k+1)} = \dots$
\item
$q_i^{(k)} = \arg\min_{q_i>0}
\left(
q_i^2 + q_i.
\left(
\sum_{l \in L } a_{il}w_l^{(k)}-
\lambda_i^{(k)}B_i
\right)
\right)$
\item \label{item:psh}
$P_{sh}^{(k)} = \dots$
\item $R_{h}^{(k)} = \dots$
\item
$\begin{array}{l}
x_{hl}^{(k)} =
\arg \min_{x \geq 0}
\left(
\delta.x^2 \right.\\
\qquad \qquad + x.
\sum_{i \in N} \left(
\lambda_{i}^{(k)}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-} ) \right.\\
\qquad \qquad\qquad \qquad +
\left.\left. u_{hi}^{(k)} a_{il}
\right)
\right)
\end{array}$
\end{enumerate}
where the first four variables are dual ones and the last four are primal ones.
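To make the structure of this dual scheme concrete, the following short
Python sketch applies the same machinery (quadratic primal objective, linear
coupling constraints, closed-form primal step, dual subgradient step with a
decreasing step size $\omega/\sqrt{k}$, and a stopping test on the variation
of the dual value) to a small toy problem. It is only an illustration of the
technique, not an implementation of the algorithm of~\cite{HLG09}, and all
identifiers in it are ours.
\begin{verbatim}
# Toy dual-ascent / dual-decomposition sketch (illustration only):
#   minimize  delta*||x||^2 + c.x   subject to   A x = b,  x >= 0.
# L(x,u) = delta*||x||^2 + c.x + u.(A x - b); the x-step is in closed
# form, u follows a subgradient step with step size omega/sqrt(k).
import numpy as np

def dual_ascent(A, b, c, delta=1.0, omega=1.0, tol=1e-8, max_iter=20000):
    m, n = A.shape
    u = np.zeros(m)              # dual variables, one per constraint
    prev_dual = None
    for k in range(1, max_iter + 1):
        # Primal step: argmin_{x>=0} L(x,u), separable and quadratic.
        x = np.maximum(0.0, -(c + A.T @ u) / (2.0 * delta))
        # Current value of the dual function.
        dual = delta * x @ x + c @ x + u @ (A @ x - b)
        # Dual step: the subgradient is the constraint residual.
        u = u + (omega / np.sqrt(k)) * (A @ x - b)
        # Stop when the dual value barely changes, as in the rule above.
        if prev_dual is not None and abs(dual - prev_dual) < tol:
            break
        prev_dual = dual
    return x, u

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = A @ rng.random(6)            # ensures a feasible nonnegative point
c = rng.standard_normal(6)
x, u = dual_ascent(A, b, c)
print("constraint residual:", np.linalg.norm(A @ x - b))
\end{verbatim}
In the model of~\cite{HLG09}, the primal step would instead perform the four
minimizations listed above (over $q$, $P_s$, $R$, and $x$), and the dual step
would update $u$, $v$, $\lambda$, and $w$ in the same fashion.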