\includegraphics[scale=0.3]{reseau.png}
An example of a sensor network of size 10. All nodes are video sensors
except nodes 5 and 9, the latter being the sink.
\JFC{redo the figure, find another title}
\caption{A sensor network with 10 sensors}\label{fig:sn}
Let us first recall the basics of the article~\cite{HLG09}.
The video sensor network is modelled as a connected directed
labelled graph:
the nodes, in a set $N$, are sensors, relays, or the sink,
and there is an edge from $i$ to $j$ if $i$ can
send a message to $j$. The set of all edges is denoted by $L$.
Figure~\ref{fig:sn} gives an example of such a network.
This link information is stored in a
matrix $A=(a_{il})_{i \in N, l \in L}$, where
$a_{il}$ is $1$ if $l$ starts at $i$, $-1$ if $l$ ends at $i$,
and $0$ otherwise.
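As a small illustration, the incidence matrix $A$ can be built as follows. The three-node topology below is hypothetical (it is not the network of Figure~\ref{fig:sn}):

```python
# Hypothetical toy topology: nodes 0 (video sensor), 1 (relay), 2 (sink);
# links: l=0 is 0->1, l=1 is 1->2, l=2 is 0->2.
links = [(0, 1), (1, 2), (0, 2)]
n_nodes = 3

# A[i][l] = 1 if link l starts at node i, -1 if it ends at i, 0 otherwise.
A = [[0] * len(links) for _ in range(n_nodes)]
for l, (src, dst) in enumerate(links):
    A[src][l] = 1
    A[dst][l] = -1

# Each column has exactly one +1 and one -1, so every column sums to zero.
assert all(sum(A[i][l] for i in range(n_nodes)) == 0 for l in range(len(links)))
```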
Let $V \subset N$ be the set of the video sensors of $N$.
Let $R_h$, $R_h \geq 0$,
be the encoding rate of video sensor $h$, $h \in V$,
and let $\eta_{hi}$ be the production rate of node $i$
for the session initiated by $h$. More precisely,
$\eta_{hi}$ is equal to $R_h$ if $i$ is $h$,
to $-R_h$ if $i$ is the sink, and to $0$ otherwise.
We are then left to focus on the flows in this network.
Let $x_{hl}$, $x_{hl} \geq 0$, be the flow on edge $l$
issued from the session of $h$, and
let $y_l = \sum_{h \in V} x_{hl}$ be the sum of all the flows on $l$.
Thus, what is produced by the $i^{th}$ sensor for session $h$
is $\eta_{hi} = \sum_{l \in L} a_{il} x_{hl}$.
The encoding power of node $i$ is $P_{si}$, $P_{si} > 0$.
The distortion is bounded: $\sigma^2 e^{-\gamma . R_h . P_{sh}^{2/3}} \leq D_h$.
The initial energy of node $i$ is $B_i$.
The overall consumed power of node $i$ is
$P_{si} + P_{ti} + P_{ri} =
P_{si} + \sum_{l \in L} a_{il}^{+}.c^s_l.y_l +
\sum_{l \in L} a_{il}^{-}.c^r.y_l \leq q.B_i$.
The objective is thus to find $R$, $x$, $P_s$ which minimize
$q$ under the following set of constraints:
\item $\sum_{l \in L} a_{il}x_{hl} = \eta_{hi}, \forall h \in V, \forall i \in N$
\item $\sum_{h \in V} x_{hl} = y_l, \forall l \in L$
\item $\dfrac{\ln(\sigma^2/D_h)}{\gamma.P_{sh}^{2/3}} \leq R_h, \forall h \in V$
\item \label{itm:q} $P_{si}+ \sum_{l \in L}a_{il}^{+}.c^s_l.y_l +
\sum_{l \in L} a_{il}^{-}.c^r.y_l \leq q.B_i, \forall i \in N$
\item $x_{hl} \geq 0, \forall h \in V, \forall l \in L$
\item $R_h \geq 0, \forall h \in V$
\item $P_{sh} > 0, \forall h \in V$
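The third constraint is simply the distortion bound $\sigma^2 e^{-\gamma . R_h . P_{sh}^{2/3}} \leq D_h$ solved for $R_h$. This equivalence can be checked numerically; the parameter values below are arbitrary:

```python
import math

# Arbitrary illustrative values for sigma^2, D_h, gamma and P_sh.
sigma2, D_h, gamma, P_sh = 4.0, 0.1, 0.5, 2.0

# Smallest rate allowed by ln(sigma^2/D_h) / (gamma * P_sh^(2/3)) <= R_h.
R_min = math.log(sigma2 / D_h) / (gamma * P_sh ** (2 / 3))

# At R_h = R_min the distortion bound is tight...
assert math.isclose(sigma2 * math.exp(-gamma * R_min * P_sh ** (2 / 3)), D_h)
# ...and any larger rate only decreases the distortion further below D_h.
assert sigma2 * math.exp(-gamma * (R_min + 1) * P_sh ** (2 / 3)) < D_h
```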
To achieve this optimisation goal by means of a local optimisation,
the problem is translated into an
equivalent one: find $R$, $x$, $P_s$ which minimize
$\sum_{i \in N} q_i^2$ under the same set of constraints, except
item~\ref{itm:q}, which is replaced by:
P_{si}+ \sum_{l \in L}a_{il}^{+}.c^s_l.\left( \sum_{h \in V}x_{hl} \right) +
\sum_{l \in L} a_{il}^{-}.c^r.\left( \sum_{h \in V}x_{hl} \right) \leq q_i.B_i, \forall i \in N
They thus replace the objective of reducing
$\sum_{i \in N} q_i^2$
by the objective of reducing
\sum_{i \in N} q_i^2 +
\sum_{h \in V, l \in L} \delta.x_{hl}^2
+ \sum_{h \in V} \delta.R_{h}^2
where $\delta$ is a regularisation factor.
This indeed introduces quadratic functions of the variables $x_{hl}$ and
$R_{h}$ and makes some of the functions strictly convex.
The authors then apply a classical dual approach based on Lagrange multipliers
to solve such a problem~\cite{}.
They first introduce the dual variables
$u_{hi}$, $v_{h}$, $\lambda_{i}$, and $w_l$, for any
$h \in V$, $i \in N$, and $l \in L$.
L(R,x,P_{s},q,u,v,\lambda,w) = \\
\sum_{i \in N} \left( q_i^2 + q_i.\left(
\sum_{l \in L} a_{il}w_l - \lambda_i B_i \right) \right)\\
+ \sum_{h \in V} \left(
v_h.\dfrac{\ln(\sigma^2/D_h)}{\gamma P_{sh}^{2/3}} + \lambda_h P_{sh} \right)\\
+ \sum_{h \in V} \sum_{l \in L} \left(
\delta.x_{hl}^2
+ x_{hl}.
\sum_{i \in N} \left(
\lambda_{i}.(c^s_l.a_{il}^{+} +
c^r.a_{il}^{-})
+ u_{hi} a_{il} \right) \right)\\
+ \sum_{h \in V} \left( \delta.R_{h}^2
- v_h.R_{h} - \sum_{i \in N} u_{hi}\eta_{hi} \right)
The proposed algorithm iteratively computes the following variables
until the variation of the dual function is less than a given threshold.
\item $u_{hi}^{(k+1)} = u_{hi}^{(k)} - \theta^{(k)}. \left(
\eta_{hi}^{(k)} - \sum_{l \in L} a_{il}x_{hl}^{(k)} \right)$
\item
$v_{h}^{(k+1)} = \max\left\{0, v_{h}^{(k)} - \theta^{(k)}.\left( R_h^{(k)} - \dfrac{\ln(\sigma^2/D_h)}{\gamma.(P_{sh}^{(k)})^{2/3}} \right)\right\}$
\item
$\lambda_{i}^{(k+1)} = \max\left\{0, \lambda_{i}^{(k)} - \theta^{(k)}.\left(
q_i^{(k)}.B_i
- \sum_{l \in L}a_{il}^{+}.c^s_l.\left( \sum_{h \in V}x_{hl}^{(k)} \right)
- \sum_{l \in L} a_{il}^{-}.c^r.\left( \sum_{h \in V}x_{hl}^{(k)} \right) - P_{si}^{(k)} \right) \right\}$
\item
$w_l^{(k+1)} = w_l^{(k)} + \theta^{(k)}. \left( \sum_{i \in N} a_{il}.q_i^{(k)} \right)$
\item
$\theta^{(k)} = \omega / k^{1/2}$
\item
$q_i^{(k)} = \arg\min_{q_i > 0} \left( q_i^2 + q_i.\left(
\sum_{l \in L} a_{il}w_l^{(k)} - \lambda_i^{(k)} B_i \right) \right)$
\item \label{item:psh}
$P_{sh}^{(k)} = \arg\min_{p > 0} \left(
v_h^{(k)}.\dfrac{\ln(\sigma^2/D_h)}{\gamma p^{2/3}} + \lambda_h^{(k)} p \right)$
\item
$R_{h}^{(k)} =
\arg \min_{r \geq 0} \left( \delta.r^2
- v_h^{(k)}.r - \sum_{i \in N} u_{hi}^{(k)} \eta_{hi} \right)$
(with $\eta_{hi}$ evaluated at $R_h = r$)
\item
$x_{hl}^{(k)} = \arg\min_{x \geq 0} \left( \delta.x^2 + x.
\sum_{i \in N} \left(
\lambda_{i}^{(k)}.(c^s_l.a_{il}^{+} +
c^r.a_{il}^{-})
+ u_{hi}^{(k)} a_{il} \right) \right)$
where the first four elements are dual variables and the last four ones are
the primal variables.
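To make the iteration concrete, here is a minimal numerical sketch of the scheme on a hypothetical two-node instance (one video sensor, one sink, one link). All constants are arbitrary, and the closed-form per-variable minimisers are derived from the quadratic and convex subproblems listed above, under the reconstruction given there:

```python
import math

# Hypothetical toy instance (not the experiments of HLG09): node 0 is the
# only video sensor, node 1 the sink, one link l from node 0 to node 1.
B = [1.0, 1.0]           # initial energies B_i
c_s, c_r = 0.1, 0.05     # per-unit sending / receiving costs c^s_l, c^r
sigma2, D, gamma = 4.0, 0.1, 0.5
delta, omega = 0.1, 0.05 # regularisation factor and step-size scale
c = math.log(sigma2 / D) / gamma
P_MIN, P_MAX = 1e-3, 10.0

# Dual variables: u[i] = u_{0i}, v = v_0, lam[i] = lambda_i, w = w_l.
u, v, lam, w = [0.0, 0.0], 1.0, [1.0, 1.0], 0.0

for k in range(1, 201):
    theta = omega / math.sqrt(k)

    # Primal updates: closed forms of the per-variable argmin subproblems.
    # q_i = argmin_{q>0} q^2 + q.(sum_l a_il w_l - lam_i B_i)
    q = [max(0.0, (lam[i] * B[i] - a * w) / 2.0) for i, a in enumerate((1, -1))]
    # P_s0 = argmin_{p>0} v.c/p^(2/3) + lam_0.p, clamped for numerical safety
    p = ((2.0 * v * c) / (3.0 * max(lam[0], 1e-9))) ** 0.6
    p = min(max(p, P_MIN), P_MAX)
    # R_0 = argmin_{r>=0} delta.r^2 - v.r - (u_00.r - u_01.r)
    R = max(0.0, (v + u[0] - u[1]) / (2.0 * delta))
    # x_0l = argmin_{x>=0} delta.x^2 + x.(lam_0.c_s + lam_1.c_r + u_00 - u_01)
    x = max(0.0, -(lam[0] * c_s + lam[1] * c_r + u[0] - u[1]) / (2.0 * delta))

    # Dual (projected) subgradient steps, mirroring the update list above.
    eta = (R, -R)                    # eta_{0i}
    ax = (x, -x)                     # sum_l a_il x_l at each node
    u = [u[i] - theta * (eta[i] - ax[i]) for i in range(2)]
    v = max(0.0, v - theta * (R - c / p ** (2.0 / 3.0)))
    spent = (c_s * x + p, c_r * x)   # consumed power per node (P_s1 = 0)
    lam = [max(0.0, lam[i] - theta * (q[i] * B[i] - spent[i])) for i in range(2)]
    w = w + theta * (q[0] - q[1])
# After the loop, R, x, p, q hold the last primal iterates.
```

With the diminishing step $\theta^{(k)}$ the duals stay bounded and the primal iterates remain nonnegative; the sketch only illustrates the structure of the updates, not the convergence behaviour reported by the authors.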