\includegraphics[scale=0.2]{SensorNetwork.png}
An example of a sensor network of size 10.
All nodes are video sensors (depicted as small discs)
except the 9th one, which is the sink (depicted as a rectangle).
Large circles represent the maximum
transmission range which is set to 20 in a square region which is
\caption{Illustration of a Sensor Network of size 10}\label{fig:sn}
Let us first recall the basics of the article~\cite{HLG09}.
The video sensor network is modeled as a connected undirected
graph in which the nodes, in a set $N$, are the sensors and the sink.
Furthermore, there is an edge from $i$ to $j$ if $i$ can
send a message to $j$, \textit{i.e.}, the distance between
$i$ and $j$ is less than a given maximum transmission range.
All the possible edges are stored in a sequence $L$.
Figure~\ref{fig:sn} gives an example of such a network.
This link information is stored as a
matrix $A=(a_{il})_{i \in N, l \in L}$, where
$a_{il}$ is 1 if $l$ starts at $i$, is -1 if $l$ ends at $i$,
and is 0 otherwise.
Moreover, the outgoing links (resp. the incoming links) are represented
with the $A^+$ matrix (resp. with the $A^-$ matrix), whose elements are defined as follows:
$a_{il}^+$ (resp. $a_{il}^-$) is 1 if the link $l$ is an outgoing link from $i$
(resp. an incoming link into $i$) and 0 otherwise.
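For concreteness, the following Python fragment is a minimal sketch of how
these three matrices could be built from a list of directed links; the node
identifiers and the edge list used here are purely illustrative assumptions.
\begin{verbatim}
import numpy as np

# Hypothetical example: 4 nodes (node 3 is the sink) and a list of
# directed links l = (i, j), meaning that node i can send to node j.
nodes = [0, 1, 2, 3]
links = [(0, 1), (1, 3), (0, 2), (2, 3)]

n, m = len(nodes), len(links)
A = np.zeros((n, m))        # incidence matrix: a_il
A_plus = np.zeros((n, m))   # a_il^+ : 1 if link l leaves node i
A_minus = np.zeros((n, m))  # a_il^- : 1 if link l enters node i

for l, (i, j) in enumerate(links):
    A[i, l], A[j, l] = 1, -1   # l starts at i and ends at j
    A_plus[i, l] = 1
    A_minus[j, l] = 1
\end{verbatim}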
Let $V \subset N$ be the set of the video sensors of $N$.
Let $R_h$, $R_h \geq 0$,
be the encoding rate of video sensor $h$, $h \in V$.
Let $\eta_{hi}$ be the rate inside node $i$
of the production that has been initiated by $h$. More precisely,
$\eta_{hi}$ is equal to $R_h$ if $i$ is $h$,
is equal to $-R_h$ if $i$ is the sink, and is equal to $0$ otherwise.
Let us focus on the flows in this network.
Let $x_{hl}$, $x_{hl}\geq 0$, be the flow inside the edge $l$ that
originates from the node $h$, and
let $y_l = \sum_{h \in V}x_{hl}$ be the sum of all the flows inside $l$.
Thus, what is produced inside the $i^{th}$ sensor for session $h$
is $\eta_{hi} = \sum_{l \in L }a_{il}x_{hl}$.
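The flow-conservation identity above can be checked numerically; the short
sketch below reuses the small hypothetical network of the previous listing
(node 3 is the sink) with arbitrary rates and flow values.
\begin{verbatim}
import numpy as np

# Check eta_hi = sum_l a_il * x_hl on the 4-node, 4-link toy example.
A = np.array([[ 1,  0,  1,  0],
              [-1,  1,  0,  0],
              [ 0,  0, -1,  1],
              [ 0, -1,  0, -1]])
R = {0: 2.0, 1: 1.0}                        # encoding rates of sensors 0 and 1
x = {0: np.array([1.5, 1.5, 0.5, 0.5]),     # flows of session 0 on each link
     1: np.array([0.0, 1.0, 0.0, 0.0])}     # flows of session 1 on each link

for h, rate in R.items():
    eta = A @ x[h]                          # production of session h at each node
    expected = np.zeros(4)
    expected[h], expected[3] = rate, -rate  # +R_h at the source, -R_h at the sink
    assert np.allclose(eta, expected)
\end{verbatim}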
The encoding power of node $i$ is $P_{si}$, $P_{si} > 0$.
The emission distortion of node $i$ is
$\sigma^2 e^{-\gamma . R_i.P_{si}^{2/3}}$
where $\sigma^2$ is the average input variance and
$\gamma$ is the encoding efficiency coefficient.
The distortion of each video sensor $h$, $h \in V$,
is bounded by a constant value $D_h$.
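To make this power--rate--distortion relation concrete, the following sketch
evaluates the distortion model and the minimum rate needed to keep the
distortion of a sensor below $D_h$; all numerical values are illustrative
assumptions, not values taken from the article.
\begin{verbatim}
import math

sigma2 = 1.0      # average input variance sigma^2
gamma = 0.8       # encoding efficiency coefficient
P_s = 2.0         # encoding power P_sh of some video sensor h
D_h = 0.05        # distortion bound of sensor h

def distortion(rate, power):
    """Distortion model sigma^2 * exp(-gamma * R * P^(2/3))."""
    return sigma2 * math.exp(-gamma * rate * power ** (2.0 / 3.0))

# Smallest rate satisfying distortion <= D_h, i.e. the left-hand side of
# the constraint ln(sigma^2 / D_h) / (gamma * P^(2/3)) <= R_h.
R_min = math.log(sigma2 / D_h) / (gamma * P_s ** (2.0 / 3.0))
assert distortion(R_min, P_s) <= D_h + 1e-12
\end{verbatim}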
The initial energy of node $i$ is $B_i$.
The transmission consumed power of node $i$ is
$P_{ti} = \sum_{l \in L} a_{il}^{+}.c_l^s.y_l$ where $c_l^s$ is the transmission energy
consumption cost of link $l$, $l\in L$. This cost is defined
as follows: $c_l^s = \alpha +\beta.d_l^{n_p} $ where
$d_l$ represents the distance of the link $l$, and
$\alpha$, $\beta$, and $n_p$ are constants.
The reception consumed power of node $i$ is
$P_{ri} = c^r \sum_{l \in L } a_{il}^-.y_l$
where $c^r$ is the reception energy consumption cost.
The overall consumed power of node $i$ is therefore
$P_{si}+ P_{ti} + P_{ri}=
P_{si}+ \sum_{l \in L}a_{il}^{+}.c^s_l.y_l +
\sum_{l \in L} a_{il}^{-}.c^r.y_l $.
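The per-node power budget can be transcribed directly with the matrices
introduced earlier; the sketch below does so for the same toy network, with
assumed cost constants and flows.
\begin{verbatim}
import numpy as np

# Per-node consumed power P_si + P_ti + P_ri on the 4-node toy example.
A_plus = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 0, 0]])
A_minus = np.array([[0, 0, 0, 0],
                    [1, 0, 0, 0],
                    [0, 0, 1, 0],
                    [0, 1, 0, 1]])
d = np.array([12.0, 15.0, 10.0, 14.0])      # link distances d_l
alpha, beta, n_p, c_r = 0.1, 1e-4, 2, 0.05  # assumed cost constants
c_s = alpha + beta * d ** n_p               # transmission cost of each link
y = np.array([1.5, 2.5, 0.5, 0.5])          # aggregated flows y_l
P_s = np.array([2.0, 1.0, 1.0, 0.0])        # encoding powers (0 for the sink)

P_t = A_plus @ (c_s * y)          # transmission power of each node
P_r = c_r * (A_minus @ y)         # reception power of each node
P_total = P_s + P_t + P_r         # overall consumed power of each node
\end{verbatim}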
The objective is thus to find $R$, $x$, $P_s$ which maximize
the network lifetime $T_{\textit{net}}$, or equivalently which minimize
$q=1/{T_{\textit{net}}}$.
Recall that $B_i$ is the initial energy of node $i$.
One thus has the equivalent objective of finding $R$, $x$, $P_s$ which minimize $q$
under the following set of constraints (illustrated by the sketch after the list):
\item $\sum_{l \in L }a_{il}x_{hl} = \eta_{hi}, \forall h \in V, \forall i \in N $
\item $ \sum_{h \in V}x_{hl} = y_l, \forall l \in L$
\item $\dfrac{\ln(\sigma^2/D_h)}{\gamma.P_{sh}^{2/3}} \leq R_h, \forall h \in V$
\item \label{itm:q} $P_{si}+ \sum_{l \in L}a_{il}^{+}.c^s_l.y_l +
c^r.\sum_{l \in L} a_{il}^{-}.y_l \leq q.B_i, \forall i \in N$
\item $\sum_{i \in N} a_{il}q_i = 0, \forall l \in L$
\item $x_{hl} \geq 0, \forall h \in V, \forall l \in L$
\item $R_h \geq 0, \forall h \in V$
\item $P_{sh} > 0, \forall h \in V$
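As a sanity check, the following Python sketch verifies the flow, distortion,
and energy constraints for one candidate solution on the toy example used so
far; it is only a feasibility check under assumed values, not the
optimisation itself, and it assumes that the sink is the last node.
\begin{verbatim}
import numpy as np

def feasible(A, A_plus, A_minus, c_s, c_r, B, sigma2, gamma, D,
             R, x, P_s, q, tol=1e-9):
    """Check the main constraints for candidate values (R, x, P_s, q)."""
    y = sum(x.values())                             # y_l = sum_h x_hl
    for h, xh in x.items():
        eta = np.zeros(A.shape[0])
        eta[h], eta[-1] = R[h], -R[h]               # sink assumed to be last
        if not np.allclose(A @ xh, eta, atol=tol):  # flow conservation
            return False
        if np.log(sigma2 / D[h]) / (gamma * P_s[h] ** (2 / 3)) > R[h] + tol:
            return False                            # distortion bound
    power = P_s + A_plus @ (c_s * y) + c_r * (A_minus @ y)
    return bool(np.all(power <= q * B + tol))       # energy / lifetime bound
\end{verbatim}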
To achieve this optimisation goal by a local optimisation, the problem is translated into an
equivalent one: find $R$, $x$, $P_s$, and $q$ which minimize
$\sum_{i \in N }q_i^2$ with the same set of constraints, except
item \ref{itm:q}, which is replaced by:
$$P_{si}+ \sum_{l \in L}a_{il}^{+}.c^s_l.\left( \sum_{h \in V}x_{hl} \right)
+ \sum_{l \in L} a_{il}^{-}.c^r.\left( \sum_{h \in V}x_{hl} \right) \leq q_i.B_i, \forall i \in N$$
and where the following constraint is added:
$$ q_i > 0, \forall i \in N $$
The authors thus replace the objective of minimizing
$\sum_{i \in N }q_i^2$
by the objective of minimizing
$$
\sum_{i \in N }q_i^2 +
\sum_{h \in V, l \in L } \delta.x_{hl}^2
+ \sum_{h \in V }\delta.R_{h}^2
$$
where $\delta$ is a regularisation factor.
This indeed introduces quadratic terms in the variables $x_{hl}$ and
$R_{h}$ and makes the objective function strictly convex with respect to them.
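Written for the dictionary-of-arrays layout of the previous sketches, this
regularised objective is a direct transcription; the function below only
assumes those data structures.
\begin{verbatim}
import numpy as np

def regularised_objective(q, x, R, delta):
    """sum_i q_i^2 + delta * sum_{h,l} x_hl^2 + delta * sum_h R_h^2."""
    quad_q = float(np.sum(np.asarray(q) ** 2))
    quad_x = sum(float(np.sum(xh ** 2)) for xh in x.values())
    quad_R = sum(r ** 2 for r in R.values())
    return quad_q + delta * (quad_x + quad_R)
\end{verbatim}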
The authors then apply a classical dual approach with Lagrange multipliers
to solve such a problem.
They first introduce dual variables
$u_{hi}$, $v_{h}$, $\lambda_{i}$, and $w_l$ for any
$h \in V$, $i \in N$, and $l \in L$.
The Lagrangian of the problem is then:
\begin{multline*}
L(R,x,P_{s},q,u,v,\lambda,w)=
\sum_{i \in N} \left( q_i^2 + q_i. \left(
\sum_{l \in L } a_{il}w_l - \lambda_i.B_i \right) \right)\\
+ \sum_{h \in V} \left(
v_h.\dfrac{\ln(\sigma^2/D_h)}{\gamma P_{sh} ^{2/3}} + \lambda_h P_{sh}\right)\\
+ \sum_{h \in V} \sum_{l\in L}
\left( \delta.x_{hl}^2
+ x_{hl}.
\sum_{i \in N} \left(
\lambda_{i}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-} )
+ u_{hi} a_{il}
\right) \right)\\
+ \sum_{h \in V} \left( \delta.R_{h}^2
-v_h.R_{h} - \sum_{i \in N} u_{hi}\eta_{hi} \right)
\end{multline*}
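To make the grouping of these terms explicit, the sketch below evaluates the
Lagrangian for the toy data layout used in the previous listings; the choice
of dictionaries keyed by video sensor and of the last node as the sink are
assumptions of this sketch.
\begin{verbatim}
import numpy as np

def lagrangian(A, A_plus, A_minus, c_s, c_r, B, sigma2, gamma, D, delta,
               R, x, P_s, q, u, v, lam, w):
    """Evaluate L(R, x, P_s, q, u, v, lambda, w) term by term."""
    total = float(np.sum(q ** 2 + q * (A @ w - lam * B)))            # q_i terms
    for h in R:
        total += v[h] * np.log(sigma2 / D[h]) / (gamma * P_s[h] ** (2 / 3)) \
                 + lam[h] * P_s[h]                                   # P_sh terms
        link_cost = lam @ (c_s * A_plus + c_r * A_minus) + u[h] @ A  # per-link weight
        total += float(np.sum(delta * x[h] ** 2 + x[h] * link_cost)) # x_hl terms
        eta = np.zeros(A.shape[0])
        eta[h], eta[-1] = R[h], -R[h]                                # sink = last node
        total += delta * R[h] ** 2 - v[h] * R[h] - float(u[h] @ eta) # R_h terms
    return total
\end{verbatim}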
The proposed algorithm iteratively computes the following variables
until the variation of the dual function is less than a given threshold.
\item $ u_{hi}^{(k+1)} = u_{hi}^{(k)} - \theta^{(k)}. \left(
\eta_{hi}^{(k)} - \sum_{l \in L }a_{il}x_{hl}^{(k)} \right) $
\item $v_{h}^{(k+1)}= \max\left\{0,v_{h}^{(k)} - \theta^{(k)}.\left( R_h^{(k)} - \dfrac{\ln(\sigma^2/D_h)}{\gamma.(P_{sh}^{(k)})^{2/3}} \right)\right\}$
\item $\lambda_{i}^{(k+1)} = \max\left\{0, \lambda_{i}^{(k)} - \theta^{(k)}.\left(
q_i^{(k)}.B_i
-\sum_{l \in L}a_{il}^{+}.c^s_l.\left( \sum_{h \in V}x_{hl}^{(k)} \right)
- \sum_{l \in L} a_{il}^{-}.c^r.\left( \sum_{h \in V}x_{hl}^{(k)} \right) - P_{si}^{(k)} \right) \right\}$
\item $w_l^{(k+1)} = w_l^{(k)} + \theta^{(k)}. \left( \sum_{i \in N} a_{il}.q_i^{(k)} \right)$
\item $\theta^{(k)} = \omega / \sqrt{k}$ is the step size.
\item $q_i^{(k)} = \arg\min_{q_i>0}
\left( q_i^2 + q_i.\left(
\sum_{l \in L } a_{il}w_l^{(k)} -
\lambda_i^{(k)}.B_i \right) \right)$
\item \label{item:psh}
$P_{sh}^{(k)} = \arg\min_{p > 0} \left(
v_h^{(k)}.\dfrac{\ln(\sigma^2/D_h)}{\gamma p^{2/3}} + \lambda_h^{(k)}.p \right)$
\item $R_h^{(k)} =
\arg \min_{r \geq 0 }
\left( \delta.r^2
-v_h^{(k)}.r - \sum_{i \in N} u_{hi}^{(k)} \eta_{hi} \right)$
\item $x_{hl}^{(k)} =
\arg \min_{x \geq 0 }
\left( \delta.x^2 + x.
\sum_{i \in N} \left(
\lambda_{i}^{(k)}.(c^s_l.a_{il}^{+} +
c^r. a_{il}^{-} )
+ u_{hi}^{(k)} a_{il}
\right) \right)$
where the first four elements are dual variables and the last four ones are
primal variables.
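Putting the eight updates together, the following Python sketch runs the whole
iterative scheme on the data layout used in the previous listings. The
closed-form arg-min expressions, the fall-back value of $P_{sh}$ when a
multiplier is zero, and the fixed iteration count used in place of the
dual-function variation test are simplifying assumptions of this sketch, not
elements taken from the article.
\begin{verbatim}
import numpy as np

def dual_loop(A, A_plus, A_minus, c_s, c_r, B, sigma2, gamma, D, delta,
              sink, omega=0.1, iters=200):
    """Dual subgradient iterations for the regularised problem (sketch)."""
    n, m = A.shape
    V = list(D.keys())                        # video sensors
    u = {h: np.zeros(n) for h in V}           # dual variables
    v = {h: 0.0 for h in V}
    lam = np.zeros(n)
    w = np.zeros(m)
    K = {h: np.log(sigma2 / D[h]) / gamma for h in V}

    for k in range(1, iters + 1):
        # Primal variables minimising the Lagrangian for the current duals
        # (closed forms of the quadratic arg-min expressions; the fall-back
        # value 1.0 for P_sh when a multiplier is zero is arbitrary).
        q = np.maximum(0.0, -(A @ w - lam * B) / 2.0)
        P_s = {h: ((2.0 * v[h] * K[h]) / (3.0 * lam[h])) ** 0.6
               if lam[h] > 0 and v[h] > 0 else 1.0 for h in V}
        R = {h: max(0.0, (v[h] + u[h][h] - u[h][sink]) / (2.0 * delta))
             for h in V}
        x = {h: np.maximum(0.0, -(lam @ (c_s * A_plus + c_r * A_minus)
                                  + u[h] @ A) / (2.0 * delta)) for h in V}

        # Dual (subgradient) updates with the diminishing step size.
        theta = omega / np.sqrt(k)
        for h in V:
            eta = np.zeros(n)
            eta[h], eta[sink] = R[h], -R[h]
            u[h] = u[h] - theta * (eta - A @ x[h])
            v[h] = max(0.0, v[h] - theta * (R[h] - K[h] / P_s[h] ** (2 / 3)))
        y = sum(x.values())
        P_s_vec = np.array([P_s.get(i, 0.0) for i in range(n)])
        lam = np.maximum(0.0, lam - theta * (q * B - A_plus @ (c_s * y)
                                             - c_r * (A_minus @ y) - P_s_vec))
        w = w + theta * (A.T @ q)
    return R, x, P_s, q
\end{verbatim}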