An example of a sensor network of size 10.
All nodes are video sensors (depicted as small discs)
except the 9th one, which is the sink (depicted as a rectangle).
In this model,
the nodes, forming a set $N$, are the sensors and the sink,
and they are connected by links in a set $L$.
Furthermore, there is an edge from $i$ to $j$ if $i$ can
transmit directly to $j$.
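To make this model concrete, the following sketch builds such a directed connectivity graph for a 10-node network; it is only an illustrative example, and the node coordinates, the transmission range, and the function name \texttt{build\_graph} are assumptions rather than elements of the original model.
\begin{verbatim}
# Minimal sketch (assumed names and values): build the directed
# connectivity graph of a 10-node network. Node 9 is the sink.
import math

RANGE = 30.0                      # assumed transmission range (meters)
positions = {i: (10.0 * (i % 5), 20.0 * (i // 5)) for i in range(10)}

def build_graph(positions, tx_range):
    """Edge (i, j) exists if j is within i's transmission range."""
    edges = []
    for i, (xi, yi) in positions.items():
        for j, (xj, yj) in positions.items():
            if i != j and math.hypot(xi - xj, yi - yj) <= tx_range:
                edges.append((i, j))
    return edges

edges = build_graph(positions, RANGE)   # list of directed links
\end{verbatim}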
Let $R_h$, $R_h \geq 0$,
be the encoding rate of video sensor $h$, $h \in V$.
Let $\eta_{hi}$ denote the net rate generated at node $i$
by the flow originating at sensor $h$:
$\eta_{hi}$ is equal to $R_h$ if $i$ is $h$,
to $-R_h$ if $i$ is the sink, and to $0$ otherwise.
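In equation form, this definition reads:
\begin{equation}
\eta_{hi} = \left\{
\begin{array}{ll}
R_h & \mbox{if } i = h,\\
-R_h & \mbox{if } i \mbox{ is the sink},\\
0 & \mbox{otherwise.}
\end{array}
\right.
\end{equation}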
The distortion of the video encoded by sensor $i$ is modeled as
$\sigma^2 e^{-\gamma . R_i . P_{si}^{2/3}}$,
where $\sigma^2$ is the average input variance and
$\gamma$ is the encoding efficiency coefficient.
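As an illustration of this distortion model, the sketch below evaluates it for a few operating points; it is a non-authoritative example, and the numerical values chosen for $\sigma^2$, $\gamma$, the rates, and the powers are assumptions.
\begin{verbatim}
# Minimal sketch of the power-rate-distortion model
#   D(R, Ps) = sigma2 * exp(-gamma * R * Ps**(2/3))
# All numerical values below are illustrative assumptions.
import math

def distortion(rate, power, sigma2=1.0, gamma=0.5):
    """Expected encoding distortion of one video sensor."""
    return sigma2 * math.exp(-gamma * rate * power ** (2.0 / 3.0))

for rate in (0.5, 1.0, 2.0):          # encoding rates (illustrative)
    for power in (0.2, 0.4, 0.8):     # encoding power levels (illustrative)
        print(rate, power, round(distortion(rate, power), 4))
\end{verbatim}
Distortion decreases when either the encoding rate or the encoding power increases, which is the trade-off the optimization exploits.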
The power consumed by node $i$ for transmission is
$P_{ti} = \sum_{l \in L} a_{il}^{+}.c_l^s.y_l$, where $y_l$ is the total rate carried
by link $l$, $a_{il}^{+}$ equals $1$ if link $l$ leaves node $i$ (and $0$ otherwise),
and $c_l^s$ is the transmission energy consumption cost of link $l$, $l\in L$.
This cost depends on the link distance and on radio parameters:
$d_l$ represents the distance of link $l$,
and $\alpha$, $\beta$, and $n_p$ are constants.
The power consumed by node $i$ for reception is
$P_{ri} = c^r.\sum_{l \in L} a_{il}^{-}.y_l$, where $c^r$ is the reception
energy consumption cost per unit of rate and $a_{il}^{-}$ equals $1$
if link $l$ enters node $i$ (and $0$ otherwise).
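The sketch below illustrates these per-node power terms. It is a non-authoritative example: the transmission cost model $\alpha + \beta.d_l^{n_p}$ is a commonly used assumption rather than a formula taken from the text above, and all numerical values, the link list, and the function names are made up.
\begin{verbatim}
# Minimal sketch of the node power model (assumed cost model and values).
# Links are (source, destination, distance); rates[l] is the rate y_l on link l.

ALPHA, BETA, N_P = 0.1, 0.01, 2.0   # assumed radio-cost constants
C_R = 0.05                          # assumed reception cost per unit rate

links = [(0, 9, 25.0), (1, 0, 12.0), (1, 9, 30.0)]
rates = [2.0, 1.5, 0.5]             # assumed link rates

def tx_cost(distance):
    """Assumed transmission cost of a link: alpha + beta * d^n_p."""
    return ALPHA + BETA * distance ** N_P

def node_power(i):
    """Transmission and reception power consumed by node i."""
    p_tx = sum(tx_cost(d) * y for (src, dst, d), y in zip(links, rates) if src == i)
    p_rx = sum(C_R * y for (src, dst, d), y in zip(links, rates) if dst == i)
    return p_tx, p_rx

for node in (0, 1, 9):
    print(node, node_power(node))
\end{verbatim}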
$R_{h}$ and makes some of the functions strictly convex.
The authors then apply a classical dual-based approach with Lagrange multipliers.
They first introduce dual variables
$u_{hi}$, $v_{h}$, $\lambda_{i}$, and $w_l$, defined for any sensor $h \in V$,
node $i \in N$, and link $l \in L$, and form the corresponding Lagrangian:
\begin{equation}
\begin{array}{l}
L(R,x,P_{s},q,u,v,\lambda,w)=\\
\qquad \ldots \left\{ \ldots \left( q^{(k)}.B_i - P_{si}^{(k)} \right. \right.\\
\qquad\qquad\qquad -\sum_{l \in L}a_{il}^{+}.c^s_l. \sum_{h \in V}x_{hl}^{(k)} \\
\qquad\qquad\qquad - \left.\left. c^r.\sum_{l \in L} a_{il}^{-}. \sum_{h \in V}x_{hl}^{(k)} \right) \right\} \ldots
\end{array}
\end{equation}
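To give an idea of how such a dual-based approach is typically iterated, the sketch below shows a generic projected subgradient update of the energy multipliers $\lambda_i$. It is a non-authoritative sketch: the diminishing step size and the placeholder functions \texttt{solve\_primal} and \texttt{energy\_slack} are assumptions, and it does not claim to reproduce the authors' exact algorithm.
\begin{verbatim}
# Minimal sketch (assumed structure): projected subgradient update of the
# energy multipliers lambda_i in a dual-based approach. solve_primal and
# energy_slack are placeholders for the per-iteration primal minimization
# and for the slack q*B_i - (encoding + transmission + reception power)
# of node i, as it appears inside the Lagrangian above.

def dual_loop(nodes, solve_primal, energy_slack, iterations=100):
    lam = {i: 1.0 for i in nodes}                 # initial multipliers (assumed)
    for k in range(1, iterations + 1):
        primal = solve_primal(lam)                # minimize the Lagrangian over R, x, Ps, q
        step = 1.0 / k                            # assumed diminishing step size
        for i in nodes:
            g = energy_slack(i, primal)           # subgradient w.r.t. lambda_i
            # If node i overspends its budget (negative slack), lambda_i grows;
            # the max(...) projects the multiplier back onto lambda_i >= 0.
            lam[i] = max(0.0, lam[i] - step * g)
    return lam

# Toy usage with dummy placeholders (made-up slack values):
dummy_primal = lambda lam: None
dummy_slack = lambda i, primal: 0.5 - 0.1 * i
print(dual_loop(range(3), dummy_primal, dummy_slack, iterations=10))
\end{verbatim}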