+To make this article self-contained, this section recalls
+the basics of the Syndrome-Trellis Codes (STC).
Let
$x=(x_1,\ldots,x_n)$ be the $n$-bit cover vector of the image $X$,
-$m$ be the message to embed and
+$m$ be the message to embed, and
$y=(y_1,\ldots,y_n)$ be the $n$-bit stego vector.
-The usual additive embbeding impact of replacing $x$ by $y$ in $X$
+The usual additive embedding impact of replacing $x$ by $y$ in $X$
is given by a distortion function
-$D_X(x,y)= \Sigma_{i=1}^n \rho_X(i,x,y)$ where the function $\rho_X$
-expressed the cost of replacing $x_i$ by $y_i$ in $X$.
+$D_X(x,y)= \sum_{i=1}^n \rho_X(i,x,y)$, where the function $\rho_X$
+expresses the cost of replacing $x_i$ by $y_i$ in $X$.
Let us consider that $x$ is fixed:
for instance, the vector of the LSBs of the image edge pixels.
The objective is thus to find $y$ that minimizes $D_X(x,y)$.
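+Note that, under the classical assumption that $\rho_X(i,x,y)$ reduces to a
+fixed cost $\rho_i$ when $y_i \neq x_i$ and to $0$ otherwise, this distortion is
+simply evaluated by the following illustrative sketch:
+\begin{verbatim}
+def additive_distortion(x, y, rho):
+    # D_X(x, y): sum of the costs of the positions where y differs from x
+    return sum(rho[i] for i in range(len(x)) if x[i] != y[i])
+\end{verbatim}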
Hamming embedding proposes a solution to this problem.
-Some steganographic
+This is why some steganographic
schemes~\cite{DBLP:conf/ih/Westfeld01,DBLP:conf/ih/KimDR06,DBLP:conf/mmsec/FridrichPK07} are based on this binary embedding.
Furthermore, this code provides a vector $y$ such that $Hy$ is equal to
$m$ for a given binary matrix $H$.
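+In other words, the embedding problem can be summarized as computing
+$$
+y^{\ast} = \arg\min_{y \,:\, Hy=m} D_X(x,y).
+$$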
Let us explain this embedding on a small illustrative example where
-$\rho_X(i,x,y)$ is identically equal to 1,
-$m$ and $x$ are respectively a 3 bits column
+$\rho_X(i,x,y)$ is constant and equal to 1,
+whereas $m$ and $x$ are respectively a 3-bit column
vector and a 7-bit column vector.
Let then $H$ be the binary Hamming matrix
$$
H = \left( \begin{array}{ccccccc}
0 & 0 & 0 & 1 & 1 & 1 & 1 \\
0 & 1 & 1 & 0 & 0 & 1 & 1 \\
1 & 0 & 1 & 0 & 1 & 0 & 1
\end{array} \right),
$$
whose columns are all the non-null binary words of length 3.
In this algebra, the sum and the product respectively correspond to
the exclusive \emph{or} and to the \emph{and} Boolean operators.
If $Hx$ is already equal to $m$, nothing has to be changed and $x$ can be sent.
-Otherwise we consider the difference $\delta = d(m,Hx)$ which is expressed
+Otherwise we consider the difference $\delta = d(m,Hx)$, which is expressed
as a vector:
$$
\delta = \left( \begin{array}{c}
\delta_1 \\
\delta_2 \\
\delta_3
\end{array} \right)
\textrm{ where $\delta_i$ is 0 if $m_i = (Hx)_i$ and 1 otherwise.}
$$
-Let us thus consider the $j$th column of $H$ which is equal to $\delta$.
-We denote by $\overline{x}^j$ the vector we obtain by
-switching the $j$th component of $x$,
+Let us thus consider the $j$-th column of $H$, which is equal to $\delta$.
+We denote by $\overline{x}^j$ the vector obtained by
+switching the $j$-th component of $x$,
that is, $\overline{x}^j = (x_1 , \ldots, \overline{x_j},\ldots, x_n )$.
It is not hard to see that if $y$ is $\overline{x}^j$, then
$m = Hy$.
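+As a purely illustrative instance of this procedure (the vectors below are chosen
+arbitrarily), take, with the matrix $H$ given above,
+$$
+x = (1,0,1,1,0,0,1)^T \textrm{ and } m = (1,1,0)^T.
+$$
+Then $Hx = (0,0,1)^T$, so that $\delta = (1,1,1)^T$, which is the seventh column of $H$.
+Switching the seventh bit of $x$ gives $y=\overline{x}^{7}=(1,0,1,1,0,0,0)^T$, and one can
+check that $Hy = (1,1,0)^T = m$, at the price of a single modification.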
-It is then possible to embed 3 bits in only 7 LSB of pixels by modifying
-1 bit at most.
-In the general case, when comunicating $n$ message bits in
-$2^n-1$ pixels needs $1-1/2^n$ average changes.
+It is then possible to embed 3 bits in only 7 LSBs of pixels by modifying
+at most 1 bit.
+In the general case, communicating $n$ message bits in
+$2^n-1$ pixels requires $1-1/2^n$ changes on average (e.g., $7/8$ of a change for $n=3$).
-Unfortunately,
-
-
-for any given $H$, finding $y$ that solves $Hy=m$ and that
-that minimizes $D_X(x,y)$ has exponential complexity with respect to $n$.
-The Syndrome-Trellis Codes (STC)
-presented by Filler et al. in~\cite{DBLP:conf/mediaforensics/FillerJF10}
+Unfortunately, for any given $H$, finding $y$ that solves $Hy=m$ and
+minimizes $D_X(x,y)$ has exponential complexity with respect to $n$.
+The Syndrome-Trellis Codes
+presented by Filler \emph{et al.} in~\cite{DBLP:conf/mediaforensics/FillerJF10}
are a practical solution to this complexity issue. Thanks to this contribution,
-the solving algorithm has a linear complexity with resspect to $n$.
+the solving algorithm has linear complexity with respect to $n$.
-First of all, Filler et al. compute the matrix $H$
+First of all, Filler \emph{et al.} compute the matrix $H$
by placing copies of a small sub-matrix $\hat{H}$ of size $h \times w$ next
-to each other and shifted down by one row.
+to each other, each copy being shifted down by one row.
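+For illustration purposes only, with an arbitrary $2 \times 2$ sub-matrix $\hat{H}$
+and three copies of it, $H$ gets the following banded form:
+$$
+\hat{H} = \left( \begin{array}{cc}
+1 & 1 \\
+1 & 0
+\end{array} \right)
+\quad \Rightarrow \quad
+H = \left( \begin{array}{cccccc}
+1 & 1 & 0 & 0 & 0 & 0 \\
+1 & 0 & 1 & 1 & 0 & 0 \\
+0 & 0 & 1 & 0 & 1 & 1 \\
+0 & 0 & 0 & 0 & 1 & 0
+\end{array} \right).
+$$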
Thanks to this special form of $H$, one can represent
every solution of $m=Hy$ as a path through a trellis.
-Next, the process of finding $y$ consists of a forward and a backward part:
+Next, the process of finding $y$ consists of two stages, sketched below: a forward pass and a backward pass.
\begin{enumerate}
-\item Forward construction of the trellis that depends on $\hat{H}$, on $x$, on $m$, and on $\rho$;
-\item Backward determinization of $y$ which minimizes $D$ starting with
-the complete path with minimal weight
+\item Forward construction of the trellis, which depends on $\hat{H}$, on $x$, on $m$, and on $\rho_X$.
+\item Backward determination of $y$ that minimizes $D_X(x,y)$, starting from
+the complete path of minimal weight.
\end{enumerate}
-Let us now give some details about these two parts.
+
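+To make this description more concrete, the following sketch summarizes this
+forward/backward (Viterbi-like) procedure. It is only illustrative and does not
+correspond to the optimized implementation of~\cite{DBLP:conf/mediaforensics/FillerJF10}:
+the names, the encoding of the columns of $\hat{H}$ as integers (first row as least
+significant bit), and the handling of the last rows of $H$ are simplifying choices of ours.
+\begin{verbatim}
+def stc_embed(x, m, rho, hhat_cols, h):
+    # x         : cover bits (length n = w * len(m))
+    # m         : message bits
+    # rho       : embedding costs (same length as x)
+    # hhat_cols : columns of the h x w sub-matrix H_hat, as integers
+    # Returns (y, cost) such that H y = m for the banded matrix H.
+    w, b, n = len(hhat_cols), len(m), len(x)
+    INF = float("inf")
+    nstates = 1 << h
+    wght = [INF] * nstates          # minimal distortion to reach each state
+    wght[0] = 0.0
+    path = [[0] * nstates for _ in range(n)]  # chosen bit per (position, state)
+    k = 0
+    # ---- forward pass: propagate the minimal distortion through the trellis ----
+    for i in range(b):
+        for col in hhat_cols:
+            new = [INF] * nstates
+            for s in range(nstates):
+                w0 = wght[s] + x[k] * rho[k]              # take y_k = 0
+                w1 = wght[s ^ col] + (1 - x[k]) * rho[k]  # take y_k = 1
+                new[s], path[k][s] = (w1, 1) if w1 < w0 else (w0, 0)
+            wght = new
+            k += 1
+        # keep the states agreeing with the message bit, then drop that bit
+        wght = ([wght[2 * s + m[i]] for s in range(nstates // 2)]
+                + [INF] * (nstates // 2))
+    cost = min(wght)
+    state = wght.index(cost)
+    # ---- backward pass: rebuild the optimal y from the stored choices ----
+    y = [0] * n
+    k = n - 1
+    for i in reversed(range(b)):
+        state = 2 * state + m[i]         # undo the pruning of block i
+        for col in reversed(hhat_cols):
+            y[k] = path[k][state]
+            state ^= y[k] * col          # state before cover bit k was processed
+            k -= 1
+    return y, cost
+\end{verbatim}
+Each cover bit is processed in $O(2^h)$ operations, which illustrates the linear
+complexity in $n$ mentioned above.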