We recall in this appendix some state-of-the-art information hiding schemes.
More details can be found in~\cite{Fridrich:2009:SDM:1721894}.
YASS (\emph{Yet Another Steganographic Scheme})~\cite{DBLP:conf/ih/SolankiSM07} is a
steganographic approach dedicated to JPEG covers.
The main idea of this algorithm is to hide data
into $8\times 8$ blocks that are randomly chosen inside larger $B\times B$ blocks
(where $B$ is greater than 8), instead of using the standard
$8\times 8$ grid of JPEG compression.
The self-calibration process commonly embedded into blind steganalysis schemes
is thus confused by this approach.
In~\cite{SSM09}, further variants of YASS have been proposed,
both to enlarge the embedding rate and to improve the
randomization of the block selection step.
More precisely, given a message $m$ to hide and a size $B$, $B \ge 8$,
the YASS algorithm proceeds as follows:
\begin{itemize}
\item computation of $m'$, the Repeat-Accumulate error correcting code of $m$;
\item in each big block of size $B \times B$ of the cover, successively:
\begin{itemize}
\item random selection of an $8 \times 8$ block $b$ w.r.t. a secret key;
\item two-dimensional DCT transformation of $b$ and normalization of its coefficients
w.r.t. a predefined quantization table; the resulting matrix is further referred to as $b'$;
\item embedding of a fragment of $m'$ into some LSBs of $b'$; let $b''$ be the resulting matrix;
\item decompression of $b''$ back into the spatial domain, leading to a new $B \times B$ block.
\end{itemize}
\end{itemize}
The nsF5 algorithm~\cite{DBLP:conf/mmsec/FridrichPK07} extends the F5
algorithm~\cite{DBLP:conf/ih/Westfeld01}.
Let us first take a closer look at this latter algorithm.
First of all, as far as we know, F5 is the first steganographic approach
that solves the problem of leaving a part (often the end)
of the cover unchanged.
To achieve this, a subset of all the LSBs is computed thanks to a
pseudorandom number generator seeded with a user-defined key.
%Such a feature is thus close to ours, which is based on chaos.
Next, this subset is split into blocks of $x$ bits.
The algorithm takes benefit of binary matrix embedding to
increase its efficiency.
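The keyed selection of payload-carrying positions can be sketched as follows; the function name and arguments are illustrative, not F5's actual code:

```python
import random

def select_payload_positions(n_coefficients, n_bits, key):
    """Pick which LSB positions will carry payload, spread over the whole
    cover by a PRNG seeded with the user-defined key (minimal sketch)."""
    rng = random.Random(key)
    return rng.sample(range(n_coefficients), n_bits)
```

The same key yields the same positions on the recipient's side, which is what makes extraction possible.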
Let us explain this embedding on a small illustrative example where
a part $m$ of the message, a 3-bit column vector, has to be embedded into
the vector $x$ of 7 pixel LSBs, a 7-bit column vector.
Let then $H$ be the binary Hamming matrix
$$
H = \left(
\begin{array}{lllllll}
0 & 0 & 0 & 1 & 1 & 1 & 1 \\
0 & 1 & 1 & 0 & 0 & 1 & 1 \\
1 & 0 & 1 & 0 & 1 & 0 & 1
\end{array}
\right).
$$
The objective is to modify $x$ to get $y$ s.t. $m = Hy$.
In this algebra, the sum and the product respectively correspond to
the exclusive \emph{or} and to the \emph{and} Boolean operators.
If $Hx$ is already equal to $m$, nothing has to be changed and $x$ can be sent.
Otherwise we consider the difference $\delta = d(m,Hx)$, which is expressed as
$$
\delta = \left( \begin{array}{l}
\delta_1 \\
\delta_2 \\
\delta_3
\end{array}
\right)
\textrm{ where $\delta_i$ is 0 if $m_i = (Hx)_i$ and 1 otherwise.}
$$
Since $\delta$ is not null, it is equal to some column of $H$;
let us thus consider the $j$th column of $H$, which is equal to $\delta$.
We denote by $\overline{x}^j$ the vector we obtain by
switching the $j$th component of $x$,
that is, $\overline{x}^j = (x_1 , \ldots, \overline{x_j},\ldots, x_n )$.
It is not hard to see that if $y$ is $\overline{x}^j$, then $Hy = m$.
It is then possible to embed 3 bits into only 7 LSBs of pixels by making
on average $1-2^{-3}$ changes.
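The matrix embedding step described above can be sketched as follows, with the same matrix $H$; the function names are illustrative:

```python
# parity-check matrix of the [7,4] Hamming code, as in the text
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(H, x):
    """Compute Hx over GF(2): the sum is XOR, the product is AND."""
    return [sum(h * b for h, b in zip(row, x)) % 2 for row in H]

def matrix_embed(H, x, m):
    """Return y, equal to x up to at most one flipped bit, such that Hy = m."""
    s = syndrome(H, x)
    delta = [a ^ b for a, b in zip(m, s)]
    if not any(delta):
        return list(x)                       # Hx already equals m
    # every nonzero 3-bit vector is a column of H: find the matching column j
    j = next(j for j in range(len(x))
             if [row[j] for row in H] == delta)
    y = list(x)
    y[j] ^= 1                                # switch the jth component of x
    return y
```

A usage example: with $x = (1,0,1,1,0,0,1)^T$ and $m = (0,1,1)^T$, a single bit flip suffices to reach $Hy = m$.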
More generally, the F5 embedding efficiency should theoretically be
$p/(1-2^{-p})$ when $p$ bits are embedded into $2^p-1$ LSBs.
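This quantity can be computed with a one-line helper; the formula $p/(1-2^{-p})$ assumes the usual matrix-embedding count, namely $p$ bits carried by $2^p-1$ LSBs at the cost of $1-2^{-p}$ expected changes:

```python
def f5_embedding_efficiency(p):
    """Theoretical matrix-embedding efficiency: p embedded bits divided by
    the expected number of changes, 1 - 2**-p."""
    return p / (1.0 - 2.0 ** -p)
```

For the example above ($p = 3$), this gives $3/(7/8) = 24/7 \approx 3.43$ embedded bits per change, and the efficiency grows with $p$.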
However, the coefficient resulting from this LSB switch may become zero:
this event is usually referred to as \emph{shrinkage}.
In that case, the recipient cannot determine
whether the coefficient was $-1$ or $+1$ and has been changed to 0 by the algorithm, or
whether it was already equal to 0.
The F5 scheme solves this problem first by defining the LSB
with the following (not even) function:
$$
\textit{LSB}(x) = \left\{
\begin{array}{l}
1 - x \mod 2 \textrm{ if } x< 0, \\
x \mod 2 \textrm{ otherwise.}
\end{array}
\right.
$$
Next, if the coefficient has to be changed to 0, the same message bits
are re-embedded in the next group of $x$ coefficient LSBs.
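This non-standard LSB can be transcribed directly from the definition above (the function name is hypothetical):

```python
def f5_lsb(coefficient):
    """LSB as (re)defined by F5: inverted for negative coefficients, so that
    decrementing the absolute value of a nonzero coefficient flips its LSB."""
    if coefficient < 0:
        return 1 - coefficient % 2
    return coefficient % 2
```

Note that the function is indeed not even: $-1$ and $+1$ get different LSBs.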
The nsF5 scheme focuses on the Hamming coding and ad hoc shrinkage
removal steps. It replaces them with a \emph{wet paper code} approach
that is based on a random binary matrix. More precisely,
let $D$ be a random binary matrix of size $x \times n$ without replicated nor
null columns: consider for instance a subset of $\{1, \ldots, 2^x-1\}$ of cardinality $n$
and write its elements as binary numbers. The subset is generated thanks to a PRNG
seeded with a shared key. In this block of size $x$, one chooses to embed only
$k$ elements of the message $m$. By abuse of notation, the restriction of the message is again called $m$. There thus remain $x-k$ wet indexes/places where
the information must not be stored. Such indexes are generated too with the
keyed PRNG. Let $v$ be defined by the following equation, where $c$ denotes
the vector of cover LSBs and where $v$ has to be null at the wet indexes:
$$
D \cdot v = m \oplus D \cdot c .
$$
This equation may be solved by Gaussian reduction or by other more efficient
algorithms. If a solution exists, one obtains the list of indexes to modify
in the cover. The nsF5 scheme implements such an optimized algorithm,
namely the LT codes.
Basically, the MMx algorithm~\cite{DBLP:conf/ih/KimDR06} embeds the message
into a selected set of LSBs of cover coefficients using Hamming
codes, as the F5 scheme does. However,
instead of reducing as much as possible the number of modified elements,
this scheme aims at reducing the embedding impact. To achieve this, it allows
more than one element to be modified if this decreases the distortion.
Let us start again with an example with a $[7,4]$ Hamming code, \textit{i.e.},
let us embed 3 bits into 7 DCT coefficients, $D_1, \ldots, D_7$.
Without going into details, let $\rho_1, \ldots, \rho_7$ be the embedding impacts of
modifying the coefficients $D_1, \ldots, D_7$ (see~\cite{DBLP:conf/ih/KimDR06}
for a formal definition of $\rho$). Modifying the element at index $j$
leads to a distortion equal to $\rho_j$. However, instead of switching
the value at index $j$, one should consider finding other columns
of $H$, say $j_1$ and $j_2$ for instance, s.t. their sum
is equal to the $j$th column, and compare $\rho_j$ with
$\rho_{j_1} + \rho_{j_2}$. If one of these sums is less than $\rho_j$,
the sender has to change these coefficients instead of the $j$th one.
The number of searched indexes (2 in the previous example) gives its name
to the algorithm. For instance in MM3, one checks whether the message can be
embedded by modifying each time 3 pixels or fewer.
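The search for a cheaper combination of columns can be sketched as follows for the MM2 case; the function name and the $\rho$ values are illustrative, and ties are resolved by keeping the first candidate found:

```python
from itertools import combinations

def mm2_choice(H, j, rho):
    """MM2-style selection sketch: instead of flipping only the coefficient j
    prescribed by Hamming matrix embedding, look for two columns j1, j2 whose
    XOR equals column j and whose total impact rho[j1] + rho[j2] is smaller."""
    n = len(H[0])
    col = lambda k: [row[k] for row in H]
    target = col(j)
    best = ([j], rho[j])                      # the single-change option
    for j1, j2 in combinations(range(n), 2):
        if [a ^ b for a, b in zip(col(j1), col(j2))] == target:
            cost = rho[j1] + rho[j2]
            if cost < best[1]:
                best = ([j1, j2], cost)
    return best
```

When the single flip is already the cheapest option, the function falls back to it, matching the behavior described above.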
The HUGO~\cite{DBLP:conf/ih/PevnyFB10} steganographic scheme is mainly designed to minimize
the distortion caused by embedding. To achieve this, it is first based on an
image model given as SPAM~\cite{DBLP:journals/tifs/PevnyBF10}
features and next integrates an image
correction step to further reduce distortion.
What follows discusses these two steps.
The former step first computes the SPAM features. Such computations
synthesize the probabilities that the difference
between consecutive horizontal (resp. vertical, diagonal) pixels
belongs to a set of pixel values which are close to the current pixel value
and whose radius is a parameter of the approach.
A Fisher linear discriminant method then defines the radius and
chooses among the directions (horizontal, vertical\ldots) of analyzed pixels
the one that gives the best separator for detecting embedding changes.
With such instantiated coefficients, HUGO can synthesize the embedding cost
as a function $D(X,Y)$ that evaluates the distortion between $X$ and $Y$.
Then HUGO computes the matrix of values
$$\rho_{i,j} = \max\left(D(X,X^{(i,j)+}), D(X,X^{(i,j)-})\right)$$
such that $X^{(i,j)+}$ (resp. $X^{(i,j)-}$) is the cover image $X$ where
the $(i,j)$th pixel has been increased (resp. decreased) by 1.
The order in which pixels are modified is critical: HUGO, surprisingly,
modifies pixels in decreasing order of $\rho_{i,j}$.
Starting with $Y=X$, it increases or decreases the $(i,j)$th pixel of $Y$
according to which of
$D(Y,Y^{(i,j)+})$ and $D(Y,Y^{(i,j)-})$ is minimal.
The matrix $Y$ is thus updated at each round.
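This greedy correction pass can be sketched as a toy loop; here `direction_cost(Y, i, j, s)` is only a stand-in for HUGO's SPAM-based distortion $D(Y, Y^{(i,j)s})$, and both the signature and the selection of the pixels to modify are illustrative assumptions:

```python
def hugo_correction_pass(cover, rho, to_modify, direction_cost):
    """Toy sketch of HUGO's correction pass: visit the pixels selected for
    modification in decreasing order of rho[i][j] and, for each of them,
    apply +1 or -1 according to which choice currently costs less.
    The matrix Y is updated at each round, so later decisions see
    earlier changes."""
    Y = [row[:] for row in cover]
    order = sorted(to_modify, key=lambda p: rho[p[0]][p[1]], reverse=True)
    for i, j in order:
        step = min((+1, -1), key=lambda s: direction_cost(Y, i, j, s))
        Y[i][j] += step
    return Y
```

With a toy cost that simply pulls pixel values toward a target, the pass picks the sign that lowers the current cost at each visited pixel.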