Most steganographic schemes are designed so as to minimize a distortion function. In the examples of the previous chapter, these distortion functions are built with the objective of preserving the characteristics of the image. It is easy to understand that in uniform regions or along clearly defined edges, even a minor modification of the image is easily detectable. On the contrary, textures, noise, or chaotic regions are difficult to model. The characteristics of images in which such areas have been modified thus remain similar to those of the original cover images. These regions are characterized by highly perturbed level curves. This chapter presents a new distortion function for steganography that is based on second order derivatives, the usual mathematical tool for level curves.
Provided that a function $P$ can be defined that associates to each pixel $(x,y)$ its value $P(x,y)$, the pixels for which the second order derivatives of $P$ have high values are good candidates to embed a bit of the message. However, such a function $P$ is only known discretely, \textit{i.e.}, on a finite set of points. Its first and second order derivatives can therefore not be evaluated mathematically. At best, one can build a function that approximates $P$ on this set of pixels. Ranking the pixels according to the Hessian matrix (\textit{i.e.}, the matrix of second order derivatives) is then not trivial, since this matrix contains many values for a single given pixel.
This chapter first shows how approximations of the first and second order derivatives of a digital image can be obtained (Section~\ref{sec:gradient}). Two proposals of second order derivatives are then given and proven (Section~\ref{sec:second} and Section~\ref{sec:poly}). An adaptation of an existing distortion function is studied in Section~\ref{sec:distortion}, and experiments are presented in Section~\ref{sec:experiments}.
\section{Derivatives in an Image}\label{sec:gradient}

This section first recalls the links between level curves, gradient, and Hessian matrix (Section~\ref{sub:general}). It next analyses them using kernels from signal theory (Section~\ref{sub:class:1} and Section~\ref{sub:class:2}).

\subsection{Hessian Matrix}\label{sub:general}
Let us consider that an image can be seen as a numerical function $P$ that associates a value $P(x,y)$ to each pixel of coordinates $(x,y)$. The variations of this function in $(x_0,y_0)$ can be evaluated thanks to its gradient $\nabla{P}$, which is the vector whose two components are the partial derivatives in $x$ and in $y$ of $P$:
\[
\nabla{P}(x_0,y_0) = \left(\frac{\partial P}{\partial x}(x_0,y_0),\frac{\partial P}{\partial y}(x_0,y_0)\right).
\]
In the context of two variables, the gradient vector points to the direction where the function has the highest increase. Pixels with close values thus follow a level curve that is orthogonal to this direction of highest increase.
The variations of the gradient vector are expressed in the Hessian matrix $H$ of the second order partial derivatives of $P$:
\[
H = \begin{bmatrix}
\dfrac{\partial^2 P}{\partial x^2} &
\dfrac{\partial^2 P}{\partial x \partial y} \\
\dfrac{\partial^2 P}{\partial y \partial x} &
\dfrac{\partial^2 P}{\partial y^2}
\end{bmatrix}.
\]
In one pixel $(x_0,y_0)$, the larger the absolute values of this matrix are, the more the gradient varies around $(x_0,y_0)$. We are then left to evaluate such a Hessian matrix.
This task is not as easy as it appears, since natural images are not defined by differentiable functions from $\R^2$ to $\R$. The following subsections present various approaches to approximate these second order derivatives.
\subsection{Classical Gradient Image Approaches}\label{sub:class:1}
In the context of image values, the most widely used approaches to evaluate gradient vectors are the well-known ``Sobel'', ``Prewitt'', ``Central Difference'', and ``Intermediate Difference'' ones.
\begin{table}[ht]
\caption{Kernels of usual image gradient operators\label{table:kernels:usual}}
\centering
\begin{tabular}{|c|c|c|}
\hline
Name& Sobel & Prewitt \\
\hline
Kernel & $\textit{Ks}= \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} $ &
$\textit{Kp}= \begin{bmatrix} -1 & 0 & +1 \\ -1 & 0 & +1 \\ -1 & 0 & +1 \end{bmatrix} $\\
\hline
Name & Central & Intermediate \\
& Difference &Difference \\
\hline
Kernel & $\textit{Kc}= \begin{bmatrix} 0&0&0 \\ -\dfrac{1}{2} & 0 & +\dfrac{1}{2} \\ 0&0&0 \end{bmatrix} $ &
$\textit{Ki}= \begin{bmatrix} 0&0&0 \\ 0 & -1 & 1 \\ 0&0&0 \end{bmatrix} $ \\
\hline
\end{tabular}
\end{table}
Each of these approaches applies a convolution product $*$ between a kernel $K$ (recalled in Table~\ref{table:kernels:usual}) and a $3\times 3$ window of pixel values $A$. The result $A * K$ is an evaluation of the horizontal gradient, \textit{i.e.}, $\dfrac{\partial P}{\partial x}$, expressed as a matrix in $\R$. Let $K\rl$ be the result of a $\pi/2$ rotation applied on $K$. The vertical gradient $\dfrac{\partial P}{\partial y}$ is similarly obtained by computing $A * K\rl$, which is again expressed as a matrix in $\R$.

The two elements of the first line of the Hessian matrix are the result of applying the horizontal gradient calculus first on $\dfrac{\partial P}{\partial x}$ and next on $\dfrac{\partial P}{\partial y}$.
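To make these convolutions concrete, a minimal sketch is given below (Python with NumPy and SciPy assumed; the helper name is ours and does not come from the published implementation).
\begin{verbatim}
# Sketch: first line of the Hessian matrix with the Sobel kernel.
import numpy as np
from scipy.signal import convolve2d

Ks = np.array([[-1., 0., 1.],
               [-2., 0., 2.],
               [-1., 0., 1.]])          # horizontal Sobel kernel K
Ks_rot = np.rot90(Ks)                   # pi/2 rotation: vertical kernel

def hessian_first_line(P):
    Px = convolve2d(P, Ks, mode="same", boundary="symm")      # dP/dx
    Py = convolve2d(P, Ks_rot, mode="same", boundary="symm")  # dP/dy
    Pxx = convolve2d(Px, Ks, mode="same", boundary="symm")    # d2P/dx2
    Pxy = convolve2d(Py, Ks, mode="same", boundary="symm")    # d2P/(dx dy)
    return Pxx, Pxy
\end{verbatim}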
Let us study these Hessian matrices in the next section.

\subsection{Hessian Matrices induced by Gradient Image Approaches}\label{sub:class:2}

First of all, it is well known that $\dfrac{\partial^2 P}{\partial x \partial y}$ is equal to $\dfrac{\partial^2 P}{\partial y \partial x}$ if the approach that computes the gradient and the one that evaluates the Hessian matrix are the same.
For instance, in the Sobel approach, it is easy to verify that the calculus of $\dfrac{\partial^2 P}{\partial x \partial y}$ and the one of $\dfrac{\partial^2 P}{\partial y \partial x}$ are both the result of a convolution product with the kernel $\textit{Ks}''_{xy}$ given in Table~\ref{table:hessian:usual}. This table summarizes the kernels that allow to respectively compute $\dfrac{\partial^2 P}{\partial x^2}$ and $\dfrac{\partial^2 P}{\partial x \partial y}$ with a convolution product, for each of the usual image gradient operators.
\begin{table}[ht]
\caption{Kernels of second order gradient operators\label{table:hessian:usual}}
\centering
\def\arraystretch{1.2}
\begin{tabular}{|c|c|}
\hline
Sobel & Prewitt \\
\hline
$\textit{Ks}''_{x^2}= \begin{bmatrix}
1 & 0 & -2 & 0 & 1 \\
4 & 0 & -8 & 0 & 4 \\
6 & 0 & -12 & 0 & 6 \\
4 & 0 & -8 & 0 & 4 \\
1 & 0 & -2 & 0 & 1
\end{bmatrix}$ &
$\textit{Kp}''_{x^2}= \begin{bmatrix}
1 & 0 & -2 & 0 & 1 \\
2 & 0 & -4 & 0 & 2 \\
3 & 0 & -6 & 0 & 3 \\
2 & 0 & -4 & 0 & 2 \\
1 & 0 & -2 & 0 & 1
\end{bmatrix}$ \\
\hline
$\textit{Ks}''_{xy}= \begin{bmatrix}
-1 & -2 & 0 & 2 & 1 \\
-2 & -4 & 0 & 4 & 2 \\
0 & 0 & 0 & 0 & 0 \\
2 & 4 & 0 & -4 & -2 \\
1 & 2 & 0 & -2 & -1
\end{bmatrix}$ &
$\textit{Kp}''_{xy}= \begin{bmatrix}
-1 & -1 & 0 & 1 & 1 \\
-1 & -1 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & -1 & -1 \\
1 & 1 & 0 & -1 & -1
\end{bmatrix}$ \\
\hline
Central & Intermediate \\
Difference &Difference \\
\hline
$\textit{Kc}''_{x^2}= \begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
\dfrac{1}{4} & 0 & -\dfrac{1}{2} & 0 & \dfrac{1}{4} \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}$ &
$\textit{Ki}''_{x^2}= \begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & -2 & 1 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}$ \\
\hline
$\textit{Kc}''_{xy}= \begin{bmatrix}
-\dfrac{1}{4} & 0 & \dfrac{1}{4} \\
0 & 0 & 0 \\
\dfrac{1}{4} & 0 & -\dfrac{1}{4}
\end{bmatrix}$ &
$\textit{Ki}''_{xy}= \begin{bmatrix}
0 & -1 & 1 \\
0 & 1 & -1 \\
0 & 0 & 0
\end{bmatrix}$ \\
\hline
\end{tabular}
\end{table}
The Sobel kernel $\textit{Ks}''_{x^2}$ allows to detect whether the central pixel belongs to a ``vertical'' edge, even if this one is noisy, by considering its vertical neighbours. The introduction of these vertical neighbours in this kernel is meaningful in the context of finding edges, but not very accurate when the objective is to precisely find the level curves of the image. Moreover, all the pixels that are in the second and the fourth columns of this kernel are associated with null coefficients and are thus ignored. The Prewitt kernel has similar drawbacks in this context.
The Central Difference kernel $\textit{Kc}''_{x^2}$ is not influenced by the vertical neighbours of the central pixel and is thus more accurate here. However, the kernel $\textit{Kc}''_{xy}$ in turn loses the values of the pixels that are vertically and horizontally aligned with the central one.
Finally, the Intermediate Difference kernel $\textit{Ki}''_{x^2}$ shifts to the left the value of the horizontal variations of $\dfrac{\partial P}{\partial x}$: the central pixel $(0,0)$ exactly receives the value $\dfrac{P(0,2)-P(0,1)}{1} - \dfrac{P(0,1)-P(0,0)}{1}$, which is an approximation of $\dfrac{\partial^2 P}{\partial x^2}$ in $(0,1)$ and not in $(0,0)$. Furthermore, the Intermediate Difference kernel $\textit{Ki}''_{xy}$ only deals with pixels in the upper right corner, losing all the other information.
Due to these drawbacks, we are then left to design another approach that finds the level curves with a higher accuracy.
\section{Second Order Kernels for Accurate Level Curves}\label{sec:second}

This step aims at finding accurate level curve variations in an image. We do not restrict the kernels to a fixed size (\textit{e.g.}, $3\times3$ or $5 \times 5$ as in the aforementioned schemes). This step is thus defined with kernels of size $(2n+1)\times (2n+1)$, $n \in \{1,2,\dots,N\}$, where $N$ is a parameter of the approach.
The horizontal gradient variations are thus captured thanks to the $(2n+1)\times (2n+1)$ square kernels $\textit{Ky}_{x^2}''$ defined by
\[
\def\arraystretch{1.4}
\textit{Ky}_{x^2}'' =
\left[
\begin{array}{ccccccccc}
0 & & & & \dots& & & & 0 \\
\vdots & & & & & & & & \vdots \\
0 & & & & \dots & & & & 0 \\
\dfrac{1}{2n}& 0 & \dots & 0 & -\dfrac{2}{2n} & 0 & \dots & 0& \dfrac{1}{2n} \\
0 & & & & \dots& & & & 0 \\
\vdots & & & & & & & & \vdots \\
0 & & & & \dots & & & & 0
\end{array}
\right].
\]
When the convolution product is applied on a $(2n+1)\times(2n+1)$ window, the central pixel receives the value
$\dfrac{1}{2}\left(\dfrac{P(0,n)-P(0,0)}{n} - \dfrac{P(0,0)-P(0,-n)}{n}\right)$, which indeed measures the variation of the gradient around the central pixel. This proves that this calculus is a correct approximation of $\dfrac{\partial^2 P}{\partial x^2}$.
When $n$ is 1, this kernel is a centered version of the horizontal Intermediate Difference kernel $\textit{Ki}_{x^2}''$ modulo a multiplication by $1/2$. When $n$ is 2, this kernel is equal to $\textit{Kc}_{x^2}''$.
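As an illustration, a minimal sketch of how such a kernel can be generated for an arbitrary $n$ is given below (Python with NumPy assumed; the function name \texttt{ky\_xx} is ours and does not come from the published implementation).
\begin{verbatim}
# Sketch: the (2n+1)x(2n+1) kernel Ky''_{x^2}. Only its central row is
# nonzero, with 1/(2n) at both ends and -2/(2n) at the centre.
import numpy as np

def ky_xx(n):
    K = np.zeros((2 * n + 1, 2 * n + 1))
    K[n, 0] = K[n, 2 * n] = 1.0 / (2 * n)
    K[n, n] = -2.0 / (2 * n)
    return K

# ky_xx(1) is the centred Intermediate Difference kernel times 1/2,
# and ky_xx(2) coincides with Kc''_{x^2}.
\end{verbatim}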
The vertical gradient variations are again obtained by applying a $\pi/2$ rotation to each horizontal kernel $\textit{Ky}_{x^2}''$.

The diagonal gradient variations are obtained thanks to the $(2n+1)\times (2n+1)$ square kernels $\textit{Ky}_{xy}''$ defined by
\[
\def\arraystretch{1.4}
\textit{Ky}_{xy}'' = \dfrac{1}{4}
\left[
\begin{array}{ccccccccc}
\frac{1}{n^2}& \dots & \frac{1}{2n} & \frac{1}{n} & 0 & -\frac{1}{n}&-\frac{1}{2n} & \dots & -\frac{1}{n^2} \\
\vdots & & & & & & & & \vdots \\
0 & & & & \dots& & & & 0 \\
\vdots & & & & & & & & \vdots \\
-\frac{1}{n^2}& \dots & -\frac{1}{2n} & -\frac{1}{n} & 0 & \frac{1}{n}& \frac{1}{2n} & \dots & \frac{1}{n^2}
\end{array}
\right].
\]
When $n$ is 1, $\textit{Ky}_{xy}''$ is equal to the kernel $\textit{Kc}_{xy}''$. Indeed, around the central pixel, the average vertical variations of the horizontal variations are
\[
\begin{array}{l}
\dfrac{1}{4}\left[
\left((P(0,1)-P(0,0))-(P(1,1)-P(1,0))\right)+ \right.\\
\quad \left((P(-1,1)-P(-1,0))-(P(0,1)-P(0,0))\right)+\\
\quad \left((P(0,0)-P(0,-1))-(P(1,0)-P(1,-1))\right)+\\
\quad \left. \left((P(-1,0)-P(-1,-1))-(P(0,0)-P(0,-1))\right)\right]\\
= \dfrac{1}{4}
\left[ P(1,-1) -P(1,1) - P(-1,-1) + P(-1,1)\right],
\end{array}
\]
which is the result of the convolution product with $\textit{Ky}_{xy}''$ when $n$ is 1.
Let us now consider any number $n$, $1 \le n \le N$. Let us first investigate the vertical variations related to the horizontal vector $\vec{P_{0,0}P_{0,1}}$ (respectively $\vec{P_{0,-1}P_{0,0}}$) of length 1 that starts from (resp. that points to) $(0,0)$. As in the case $n=1$, there are 2 new vectors of length 1, namely $\vec{P_{n,0}P_{n,1}}$ and $\vec{P_{-n,0}P_{-n,1}}$ (resp. $\vec{P_{n,-1}P_{n,0}}$ and $\vec{P_{-n,-1}P_{-n,0}}$), that are vertically aligned with $\vec{P_{0,0}P_{0,1}}$ (resp. with $\vec{P_{0,-1}P_{0,0}}$).
The vertical variation is now equal to $n$. Following the case where $n$ is 1 to compute the average variation, the coefficients of the first and last lines around the central vertical line are thus, from left to right, $\dfrac{1}{4n}$ and $\dfrac{-1}{4n}$ in the first line, and $\dfrac{-1}{4n}$ and $\dfrac{1}{4n}$ in the last one.
Cases are similar with the vectors $\vec{P_{0,0}P_{0,2}}$, \ldots, $\vec{P_{0,0}P_{0,n}}$, which respectively lead to the coefficients $-\dfrac{1}{4 \times 2n}$, \ldots, $-\dfrac{1}{4 \times n \times n}$; the proof is omitted.
Finally, let us consider the vector $\vec{P_{0,0}P_{0,n}}$ and its vertical variations when $\delta y$ is $n-1$. As in the case where $n=1$, we thus obtain the coefficients $\dfrac{1}{4 \times (n-1)n}$ and $-\dfrac{1}{4 \times (n-1)n}$ (resp. $-\dfrac{1}{4 \times (n-1)n}$ and $\dfrac{1}{4 \times (n-1)n}$) in the second line (resp. in the penultimate line), since the vector has length $n$ and $\delta y$ is $n-1$. Coefficients in the other lines are similarly obtained and the proof is thus omitted.
We are then left to compute an approximation of the partial second order derivatives $\dfrac{\partial^2 P}{\partial x^2}$, $\dfrac{\partial^2 P}{\partial y^2}$, and $\dfrac{\partial^2 P}{\partial x \partial y}$ thanks to convolution products with the kernels $\textit{Ky}_{x^2}''$, $\textit{Ky}_{y^2}''$, and $\textit{Ky}_{xy}''$ respectively. However, the size of each of these kernels varies from $3\times3$ to $(2N+1)\times (2N+1)$. Let us explain the approach on the first partial derivative; the others can be immediately deduced.
Since the objective is to detect large variations, the second order derivative is approximated as the maximum of these approximations. More formally, let $n$, $1 \le n \le N$, be an integer number and $\dfrac{\partial^2 P}{\partial x^2}_n$ be the result of applying the kernel $\textit{Ky}_{x^2}''$ of size $(2n+1)\times (2n+1)$. The derivative $\dfrac{\partial^2 P}{\partial x^2}$ is then defined by
\[
\dfrac{\partial^2 P}{\partial x^2}
= \max\left\{
\abs{\dfrac{\partial^2 P}{\partial x^2}_1}, \dots, \abs{\dfrac{\partial^2 P}{\partial x^2}_N}
\right\}.
\]
The same iterative approach is applied to compute the approximations of $\dfrac{\partial^2 P}{\partial y \partial x}$ and of $\dfrac{\partial^2 P}{\partial y^2}$.
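For instance, reusing the \texttt{ky\_xx} helper sketched above, this maximum over the scales $1 \le n \le N$ could be computed as follows (again an illustrative sketch assuming NumPy and SciPy, not the published implementation).
\begin{verbatim}
# Sketch: pixel-wise maximum of |d2P/dx2| over the scales n = 1..N.
import numpy as np
from scipy.signal import convolve2d

def second_derivative_xx(P, N):
    estimates = [np.abs(convolve2d(P, ky_xx(n), mode="same",
                                   boundary="symm"))
                 for n in range(1, N + 1)]
    return np.max(np.stack(estimates), axis=0)
\end{verbatim}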
The next section studies the suitability of approximating the second order derivatives when considering an image as a polynomial.

\section{Polynomial Interpolation of Images for Hessian Matrix Computation}\label{sec:poly}
Let $P(x,y)$ be the discrete value of the pixel $(x,y)$ in the image. Let $n$, $1 \le n \le N$, be an integer such that the objective is to find a polynomial interpolation on the $(2n+1)\times(2n+1)$ window whose central pixel has index $(0,0)$. There exists a unique polynomial $L : \R\times \R \to \R$ of degree at most $2n$ in each variable such that $L(x,y)=P(x,y)$ for each pixel $(x,y)$ in this window.
Such a polynomial is defined by
\[
L(x,y) =
\sum_{i=-n}^{n}
\sum_{j=-n}^{n}
P(i,j)
\left(
\prod_{\stackrel{-n\leq j'\leq n}{j'\neq j}}
\frac{y-j'}{j-j'}
\right)
\left(
\prod_{\stackrel{-n\leq i'\leq n}{i'\neq i}}
\frac{x-i'}{i-i'}
\right).
\]
It is not hard to prove that the first order horizontal derivative of the polynomial $L(x,y)$ is
\[
\dfrac{\partial L}{\partial x} =
\sum_{i=-n}^{n}
\sum_{j=-n}^{n}
P(i,j)
\left(
\prod_{\stackrel{-n\leq j'\leq n}{j'\neq j}}
\frac{y-j'}{j-j'}
\right)
\left(
\sum_{\stackrel{-n\leq i'\leq n}{i'\neq i}}
\frac{1}{i-i'}
\prod_{\stackrel{-n\leq i''\leq n}{i''\neq i,i'}}
\frac{x-i''}{i-i''}
\right)
\]
\noindent and thus to deduce that the second order ones are
\begin{equation}
\dfrac{\partial^2 L}{\partial x^2} =
\sum_{i=-n}^{n}
\sum_{j=-n}^{n}
P(i,j)
\left(
\prod_{\stackrel{-n\leq j'\leq n}{j'\neq j}}
\frac{y-j'}{j-j'}
\right)
\left(
\sum_{\stackrel{-n\leq i'\leq n}{i'\neq i}}
\frac{1}{i-i'}
\sum_{\stackrel{-n\leq i''\leq n}{i''\neq i,i'}}
\frac{1}{i-i''}
\prod_{\stackrel{-n\leq i'''\leq n}{i'''\neq i,i',i''}}
\frac{x-i'''}{i-i'''}
\right)
\label{eq:deriv:poly:x2}
\end{equation}
\begin{equation}
\dfrac{\partial^2 L}{\partial y \partial x} =
\sum_{i=-n}^{n}
\sum_{j=-n}^{n}
P(i,j)
\left(
\sum_{\stackrel{-n\leq j'\leq n}{j'\neq j}}
\frac{1}{j-j'}
\prod_{\stackrel{-n\leq j''\leq n}{j''\neq j, j'}}
\frac{y-j''}{j-j''}
\right)
\left(
\sum_{\stackrel{-n\leq i'\leq n}{i'\neq i}}
\frac{1}{i-i'}
\prod_{\stackrel{-n\leq i''\leq n}{i''\neq i, i'}}
\frac{x-i''}{i-i''}
\right)
\label{eq:deriv:poly:yx}
\end{equation}
These second order derivatives are computed for each moving window and are associated to the central pixel, \textit{i.e.}, to the pixel $(0,0)$ inside this window.

Let us first simplify $\dfrac{\partial^2 L}{\partial x^2}$, defined in Equation~(\ref{eq:deriv:poly:x2}), when $(x,y)=(0,0)$. If $j$ is not null, the index $j'$ takes the value $0$ at some point and, since $y=0$, the product
$\prod_{\stackrel{-n\leq j'\leq n}{j'\neq j}}
\frac{y-j'}{j-j'}$ is null too.
In this equation, we thus only have to consider $j=0$. It is obvious that the corresponding product indexed with $j'$ is then equal to 1. This equation can thus be simplified into
\begin{equation}
\dfrac{\partial^2 L}{\partial x^2} =
\sum_{i=-n}^{n}
P(i,0)
\left(
\sum_{\stackrel{-n\leq i'\leq n}{i'\neq i}}
\frac{1}{i-i'}
\sum_{\stackrel{-n\leq i''\leq n}{i''\neq i,i'}}
\frac{1}{i-i''}
\prod_{\stackrel{-n\leq i'''\leq n}{i'''\neq i,i',i''}}
\frac{-i'''}{i-i'''}
\right)
\label{eq:deriv:poly:x2:simpl}
\end{equation}
which is moreover equivalent to
\begin{equation}
\dfrac{\partial^2 L}{\partial x^2} =
\sum_{i=-n}^{n}
P(i,0)
\left(
\sum_{\stackrel{-n\leq i' < i'' \le n}{i',i''\neq i}}
\frac{2}{(i-i')(i-i'')}
\prod_{\stackrel{-n\leq i'''\leq n}{i'''\neq i,i',i''}}
\frac{-i'''}{i-i'''}
\right).
\label{eq:deriv:poly:x2:simpl:2}
\end{equation}
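As a quick sanity check of Equation~(\ref{eq:deriv:poly:x2:simpl:2}) (our own verification), consider $n=1$: for $i=0$ the only admissible pair is $(i',i'')=(-1,1)$ and the inner factor equals $\frac{2}{(0+1)(0-1)}=-2$, whereas for $i=\pm 1$ it equals $1$, the products over $i'''$ being empty. Hence
\[
\dfrac{\partial^2 L}{\partial x^2}(0,0) = P(-1,0) - 2\,P(0,0) + P(1,0),
\]
which is the classical three-point second difference.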
From this equation, the kernel allowing to evaluate the horizontal second order derivative can be computed for any $n$. It is further denoted as $Ko''_{x^2}$. Instances of such kernels for $n=2$, $3$, and $4$ are given in Table~\ref{table:sod:hori:poly}.
\begin{table}[ht]
\caption{Kernels $Ko''_{x^2}$
for second order horizontal derivatives induced
by polynomial interpolation}
\centering
\def\arraystretch{1.4}
\begin{tabular}{|c|c|}
\hline
$n$ & $Ko''_{x^2}$ \\
\hline
$2$ & $\left[\dfrac{-1}{12}, \dfrac{4}{3} , \dfrac{-5}{2}, \dfrac{4}{3}, \dfrac{-1}{12}\right]$ \\
\hline
$3$ & $\left[\dfrac{1}{90}, \dfrac{-3}{20}, \dfrac{3}{2}, \dfrac{-49}{18}, \dfrac{3}{2}, \dfrac{-3}{20}, \dfrac{1}{90}\right]$ \\
\hline
$4$ & $\left[\dfrac{-1}{560}, \dfrac{8}{315}, \dfrac{-1}{5}, \dfrac{8}{5}, \dfrac{-205}{72}, \dfrac{8}{5}, \dfrac{-1}{5}, \dfrac{8}{315}, \dfrac{-1}{560}\right]$\\
\hline
%$5$ & $\left[\dfrac{1}{3150}, \dfrac{-5}{1008}, \dfrac{5}{126}, %\dfrac{-5}{21}, \dfrac{5}{3}, \dfrac{-5269}{1800}, \dfrac{5}{3}, %\dfrac{-5}{21}, \dfrac{5}{126}, \dfrac{-5}{1008}, \dfrac{1}{3150}\right]$\\
\end{tabular}
\label{table:sod:hori:poly}
\end{table}
\begin{table}[ht]
\caption{Kernels for second order diagonal derivatives induced
by polynomial interpolation \label{table:sod:diag:poly}}
\centering
\def\arraystretch{1.5}
\begin{tabular}{|c|c|}
\hline
$n$ & $Ko''_{xy}$ \\
\hline
$1$ & $\begin{bmatrix}
\dfrac{1}{4} & 0 & \dfrac{-1}{4}\\
0 & 0 & 0 \\
\dfrac{-1}{4} & 0 & \dfrac{1}{4}
\end{bmatrix}$ \\
\hline
$2$ & $\begin{bmatrix}
\dfrac{1}{144} & \dfrac{-1}{18} & 0 & \dfrac{1}{18} & \dfrac{-1}{144}\\
\dfrac{-1}{18} & \dfrac{4}{9} & 0 & \dfrac{-4}{9} & \dfrac{1}{18}\\
0 & 0 & 0 & 0 & 0 \\
\dfrac{1}{18} & \dfrac{-4}{9} & 0 & \dfrac{4}{9} & \dfrac{-1}{18}\\
\dfrac{-1}{144} & \dfrac{1}{18} & 0 & \dfrac{-1}{18} & \dfrac{1}{144}
\end{bmatrix}$ \\
\hline
\end{tabular}
\end{table}
From Equation~(\ref{eq:deriv:poly:yx}), the kernels allowing to evaluate the diagonal second order derivatives (\textit{i.e.}, $\dfrac{\partial^2 L}{\partial y \partial x}$) are computed. They are denoted as $Ko''_{xy}$. Table~\ref{table:sod:diag:poly} gives two examples of them, for $n=1$ and $n=2$. Notice that for $n=1$, the kernel $Ko''_{xy}$ is equal to $Kc''_{xy}$.
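The one-dimensional coefficients of $Ko''_{x^2}$ given in Table~\ref{table:sod:hori:poly} can also be recovered numerically by differentiating the Lagrange basis polynomials twice and evaluating the result at the window centre; the following sketch does so (Python with NumPy assumed; the function name is ours, illustrative only).
\begin{verbatim}
# Sketch: coefficients c_i = L_i''(0) of the Lagrange basis polynomials
# over the nodes {-n, ..., n}; they form the row kernel Ko''_{x^2}.
import numpy as np

def ko_xx_row(n):
    nodes = np.arange(-n, n + 1)
    coeffs = []
    for i, xi in enumerate(nodes):
        others = np.delete(nodes, i)
        basis = np.poly(others) / np.prod(xi - others)  # L_i, monomial form
        coeffs.append(np.polyval(np.polyder(basis, 2), 0.0))
    return np.array(coeffs)

# ko_xx_row(2) ~ [-1/12, 4/3, -5/2, 4/3, -1/12], as in the table above.
\end{verbatim}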
\section{Distortion Cost}\label{sec:distortion}

The distortion function has to associate to each pixel $(i,j)$ the cost $\rho_{ij}$ of its modification. The objective is to map a small cost to a pixel when all its second order derivatives are high, and a large cost otherwise.
%\YF{The WOW use the wavelet relative distortion (UNIWARD) as a function to force the embedding operation to the noisy areas and avert the smooth areas.}
In WOW and UNIWARD the distortion function is based on the H\"older norm with
\[
\rho_{ij} = \left(
\abs{\xi_{ij}^h}^{p} +
\abs{\xi_{ij}^v}^{p} +
\abs{\xi_{ij}^d}^{p}
\right)^{-\frac{1}{p}}
\]
where $p$ is a negative number and $\xi_{ij}^h$ (resp. $\xi_{ij}^v$ and $\xi_{ij}^d$) represents the horizontal (resp. vertical and diagonal) suitability. A small suitability in one direction means an inaccurate position to embed a message.
We propose here to adapt such a distortion cost as follows:
\[
\rho_{ij} = \left(
\abs{\dfrac{\partial^2 P}{\partial x^2}(i,j)}^{p} +
\abs{\dfrac{\partial^2 P}{\partial y^2}(i,j)}^{p} +
\abs{\dfrac{\partial^2 P}{\partial y \partial x}(i,j)}^{p}
\right)^{-\frac{1}{p}}.
\]
It is not hard to check that such a function has a large value when at least one of the second order derivatives is null. Otherwise, the larger the derivatives are, the smaller the returned value is.
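A possible implementation of this cost, given the three maps of second order derivatives, is sketched below (Python with NumPy assumed; the small \texttt{eps}, our own addition, avoids an actual division by zero while still assigning a very large cost to null derivatives).
\begin{verbatim}
# Sketch of the adapted Hoelder-norm distortion cost (p is negative).
import numpy as np

def distortion_cost(Pxx, Pyy, Pyx, p=-1.0, eps=1e-10):
    s = (np.abs(Pxx) + eps) ** p \
        + (np.abs(Pyy) + eps) ** p \
        + (np.abs(Pyx) + eps) ** p
    return s ** (-1.0 / p)
\end{verbatim}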
%Practically, it is computed as a double convolution product with Daubechies Wavelet Kernels.
%It is not hard to see that this approach
%does not consider the range of the values. For %instance, if one direction only has large suitability
%absolute values, all the elements $\abs{\xi_{ij}}^{p}$ are so small, and
%suitable pixels cannot be distinguished from inaccurate ones. The distortion function has to
%provide a way to discriminate pixels even when the distribution has small deviation
%Because of each pixel value ranges in $[0,255]$, each first order derivative belongs to
%$[-255,255]$ and thus each second order derivative has a real value in
%Due to the definition of the approximation of $\dfrac{\partial^2 P}{\partial x^2}$,
%this one is a positive or null real number lower than 255.
%The higher the second order derivatives are,
%To enlarge the discrimination power,
%data are firstly linearly mapped to $[0,1]$.
%$(\dfrac{\partial^2 P}{\partial x^2} , \dfrac{\partial^2 P}{\partial y^2}, \dfrac{\partial^2 %P}{\partial x \partial y})$ belongs to $[0,1]\times[0,1]\times[0,1] = [0,1]^3$.
%Basically, for each dimension the idea is to select a subset of pixels of size $k$ and next to retain those which have been selected three times, \textit{i.e.}, those with large values
%$(\dfrac{\partial^2 P}{\partial x^2} , \dfrac{\partial^2 P}{\partial y^2}, \dfrac{\partial^2 %P}{\partial x \partial y})$.
%Since the dimension is $3$, the size $k$ can be defined by
%k = W \times H \times \alpha ^{\dfrac{1}{3}}
%where $W$ and $H$ respectively stand for the width and the length of the image (expressed in pixels)
%and $\alpha$ is the payload. For instance if $\alpha$ is 0.1,
%about $0.46\%$ of the image pixels with higher values in $\dfrac{\partial^2 P}{\partial x^2}$ should be selected, and similarly with $\dfrac{\partial^2 P}{\partial y^2}$ and $\dfrac{\partial^2 P}{\partial x \partial y}$.
% Say how the pixels are ordered, the remaining ones, and the direction
\begin{figure}[ht]
\centering
\begin{tabular}{|c|c|c|}
\hline
Scheme & Stego. content & Changes with cover \\
\hline
$Ky$ based approach &\includegraphics[scale=0.20]{images/38_dp}&
\includegraphics[scale=0.20]{images/38_dp_diff} \\
\hline
$Ko$ based approach & \includegraphics[scale=0.20]{images/38_poly} &
\includegraphics[scale=0.20]{images/38_poly_diff} \\
\hline
\end{tabular}
\caption{Embedding changes instance with payload $\alpha = 0.4$}
\label{fig:oneimage}
\end{figure}
\section{Experiments}\label{sec:experiments}

First of all, the code of the whole steganographic approach is available online\footnote{\url{https://github.com/stego-content/SOS}}.

Figure~\ref{fig:oneimage} presents the results of embedding data in a cover image from the BOSS contest database~\cite{Boss10} with respect to the two second order derivative schemes presented in this work. The $Ky$ based approach (resp. the $Ko$ based one) corresponds to the scheme detailed in Section~\ref{sec:second} (resp. in Section~\ref{sec:poly}). The payload $\alpha$ is set to 0.4 and the kernels are computed with $N=4$. The central column shows the stego content whereas the right one displays the differences between the cover image and the stego one. It can be observed that pixels in smooth areas (the sky, the external access steps) and pixels on clean edges (the columns, the step borders) are not modified by the approach. On the contrary, an unpredictable area (the monument, for example) concentrates the pixel changes.
\subsection{Choice of parameters}

The two methods proposed in Section~\ref{sec:second} and in Section~\ref{sec:poly} are based on kernels of size up to $(2N+1)\times(2N+1)$. This section aims at finding the value of the parameter $N$ that maximizes the security level. For each approach, we have built 1,000 stego images with $N=2$, $4$, $6$, $8$, $10$, $12$, and $14$, where the covers belong to the BOSS contest database. This database contains 10,000 grayscale $512\times 512$ images in a RAW format. The security of the approach has been evaluated thanks to the Ensemble Classifier~\cite{DBLP:journals/tifs/KodovskyFH12} based steganalyser, which is considered as a state of the art steganalysis tool. This steganalysis process relies on the spatial rich model (SRM) features~\cite{DBLP:journals/tifs/FridrichK12}. For a payload $\alpha$ equal either to $0.1$ or to $0.4$, the average testing errors (expressed in percentages) have been studied and are summarized in Table~\ref{table:choice:parameter}.
\begin{table}[ht]
\caption{Average testing errors (in \%) with respect to the kernel size}
\centering
\setlength{\tabcolsep}{3pt}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\cline{2-9}
\multicolumn{1}{c|}{} & \multirow{2}{*}{$\alpha$} & \multicolumn{7}{c|}{$N$} \\
\cline{3-9}
\multicolumn{1}{c|}{}& & $2$ & $4$& $6$& $8$& $10$& $12$ & $14$ \\
\hline
Average testing & \textit{0.1} & 39& 40.2& 39.7& 39.8& 40.1& $39.9$& $39.8$ \\
error for Kernel $K_y$ & \textit{0.4}& 15& 18.8& 19.1& 19.0& 18.6& 18.7 & 18.7 \\
\hline
Average testing & \textit{0.1} & 35.2 & 36.6& 36.7& 36.6& 37.1& 37.2 & 37.2 \\
error for Kernel $K_o$ & \textit{0.4} & 5.2 & 6.8& 7.5 & 7.9 & 8.1 & 8.2 & 7.6 \\
\hline
\end{tabular}
\label{table:choice:parameter}
\end{table}
Thanks to these experiments, we observe that the size $N=4$ (respectively $N=12$) obtains sufficiently large average testing errors for the $Ky$ based approach (resp. for the $Ko$ based one). In what follows, these values are retained for these two methods.
\subsection{Security Evaluation}
As in the previous section, the BOSS contest database has been retained. To achieve a complete comparison with other steganographic tools, the whole database of 10,000 images has been used. The Ensemble Classifier with SRM features is again used to evaluate the security of the approach.

We have chosen 4 different payloads, 0.1, 0.2, 0.3, and 0.4, as in many steganographic evaluations. Three values are systematically given for each experiment: the area under the ROC curve (AUC), the average testing error (ATE), and the out-of-bag error (OOB).

All the results are summarized in Table~\ref{table:experiments:summary}.
%and in Figure~\ref{fig1}, Figure~\ref{fig2}, and Figure~\ref{fig3}.
Let us analyse these experimental results. The security of our approach is often lower than the one observed with state of the art tools: for instance with payload $\alpha=0.1$, the most secure approach is WOW with an average testing error equal to 0.43, whereas our approach reaches 0.38. However, these results are promising for two reasons. First, our approaches give more resistance towards the Ensemble Classifier (contrary to HUGO) for large payloads. Secondly, without any optimisation, our approach is not so far from state of the art steganographic tools. Finally, we explain the lack of security of the $Ko$ based approach with large payloads as follows: its second order derivatives are directly extracted from a polynomial interpolation. This easy construction however induces large variations between the polynomial $L$ and the pixel function $P$.
\begin{table}[ht]
\caption{Summary of experiments}\label{table:experiments:summary}
\centering
\begin{tabular}{|l|l|l|l|l|}
\hline
 & Payload & AUC & ATE & OOB \\ \hline
{WOW} & 0.1 & 0.6501 & 0.4304 & 0.3974\\
& 0.2 & 0.7583 & 0.3613 & 0.3169\\
& 0.3 & 0.8355 & 0.2982 & 0.2488\\
& 0.4 & 0.8876 & 0.2449 & 0.1978\\
\hline
{SUNIWARD} & 0.1 & 0.6542 & 0.4212 & 0.3972\\
& 0.2 & 0.7607 & 0.3493 & 0.3170\\
& 0.3 & 0.8390 & 0.2863 & 0.2511\\
& 0.4 & 0.8916 & 0.2319 & 0.1977\\
\hline
{MVG} & 0.1 & 0.6340 & 0.4310 &0.4124 \\
& 0.2 & 0.7271 & 0.3726 &0.3399 \\
& 0.3 & 0.7962 & 0.3185& 0.2858\\
& 0.4 & 0.8486& 0.2719& 0.2353 \\
\hline
{HUGO} & 0.1 & 0.6967 & 0.3982 & 0.3626 \\
& 0.2 & 0.8012 & 0.3197 & 0.2847 \\
& 0.3 & 0.8720 & 0.2557 & 0.2212 \\
& 0.4 & 0.9517 & 0.1472 & 0.1230 \\
\hline
{$Ky$ based approach}
& 0.1 & 0.7378 & 0.3768 & 0.3306 \\
& 0.2 & 0.8568 & 0.2839 & 0.2408 \\
& 0.3 & 0.9176 & 0.2156 & 0.1710 \\
& 0.4 & 0.9473 & 0.1638 & 0.1324\\
\hline
{$Ko$ based approach}
& 0.1 & 0.6831 & 0.3696 & 0.3450 \\
& 0.2 & 0.8524 & 0.1302 & 0.2408 \\
& 0.3 & 0.9132 & 0.1023 & 0.1045 \\
& 0.4 & 0.9890 & 0.0880 & 0.0570 \\
\hline
\end{tabular}
\end{table}
%\includegraphics[width=9cm]{22}
%\caption{Average Testing Error (ATE)}
%\includegraphics[width=9cm]{11}
%\caption{Out of Bag ($E_{\textit{OOB}}$)}
%\includegraphics[width=9cm]{33}
%\caption{Area Under Curve (AUC)} %\label{fig3}
\section{Conclusion}

The first contribution of this chapter is to propose a distortion function which is based on second order derivatives. These partial derivatives allow to accurately compute the level curves and thus to favor pixels without clean level curves. Two approaches to build these derivatives have been proposed. The first one is based on revisiting kernels usually embedded in edge detection algorithms. The second one is based on a polynomial approximation of the image. These two methods have been completely implemented. The first experiments have shown that the security level is slightly inferior to the one of the most stringent approaches. These first promising results encourage us to deeply investigate this research direction.

Future works aiming at improving the security of this proposal are planned as follows. The authors first want to focus on other approaches to provide second order derivatives with a larger discrimination power. Then, the objective will be to deeply investigate whether the H\"older norm is optimal when the objective is to avoid null second order derivatives, and to give priority to the largest second order values.