Most steganographic schemes are designed to minimize a distortion function.
In the examples of the previous chapter, these distortion functions are built
with the objective of preserving the characteristics of the image.
It is easy to understand that in uniform regions or along clearly defined edges,
even a minor modification of the image is easily detectable.
Conversely, textures, noise, and chaotic regions are hard to model,
so the characteristics of images whose such areas have been modified
remain similar to those of the original images.
These regions are characterized by highly perturbed level curves.
This chapter presents a new distortion function for steganography
that is based on second order derivatives, the usual mathematical tool
for level curves.
Provided we can define a function $P$
that associates to each pixel $(x,y)$ its value $P(x,y)$,
the pixels where the second order derivatives of $P$ have high values
are good candidates to host a bit of the message.
However, such a function $P$ is only known discretely,
\textit{i.e.}, at a finite number of points.
Its first and second order derivatives can therefore not be evaluated mathematically.
At best, one can build a function that approximates $P$ on this set
of pixels. Ordering the pixels according to the Hessian matrix
(\textit{i.e.}, the matrix of second order derivatives) is then not trivial,
since this matrix contains many values for a single given pixel.
This chapter first shows how approximations of the first and second order
derivatives can be obtained for digital images (Section~\ref{sec:gradient}).
Two constructions of second order derivatives are then
given and proven (Section~\ref{sec:second} and Section~\ref{sec:poly}).
An adaptation of an existing distortion function is studied
in Section~\ref{sec:distortion}, and experiments are presented
in Section~\ref{sec:experiments}.
\section{Derivatives in an Image}\label{sec:gradient}
This section first recalls the links between level curves, gradient, and
Hessian matrix, and then analyses how these objects can be built
thanks to kernels from signal processing theory.
\subsection{Hessian Matrix}\label{sub:general}
We consider that an image can be seen as a function from $\R^+\times \R^+$
to $\R^+$ such that the value $P(x,y)$ is associated to each pixel of coordinates $(x,y)$.
The variations of such a function at $(x_0,y_0)$ can be evaluated
through its gradient
\[
\nabla{P}(x_0,y_0) = \left(\frac{\partial P}{\partial x}(x_0,y_0),\frac{\partial P}{\partial y}(x_0,y_0)\right).
\]
The gradient vector points in the direction in which the function has the strongest increase.
Pixels with close values lie on level curves that are orthogonal to this vector.
The variations of the gradient vector are usually expressed through the Hessian
matrix $H$ of the second order partial derivatives of $P$:
\[
H = \begin{pmatrix}
\dfrac{\partial^2 P}{\partial x^2} &
\dfrac{\partial^2 P}{\partial x \partial y} \\
\dfrac{\partial^2 P}{\partial y \partial x} &
\dfrac{\partial^2 P}{\partial y^2}
\end{pmatrix}.
\]
At a pixel $(x_0,y_0)$, the further the values of this matrix are from zero,
the more the gradient varies at this point. Evaluating this kind of matrix is
thus of primary importance in steganography. However, this task is not as
trivial as it may seem, since natural images are not defined by
differentiable functions from $\R^+\times \R^+$
to $\R^+$. The remainder of this section shows how approximations of such matrices can be obtained.
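As a toy illustration of these definitions, the following Python sketch
(assuming the SymPy library; the polynomial surface standing in for $P$ is hypothetical)
computes the gradient and the Hessian matrix of a differentiable function:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
P = x**2 * y + 3 * y**2   # toy differentiable stand-in for an image P(x, y)

gradient = sp.Matrix([sp.diff(P, x), sp.diff(P, y)])
hessian = sp.hessian(P, (x, y))
print(gradient)  # Matrix([[2*x*y], [x**2 + 6*y]])
print(hessian)   # Matrix([[2*y, 2*x], [2*x, 6]])
\end{verbatim}
Natural images provide no such closed form, which is precisely why the discrete
approximations below are needed.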
\subsection{Classical Approaches for Evaluating the Gradient in Images}\label{sub:class:1}
In this context, the most widely used approaches to evaluate a gradient
are ``Sobel'', ``Prewitt'', ``Central Difference'', and ``Intermediate Difference''.
Each of them applies a convolution product $*$ between a kernel $K$
(recalled in Table~\ref{table:kernels:usual}) and a window $A$ of size
$3\times 3$. The result
$A * K$ is an approximation of the horizontal gradient,
\textit{i.e.}, of $\dfrac{\partial P}{\partial x}$.
Let $K\rl$ be the result of a rotation of angle $\pi/2$ applied to $K$.
The vertical component of the gradient, $\dfrac{\partial P}{\partial y}$, is obtained
similarly by evaluating $A * K\rl$. When this process is applied to the whole
image matrix, it yields a matrix of roughly the same size for each of the two
partial derivatives.
The two elements of the first row (respectively of the second row)
of the Hessian matrix
are then the result of computing the gradient of the matrix $\dfrac{\partial P}{\partial x}$
(resp. of the matrix $\dfrac{\partial P}{\partial y}$).
\begin{table}
\caption{Usual kernels for evaluating image gradients\label{table:kernels:usual}}
\centering
\begin{tabular}{|c|c|c|}
\hline
Name & Sobel & Prewitt \\
\hline
Kernel & $\textit{Ks}= \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} $ &
$\textit{Kp}= \begin{bmatrix} -1 & 0 & +1 \\ -1 & 0 & +1 \\ -1 & 0 & +1 \end{bmatrix} $\\
\hline
Name & Central & Intermediate \\
& Difference & Difference \\
\hline
Kernel & $\textit{Kc}= \begin{bmatrix} 0&0&0 \\ -\dfrac{1}{2} & 0 & +\dfrac{1}{2} \\ 0&0&0 \end{bmatrix} $ &
$\textit{Ki}= \begin{bmatrix} 0&0&0 \\ 0 & -1 & 1 \\ 0&0&0 \end{bmatrix} $ \\
\hline
\end{tabular}
\end{table}
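As an illustration, the following Python sketch (assuming NumPy and SciPy; the
random matrix stands for a hypothetical grayscale cover) applies the Sobel
kernel and its rotated version to approximate the two components of the gradient:
\begin{verbatim}
import numpy as np
from scipy.ndimage import convolve

# Sobel kernel for the horizontal derivative (see the table above)
Ks = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
# a rotation of angle pi/2 yields the kernel for the vertical derivative
Ks_rot = np.rot90(Ks)

img = np.random.default_rng(0).integers(0, 256, (512, 512)).astype(float)
dPdx = convolve(img, Ks)      # approximation of dP/dx at every pixel
dPdy = convolve(img, Ks_rot)  # approximation of dP/dy at every pixel
\end{verbatim}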
\subsection{Hessian Matrices Induced by Image Gradient Approaches}\label{sub:class:2}
The derivative $\dfrac{\partial^2 P}{\partial x \partial y}$ is equal to
$\dfrac{\partial^2 P}{\partial y \partial x}$ as soon as the methods that compute
the gradient and the gradient of the gradient (the Hessian matrix) are expressed
as convolution products, since these commute.
For instance, in the Sobel approach,
it is easy to verify that both
$\dfrac{\partial^2 P}{\partial x \partial y}$ and
$\dfrac{\partial^2 P}{\partial y \partial x}$
are computed as the result
of a convolution product with the kernel
$\textit{Ks}''_{xy}$ given in Table~\ref{table:hessian:usual}.
This table summarizes the kernels
that allow to respectively compute
$\dfrac{\partial^2 P}{\partial x^2}$ and
$\dfrac{\partial^2 P}{\partial x \partial y}$ with a convolution product,
for each of the usual image gradient operators.
\begin{table}
\caption{Kernels of second order gradient operators\label{table:hessian:usual}}
\centering
\def\arraystretch{1.4}
\begin{tabular}{|c|c|}
\hline
Sobel & Prewitt \\
\hline
$\textit{Ks}''_{x^2} = \begin{bmatrix}
1 & 0 & -2 & 0 & 1 \\
4 & 0 & -8 & 0 & 4 \\
6 & 0 & -12 & 0 & 6 \\
4 & 0 & -8 & 0 & 4 \\
1 & 0 & -2 & 0 & 1
\end{bmatrix}$ &
$\textit{Kp}''_{x^2} = \begin{bmatrix}
1 & 0 & -2 & 0 & 1 \\
2 & 0 & -4 & 0 & 2 \\
3 & 0 & -6 & 0 & 3 \\
2 & 0 & -4 & 0 & 2 \\
1 & 0 & -2 & 0 & 1
\end{bmatrix}$ \\
\hline
$\textit{Ks}''_{xy} = \begin{bmatrix}
-1 & -2 & 0 & 2 & 1 \\
-2 & -4 & 0 & 4 & 2 \\
0 & 0 & 0 & 0 & 0 \\
2 & 4 & 0 & -4 & -2 \\
1 & 2 & 0 & -2 & -1
\end{bmatrix}$ &
$\textit{Kp}''_{xy} = \begin{bmatrix}
-1 & -1 & 0 & 1 & 1 \\
-1 & -1 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & -1 & -1 \\
1 & 1 & 0 & -1 & -1
\end{bmatrix}$ \\
\hline
Central & Intermediate \\
Difference & Difference \\
\hline
$\textit{Kc}''_{x^2} = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
\dfrac{1}{4} & 0 & -\dfrac{1}{2} & 0 & \dfrac{1}{4} \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}$ &
$\textit{Ki}''_{x^2} = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & -2 & 1 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}$ \\
\hline
$\textit{Kc}''_{xy} = \begin{bmatrix}
-\dfrac{1}{4} & 0 & \dfrac{1}{4}\\
0 & 0 & 0 \\
\dfrac{1}{4} & 0 & -\dfrac{1}{4}
\end{bmatrix}$ &
$\textit{Ki}''_{xy} = \begin{bmatrix}
0 & -1 & 1 \\
0 & 1 & -1 \\
0 & 0 & 0
\end{bmatrix}$ \\
\hline
\end{tabular}
\end{table}
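These second order kernels need not be tabulated by hand: since the second
order operator is the composition of two gradient convolutions, each kernel of
the table is the full two-dimensional convolution of a first order kernel with
itself or with its rotated version. A minimal Python check (NumPy and SciPy assumed):
\begin{verbatim}
import numpy as np
from scipy.signal import convolve2d

Ks = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)

Ks_x2 = convolve2d(Ks, Ks)            # 5x5 kernel Ks''_{x^2} of the table
Ks_xy = convolve2d(Ks, np.rot90(Ks))  # 5x5 kernel Ks''_{xy} of the table
print(Ks_x2)
print(Ks_xy)
\end{verbatim}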
The Sobel kernel $\textit{Ks}''_{x^2}$ allows to detect whether the central
pixel belongs to a ``vertical'' edge, even if this edge is noisy, by considering its
vertical neighbours. The introduction of these vertical neighbours in this kernel is meaningful
in the context of edge detection, but it is not very accurate when the objective is to
precisely find the level curves of the image.
Moreover, all the pixels that are in the second and the fourth columns of this kernel
are ignored, since their coefficients are null.
The Prewitt kernel has similar drawbacks in this context.
The Central Difference kernel $\textit{Kc}''_{x^2}$ is not influenced by
the vertical neighbours of the central pixel and is thus more accurate here.
However, the kernel $\textit{Kc}''_{xy}$ again loses the values of the pixels that
are vertically and horizontally aligned with the central one.
Finally, the Intermediate Difference kernel $\textit{Ki}''_{x^2}$ shifts
to the left the value of the horizontal variations of $\dfrac{\partial P}{\partial x}$:
the central pixel $(0,0)$ receives exactly the value
$\dfrac{P(0,2)-P(0,1)}{1} - \dfrac{P(0,1)-P(0,0)}{1}$,
which is an approximation of
$\dfrac{\partial^2 P}{\partial x^2}(0,1)$ and not of
$\dfrac{\partial^2 P}{\partial x^2}(0,0)$.
Furthermore, the Intermediate Difference kernel $\textit{Ki}''_{xy}$ only deals
with pixels in the upper right corner, losing all the other information.
Due to these drawbacks, we propose in the next sections other approaches to find the level curves with a higher accuracy.
273 \section{Second Order Kernels for Accurate Level Curves}\label{sec:second}
This step aims at finding
accurate level curve variations in an image.
We do not restrict the kernels to a fixed size (\textit{e.g.}, $3\times3$ or $5 \times 5$ as in the
aforementioned schemes).
The approach is instead defined for kernels of size
$(2n+1)\times (2n+1)$, $n \in \{1,2,\dots,N\}$, where
$N$ is a parameter of the method.
The horizontal gradient variations are thus captured thanks to the $(2n+1)\times (2n+1)$ square kernels
$\textit{Ky}_{x^2}''$ defined by
\[
\def\arraystretch{1.4}
\textit{Ky}_{x^2}'' =
\left(
\begin{array}{ccccccccc}
0 & & & & \dots& & & & 0 \\
\vdots & & & & & & & & \vdots \\
0 & & & & \dots & & & & 0 \\
\dfrac{1}{2n}& 0 & \dots & 0 & -\dfrac{2}{2n} & 0 & \dots & 0& \dfrac{1}{2n} \\
0 & & & & \dots& & & & 0 \\
\vdots & & & & & & & & \vdots \\
0 & & & & \dots & & & & 0
\end{array}
\right).
\]
When the convolution product is applied on a $(2n+1)\times(2n+1)$ window,
the result at the central pixel is
$\dfrac{1}{2}\left(\dfrac{P(0,n)-P(0,0)}{n} - \dfrac{P(0,0)-P(0,-n)}{n}\right)$,
which is indeed the variation between the two gradients evaluated on each side of
the central pixel.
This proves that this computation is a correct approximation of
$\dfrac{\partial^2 P}{\partial x^2}$.
311 When $n$ is 1, this kernel is a centered version of the horizontal Intermediate Difference kernel $\textit{Ki}_{x^2}''$ modulo a multiplication by $1/2$. When $n$ is 2, this kernel is equal to $\textit{Kc}_{x^2}''$.
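A minimal Python sketch of this construction (NumPy assumed), which also checks
the two equalities stated above:
\begin{verbatim}
import numpy as np

def Ky_x2(n: int) -> np.ndarray:
    # (2n+1)x(2n+1) kernel: only the central row is non-zero
    K = np.zeros((2 * n + 1, 2 * n + 1))
    K[n, [0, n, 2 * n]] = np.array([1.0, -2.0, 1.0]) / (2 * n)
    return K

print(Ky_x2(1))  # central row [1/2, -1, 1/2]: Ki''_{x^2} centered, times 1/2
print(Ky_x2(2))  # central row [1/4, 0, -1/2, 0, 1/4]: equal to Kc''_{x^2}
\end{verbatim}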
The vertical gradient variations are obtained similarly, by applying
a rotation of angle $\pi/2$ to each horizontal kernel $\textit{Ky}_{x^2}''$;
the resulting kernels are denoted by $\textit{Ky}_{y^2}''$.
The diagonal gradient variations are obtained thanks to the $(2n+1)\times (2n+1)$ square kernels
$\textit{Ky}_{xy}''$ defined by
\[
\def\arraystretch{1.4}
\textit{Ky}_{xy}'' = \dfrac{1}{4}
\left(
\begin{array}{ccccccccc}
\frac{1}{n^2}& \dots & \frac{1}{2n} & \frac{1}{n} & 0 & -\frac{1}{n}&-\frac{1}{2n} & \dots & -\frac{1}{n^2} \\
\vdots & & \vdots & \vdots & \vdots & \vdots & \vdots & & \vdots \\
\frac{1}{2n}& \dots & \frac{1}{4} & \frac{1}{2} & 0 & -\frac{1}{2}&-\frac{1}{4} & \dots & -\frac{1}{2n} \\
\frac{1}{n}& \dots & \frac{1}{2} & 1 & 0 & -1&-\frac{1}{2} & \dots & -\frac{1}{n} \\
0 & \dots & 0 & 0 & 0 & 0 & 0 &\dots & 0 \\
-\frac{1}{n}& \dots & -\frac{1}{2} & -1 & 0 & 1&\frac{1}{2} & \dots & \frac{1}{n} \\
-\frac{1}{2n}& \dots & -\frac{1}{4} & -\frac{1}{2} & 0 & \frac{1}{2}&\frac{1}{4} & \dots & \frac{1}{2n} \\
\vdots & & \vdots & \vdots & \vdots & \vdots & \vdots & & \vdots \\
-\frac{1}{n^2}& \dots & -\frac{1}{2n} & -\frac{1}{n} & 0 & \frac{1}{n}& \frac{1}{2n} & \dots & \frac{1}{n^2}
\end{array}
\right).
\]
Equivalently, up to the orientation of the vertical axis, the entry of
$\textit{Ky}_{xy}''$ at offset $(i,j)$ is $\frac{1}{4ij}$ when $i,j \neq 0$,
and $0$ otherwise.
When $n$ is 1, $\textit{Ky}_{xy}''$ is equal to the kernel
$\textit{Kc}''_{xy}$. Indeed,
the average vertical variations of the horizontal variations are
\[
\begin{array}{l}
\dfrac{1}{4}\left[
\left((P(0,1)-P(0,0))-(P(1,1)-P(1,0))\right)+ \right.\\
\quad \left((P(-1,1)-P(-1,0))-(P(0,1)-P(0,0))\right)+\\
\quad \left((P(0,0)-P(0,-1))-(P(1,0)-P(1,-1))\right)+\\
\quad \left. \left((P(-1,0)-P(-1,-1))-(P(0,0)-P(0,-1))\right)\right]\\
= \dfrac{1}{4}
\left[ P(1,-1) -P(1,1) - P(-1,-1) + P(-1,1)\right],
\end{array}
\]
which is the result of the convolution of $P$ with $\textit{Ky}_{xy}''$ at the central pixel.
Let us now consider any $n$, $1 \le n \le N$.
Let us first investigate the vertical variations related to
the horizontal vector $\vec{P_{0,0}P_{0,1}}$
(respectively $\vec{P_{0,-1}P_{0,0}}$)
of length 1 that starts from (resp. that points to) $(0,0)$.
As in the case $n=1$, there are 2 new vectors of length 1, namely
$\vec{P_{n,0}P_{n,1}}$ and $\vec{P_{-n,0}P_{-n,1}}$
(resp. $\vec{P_{n,-1}P_{n,0}}$ and $\vec{P_{-n,-1}P_{-n,0}}$),
that are vertically aligned with $\vec{P_{0,0}P_{0,1}}$
(resp. with $\vec{P_{0,-1}P_{0,0}}$).
The vertical distance between these vectors is now equal to $n$. Following the case where $n$ is 1 to compute the average variation,
the coefficients of the first and last lines around the central
vertical line are thus, from left to right,
$\dfrac{1}{4n}$ and $\dfrac{-1}{4n}$ in the first line
(resp. $\dfrac{-1}{4n}$ and $\dfrac{1}{4n}$ in the last one).
The cases are similar with the vectors $\vec{P_{0,0}P_{0,2}}$, \ldots,
$\vec{P_{0,0}P_{0,n}}$, which respectively lead to the coefficients
$-\dfrac{1}{4 \times 2n}$, \ldots,
$-\dfrac{1}{4 \times n \times n}$ (and their opposites on the other side of the central column); the detailed proof is omitted.
Finally, let us consider the vector $\vec{P_{0,0}P_{0,n}}$
and its vertical variations when $\delta y$ is $n-1$.
As in the case where $n=1$, we obtain the coefficients
$\dfrac{1}{4 \times (n-1)n}$ and
$-\dfrac{1}{4 \times (n-1)n}$
(resp. $-\dfrac{1}{4 \times (n-1)n}$ and
$\dfrac{1}{4 \times (n-1)n}$)
in the second line (resp. in the
penultimate line), since the vector has length $n$
and $\delta y$ is $n-1$.
The coefficients in the other lines are obtained similarly, and the proof is thus omitted.
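The following Python sketch (NumPy assumed) builds $\textit{Ky}_{xy}''$ under the
reading given above, namely that the entry at offset $(i,j)$, with $i,j \neq 0$,
equals $\frac{1}{4ij}$; the global sign depends on the orientation chosen for the
vertical axis:
\begin{verbatim}
import numpy as np

def Ky_xy(n: int) -> np.ndarray:
    # entry at offset (i, j), i, j != 0, is 1/(4*i*j);
    # the central row and column are null
    K = np.zeros((2 * n + 1, 2 * n + 1))
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            if i != 0 and j != 0:
                K[i + n, j + n] = 1.0 / (4 * i * j)
    return K

print(Ky_xy(1))  # +-1/4 in the corners, as in the 3x3 case above
\end{verbatim}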
We are then left to compute approximations of the partial second order derivatives
$\dfrac{\partial^2 P}{\partial x^2}$, $\dfrac{\partial^2 P}{\partial y^2}$, and $\dfrac{\partial^2 P}{\partial x \partial y}$,
thanks to the kernels
$\textit{Ky}_{x^2}''$, $\textit{Ky}_{y^2}''$, and $\textit{Ky}_{xy}''$ respectively.
However, the size of each of these kernels varies from $3\times3$ to $(2N+1)\times (2N+1)$.
Let us explain the approach on the first partial derivative;
the other ones can be immediately deduced.
Since the objective is to detect large variations, the second order derivative is approximated as
the maximum of these approximations. More formally,
let $n$, $1 \le n \le N$, be an integer and
$\dfrac{\partial^2 P}{\partial x^2}_n$ be the result of applying the kernel
$\textit{Ky}_{x^2}''$ of size $(2n+1)\times (2n+1)$. The derivative
$\dfrac{\partial^2 P}{\partial x^2}$ is then defined by
\[
\dfrac{\partial^2 P}{\partial x^2}
= \max \left\{
\abs{\dfrac{\partial^2 P}{\partial x^2}_1}, \dots, \abs{\dfrac{\partial^2 P}{\partial x^2}_N}
\right\}.
\]
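A possible implementation of this maximum of absolute values is sketched below
in Python (NumPy and SciPy assumed; \texttt{Ky\_x2} rebuilds the kernel defined above):
\begin{verbatim}
import numpy as np
from scipy.ndimage import convolve

def Ky_x2(n: int) -> np.ndarray:
    # kernel defined above: only the central row is non-zero
    K = np.zeros((2 * n + 1, 2 * n + 1))
    K[n, [0, n, 2 * n]] = np.array([1.0, -2.0, 1.0]) / (2 * n)
    return K

def second_derivative_x2(img: np.ndarray, N: int) -> np.ndarray:
    # pointwise maximum over n = 1..N of |d^2P/dx^2|_n
    stack = [np.abs(convolve(img, Ky_x2(n))) for n in range(1, N + 1)]
    return np.max(np.stack(stack), axis=0)
\end{verbatim}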
The same approach is applied to compute approximations of
$\dfrac{\partial^2 P}{\partial y \partial x}$
and of
$\dfrac{\partial^2 P}{\partial y^2}$.
The next section studies the suitability of approximating the second order derivatives
when the image is considered as a polynomial.
451 \section{Polynomial Interpolation of Images for Hessian Matrix Computation}\label{sec:poly}
Let $P(x,y)$ be the discrete value of the pixel $(x,y)$ in the image.
Let $n$, $1 \le n \le N$, be an integer such that the objective is to find a polynomial interpolation
on the $(2n+1)\times(2n+1)$ window whose central pixel has index $(0,0)$.
There exists a unique polynomial $L : \R\times \R \to \R$ of degree at most $2n$ in each variable
such that $L(x,y)=P(x,y)$ for each pixel $(x,y)$ in this window.
Such a polynomial is defined by
\[
L(x,y) =
\sum_{i=-n}^{n}
\sum_{j=-n}^{n}
P(i,j)
\prod_{\stackrel{-n\leq j'\leq n}{j'\neq j}}
\frac{y-j'}{j-j'}
\prod_{\stackrel{-n\leq i'\leq n}{i'\neq i}}
\frac{x-i'}{i-i'}.
\]
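The following Python sketch (NumPy assumed; storing the pixel $P(i,j)$ of the
window at \texttt{P[i+n, j+n]} is an indexing convention chosen here) evaluates
this tensor-product Lagrange polynomial and checks that it reproduces $P$ at a
grid point:
\begin{verbatim}
import numpy as np

def lagrange_basis(k, pts, t):
    # basis polynomial equal to 1 on pts[k] and 0 on the other nodes
    others = np.delete(pts, k)
    return float(np.prod((t - others) / (pts[k] - others)))

def L(P, n, x, y):
    # tensor-product Lagrange interpolation of the (2n+1)x(2n+1) window
    pts = np.arange(-n, n + 1)
    return sum(P[i + n, j + n]
               * lagrange_basis(i + n, pts, x)
               * lagrange_basis(j + n, pts, y)
               for i in pts for j in pts)

n = 2
P = np.random.default_rng(1).random((2 * n + 1, 2 * n + 1))
assert abs(L(P, n, 1.0, -2.0) - P[1 + n, -2 + n]) < 1e-9
\end{verbatim}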
It is not hard to prove that the first order horizontal derivative of the polynomial $L(x,y)$ is
\[
\dfrac{\partial L}{\partial x} =
\sum_{i=-n}^{n}
\sum_{j=-n}^{n}
P(i,j)
\prod_{\stackrel{-n\leq j'\leq n}{j'\neq j}}
\frac{y-j'}{j-j'}
\sum_{\stackrel{-n\leq i'\leq n}{i'\neq i}}
\frac{1}{i-i'}
\prod_{\stackrel{-n\leq i''\leq n}{i''\neq i,i'}}
\frac{x-i''}{i-i''}
\]
and thus to deduce that the second order derivatives are
\begin{equation}
\dfrac{\partial^2 L}{\partial x^2} =
\sum_{i=-n}^{n}
\sum_{j=-n}^{n}
P(i,j)
\prod_{\stackrel{-n\leq j'\leq n}{j'\neq j}}
\frac{y-j'}{j-j'}
\sum_{\stackrel{-n\leq i'\leq n}{i'\neq i}}
\frac{1}{i-i'}
\sum_{\stackrel{-n\leq i''\leq n}{i''\neq i,i'}}
\frac{1}{i-i''}
\prod_{\stackrel{-n\leq i'''\leq n}{i'''\neq i,i',i''}}
\frac{x-i'''}{i-i'''}
\label{eq:deriv:poly:x2}
\end{equation}
and
\begin{equation}
\dfrac{\partial^2 L}{\partial y \partial x} =
\sum_{i=-n}^{n}
\sum_{j=-n}^{n}
P(i,j)
\sum_{\stackrel{-n\leq j'\leq n}{j'\neq j}}
\frac{1}{j-j'}
\prod_{\stackrel{-n\leq j''\leq n}{j''\neq j, j'}}
\frac{y-j''}{j-j''}
\sum_{\stackrel{-n\leq i'\leq n}{i'\neq i}}
\frac{1}{i-i'}
\prod_{\stackrel{-n\leq i''\leq n}{i''\neq i, i'}}
\frac{x-i''}{i-i''}.
\label{eq:deriv:poly:yx}
\end{equation}
These second order derivatives are computed for each sliding
window and are associated to its central pixel, \textit{i.e.}, to the pixel $(0,0)$ of the window.
Let us first simplify $\dfrac{\partial^2 L}{\partial x^2}$,
defined in Equation~(\ref{eq:deriv:poly:x2}), when $(x,y)=(0,0)$.
If $j$ is not null, the index $j'$ takes the value 0 at some point, and the product
$\prod_{\stackrel{-n\leq j'\leq n}{j'\neq j}}
\left(\frac{-j'}{j-j'}\right)$ is thus null.
In this equation, we therefore only have to consider $j=0$;
the product indexed by $j'$ is then equal to 1.
The equation can thus be simplified to
\begin{equation}
\dfrac{\partial^2 L}{\partial x^2} =
\sum_{i=-n}^{n}
P(i,0)
\sum_{\stackrel{-n\leq i'\leq n}{i'\neq i}}
\frac{1}{i-i'}
\sum_{\stackrel{-n\leq i''\leq n}{i''\neq i,i'}}
\frac{1}{i-i''}
\prod_{\stackrel{-n\leq i'''\leq n}{i'''\neq i,i',i''}}
\frac{-i'''}{i-i'''}
\label{eq:deriv:poly:x2:simpl}
\end{equation}
and further reduced, by grouping the two symmetric occurrences of each pair $(i',i'')$, to
\begin{equation}
\dfrac{\partial^2 L}{\partial x^2} =
\sum_{i=-n}^{n}
P(i,0)
\sum_{\stackrel{-n\leq i' < i'' \le n}{i',i''\neq i}}
\frac{2}{(i-i')(i-i'')}
\prod_{\stackrel{-n\leq i'''\leq n}{i'''\neq i,i',i''}}
\frac{-i'''}{i-i'''}.
\label{eq:deriv:poly:x2:simpl:2}
\end{equation}
From this last equation, the kernel allowing to evaluate the horizontal
second order derivative
can be computed for any $n$.
It is further denoted as $Ko''_{x^2}$.
Instances of such kernels for $n=2$, $3$, and $4$
are given in Table~\ref{table:sod:hori:poly}.
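These coefficients can be recovered numerically: by
Equation~(\ref{eq:deriv:poly:x2:simpl:2}), the weight of $P(i,0)$ is the second
derivative at $0$ of the $i$-th Lagrange basis polynomial on the nodes
$-n,\dots,n$. A short Python sketch (NumPy assumed):
\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as Poly

def Ko_x2(n: int) -> np.ndarray:
    pts = np.arange(-n, n + 1)
    w = np.zeros(2 * n + 1)
    for k, i in enumerate(pts):
        # coefficients of prod_{i' != i} (x - i')/(i - i'), low degree first
        c = np.array([1.0])
        for ip in np.delete(pts, k):
            c = Poly.polymul(c, np.array([-ip, 1.0])) / (i - ip)
        # second derivative of the basis polynomial, evaluated at 0
        w[k] = Poly.polyval(0.0, Poly.polyder(c, 2))
    return w

print(Ko_x2(2))  # [-1/12, 4/3, -5/2, 4/3, -1/12], the n=2 row of the table
\end{verbatim}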
\begin{table}
\caption{Kernels $Ko''_{x^2}$
for second order horizontal derivatives induced
by polynomial interpolation}
\centering
\def\arraystretch{1.4}
\begin{tabular}{|c|c|}
\hline
$n$ & $Ko''_{x^2}$ \\
\hline
$2$ & $\left[\dfrac{-1}{12}, \dfrac{4}{3} , \dfrac{-5}{2}, \dfrac{4}{3}, \dfrac{-1}{12}\right]$ \\
\hline
$3$ & $\left[\dfrac{1}{90}, \dfrac{-3}{20}, \dfrac{3}{2}, \dfrac{-49}{18}, \dfrac{3}{2}, \dfrac{-3}{20}, \dfrac{1}{90}\right]$ \\
\hline
$4$ & $\left[\dfrac{-1}{560}, \dfrac{8}{315}, \dfrac{-1}{5}, \dfrac{8}{5}, \dfrac{-205}{72}, \dfrac{8}{5}, \dfrac{-1}{5}, \dfrac{8}{315}, \dfrac{-1}{560}\right]$\\
\hline
%$5$ & $\left[\dfrac{1}{3150}, \dfrac{-5}{1008}, \dfrac{5}{126}, %\dfrac{-5}{21}, \dfrac{5}{3}, \dfrac{-5269}{1800}, \dfrac{5}{3}, %\dfrac{-5}{21}, \dfrac{5}{126}, \dfrac{-5}{1008}, \dfrac{1}{3150}\right]$\\
\end{tabular}
\label{table:sod:hori:poly}
\end{table}
\begin{table}
\caption{Kernels $Ko''_{xy}$ for second order diagonal derivatives induced
by polynomial interpolation \label{table:sod:diag:poly}}
\centering
\def\arraystretch{1.5}
\begin{tabular}{|c|c|}
\hline
$n$ & $Ko''_{xy}$ \\
\hline
$1$ & $\left[\begin{array}{ccc}
\dfrac{1}{4} & 0 & \dfrac{-1}{4}\\
0 & 0 & 0 \\
\dfrac{-1}{4} & 0 & \dfrac{1}{4}
\end{array}\right]$ \\
\hline
$2$ & $\left[\begin{array}{ccccc}
\dfrac{1}{144} & \dfrac{-1}{18} & 0 & \dfrac{1}{18} & \dfrac{-1}{144}\\
\dfrac{-1}{18} & \dfrac{4}{9} & 0 & \dfrac{-4}{9} & \dfrac{1}{18}\\
0 & 0 & 0 & 0 & 0 \\
\dfrac{1}{18} & \dfrac{-4}{9} & 0 & \dfrac{4}{9} & \dfrac{-1}{18}\\
\dfrac{-1}{144} & \dfrac{1}{18} & 0 & \dfrac{-1}{18} & \dfrac{1}{144}
\end{array}\right]$ \\
\hline
\end{tabular}
\end{table}
From Equation~(\ref{eq:deriv:poly:yx}), the kernels allowing to evaluate the diagonal
second order derivatives (\textit{i.e.},
$\dfrac{\partial^2 L}{\partial y \partial x}$) can be computed in the same way.
They are denoted as $Ko''_{xy}$.
Table~\ref{table:sod:diag:poly} gives two examples of them, for $n=1$ and $n=2$.
Notice that for $n=1$ the kernel $Ko''_{xy}$ is equal to $\textit{Kc}''_{xy}$.
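Since, at $(0,0)$, Equation~(\ref{eq:deriv:poly:yx}) separates into the product of
two one-dimensional derivations, the kernel $Ko''_{xy}$ is the outer product of
the first order Lagrange weights with themselves. A Python sketch (NumPy
assumed) reproducing the $n=1$ kernel of Table~\ref{table:sod:diag:poly}:
\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as Poly

def lagrange_deriv_weights(n: int, order: int) -> np.ndarray:
    # weights of the order-th derivative at 0 on the nodes -n..n
    pts = np.arange(-n, n + 1)
    w = np.zeros(2 * n + 1)
    for k, i in enumerate(pts):
        c = np.array([1.0])
        for ip in np.delete(pts, k):
            c = Poly.polymul(c, np.array([-ip, 1.0])) / (i - ip)
        w[k] = Poly.polyval(0.0, Poly.polyder(c, order))
    return w

w1 = lagrange_deriv_weights(1, 1)  # [-1/2, 0, 1/2]
print(np.outer(w1, w1))            # +-1/4 in the corners, as in the table
\end{verbatim}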
674 \section{Distortion Cost}\label{sec:distortion}
The distortion function has to associate to each pixel $(i,j)$
the cost $\rho_{ij}$ of modifying it during the embedding.
The objective is to assign a small cost to a pixel when all its second order derivatives
are high, and a large cost otherwise.
682 %\YF{The WOW use the wavelet relative distortion (UNIWARD) as a function to force the embedding operation to the noisy areas and avert the smooth areas.}
In WOW and UNIWARD, the distortion function is based on the H\"older norm:
\[
\rho_{ij} = \left(
\abs{\xi_{ij}^h}^{p} +
\abs{\xi_{ij}^v}^{p} +
\abs{\xi_{ij}^d}^{p}
\right)^{-\frac{1}{p}},
\]
692 where $p$ is a negative number and
693 $\xi_{ij}^h$ (resp. $\xi_{ij}^v$ and $\xi_{ij}^d$)
694 represents the horizontal (resp. vertical and diagonal) suitability.
A small suitability in one direction means that the corresponding pixel is an inaccurate position to embed a message.
We propose here to adapt this distortion cost as follows:
\[
\rho_{ij} = \left(
\abs{\dfrac{\partial^2 P}{\partial x^2}(i,j)}^{p} +
\abs{\dfrac{\partial^2 P}{\partial y^2}(i,j)}^{p} +
\abs{\dfrac{\partial^2 P}{\partial y \partial x}(i,j)}^{p}
\right)^{-\frac{1}{p}}.
\]
It is not hard to check that such a function takes a large
value when at least one of the second order derivatives is null. Otherwise,
the larger the derivatives are, the smaller the returned cost is.
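A direct implementation of this cost is sketched below in Python (NumPy assumed;
the \texttt{eps} guard is a hypothetical safeguard against divisions by zero,
not part of the definition above):
\begin{verbatim}
import numpy as np

def distortion_cost(d_x2, d_y2, d_xy, p=-1.0, eps=1e-12):
    # rho_ij = (|d_x2|^p + |d_y2|^p + |d_xy|^p)^(-1/p), with p < 0:
    # a null derivative yields a huge term, hence a large cost
    s = (np.abs(d_x2) + eps) ** p \
        + (np.abs(d_y2) + eps) ** p \
        + (np.abs(d_xy) + eps) ** p
    return s ** (-1.0 / p)
\end{verbatim}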
714 %Practically, it is computed as a double convolution product with Daubechies Wavelet Kernels.
716 %It is not hard to see that this approach
717 %does not consider the range of the values. For %instance, if one direction only has large suitability
718 %absolute values, all the elements $\abs{\xi_{ij}}^{p}$ are so small, and
719 %suitable pixels cannot be distinguished from inaccurate ones. The distortion function has to
720 %provide a way to discriminate pixels even when the distribution has small deviation
722 %Because of each pixel value ranges in $[0,255]$, each first order derivative belongs to
723 %$[-255,255]$ and thus each second order derivative has a real value in
725 %Due to the definition of the approximation of $\dfrac{\partial^2 P}{\partial x^2}$,
726 %this one is a positive or null real number lower than 255.
728 %The higher the second order derivatives are,
731 %To enlarge the discrimination power,
732 %data are firstly linearly mapped to $[0,1]$.
734 %$(\dfrac{\partial^2 P}{\partial x^2} , \dfrac{\partial^2 P}{\partial y^2}, \dfrac{\partial^2 %P}{\partial x \partial y})$ belongs to $[0,1]\times[0,1]\times[0,1] = [0,1]^3$.
736 %Basically, for each dimension the idea is to select a subset of pixels of size $k$ and next to retain those which have been selected three times, \textit{i.e.}, those with large values
738 %$(\dfrac{\partial^2 P}{\partial x^2} , \dfrac{\partial^2 P}{\partial y^2}, \dfrac{\partial^2 %P}{\partial x \partial y})$.
739 %Since the dimension is $3$, the size $k$ can be defined by
741 %k = W \times H \times \alpha ^{\dfrac{1}{3}}
743 %where $W$ and $H$ respectively stand for the width and the length of the image (expressed in pixels)
744 %and $\alpha$ is the payload. For instance if $\alpha$ is 0.1,
745 %about $0.46\%$ of the image pixels with higher values in $\dfrac{\partial^2 P}{\partial x^2}$ should be selected, and similarly with $\dfrac{\partial^2 P}{\partial y^2}$ and $\dfrac{\partial^2 P}{\partial x \partial y}$.
% TODO: explain how the pixels are ordered, the remaining ones, and the direction
\begin{figure}
\centering
\begin{tabular}{|c|c|c|}
\hline
Scheme & Stego. content & Changes with cover \\
\hline
$Ky$ based approach &\includegraphics[scale=0.20]{images/38_dp}&
\includegraphics[scale=0.20]{images/38_dp_diff} \\
\hline
$Ko$ based approach & \includegraphics[scale=0.20]{images/38_poly} &
\includegraphics[scale=0.20]{images/38_poly_diff} \\
\hline
\end{tabular}
\caption{Example of embedding changes with payload $\alpha = 0.4$}
\label{fig:oneimage}
\end{figure}
768 \section{Experiments}\label{sec:experiments}
First of all, the source code of the whole steganographic approach is publicly available online\footnote{\url{https://github.com/stego-content/SOS}}.
Figure~\ref{fig:oneimage} presents the results of embedding data into a cover image
from the BOSS contest database~\cite{Boss10} with the two second order derivative schemes presented in this work.
The $Ky$ based approach (resp. the $Ko$ based one)
corresponds to the scheme detailed in Section~\ref{sec:second}
(resp. in Section~\ref{sec:poly}).
The payload $\alpha$ is set to 0.4 and the kernels are computed with $N=4$.
The central column shows the embedding result, whereas the right one displays
the differences between the cover image and the stego one.
It can be observed that pixels in smooth areas (the sky, the external access steps) and pixels on clean edges (the columns,
the step borders) are not modified by the approach.
On the contrary, the unpredictable areas (the monument, for example) concentrate the pixel changes.
785 \subsection{Choice of parameters}
The two methods proposed in Section~\ref{sec:second} and
in Section~\ref{sec:poly} are based on kernels of size up to
$(2N+1)\times(2N+1)$. This section aims at finding the
value of the parameter $N$ that maximizes the security level.
For each approach, we have built 1,000 stego images with
$N=2$, $4$, $6$, $8$, $10$, $12$, and $14$, where the covers belong
to the BOSS contest database.
This database contains 10,000 grayscale $512\times 512$ images in RAW format.
The security of the approaches has been evaluated with the steganalyser based on the Ensemble Classifier~\cite{DBLP:journals/tifs/KodovskyFH12},
which is considered a state-of-the-art steganalysis tool.
This steganalysis process relies on the Spatial Rich Model (SRM) features~\cite{DBLP:journals/tifs/FridrichK12} of the images.
For a payload $\alpha$ equal either to $0.1$ or to $0.4$, the average testing errors (expressed in percentages) have been studied and are summarized in
Table~\ref{table:choice:parameter}.
\begin{table}
\caption{Average testing errors with respect to the kernel size}
\centering
\setlength{\tabcolsep}{3pt}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\cline{2-9}
\multicolumn{1}{c|}{} & \multirow{2}{*}{$\alpha$} & \multicolumn{7}{c|}{$N$} \\
\cline{3-9}
\multicolumn{1}{c|}{}& & $2$ & $4$& $6$& $8$& $10$& $12$ & $14$ \\
\hline
Average testing & \textit{0.1} & 39.0 & 40.2 & 39.7 & 39.8 & 40.1 & 39.9 & 39.8 \\
error for kernel $K_y$ & \textit{0.4} & 15.0 & 18.8 & 19.1 & 19.0 & 18.6 & 18.7 & 18.7 \\
\hline
Average testing & \textit{0.1} & 35.2 & 36.6 & 36.7 & 36.6 & 37.1 & 37.2 & 37.2 \\
error for kernel $K_o$ & \textit{0.4} & 5.2 & 6.8 & 7.5 & 7.9 & 8.1 & 8.2 & 7.6 \\
\hline
\end{tabular}
\label{table:choice:parameter}
\end{table}
Thanks to these experiments, we observe that the size $N=4$ (respectively $N=12$) obtains sufficiently large average testing errors for the $Ky$ based approach
(resp. for the $Ko$ based one). In what follows, these values are retained for the two methods.
830 \subsection{Security Evaluation}
831 As in the previous section, the BOSS contest database has been retained.
832 To achieve a complete comparison with other steganographic tools,
833 the whole database of 10,000 images has been used.
The Ensemble Classifier with SRM features is again used to evaluate the security of the approach.
836 We have chosen 4 different payloads, 0.1, 0.2, 0.3, and 0.4, as in many steganographic evaluations.
837 Three values are systematically given for each experiment:
838 the area under the ROC curve (AUC),
839 the average testing error (ATE),
and the out-of-bag error (OOB).
842 All the results are summarized in Table~\ref{table:experiments:summary}.
843 %and in Figure~\ref{fig1}, Figure~\ref{fig2}, and Figure~\ref{fig3}.
Let us analyse these experimental results.
The security level of our approaches is often lower than the one observed with state-of-the-art tools:
for instance with payload $\alpha=0.1$, the most secure approach is WOW,
with an average testing error equal to 0.43, whereas our $Ky$ based approach reaches 0.38.
However, these results are promising for two reasons.
First, for the largest payload, the $Ky$ based approach offers more resistance towards the Ensemble Classifier than HUGO does.
Secondly, without any optimisation, our approaches are not far from the state-of-the-art steganographic tools.
Finally, we explain the lack of security of the $Ko$ based approach for large payloads as follows:
its second order derivatives are directly extracted from a polynomial interpolation,
and this straightforward construction induces large deviations between the polynomial $L$ and
the pixel function $P$.
\begin{table}
\caption{Summary of experiments}\label{table:experiments:summary}
\centering
\begin{tabular}{|l|l|l|l|l|}
\hline
& Payload & AUC & ATE & OOB \\ \hline
{WOW} & 0.1 & 0.6501 & 0.4304 & 0.3974\\
& 0.2 & 0.7583 & 0.3613 & 0.3169\\
& 0.3 & 0.8355 & 0.2982 & 0.2488\\
& 0.4 & 0.8876 & 0.2449 & 0.1978\\
\hline
{SUNIWARD} & 0.1 & 0.6542 & 0.4212 & 0.3972\\
& 0.2 & 0.7607 & 0.3493 & 0.3170\\
& 0.3 & 0.8390 & 0.2863 & 0.2511\\
& 0.4 & 0.8916 & 0.2319 & 0.1977\\
\hline
{MVG} & 0.1 & 0.6340 & 0.4310 & 0.4124 \\
& 0.2 & 0.7271 & 0.3726 & 0.3399 \\
& 0.3 & 0.7962 & 0.3185 & 0.2858\\
& 0.4 & 0.8486 & 0.2719 & 0.2353 \\
\hline
{HUGO} & 0.1 & 0.6967 & 0.3982 & 0.3626 \\
& 0.2 & 0.8012 & 0.3197 & 0.2847 \\
& 0.3 & 0.8720 & 0.2557 & 0.2212 \\
& 0.4 & 0.9517 & 0.1472 & 0.1230 \\
\hline
{$Ky$ based approach} & 0.1 & 0.7378 & 0.3768 & 0.3306 \\
& 0.2 & 0.8568 & 0.2839 & 0.2408 \\
& 0.3 & 0.9176 & 0.2156 & 0.1710 \\
& 0.4 & 0.9473 & 0.1638 & 0.1324\\
\hline
{$Ko$ based approach} & 0.1 & 0.6831 & 0.3696 & 0.3450 \\
& 0.2 & 0.8524 & 0.1302 & 0.2408 \\
& 0.3 & 0.9132 & 0.1023 & 0.1045 \\
& 0.4 & 0.9890 & 0.0880 & 0.0570 \\
\hline
\end{tabular}
\end{table}
907 %\includegraphics[width=9cm]{22}
908 %\caption{Average Testing Error(ATE) }
914 %\includegraphics[width=9cm]{11}
915 %\caption{Out of Bag($E_{\textit{OOB}}$)}
921 %\includegraphics[width=9cm]{33}
922 %\caption{Area Under Curve(AUC)} %\label{fig3}
The first contribution of this chapter is a distortion
function based on second order derivatives. These
partial derivatives make it possible to accurately compute
the level curves, and thus to favor the pixels that do not have
clean level curves.
Two approaches to build these derivatives have been proposed.
The first one revisits kernels usually embedded
in edge detection algorithms.
The second one is based on a polynomial approximation
of the image.
These two methods have been completely implemented.
The first experiments have shown that the resulting security level
is slightly below the one of the most secure schemes. These first promising results encourage us to investigate this research direction more deeply.

Future work aiming at improving the security of this proposal is planned as follows. We first want to focus on other approaches to provide second order derivatives with a larger discrimination power.
Then, the objective will be to investigate whether the H\"older norm is optimal to avoid null second order derivatives while giving priority to the largest second order values.