For all the experiments, the whole set of 10,000 images
of the BOSS contest~\cite{Boss10} database is used.
In this set, each cover is a $512\times 512$
grayscale digital image in a RAW format.
We restrict the experiments to
this set of cover images since this paper focuses more on
the methodology than on benchmarking.


\subsection{Adaptive Embedding Rate}

Two strategies have been developed in our scheme, depending on whether the embedding rate is \emph{adaptive} or \emph{fixed}.

In the former, the embedding rate depends on the number of edge pixels:
the larger this number, the longer the message that can be inserted.
Practically, a set of edge pixels is computed with the
Canny algorithm using a high threshold.
The message length is then defined as half the cardinality of this set.
In this strategy, two methods are applied to select the bits that
are modified. The first one is a direct application of the STC algorithm;
this method is further referred to as \emph{adaptive+STC}.
The second one randomly chooses the subset of pixels to modify by
applying the BBS PRNG again; this method is denoted \emph{adaptive+sample}.
Notice that the ratio between the number of
available bits and the message length is thus always equal to 2.
This constraint is induced by the fact that the efficiency
of the STC algorithm is unsatisfactory below this threshold.
In our experiments with the adaptive scheme,
the average size of the message that can be embedded is 16,445 bits,
which corresponds to an average payload of 6.35\%.


In the latter, the embedding rate is defined as a fixed percentage relating
the number of modified pixels to the length of the bit message.
This is the classical approach adopted in steganography.
Practically, the Canny algorithm generates
a set of edge pixels related to a threshold that is decreased until the
cardinality of this set is sufficient. If this cardinality is more than twice
the bit message length, an STC step is again applied;
otherwise, pixels are again randomly chosen with BBS.
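
The following sketch illustrates, in Python, the carrier-selection logic
described above. It is only an illustration, not the STABYLO implementation
itself: OpenCV's \texttt{Canny} detector stands for the edge-detection step,
the threshold values and the BBS modulus are arbitrary toy parameters, and the
STC encoder is deliberately left out and only mentioned in a comment.

\begin{verbatim}
# Illustrative sketch of the adaptive and fixed embedding-rate strategies.
# Assumptions: OpenCV's Canny detector, toy BBS parameters, and no real
# STC encoder (the STC step is only indicated by a comment).
import cv2
import numpy as np

def bbs_bits(seed, n, m=7 * 19):
    # toy Blum-Blum-Shub stream; m is a product of two small Blum primes
    x, bits = seed % m, []
    for _ in range(n):
        x = (x * x) % m
        bits.append(x & 1)
    return bits

def edge_pixels(cover, high_threshold):
    """Coordinates of the Canny edge pixels for a given high threshold."""
    edges = cv2.Canny(cover, high_threshold // 2, high_threshold)
    return np.argwhere(edges > 0)

def adaptive_capacity(cover, high_threshold=160):
    """Adaptive strategy: the message length is half the edge-set size."""
    pixels = edge_pixels(cover, high_threshold)
    return pixels, len(pixels) // 2      # available bits / message bits = 2

def fixed_capacity(cover, message_length, threshold=160, step=10):
    """Fixed strategy: lower the threshold until the edge set is large enough."""
    pixels = edge_pixels(cover, threshold)
    while len(pixels) < 2 * message_length and threshold > step:
        threshold -= step
        pixels = edge_pixels(cover, threshold)
    return pixels

def sample_carriers(pixels, message_length, seed=1048583):
    """'+ sample' path: pick the pixels to modify with the BBS stream.
    The '+ STC' path would instead run the STC encoder on the LSBs of
    all edge pixels, which is out of scope for this sketch."""
    mask = bbs_bits(seed, len(pixels))
    chosen = [tuple(p) for p, b in zip(pixels, mask) if b]
    return chosen[:message_length]
\end{verbatim}

In both strategies, the constraint that the edge set contains at least twice
as many candidate pixels as the message has bits is what keeps the STC step
efficient, as explained above.
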
\subsection{Image Quality}
The visual quality of the STABYLO scheme is evaluated in this section.
For the sake of completeness, four metrics are computed in these experiments:
the Peak Signal to Noise Ratio (PSNR),
the PSNR-HVS-M family~\cite{psnrhvsm11},
the BIQI~\cite{MB10}, and
the weighted PSNR (wPSNR)~\cite{DBLP:conf/ih/PereiraVMMP01}.
The first one is widely used but does not take the
Human Visual System (HVS) into account.
The other ones have been designed to tackle this problem.

\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c||c|c|c|}
\hline
Schemes & \multicolumn{3}{|c|}{STABYLO} & \multicolumn{2}{|c|}{HUGO}\\
\hline
Embedding & \multicolumn{2}{|c||}{Adaptive} & Fixed & \multicolumn{2}{|c|}{Fixed} \\
\hline
Rate & + STC & + sample & 10\% & 10\% & 6.35\%\\
\hline
PSNR & 66.55 & 63.48 & 61.86 & 64.65 & 67.08 \\
\hline
PSNR-HVS-M & 78.6 & 75.39 & 72.9 & 76.67 & 79.23 \\
\hline
BIQI & 28.3 & 28.28 & 28.4 & 28.28 & 28.28 \\
\hline
wPSNR & 86.43 & 80.59 & 77.47 & 83.03 & 87.8 \\
\hline
\end{tabular}
\end{center}
\caption{Quality Measures of Steganography Approaches\label{table:quality}}
\end{table*}

Let us give an interpretation of these experiments.
First of all, the adaptive strategy produces images with lower distortion
than those resulting from the 10\% fixed strategy.
The numerical results are indeed always greater for the former strategy than
for the latter, except for the BIQI metric, where the differences are not
really significant.
These results are not surprising since the adaptive strategy
embeds messages whose length is decided according to a higher threshold
in the edge detection.
Let us now focus on the quality of the HUGO images: for a given fixed
embedding rate (10\%),
HUGO always produces images whose quality is higher than STABYLO's.
However, with the \emph{adaptive+STC} strategy, our approach provides
results equivalent to those of HUGO with an average embedding rate set to
6.35\%, and it does so in a lightweight manner, as motivated in the
introduction.


Let us now compare the STABYLO approach with other edge-based steganography
schemes with respect to image quality.
First of all, the Edge Adaptive
scheme detailed in~\cite{Luo:2010:EAI:1824719.1824720},
executed with a 10\% embedding rate,
has the same PSNR but a lower wPSNR than ours:
these two metrics are respectively equal to 61.9 and 68.9.
Next, both approaches~\cite{DBLP:journals/eswa/ChenCL10,Chang20101286}
focus on increasing the payload while keeping the PSNR acceptable, but do not
give quality metrics for fixed embedding rates over a large base of images.
Our approach outperforms the former thanks to the introduction of the STC
algorithm.
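
As a complement, the sketch below recalls how the most standard of these
metrics, the PSNR, can be computed on a cover/stego pair; the perceptual
variants (wPSNR, PSNR-HVS-M) follow the same pattern but weight or model the
error according to the HVS. This code is only illustrative and is not the one
used to produce Table~\ref{table:quality}.

\begin{verbatim}
# Illustrative PSNR computation on a cover/stego pair of 8-bit grayscale
# images; the perceptual metrics of the quality table additionally weight
# or model the error with respect to the Human Visual System.
import numpy as np

def psnr(cover, stego, peak=255.0):
    """Peak Signal to Noise Ratio in dB between two images of equal shape."""
    cover = cover.astype(np.float64)
    stego = stego.astype(np.float64)
    mse = np.mean((cover - stego) ** 2)
    if mse == 0:                      # identical images: PSNR is infinite
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: flipping the LSB of every tenth pixel of a random 512x512 cover.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
    stego = cover.copy()
    stego[::10, ::10] ^= 1            # toy embedding: LSB flips on a grid
    print(f"PSNR = {psnr(cover, stego):.2f} dB")
\end{verbatim}
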
\subsection{Steganalysis}
The quality of our approach has been evaluated against two
steganalysers: AUMP~\cite{Fillatre:2012:ASL:2333143.2333587}
and the Ensemble Classifier~\cite{DBLP:journals/tifs/KodovskyFH12}.
Both aim at detecting hidden bits in grayscale natural images and are
considered as state-of-the-art steganalysers in the spatial domain~\cite{FK12}.
The former approach is based on a simplified parametric model of natural
images. Its parameters are first estimated, and an adaptive Asymptotically
Uniformly Most Powerful (AUMP) test is designed (theoretically and
practically) to decide whether an image has stego content or not.
In the latter, the authors show that the machine learning step, which is
often implemented as a support vector machine,
can favorably be carried out by an ensemble classifier.


\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Schemes & \multicolumn{3}{|c|}{STABYLO} & \multicolumn{2}{|c|}{HUGO}\\
\hline
Embedding & \multicolumn{2}{|c|}{Adaptive} & Fixed & \multicolumn{2}{|c|}{Fixed} \\
\hline
Rate & + STC & + sample & 10\% & 10\% & 6.35\%\\
\hline
AUMP & 0.39 & 0.33 & 0.22 & 0.50 & 0.50 \\
\hline
Ensemble Classifier & 0.47 & 0.44 & 0.35 & 0.48 & 0.49 \\
\hline
\end{tabular}
\end{center}
\caption{Steganalysing STABYLO\label{table:steganalyse}}
\end{table*}


These results show that our approach is more easily detectable than HUGO,
which is, as far as we know, the most secure steganographic tool. However,
due to the huge number of features it integrates, HUGO is not lightweight,
which justifies, in the authors' opinion, the interest of the proposed method.
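
To make the scores of Table~\ref{table:steganalyse} concrete, the sketch
below outlines, under strong simplifying assumptions, how a detection score
of this kind can be estimated: features are extracted from cover and stego
images, a classifier is trained on one half of the base, and its error is
measured on the other half. The feature set, the \texttt{BaggingClassifier}
stand-in, and all parameters are illustrative only; they reproduce neither
the AUMP test nor the Ensemble Classifier used above.

\begin{verbatim}
# Highly simplified stand-in for a steganalysis evaluation protocol: it is
# neither the AUMP test nor the Ensemble Classifier, but shows how a
# detection-error score can be obtained from cover/stego image pairs.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

def features(image, bins=16):
    """Deliberately simple spatial feature: histogram of horizontal pixel
    differences (real steganalysers use much richer feature sets)."""
    diff = np.diff(image.astype(np.int16), axis=1).ravel()
    hist, _ = np.histogram(diff, bins=bins, range=(-8, 8), density=True)
    return hist

def detection_error(covers, stegos, seed=0):
    """Classification error on held-out cover/stego images."""
    X = np.array([features(i) for i in covers] +
                 [features(i) for i in stegos])
    y = np.array([0] * len(covers) + [1] * len(stegos))
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5,
                                          random_state=seed)
    clf = BaggingClassifier(n_estimators=50, random_state=seed).fit(Xtr, ytr)
    return 1.0 - clf.score(Xte, yte)
\end{verbatim}

With this convention, a value close to 0.5 means that the classifier does no
better than random guessing on the held-out images.
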