All the experiments have been carried out on the whole set of 10,000 images
of the BOSS contest~\cite{Boss10} database.
Each cover in this set is a $512\times 512$
grayscale digital image in RAW format.
We restrict the experiments to
this set of cover images since this paper focuses on
the methodology rather than on benchmarking.
Our approach is always compared to HUGO~\cite{DBLP:conf/ih/PevnyFB10}
and to EAISLSBMR~\cite{Luo:2010:EAI:1824719.1824720}.
The former is the least detectable information hiding tool in the spatial
domain and the latter is, as far as we know, the work closest to ours.

First of all, with the adaptive scheme, the average size of the message
that can be embedded in our experiments is 16,445 bits,
which corresponds to an average payload of 6.35\%.
The two other tools are then compared with this payload.
Sections~\ref{sub:quality} and~\ref{sub:steg} present
the quality analysis and the security of our scheme, respectively.


\subsection{Image quality}\label{sub:quality}

The visual quality of the STABYLO scheme is evaluated in this section.
For the sake of completeness, three metrics are computed in these experiments:
the Peak Signal to Noise Ratio (PSNR),
the PSNR-HVS-M~\cite{psnrhvsm11},
%the BIQI~\cite{MB10},
and
the weighted PSNR (wPSNR)~\cite{DBLP:conf/ih/PereiraVMMP01}.
The first one is widely used but does not take the
Human Visual System (HVS) into account.
The other two have been designed to address this issue.
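As a minimal illustration of the first metric (this sketch is ours and is not
part of the STABYLO implementation), the PSNR between an 8-bit grayscale cover
and its stego counterpart can be computed as follows, assuming both images are
loaded as \texttt{numpy} arrays; the PSNR-HVS-M and wPSNR metrics follow the
more elaborate models of the references above and are not reproduced here.

\begin{verbatim}
import numpy as np

def psnr(cover, stego):
    # Mean squared error between the two 8-bit grayscale images.
    diff = cover.astype(np.float64) - stego.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float('inf')  # identical images
    # 255 is the peak value of an 8-bit image.
    return 10 * np.log10(255.0 ** 2 / mse)
\end{verbatim}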
When applied to the running example, the PSNR, PSNR-HVS-M, and wPSNR values
for the stego Lena are respectively equal to
68.39, 79.85, and 89.71 when $b$ is equal to 7.
When $b$ is 6, these values are respectively equal to
65.43, 77.2, and 89.35.


\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Schemes & \multicolumn{4}{|c|}{STABYLO} & \multicolumn{2}{|c|}{HUGO}& \multicolumn{2}{|c|}{EAISLSBMR} \\
\hline
Embedding & Fixed & \multicolumn{3}{|c|}{Adaptive (about 6.35\%)} & \multicolumn{2}{|c|}{Fixed}& \multicolumn{2}{|c|}{Fixed} \\
\hline
Rate & 10\% & + sample & +STC(7) & +STC(6) & 10\% & 6.35\% & 10\% & 6.35\%\\
\hline
PSNR & 61.86 & 63.48 & 66.55 (\textbf{-0.8\%}) & 63.7 & 64.65 & 67.08 & 60.8 & 62.9\\
\hline
PSNR-HVS-M & 72.9 & 75.39 & 78.6 (\textbf{-0.8\%}) & 75.5 & 76.67 & 79.23 & 71.8 & 74.3\\
%\hline
%BIQI & 28.3 & 28.28 & 28.4 & 28.28 & 28.28 & 28.2 & 28.2\\
\hline
wPSNR & 77.47 & 80.59 & 86.43 (\textbf{-1.6\%}) & 86.28 & 83.03 & 87.8 & 76.7 & 80.6\\
\hline
\end{tabular}

\begin{footnotesize}
\vspace{2em}
The relative differences given in bold font express the quality gap between
STABYLO with the STC+adaptive parameters and HUGO.
\end{footnotesize}

\end{center}
\caption{Quality measures of steganography approaches\label{table:quality}}
\end{table*}


Results are summarized in Table~\ref{table:quality}.
Let us give an interpretation of these experiments.
First of all, the adaptive strategy produces images with lower distortion
than those resulting from the 10\% fixed strategy.
Numerical results are indeed always higher for the former strategy than
for the latter one.
These results are not surprising: the adaptive strategy embeds messages
whose length is determined by a higher edge detection threshold, and
shorter messages lead to fewer modified pixels.
Let us now focus on the quality of HUGO images: for a given fixed
embedding rate (10\%),
HUGO always produces images whose quality is higher than STABYLO's.
However, our approach always outperforms EAISLSBMR, since the latter may
modify the two least significant bits.

If we combine the \emph{adaptive} and \emph{STC} strategies
(which leads to an average embedding rate of 6.35\%),
our approach provides metrics equivalent to those of HUGO.
In the corresponding columns, STC(7) stands for embedding data in the LSB only,
whereas with STC(6) data are hidden in the two least significant bits.
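As a rough back-of-the-envelope check (this reasoning is ours and is not part
of the evaluation protocol), assume that every embedding change is a single
LSB flip, that is, a $\pm 1$ modification of an 8-bit pixel. A fraction
$\alpha$ of modified pixels then yields a mean squared error equal to
$\alpha$, so that
\[
\mathrm{PSNR} = 10\log_{10}\!\left(\frac{255^2}{\alpha}\right)
\quad\Longleftrightarrow\quad
\alpha = 255^2 \times 10^{-\mathrm{PSNR}/10}.
\]
For the adaptive STC(7) column, $\mathrm{PSNR}=66.55$ corresponds to
$\alpha \approx 1.4\%$ of modified pixels, noticeably fewer than the roughly
$6.35/2 \approx 3.2\%$ that plain LSB replacement of a 6.35\% payload would
require on average, which illustrates the embedding efficiency brought by the
STC algorithm.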
The quality difference between HUGO and STABYLO with the STC+adaptive
parameters is given in bold font in Table~\ref{table:quality}.
It is always close to 1\%, which confirms
the objective stated in the motivations:
providing an efficient steganography approach in a lightweight manner.


Let us now compare the STABYLO approach with other edge-based steganography
approaches, namely~\cite{DBLP:journals/eswa/ChenCL10,Chang20101286}.
These two schemes focus on increasing the
payload while keeping an acceptable PSNR, but do not
give quality metrics for fixed embedding rates over a large base of images.




\subsection{Steganalysis}\label{sub:steg}



The steganalysis quality of our approach has been evaluated through the
AUMP~\cite{Fillatre:2012:ASL:2333143.2333587} and
Ensemble Classifier~\cite{DBLP:journals/tifs/KodovskyFH12} based steganalysers.
Both aim at detecting hidden bits in grayscale natural images and are
considered as state of the art steganalysers in the spatial domain~\cite{FK12}.

The former approach is based on a simplified parametric model of natural
images. Parameters are firstly estimated and an adaptive Asymptotically
Uniformly Most Powerful (AUMP) test is designed, both theoretically and
practically, to check whether an image has stego content or not.
This approach is dedicated to detecting whether the LSBs have been modified.
In the latter, the authors show that the
machine learning step, which is often
implemented as a support vector machine,
can be favorably executed thanks to an ensemble classifier.
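To make the evaluation protocol concrete, the following Python sketch shows
how an average testing error can be estimated from pre-extracted feature
vectors. It is only a rough approximation of the second steganalyser:
scikit-learn's bagging of linear discriminants is used as a stand-in for the
FLD ensemble of~\cite{DBLP:journals/tifs/KodovskyFH12}, the function name is
ours, and \texttt{cover\_features} and \texttt{stego\_features} are
placeholders for features extracted from the cover and stego images. For this
error measure, a value close to 0.5 means that the detector performs no better
than random guessing.

\begin{verbatim}
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

def average_testing_error(cover_features, stego_features, seed=0):
    # Covers are labelled 0, stego images 1.
    X = np.vstack([cover_features, stego_features])
    y = np.hstack([np.zeros(len(cover_features)),
                   np.ones(len(stego_features))])
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=seed)
    # Ensemble of weak linear learners trained on random feature
    # subspaces, in the spirit of the FLD ensemble.
    clf = BaggingClassifier(LinearDiscriminantAnalysis(),
                            n_estimators=50, max_features=0.1,
                            random_state=seed)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    # Average of false-alarm and missed-detection rates:
    # 0.5 corresponds to random guessing.
    p_fa = np.mean(y_pred[y_test == 0] == 1)
    p_md = np.mean(y_pred[y_test == 1] == 0)
    return (p_fa + p_md) / 2
\end{verbatim}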
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Schemes & \multicolumn{4}{|c|}{STABYLO} & \multicolumn{2}{|c|}{HUGO}& \multicolumn{2}{|c|}{EAISLSBMR}\\
\hline
Embedding & Fixed & \multicolumn{3}{|c|}{Adaptive (about 6.35\%)} & \multicolumn{2}{|c|}{Fixed}& \multicolumn{2}{|c|}{Fixed} \\
\hline
Rate & 10\% & + sample & +STC(7) & +STC(6) & 10\% & 6.35\% & 10\% & 6.35\%\\
\hline
AUMP & 0.22 & 0.33 & 0.39 & 0.45 & 0.50 & 0.50 & 0.49 & 0.50 \\
\hline
Ensemble Classifier & 0.35 & 0.44 & 0.47 & 0.47 & 0.48 & 0.49 & 0.43 & 0.46 \\
\hline
\end{tabular}
\end{center}
\caption{Steganalysing STABYLO\label{table:steganalyse}}
\end{table*}


Results are summarized in Table~\ref{table:steganalyse}.
First of all, the STC strategy outperforms the sample strategy for both
steganalysers, as already noticed in the quality analysis of the previous
section.
Next, our approach is more easily detectable than HUGO, which is,
as far as we know, the most secure steganographic tool.
However, by combining the \emph{adaptive} and \emph{STC} strategies,
our approach obtains results similar to those of HUGO.
Finally, due to the huge number of features it integrates, HUGO is not
lightweight, which, in the authors' opinion, justifies the interest of the
proposed method.