X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/canny.git/blobdiff_plain/015cc4483b5fad96f800f465951715ed02163ffc..ef06623aa40e69d9e5332208f9ead5af2e7ea4b6:/experiments.tex?ds=inline

diff --git a/experiments.tex b/experiments.tex
index 776a5a5..6660072 100644
--- a/experiments.tex
+++ b/experiments.tex
@@ -1,27 +1,47 @@
-For whole experiments, the whole set of 10,000 images
+For all the experiments, the whole set of 10,000 images
 of the BOSS contest~\cite{Boss10} database is taken.
 In this set, each cover is a $512\times 512$ grayscale digital image
 in a RAW format.
 We restrict experiments to this set of cover images since this paper
 is more focused on
-the methodology than benchmarking.
-We use the matrices given in table~\ref{table:matrices:H}
-as introduced in~\cite{}, since these ones have experimentally
+the methodology than on benchmarks.
+
+We use the matrices $\hat{H}$
+generated by the integers given
+in Table~\ref{table:matrices:H},
+as introduced in~\cite{FillerJF11}, since they have experimentally
 been proven to have the best modification efficiency.
+For instance, if the ratio between the size of the message and the size of the
+cover vector is 1/4, each number in $\{81, 95, 107, 121\}$ is translated into a
+binary number, and each of them then constitutes a column of $\hat{H}$
+(see the construction sketch below).

 \begin{table}
 $$
 \begin{array}{|l|l|}
-\textrm{rate} & \textrm{matrix generators} \\
-$\frac{1}{2} & \{71,109\}
-$\frac{1}{3} & \{95, 101, 121\}
-$\frac{1}{4} & \{81, 95, 107, 121\}
-$\frac{1}{5} & \{75, 95, 97, 105, 117\}
-$\frac{1}{6} & \{73, 83, 95, 103, 109, 123\}
-$\frac{1}{7} & \{69, 77, 93, 107, 111, 115, 121\}
-$\frac{1}{8} & \{69, 79, 81, 89, 93, 99, 107, 119\}
-$\frac{1}{9} & \{69, 79, 81, 89, 93, 99, 107, 119, 125]
-
+\hline
+\textrm{Rate} & \textrm{Matrix generators} \\
+\hline
+{1}/{2} & \{71,109\}\\
+\hline
+{1}/{3} & \{95, 101, 121\}\\
+\hline
+{1}/{4} & \{81, 95, 107, 121\}\\
+\hline
+{1}/{5} & \{75, 95, 97, 105, 117\}\\
+\hline
+{1}/{6} & \{73, 83, 95, 103, 109, 123\}\\
+\hline
+{1}/{7} & \{69, 77, 93, 107, 111, 115, 121\}\\
+\hline
+{1}/{8} & \{69, 79, 81, 89, 93, 99, 107, 119\}\\
+\hline
+{1}/{9} & \{69, 79, 81, 89, 93, 99, 107, 119, 125\}\\
+\hline
+\end{array}
+$$
+\caption{Matrix generators for $\hat{H}$ in STC}\label{table:matrices:H}
+\end{table}

 Our approach is always compared to Hugo~\cite{DBLP:conf/ih/PevnyFB10}
@@ -33,7 +53,7 @@ and the latter is the work that is the closest to ours, as far as we know.
 First of all, in our experiments and with the adaptive scheme, the average
 size of the message that can be embedded is 16,445 bits.
-Its corresponds to an average payload of 6.35\%.
+It corresponds to an average payload of 6.35\%.
 The two other tools will then be compared using this payload.
 Sections~\ref{sub:quality} and~\ref{sub:steg} respectively present
 the quality analysis and the security of our scheme.
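A minimal construction sketch for the columns of $\hat{H}$ described in the first hunk above, assuming each generator is written on $b = 7$ bits (every listed generator lies in $[64, 127]$) with the most significant bit in the first row; the bit ordering inside a column, the helper name, and the use of NumPy are illustrative choices, not taken from the paper or its code.

# Illustrative sketch only: build the STC submatrix H_hat for rate 1/4
# from the generators {81, 95, 107, 121} of Table "table:matrices:H".
# Assumption: b = 7 bits per generator, most significant bit in row 0.
import numpy as np

def build_h_hat(generators, b=7):
    # The j-th column is the b-bit binary expansion of generators[j].
    columns = [[(g >> (b - 1 - i)) & 1 for i in range(b)] for g in generators]
    return np.array(columns, dtype=np.uint8).T

print(build_h_hat([81, 95, 107, 121]))
# 81 -> 1010001, 95 -> 1011111, 107 -> 1101011, 121 -> 1111001 (one per column)

For the rate-1/4 generators this yields a $7 \times 4$ binary matrix, one column per generator; the same construction applies to the other rows of Table~\ref{table:matrices:H}.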
@@ -65,24 +85,25 @@ If $b$ is 6, these values are respectively equal to
 \begin{table*}
 \begin{center}
-\begin{tabular}{|c|c|c||c|c|c|c|c|c|}
+\begin{small}
+\begin{tabular}{|c|c|c||c|c|c|c|c|c|c|c|c|c|}
 \hline
-Schemes & \multicolumn{4}{|c|}{STABYLO} & \multicolumn{2}{|c|}{HUGO}& \multicolumn{2}{|c|}{EAISLSBMR} \\
+Schemes & \multicolumn{4}{|c|}{STABYLO} & \multicolumn{2}{|c|}{HUGO} & \multicolumn{2}{|c|}{EAISLSBMR} & \multicolumn{2}{|c|}{WOW} & \multicolumn{2}{|c|}{UNIWARD}\\
 \hline
-Embedding & Fixed & \multicolumn{3}{|c|}{Adaptive (about 6.35\%)} & \multicolumn{2}{|c|}{Fixed}& \multicolumn{2}{|c|}{Fixed} \\
+Embedding & Fixed & \multicolumn{3}{|c|}{Adaptive (about 6.35\%)} & Fixed & Adaptive & Fixed & Adaptive & Fixed & Adaptive & Fixed & Adaptive \\
 \hline
-Rate & 10\% & + sample & +STC(7) & +STC(6) & 10\%&6.35\%& 10\%&6.35\%\\
+Rate & 10\% & + sample & +STC(7) & +STC(6) & 10\% & 6.35\% & 10\% & 6.35\% & 10\% & 6.35\% & 10\% & 6.35\%\\
 \hline
-PSNR & 61.86 & 63.48 & 66.55 (\textbf{-0.8\%}) & 63.7 & 64.65 & {67.08} & 60.8 & 62.9\\
+PSNR & 61.86 & 63.48 & 66.55 (\textbf{-0.8\%}) & 63.7 & 64.65 & {67.08} & 60.8 & 62.9 & 65.9 & 68.3 & 65.8 & 69.2\\
 \hline
 PSNR-HVS-M & 72.9 & 75.39 & 78.6 (\textbf{-0.8\%}) & 75.5 & 76.67 & {79.23} & 71.8 & 74.3\\
 %\hline
 %BIQI & 28.3 & 28.28 & 28.4 & 28.28 & 28.28 & 28.2 & 28.2\\
 \hline
-wPSNR & 77.47 & 80.59 & 86.43(\textbf{-1.6\%})& 86.28 & 83.03 & {87.8} & 76.7 & 80.6\\
+wPSNR & 77.47 & 80.59 & 86.43 (\textbf{-1.6\%}) & 86.28 & 83.03 & {88.6} & 76.7 & 83 & 83.8 & 90.4 & 85.2 & 91.9\\
 \hline
 \end{tabular}
-
+\end{small}
 \begin{footnotesize}
 \vspace{2em}
 The values given in bold font express the quality differences between
@@ -114,14 +135,14 @@ If we combine \emph{adaptive} and \emph{STC} strategies (which leads
 to an average embedding rate equal to 6.35\%) our approach provides metrics
 equivalent to those provided by HUGO.
 In this table, STC(7) stands for embedding data in the LSB, whereas
-in STC(6), data are hidden in the two last significant bits.
+in STC(6), data are hidden in the two least significant bits.
 The quality difference between HUGO and STABYLO for these parameters
 is given in bold font. It is always close to 1\%, which confirms
 the objective presented in the motivations:
-providing an efficient steganography approach with a lightweight manner.
+providing an efficient steganography approach in a lightweight manner.


 Let us now compare the STABYLO approach with other edge-based steganography
@@ -137,54 +158,61 @@ give quality metrics for fixed embedding rates from a large base of images.


-The steganalysis quality of our approach has been evaluated through the two
-AUMP~\cite{Fillatre:2012:ASL:2333143.2333587}
-and Ensemble Classifier~\cite{DBLP:journals/tifs/KodovskyFH12} based steganalysers.
-Both aim at detecting hidden bits in grayscale natural images and are
+The steganalysis quality of our approach has been evaluated through the % two
+% AUMP~\cite{Fillatre:2012:ASL:2333143.2333587}
+% and
+Ensemble Classifier~\cite{DBLP:journals/tifs/KodovskyFH12}-based steganalyser.
+This approach aims at detecting hidden bits in grayscale natural
+images and is
 considered as a state-of-the-art steganalyser in the spatial domain~\cite{FK12}.
-The former approach is based on a simplified parametric model of natural images.
-Parameters are firstly estimated and an adaptive Asymptotically Uniformly Most Powerful
-(AUMP) test is designed (theoretically and practically), to check whether
-an image has stego content or not.
-This approach is dedicated to verify whether LSB has been modified or not.
-In the latter, the authors show that the
-machine learning step, which is often
-implemented as a support vector machine,
-can be favorably executed thanks to an ensemble classifier.
+%The former approach is based on a simplified parametric model of natural images.
+% Parameters are firstly estimated and an adaptive Asymptotically Uniformly Most Powerful
+% (AUMP) test is designed (theoretically and practically), to check whether
+% an image has stego content or not.
+% This approach is dedicated to verify whether LSB has been modified or not.
+% , the authors show that the
+% machine learning step, which is often
+% implemented as a support vector machine,
+% can be favorably executed thanks to an ensemble classifier.


 \begin{table*}
 \begin{center}
-%\begin{small}
-\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
-\hline
-Schemes & \multicolumn{4}{|c|}{STABYLO} & \multicolumn{2}{|c|}{HUGO}& \multicolumn{2}{|c|}{EAISLSBMR}\\
+\begin{small}
+\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
 \hline
-Embedding & Fixed & \multicolumn{3}{|c|}{Adaptive (about 6.35\%)} & \multicolumn{2}{|c|}{Fixed}& \multicolumn{2}{|c|}{Fixed} \\
+Schemes & \multicolumn{4}{|c|}{STABYLO} & \multicolumn{2}{|c|}{HUGO} & \multicolumn{2}{|c|}{EAISLSBMR} & \multicolumn{2}{|c|}{WOW} & \multicolumn{2}{|c|}{UNIWARD}\\
 \hline
-Rate & 10\% & + sample & +STC(7) & +STC(6) & 10\%& 6.35\%& 10\%& 6.35\%\\
+Embedding & Fixed & \multicolumn{3}{|c|}{Adaptive (about 6.35\%)} & Fixed & Adapt. & Fixed & Adapt. & Fixed & Adapt. & Fixed & Adapt. \\
 \hline
-AUMP & 0.22 & 0.33 & 0.39 & 0.45 & 0.50 & 0.50 & 0.49 & 0.50 \\
+Rate & 10\% & + sample & +STC(7) & +STC(6) & 10\% & 6.35\% & 10\% & 6.35\% & 10\% & 6.35\% & 10\% & 6.35\%\\
 \hline
-Ensemble Classifier & 0.35 & 0.44 & 0.47 & 0.47 & 0.48 & 0.49 & 0.43 & 0.46 \\
+%AUMP & 0.22 & 0.33 & 0.39 & 0.45 & 0.50 & 0.50 & 0.49 & 0.50 \\
+%\hline
+Ensemble Classifier & 0.35 & 0.44 & 0.47 & 0.47 & 0.48 & 0.49 & 0.43 & 0.47 & 0.48 & 0.49 & 0.46 & 0.49 \\
 \hline
 \end{tabular}
-%\end{small}
+\end{small}
 \end{center}
 \caption{Steganalysing STABYLO\label{table:steganalyse}}
 \end{table*}


 Results are summarized in Table~\ref{table:steganalyse}.
-First of all, STC outperforms the sample strategy %for % the two steganalysers
+First of all, STC outperforms the sample strategy, as
 already noticed in the quality analysis presented in the previous section.
 Next, our approach is more easily detectable than HUGO, which
 is the most secure steganographic tool, as far as we know.
 However, by combining the \emph{adaptive} and \emph{STC} strategies, our
 approach obtains results similar to those of HUGO.
+%%%% and for b = 6?


+Compared to EAISLSBMR, we obtain better results when the strategy is
+\emph{adaptive}.
 However, due to the huge number of features HUGO has to integrate, it is not
 lightweight, which justifies, in the authors' opinion, the consideration of
 the proposed method.
-
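To make the evaluation behind Table~\ref{table:steganalyse} concrete, the sketch below shows the general shape of such a test: an ensemble of weak linear classifiers is trained to separate cover features from stego features, and the score is read against 0.5, the average detection error reached when the steganalyser does no better than random guessing. This is only a hedged analogue: scikit-learn's bagging of Fisher linear discriminants stands in for the actual Ensemble Classifier of~\cite{DBLP:journals/tifs/KodovskyFH12}, and random placeholder vectors stand in for the real features extracted from cover/stego image pairs.

# Illustrative analogue only: not the Kodovsky et al. ensemble classifier,
# and the feature vectors are random placeholders for real image features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 2000, 100                                # images per class, feature dimension
cover_features = rng.normal(0.00, 1.0, (n, d))  # placeholder cover features
stego_features = rng.normal(0.05, 1.0, (n, d))  # placeholder stego features

X = np.vstack([cover_features, stego_features])
y = np.hstack([np.zeros(n), np.ones(n)])        # 0 = cover, 1 = stego
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Ensemble of weak Fisher linear discriminants, each trained on a random
# subset of the features (in the spirit of an ensemble classifier).
ensemble = BaggingClassifier(LinearDiscriminantAnalysis(),
                             n_estimators=30, max_features=0.3, random_state=0)
ensemble.fit(X_tr, y_tr)
pred = ensemble.predict(X_te)

p_fa = np.mean(pred[y_te == 0] == 1)            # false alarms on covers
p_md = np.mean(pred[y_te == 1] == 0)            # missed detections on stegos
print("average detection error:", 0.5 * (p_fa + p_md))  # 0.5 = undetectable

In a real run, the placeholders would be replaced by the feature vectors computed on the 10,000 BOSS covers and on the stego images produced by each scheme at the considered payload.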