state of the art steganography, namely HUGO~\cite{DBLP:conf/ih/PevnyFB10}.
-In what follows, we consider an $n \times n$ square image.
+In what follows, we consider an $n \times n$ square image.
First of all, HUGO starts by computing the second order SPAM features.
This step is in $O(n^2 + 2\times 343^2)$ due to the calculation
of the difference arrays and then of the 686 features (of size 343).
Pixels are thus selected according to their ability to provide
an image whose SPAM features are close to the original one.
The algorithm thus computes a distance between each computed feature,
-andthe original ones
+and the original ones
which is at least in $O(343)$, and an overall distance between these
metrics, which is in $O(686)$. Computing the distance is thus in
$O(2\times 343^2)$, and this modification step
is therefore in $O(2\times 343^2 \times n^2)$.
-Ranking these results may be achieved with a insertion sort which is in
+Ranking these results may be achieved with an insertion sort which is in
$O(2\times n^2 \ln(n))$.
The overall complexity of the pixel selection is thus
$O(n^2 + 2\times 343^2 + 2\times 343^2 \times n^2 + 2\times n^2 \ln(n))$, \textit{i.e.}
our scheme. For instance, for a square image with 4M pixels,
this part of our algorithm is more than 14,100 times faster than HUGO.
-We are then left to express the complexity of the STC algorithm .
+We are then left to express the complexity of the STC algorithm.
According to~\cite{DBLP:journals/tifs/FillerJF11}, it is
in $O(2^h \times n)$ where $h$ is the size of the duplicated
matrix. Its complexity is thus negligible compared with the embedding map
construction.
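For intuition, the two dominant costs quoted above can be compared numerically. The sketch below is only a back-of-the-envelope illustration; taking the STC cover length equal to the $n^2$ pixels and $h=7$ are our assumptions, not values fixed by the text.

```python
# Back-of-the-envelope comparison of the two complexities quoted above.
# Assumptions (ours, for illustration): the STC cover vector has n^2
# elements, and h = 7.
def embedding_map_cost(n):
    # O(2 x 343^2 x n^2): distance computations over all pixels
    return 2 * 343**2 * n * n

def stc_cost(n, h=7):
    # O(2^h x n^2): STC over a cover vector of n^2 elements
    return 2**h * n * n

n = 2048  # a 4-megapixel square image
ratio = embedding_map_cost(n) / stc_cost(n)
print(f"map construction / STC cost ratio: {ratio:.0f}")
```

Under these assumptions the ratio is independent of $n$, which is why the STC step does not affect the overall complexity claim.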
+
Thanks to these complexity results, we claim that STABYLO is lightweight.
grayscale digital image in a RAW format.
We restrict experiments to
this set of cover images since this paper is more focused on
-the methodology than benchmarking.
+the methodology than on benchmarking.
We use the matrices $\hat{H}$
generated by the integers given
in Table~\ref{table:matrices:H}
as introduced in~\cite{FillerJF11}, since these matrices have been
experimentally proven to have the best modification efficiency.
-For instance if the rate between the size of message and the size of the host is
-1/4, each number in $\{81, 95, 107, 121\}$ is translated into a binary number
-and each one consitutes thus an column of $\hat{H}$.
+For instance, if the rate between the size of the message and the size of the
+cover vector
+is 1/4, each number in $\{81, 95, 107, 121\}$ is translated into a binary number
+and each one thus constitutes a column of $\hat{H}$.
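As an illustration of this construction, the sketch below expands each integer into a 7-bit column of $\hat{H}$. The MSB-first bit ordering is our assumption; only the translation of integers into binary columns comes from the text.

```python
# Sketch (assumption: MSB-first ordering): each integer from the table is
# expanded into its h-bit binary representation, and the resulting bit
# vectors form the columns of the sub-matrix H_hat.
def hhat_columns(numbers, h):
    """Return H_hat as a list of rows, built from one column per integer."""
    cols = [[(x >> (h - 1 - i)) & 1 for i in range(h)] for x in numbers]
    # transpose: columns -> rows
    return [[col[r] for col in cols] for r in range(h)]

H_hat = hhat_columns([81, 95, 107, 121], h=7)
for row in H_hat:
    print(row)
```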
\begin{table}
$$
First of all, in our experiments and with the adaptive scheme,
the average size of the message that can be embedded is 16,445 bits.
-Its corresponds to an average payload of 6.35\%.
+It corresponds to an average payload of 6.35\%.
The two other tools will then be compared with this payload.
Sections~\ref{sub:quality} and~\ref{sub:steg} respectively present
the quality analysis and the security of our scheme.
(which leads to an average embedding rate equal to 6.35\%)
our approach provides metrics equivalent to those provided by HUGO.
In this column STC(7) stands for embedding data in the LSB whereas
-in STC(6), data are hidden in the two last significant bits.
+in STC(6), data are hidden in the two least significant bits.
-The steganalysis quality of our approach has been evaluated through the two
-AUMP~\cite{Fillatre:2012:ASL:2333143.2333587}
-and Ensemble Classifier~\cite{DBLP:journals/tifs/KodovskyFH12} based steganalysers.
-Both aim at detecting hidden bits in grayscale natural images and are
+The steganalysis quality of our approach has been evaluated through the % two
+% AUMP~\cite{Fillatre:2012:ASL:2333143.2333587}
+% and
+Ensemble Classifier~\cite{DBLP:journals/tifs/KodovskyFH12} based steganalyser.
+This approach aims at detecting hidden bits in grayscale natural
+images and is
considered as a state of the art steganalyser in the spatial domain~\cite{FK12}.
-The former approach is based on a simplified parametric model of natural images.
-Parameters are firstly estimated and an adaptive Asymptotically Uniformly Most Powerful
-(AUMP) test is designed (theoretically and practically), to check whether
-an image has stego content or not.
-This approach is dedicated to verify whether LSB has been modified or not.
-In the latter, the authors show that the
-machine learning step, which is often
-implemented as a support vector machine,
-can be favorably executed thanks to an ensemble classifier.
+%The former approach is based on a simplified parametric model of natural images.
+% Parameters are firstly estimated and an adaptive Asymptotically Uniformly Most Powerful
+% (AUMP) test is designed (theoretically and practically), to check whether
+% an image has stego content or not.
+% This approach is dedicated to verify whether LSB has been modified or not.
+% , the authors show that the
+% machine learning step, which is often
+% implemented as a support vector machine,
+% can be favorably executed thanks to an ensemble classifier.
\begin{table*}
\hline
Rate & 10\% & + sample & +STC(7) & +STC(6) & 10\%& 6.35\%& 10\%& 6.35\%\\
\hline
-AUMP & 0.22 & 0.33 & 0.39 & 0.45 & 0.50 & 0.50 & 0.49 & 0.50 \\
-\hline
+%AUMP & 0.22 & 0.33 & 0.39 & 0.45 & 0.50 & 0.50 & 0.49 & 0.50 \\
+%\hline
Ensemble Classifier & 0.35 & 0.44 & 0.47 & 0.47 & 0.48 & 0.49 & 0.43 & 0.46 \\
\hline
Results are summarized in Table~\ref{table:steganalyse}.
-First of all, STC outperforms the sample strategy for the two steganalysers, as
+First of all, STC outperforms the sample strategy, % for the two steganalysers,
+as
already noticed in the quality analysis presented in the previous section.
Next, our approach is more easily detectable than HUGO, which
is the most secure steganographic tool, as far as we know.
However, by combining the \emph{adaptive} and \emph{STC} strategies,
our approach obtains results similar to those of HUGO.
+%%%% and for b = 6?
+
+
+Compared to EAISLSBMR, we obtain better results when the strategy is
+\emph{adaptive}.
However, due to its
huge number of integration features, it is not lightweight, which justifies
in the authors' opinion the consideration of the proposed method.
-This research work takes place in the field of information hiding, considerably developed
-these last two decades. The proposed method for
+This research work takes place in the field of information hiding, which has been considerably developed over the last two decades. The proposed method for
steganography considers digital images as covers.
It belongs to the well-known large category
of spatial least significant bits (LSBs) replacement schemes.
-Let us recall that, in this LSBR category, a subset of all the LSBs of the cover image is modified
+Let us recall that, in this LSB replacement category, a subset of all the LSBs of the cover image is modified
with a secret bit stream depending on: a secret key, the cover, and the message to embed.
In this well-studied steganographic approach,
if we consider that a LSB is the last bit of each pixel value,
such schemes may break the
structural symmetry of the host images.
These structural alterations can be detected by
-well-designed statistical investigations, leading to known steganalysis methods~\cite{DBLP:journals/tsp/DumitrescuWW03,DBLP:conf/mmsec/FridrichGD01,Dumitrescu:2005:LSB:1073170.1073176}.
+well-designed statistical investigations, leading to well-known
+steganalysis methods~\cite{DBLP:journals/tsp/DumitrescuWW03,DBLP:conf/mmsec/FridrichGD01,Dumitrescu:2005:LSB:1073170.1073176}.
Let us recall too that this drawback
can be corrected by considering the LSB matching (LSBM) subcategory, in which
$+1$ or $-1$ is randomly added to the cover pixel value
only if its LSB does not correspond to the secret bit.
%TODO: modify this
-By considering well-encrypted hidden messages, probabilities of increasing or of decreasing value of pixels are equal. Then usual statistical approaches
+By considering well-encrypted hidden messages, the probabilities of increasing or decreasing the value of pixels are equal. Then, usual statistical approaches
cannot be applied here to discover stego-contents in LSBM.
The most accurate detectors for this matching are universal steganalysers such as~\cite{LHS08,DBLP:conf/ih/Ker05,FK12},
which classify images according to extracted features from neighboring elements of residual noise.
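The LSBM embedding recalled above can be sketched as follows; this is a minimal illustration of the generic $\pm 1$ mechanism, not the scheme proposed in this paper.

```python
import random

# Minimal LSBM sketch: when a pixel's LSB already equals the secret bit,
# the pixel is untouched; otherwise +1 or -1 is chosen at random
# (clamped so the value stays a valid 8-bit intensity).
def lsbm_embed(pixels, bits, rng=random.Random(42)):
    stego = list(pixels)
    for i, bit in enumerate(bits):
        if stego[i] % 2 != bit:
            delta = rng.choice((-1, 1))
            if stego[i] == 0:
                delta = 1      # cannot go below 0
            elif stego[i] == 255:
                delta = -1     # cannot go above 255
            stego[i] += delta
    return stego

print(lsbm_embed([12, 13, 200, 0, 255], [1, 1, 0, 1, 0]))
```

Because the sign of the change is random, the parity-based statistics that defeat plain LSB replacement do not apply, which is the point made above.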
modification minimizes a distortion function.
This distortion may be computed thanks to feature vectors that are embedded for instance in the steganalysers
referenced above.
-Highly Undetectable steGO (HUGO) method~\cite{DBLP:conf/ih/PevnyFB10} is one of the most efficient instance of such a scheme.
+The Highly Undetectable steGO (HUGO) method~\cite{DBLP:conf/ih/PevnyFB10} is one of the most efficient instances of such a scheme.
It takes into account so-called SPAM features
%(whose size is larger than $10^7$)
to avoid overfitting a particular
Image Steganography based on LSB Matching Revisited, further denoted as
EAISLSBMR. This approach selects sharper edge
regions with respect
-to a given embedding rate: the larger the number of bits to be embedded, the coarser
+to a given embedding rate: the larger the number of bits to be embedded is, the coarser
the edge regions are.
Then the data hiding algorithm is achieved by applying LSBMR on some of the pixels of these regions.
The authors show that their proposed method is more efficient than all the LSB, LSBM, and LSBMR approaches
In the latter, a hybrid edge detector is presented, followed by an ad hoc
embedding.
The edge detection is computed by combining fuzzy logic~\cite{Tyan1993}
-and Canny~\cite{Canny:1986:CAE:11274.11275} approaches. The goal of this combination
+and Canny~\cite{Canny:1986:CAE:11274.11275} approaches.
+The goal of this combination
is to enlarge the set of modified bits to increase the payload of the data hiding scheme.
Section~\ref{sec:ourapproach} presents the details of the proposed steganographic scheme and applies it on a running example. Among its technical description,
its adaptive aspect is emphasized.
Section~\ref{sub:complexity} presents the overall complexity of our approach
-and compare it to the HUGO's one.
-Section~\ref{sec:experiments} shows experiments on image quality, steganalysis evaluation, and compare them to the state of the art steganographic schemes.
+and compares it with HUGO's.
+Section~\ref{sec:experiments} presents experiments on image quality and steganalysis evaluation, and compares our scheme with state of the art steganographic schemes.
Finally, concluding notes and future work are given in Section~\ref{sec:concl}.
Computer Science Laboratory DISC,
University of Franche-Comt\'{e},
Besan\c con, France.}
-\email{\{raphael.couturier, jean-francois.couchot, christophe.guyeux\}@univ-fcomte.fr}
+\email{\{jean-francois.couchot, raphael.couturier, christophe.guyeux\}@univ-fcomte.fr}
\shortauthors{J.-F. Couchot, R. Couturier, and C. Guyeux}
\begin{abstract}
-A novel steganographic method called STABYLO is introduced in
+A new steganographic method called STABYLO is introduced in
this research work.
Its main advantage is to be much lighter than the so-called
Highly Undetectable steGO (HUGO) scheme, a well-known state of the art
\subsection{Security considerations}\label{sub:bbs}
-Among methods of the message encryption/decryption
+Among the methods of message encryption/decryption
(see~\cite{DBLP:journals/ejisec/FontaineG07} for a survey)
we implement the Blum-Goldwasser cryptosystem~\cite{Blum:1985:EPP:19478.19501}
that is based on the Blum Blum Shub~\cite{DBLP:conf/crypto/ShubBB82}
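For reference, the Blum Blum Shub generator iterates modular squaring and emits the least significant bit of the state. The toy parameters below are ours and far too small for any real use; they only illustrate the recurrence $x_{i+1} = x_i^2 \bmod M$ with $M = pq$ and $p, q \equiv 3 \pmod 4$.

```python
# Illustrative sketch of the Blum Blum Shub generator mentioned above
# (not the paper's implementation). M = p * q with p, q primes congruent
# to 3 mod 4; each step squares the state and emits its LSB.
def bbs_bits(seed, p=11, q=19, count=8):
    m = p * q                 # 209; toy-sized modulus, for illustration only
    x = seed % m
    out = []
    for _ in range(count):
        x = (x * x) % m
        out.append(x & 1)
    return out

print(bbs_bits(3))
```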
Many techniques have been proposed in the literature to detect
edges in images (whose noise has been initially reduced).
-They can be separated in two categories: first and second order detection
+They can be separated into two categories: first and second order detection
methods on the one hand, and fuzzy detectors on the other hand~\cite{KF11}.
In first order methods like Sobel, Canny~\cite{Canny:1986:CAE:11274.11275}, \ldots,
a first-order derivative (gradient magnitude, etc.) is computed
a set of edge pixels related to a threshold that is decreased
until its cardinality
is sufficient. Even in this situation, our scheme adapts
-its algorithm to met all the user requirements.
+its algorithm to meet all the user's requirements.
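The threshold-lowering loop described above can be sketched as follows; the helper name, the gradient representation, and the step size are hypothetical choices of ours, not the paper's algorithm.

```python
# Sketch of the adaptive threshold selection described above (hypothetical
# helper): the edge threshold is lowered until enough edge pixels are
# available to carry the message.
def select_edge_pixels(gradient, needed, threshold=128, step=8):
    """gradient: flat list of per-pixel edge magnitudes (our assumption)."""
    while threshold > 0:
        edges = [i for i, g in enumerate(gradient) if g >= threshold]
        if len(edges) >= needed:
            return edges, threshold
        threshold -= step
    return list(range(len(gradient))), 0  # fall back to every pixel

pixels, thr = select_edge_pixels([5, 200, 130, 90, 250, 60, 140, 10], needed=4)
print(pixels, thr)
```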
Once the map of possibly modified pixels is computed,
\begin{flushleft}
\begin{scriptsize}
The skies they were ashen and sober;\linebreak
-$~$ The leaves they were crisped and sere—\linebreak
-$~$ The leaves they were withering and sere;\linebreak
+$\qquad$ The leaves they were crisped and sere—\linebreak
+$\qquad$ The leaves they were withering and sere;\linebreak
It was night in the lonesome October\linebreak
-$~$ Of my most immemorial year;\linebreak
+$\qquad$ Of my most immemorial year;\linebreak
It was hard by the dim lake of Auber,\linebreak
-$~$ In the misty mid region of Weir—\linebreak
+$\qquad$ In the misty mid region of Weir—\linebreak
It was down by the dank tarn of Auber,\linebreak
-$~$ In the ghoul-haunted woodland of Weir.
+$\qquad$ In the ghoul-haunted woodland of Weir.
\end{scriptsize}
\end{flushleft}
\end{minipage}
The edge detection returns 18,641 and 18,455 pixels when $b$ is
respectively 7 and 6. These edges are represented in Figure~\ref{fig:edge}.
-
+When $b$ is 7, one bit per pixel remains available to build the cover vector,
+which leads to a cover vector of size 18,641; when $b$ is 6, two bits per
+pixel remain, giving a cover vector of size 36,910.
\begin{figure}[t]
\begin{center}
-Only 9,320 bits (resp. 9,227 bits) are available for embedding
+The STC algorithm is optimized when the rate between message length and
+cover vector length is less than 1/2.
+So, only 9,320 bits (resp. 18,455 bits) are available for embedding
in the former configuration where $b$ is 7 (resp. where $b$ is 6).
-In both cases, about the third part of the poem is hidden into the cover.
+In the first case, about a third of the poem is hidden into the cover,
+whereas the latter allows more than half of it to be embedded.
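The two capacities can be recomputed from the cover-vector sizes; a sketch, using the 1/2 rate bound quoted above:

```python
# Sketch of the capacity computation above: each edge pixel contributes a
# b-dependent number of bits to the cover vector, and STC is used at a
# rate of at most 1/2.
def capacity(edge_pixels, bits_per_pixel):
    cover_len = edge_pixels * bits_per_pixel
    return cover_len // 2         # at most half the cover vector

print(capacity(18641, 1))   # b = 7: one LSB per edge pixel -> 9320
print(capacity(18455, 2))   # b = 6: two bits per edge pixel -> 18455
```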
Results with \emph{adaptive+STC} strategy are presented in
Fig.~\ref{fig:lenastego}.