the methodology than benchmarking.
Our approach is always compared to Hugo~\cite{DBLP:conf/ih/PevnyFB10}
and to EAISLSBMR~\cite{Luo:2010:EAI:1824719.1824720}.
-The former is the less detectable information hiding tool in spatial domain
-and the later is the work which is close to ours, as far as we know.
+The former is the least detectable information hiding tool in the spatial domain
+and the latter is, as far as we know, the work closest to ours.
-\subsection{Image Quality}\label{sub:quality}
+\subsection{Image quality}\label{sub:quality}
The visual quality of the STABYLO scheme is evaluated in this section.
For the sake of completeness, three metrics are computed in these experiments:
the Peak Signal to Noise Ratio (PSNR),
\end{footnotesize}
\end{center}
-\caption{Quality Measures of Steganography Approaches\label{table:quality}}
+\caption{Quality measures of steganography approaches\label{table:quality}}
\end{table*}
-Results are summarized into the Table~\ref{table:quality}.
+Results are summarized in Table~\ref{table:quality}.
Let us give an interpretation of these experiments.
First of all, the adaptive strategy produces images with lower distortion
than that of the images resulting from the 10\% fixed strategy.
Numerical results are indeed always greater for the former strategy than
-for the latter.
+for the latter one.
These results are not surprising since the adaptive strategy aims at
embedding messages whose length is determined by a higher threshold
in the edge detection.
If we combine \emph{adaptive} and \emph{STC} strategies
(which leads to an average embedding rate equal to 6.35\%),
our approach provides metrics equivalent to those of HUGO.
-In this column STC(7) stands for embeding data in the LSB whereas
+In this column STC(7) stands for embedding data in the LSB whereas
in STC(6), data are hidden in the two least significant bits.
The steganalysis quality of our approach has been evaluated through two
steganalysers: AUMP~\cite{Fillatre:2012:ASL:2333143.2333587}
and the Ensemble Classifier~\cite{DBLP:journals/tifs/KodovskyFH12}.
-Both aims at detecting hidden bits in grayscale natural images and are
+Both aim at detecting hidden bits in grayscale natural images and are
considered as state-of-the-art steganalysers in the spatial domain~\cite{FK12}.
The former approach is based on a simplified parametric model of natural images.
Parameters are first estimated, and an adaptive Asymptotically Uniformly Most Powerful
This approach is dedicated to verifying whether LSBs have been modified or not.
In the latter, the authors show that the
machine learning step, which is often
-implemented as support vector machine,
+implemented as a support vector machine,
can be favorably executed thanks to an ensemble classifier.
of spatial least significant bits (LSBs) replacement schemes.
Let us recall that, in this LSBR category, a subset of all the LSBs of the cover image is modified
with a secret bit stream depending on: a secret key, the cover, and the message to embed.
-In this well studied steganographic approach,
+In this well-studied steganographic approach,
if we consider that an LSB is the last bit of each pixel value,
pixels with an even value (resp. an odd value)
are never decreased (resp. increased),
thus such schemes may break the
structural symmetry of the host images.
Such structural alterations can be detected by
-well designed statistical investigations, leading to known steganalysis methods~\cite{DBLP:journals/tsp/DumitrescuWW03,DBLP:conf/mmsec/FridrichGD01,Dumitrescu:2005:LSB:1073170.1073176}.
+well-designed statistical investigations, leading to known steganalysis methods~\cite{DBLP:journals/tsp/DumitrescuWW03,DBLP:conf/mmsec/FridrichGD01,Dumitrescu:2005:LSB:1073170.1073176}.
Let us also recall that this drawback
can be corrected by considering the LSB matching (LSBM) subcategory, in which
$+1$ or $-1$ is randomly added to the cover pixel value
only if its LSB does not correspond to the secret bit.
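In other words, if $X_{ij}$ denotes a cover pixel value, $Y_{ij}$ the corresponding stego pixel, and $s$ the secret bit, LSBM outputs
$$
Y_{ij}= \left\{
\begin{array}{ll}
X_{ij} & \text{if $X_{ij} \bmod 2 = s$,}\\
X_{ij} \pm 1 & \text{otherwise, the sign being chosen at random,}
\end{array}
\right.
$$
whereas LSBR always outputs $Y_{ij}= 2\lfloor X_{ij}/2 \rfloor + s$, which never decreases an even value nor increases an odd one.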
%TODO: modify this
-Since it is possible to make that probabilities of increasing or decreasing the pixel value, for instance by considering well encrypted hidden messages, usual statistical approaches
+Since it is possible to make the probabilities of increasing or decreasing the pixel value equal, for instance by considering well-encrypted hidden messages, usual statistical approaches
cannot be applied here to discover stego-contents in LSBM.
The most accurate detectors for this matching are universal steganalysers such as~\cite{LHS08,DBLP:conf/ih/Ker05,FK12},
which classify images according to features extracted from neighboring elements of residual noise.
the most interesting
approaches being detailed in~\cite{Luo:2010:EAI:1824719.1824720} and
in~\cite{DBLP:journals/eswa/ChenCL10}.
-In the former, the authors presents the Edge Adaptive
+In the former, the authors present the Edge Adaptive
Image Steganography based on LSB Matching Revisited, further denoted as
EAISLSBMR. This approach selects sharper edge
regions with respect
thanks to extensive experiments.
However, it has been shown that the distinguishing error with LSB embedding is lower than
the one with some binary embedding~\cite{DBLP:journals/tifs/FillerJF11}.
-We thus propose to take benefit of these optimized embedding, provided they are not too time consuming.
+We thus propose to take advantage of these optimized embeddings, provided they are not too time-consuming.
In the latter, a hybrid edge detector is presented, followed by an ad hoc
embedding.
The edge detection is computed by combining fuzzy logic~\cite{Tyan1993}
produce stego contents
by only considering the payload, not the type of image signal: the higher the payload is,
the better the approach is said to be.
-Contrarily, we argue that some images should not be taken as a cover because of the nature of their signal.
+In contrast, we argue that some images should not be used as covers because of the nature of their signals.
Consider for instance a uniformly black image: a very tiny modification of its pixels can easily be detected.
The approach we propose is thus to provide a self-adaptive algorithm with a high payload, which depends on the cover signal.
% Message extraction is achieved by computing the same
will not be able to obtain the original message content.
Doing so makes our steganographic protocol, to a certain extent, an asymmetric one.
-To sum up, in this research work, well studied and experimented
+To sum up, in this research work, well-studied and well-tested
techniques from signal processing (adaptive edge detection),
coding theory (syndrome-trellis codes), and cryptography
(Blum-Goldwasser encryption protocol) are combined
The remainder of this document is organized as follows.
Section~\ref{sec:ourapproach} presents the details of the proposed steganographic scheme and applies it on a running example.
Section~\ref{sec:experiments} shows experiments on image quality, steganalytic evaluation, complexity of our approach,
-and compares it to the state of the art steganographic schemes.
+and compares it with state-of-the-art steganographic schemes.
Finally, concluding notes and future work are given in Section~\ref{sec:concl}.
\usepackage{subfig}
\usepackage{color}
\usepackage{mathtools,etoolbox}
+\usepackage{cite}
+
\tolerance=1
\emergencystretch=\maxdimen
%IEEEtran, journal, \LaTeX, paper, template.
-\keywords{Steganography, least-significant-bit (LSB)-based steganography, edge detection, Canny filter, security, syndrome trellis code}
+\keywords{Steganography, least-significant-bit (LSB)-based steganography, edge detection, Canny filter, security, syndrome trellis codes}
\abstracttext{A novel steganographic method called STABYLO is introduced in
this research work.
-Its main advantage for being is to be much lighter than the so-called
-Highly Undetectable steGO (HUGO) scheme, a well known state of the art
+Its main advantage is to be much lighter than the so-called
+Highly Undetectable steGO (HUGO) scheme, a well-known state-of-the-art
steganographic process in the spatial domain.
In addition to this effectiveness,
quite comparable results through noise measures like PSNR-HVS-M,
The STABYLO algorithm, whose acronym means STeganography
with cAnny, Bbs, binarY embedding at LOw cost, has been introduced
in this document as an efficient method having comparable, though
-somewhat smaller, security than the well known
+somewhat lower, security with respect to the well-known
Highly Undetectable steGO (HUGO) steganographic scheme.
This edge-based steganographic approach relies on a Canny edge
detection filter, the Blum-Blum-Shub cryptographically secure
for minimizing distortion.
After having introduced the proposed method in detail,
we have evaluated it through noise measures (namely, the PSNR, PSNR-HVS-M,
-BIQI, and weighted PSNR) and using well established steganalysers.
+BIQI, and weighted PSNR) and using well-established steganalysers.
% Of course, other detectors like the fuzzy edge methods
% deserve much further attention, which is why we intend
\label{fig:sch:ext}
}%\hfill
\end{center}
- \caption{The STABYLO Scheme.}
+ \caption{The STABYLO scheme}
\label{fig:sch}
\end{figure*}
-\subsection{Security Considerations}\label{sub:bbs}
+\subsection{Security considerations}\label{sub:bbs}
Among methods of message encryption/decryption
(see~\cite{DBLP:journals/ejisec/FontaineG07} for a survey),
we implement the Blum-Goldwasser cryptosystem~\cite{Blum:1985:EPP:19478.19501}
has the property of cryptographic security, \textit{i.e.},
for any sequence of $L$ output bits $x_i$, $x_{i+1}$, \ldots, $x_{i+L-1}$,
there exists no algorithm whose time complexity is polynomial in $L$
-which allows to find $x_{i-1}$ and $x_{i+L}$ with a probability greater
+that can find $x_{i-1}$ or $x_{i+L}$ with a probability greater
than $1/2$.
Equivalent formulations of such a property can
be found. They all lead to the fact that,
this step computes a message $m$, which is the encrypted version of \textit{mess}.
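To fix ideas, the BBS keystream at the core of this encryption step can be sketched as follows (for illustration only, with toy parameters that are not those of a real deployment); the complete Blum-Goldwasser cryptosystem additionally appends the final BBS state to the ciphertext, so that only the owner of the factorization of $n$ can decrypt.
\begin{verbatim}
# Illustration only: toy Blum primes, real ones are much larger.
p, q = 11, 23                      # both congruent to 3 modulo 4
n = p * q

def bbs_keystream(seed, length):   # seed must be coprime with n
    x = (seed * seed) % n          # x_0
    bits = []
    for _ in range(length):
        x = (x * x) % n            # x_{i+1} = x_i^2 mod n
        bits.append(x & 1)         # keep the least significant bit
    return bits

def encrypt(message_bits, seed):
    key = bbs_keystream(seed, len(message_bits))
    return [b ^ k for b, k in zip(message_bits, key)]
\end{verbatim}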
-\subsection{Edge-Based Image Steganography}\label{sub:edge}
+\subsection{Edge-based image steganography}\label{sub:edge}
The edge-based image
a first-order derivative (gradient magnitude, etc.) is computed
to search for local maxima, whereas in second-order ones, zero crossings in a second-order derivative, like the Laplacian computed from the image,
are searched for in order to find edges.
-As for as fuzzy edge methods are concerned, they are obviously based on fuzzy logic to highlight edges.
+As far as fuzzy edge methods are concerned, they are obviously based on fuzzy logic to highlight edges.
-Canny filters, on their parts, are an old family of algorithms still remaining a state-of-the-art edge detector. They can be well approximated by first-order derivatives of Gaussians.
+Canny filters, for their part, are an old family of algorithms that still remain a state-of-the-art edge detector. They can be well approximated by first-order derivatives of Gaussians.
As the Canny algorithm is well known and studied, fast, and implementable
on many kinds of architectures like FPGAs, smartphones, desktop machines, and
GPUs, we have chosen this edge detector for illustrative purposes.
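In practice, the detector returns a set of edge pixels from which bits are collected; purely as an illustration (the file name and the two Canny thresholds below are arbitrary, and only the last bit plane, corresponding to $b=7$, is kept), this can be sketched with OpenCV as follows.
\begin{verbatim}
import cv2   # OpenCV, used here for illustration only

cover = cv2.imread("cover.pgm", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(cover, 100, 200)        # arbitrary low/high thresholds
edge_pixels = cover[edges > 0]            # pixels retained by the detector
bits = [int(p) & 1 for p in edge_pixels]  # their least significant bits
\end{verbatim}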
Let $x$ be the sequence of these bits.
-The next section section presents how our scheme
+The next section presents how our scheme
adapts when the size of $x$ is not sufficient for the message $m$ to embed.
-\subsection{Adaptive Embedding Rate}\label{sub:adaptive}
+\subsection{Adaptive embedding rate}\label{sub:adaptive}
Two strategies have been developed in our scheme,
depending on the embedding rate, which is either \emph{adaptive} or \emph{fixed}.
In the former, the embedding rate depends on the number of edge pixels.
Canny algorithm with a high threshold.
The message length is thus defined to be less than
half of this set cardinality.
-If $x$ is then to short for $m$, the message is split into sufficient parts
+If $x$ is then too short for $m$, the message is split into sufficient parts
and a new cover image should be used for the remaining part of the message.
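A minimal sketch of this policy, with hypothetical helper names given only to fix ideas, is the following.
\begin{verbatim}
def adaptive_capacity(nb_edge_pixels):
    # keep the message length below half of the edge-pixel set cardinality
    return nb_edge_pixels // 2

def split_message(message_bits, capacity):
    # the first part is embedded in the current cover;
    # the remainder waits for a new cover image
    return message_bits[:capacity], message_bits[capacity:]
\end{verbatim}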
-\subsection{Minimizing Distortion with Syndrome-Trellis Codes}\label{sub:stc}
+\subsection{Minimizing distortion with syndrome-trellis codes}\label{sub:stc}
\input{stc}
-\subsection{Data Extraction}\label{sub:extract}
+\subsection{Data extraction}\label{sub:extract}
The message extraction, summarized in Fig.~\ref{fig:sch:ext},
follows the data embedding approach
since there exists a reverse function for all its steps.
If the STC approach has been selected in embedding, the STC reverse
algorithm is directly executed to retrieve the encrypted message.
This inverse function takes the $H$ matrix as a parameter.
-Otherwise, \textit{i.e.} if the \emph{sample} strategy is retained,
+Otherwise, \textit{i.e.}, if the \emph{sample} strategy is retained,
the same random bit selection as in the embedding step
is executed with the same seed, given as a key.
Finally, the Blum-Goldwasser decryption function is executed and the original
message is extracted.
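For illustration, assuming that $y$ denotes the vector of stego LSBs and $H$ the shared binary matrix, the two extraction paths can be sketched as follows; the pseudo-random sampling shown here is only one possible way of mirroring the embedding-side selection, not necessarily the one actually implemented.
\begin{verbatim}
import numpy as np

def stc_extract(y, H):
    # the encrypted message is recovered as m = H.y (mod 2)
    return (H @ y) % 2

def sample_extract(y, seed, length):
    # redraw the same pseudo-random positions as during embedding
    rng = np.random.default_rng(seed)
    positions = rng.choice(len(y), size=length, replace=False)
    return y[positions]

# the Blum-Goldwasser decryption is then applied to the extracted bits
\end{verbatim}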
-\subsection{Running Example}\label{sub:xpl}
-In this example, the cover image is Lena
+\subsection{Running example}\label{sub:xpl}
+In this example, the cover image is Lena,
which is a $512\times512$ image with 256 grayscale levels.
The message is the poem Ulalume (E. A. Poe), which consists of 104 lines, 667
-words, and 3754 characters, \textit{i.e.} 30032 bits.
-Lena and the the first verses are given in Fig.~\ref{fig:lena}.
+words, and 3754 characters, \textit{i.e.}, 30032 bits.
+Lena and the first verses are given in Fig.~\ref{fig:lena}.
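Each character being encoded on 8 bits, this payload indeed amounts to $3754 \times 8 = 30032$ bits.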
\begin{figure}
\begin{center}
\end{figure}
The edge detection returns 18641 and 18455 pixels when $b$ is
-respectively 7 and 6. These edges are represented in Fig.~\ref{fig:edge}
+respectively 7 and 6. These edges are represented in Fig.~\ref{fig:edge}.
\begin{figure}[t]
%\label{fig:sch:ext}
}%\hfill
\end{center}
- \caption{Edge Detection wrt $b$.}
+ \caption{Edge detection wrt $b$}
\label{fig:edge}
\end{figure}
Only 9320 bits (resp. 9227 bits) are available for embedding
in the configuration where $b$ is 7 (resp. where $b$ is 6).
-In the both case, about the third part of the poem is hidden into the cover.
+In both cases, about one third of the poem is hidden in the cover.
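These capacities correspond to half of the detected edge pixels, as prescribed by the adaptive strategy of Section~\ref{sub:adaptive}: $\lfloor 18641/2 \rfloor = 9320$ and $\lfloor 18455/2 \rfloor = 9227$.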
Results with \emph{adaptive+STC} strategy are presented in
Fig.~\ref{fig:lenastego}.
%\label{fig:sch:ext}
}%\hfill
\end{center}
- \caption{Stego Images wrt $b$.}
+ \caption{Stego images wrt $b$}
\label{fig:lenastego}
\end{figure}
Finally, differences between the original cover and the stego images
-are presented in Fig.~\ref{fig:lenadiff}. For each pixel pair of pixel $X_{ij}$ and $Y_{ij}$ ($X$ and $Y$ being the cover and the stego content respectively),
+are presented in Fig.~\ref{fig:lenadiff}. For each pair of pixels $X_{ij}$ and $Y_{ij}$ ($X$ and $Y$ being the cover and the stego content, respectively),
the pixel value $V_{ij}$ of the difference is defined with the following map
$$
V_{ij}= \left\{
\end{array}
\right..
$$
-This function allows to emphasize differences between content.
+This function makes it possible to emphasize the differences between contents.
\begin{figure}[t]
\begin{center}
%\label{fig:sch:ext}
}%\hfill
\end{center}
- \caption{Differences with Lena's Cover wrt $b$.}
+ \caption{Differences with Lena's cover wrt $b$}
\label{fig:lenadiff}
\end{figure}
by placing copies of a small sub-matrix $\hat{H}$ of size $h \times w$ next
to each other, each copy being shifted down by one row.
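For instance, with the purely illustrative sub-matrix $\hat{H}=\left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)$ (so $h=2$ and $w=2$), a message of four bits, and a cover of eight elements, this construction yields
$$
H=\begin{pmatrix}
1&0&0&0&0&0&0&0\\
1&1&1&0&0&0&0&0\\
0&0&1&1&1&0&0&0\\
0&0&0&0&1&1&1&0
\end{pmatrix},
$$
the last copy of $\hat{H}$ being truncated by the bottom border of $H$.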
Thanks to this special form of $H$, one can represent
-every solution of $m=Hy$ as a path through a trellis.
+any solution of $m=Hy$ as a path through a trellis.
Next, the process of finding $y$ consists of two stages: a forward and a backward one.
\begin{enumerate}