Two strategies have been developed in our scheme, depending on whether the embedding rate is \emph{adaptive} or \emph{fixed}.
In the former, the embedding rate depends on the number of edge pixels:
the higher it is, the longer the message that can be inserted.
Practically, a set of edge pixels is computed according to the
Canny algorithm with a high threshold.
The message length is thus defined to be half of this set cardinality.
In this strategy, two methods are thus applied to select the bits
to be modified. The first one is a direct application of the STC algorithm.
This method is further referred to as \emph{adaptive+STC}.
The second one randomly chooses the subset of pixels to modify by
applying the BBS PRNG again. This method is denoted \emph{adaptive+sample}.
Notice that the ratio between the number of
available bits and the message length is always equal to 2.
This constraint is indeed induced by the fact that the efficiency
of the STC algorithm is unsatisfactory below that threshold.
In our experiments and with the adaptive scheme,
the average size of the message that can be embedded is 16,445 bits.
This corresponds to an average payload of 6.35\%.
In the latter, the embedding rate is defined as a percentage relating the
number of modified pixels to the length of the bit message.
This is the classical approach adopted in steganography.
Practically, the Canny algorithm generates
a set of edge pixels with a threshold that is decreased until the cardinality
of this set is sufficient. If the set cardinality is more than twice the
bit message length, an STC step is again applied.
Otherwise, pixels are again randomly chosen with BBS.
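Both capacity rules above can be sketched in Python. The snippet below is an illustrative sketch only: it replaces the Canny detector with a simple gradient-magnitude detector, and the image, thresholds, and step size are hypothetical, not those of our implementation. It keeps the ratio of available bits to message length at 2, as required by the STC efficiency constraint.

```python
import numpy as np

def edge_pixels(img, threshold):
    """Simplified stand-in for the Canny detector: mark pixels whose
    gradient magnitude (central differences) exceeds `threshold`."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy) > threshold

def fixed_rate_capacity(img, message_bits, threshold=128, step=8):
    """Fixed strategy: lower the threshold until the edge set is at
    least twice the message length (ratio of 2 kept for STC)."""
    while threshold > 0:
        edges = edge_pixels(img, threshold)
        if edges.sum() >= 2 * message_bits:
            return edges, threshold
        threshold -= step
    return edge_pixels(img, 0), 0

# Synthetic cover: a bright square on a dark background gives clean edges.
cover = np.zeros((64, 64), dtype=np.uint8)
cover[16:48, 16:48] = 200
edges, thr = fixed_rate_capacity(cover, message_bits=40)
# Adaptive strategy: message length is half the edge-set cardinality.
adaptive_len = edges.sum() // 2
```

In the real scheme, the Canny high threshold plays the role of `threshold`, and the surviving edge pixels host the message bits selected either by STC or by BBS sampling.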
account the Human Visual System (HVS).
The other ones have been designed to tackle this problem.
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c||c|c|c|}
\hline
Schemes & \multicolumn{3}{|c|}{STABYLO} & \multicolumn{2}{|c|}{HUGO}\\
\hline
Embedding & \multicolumn{2}{|c||}{Adaptive} & Fixed & \multicolumn{2}{|c|}{Fixed} \\
\hline
Rate & + STC & + sample & 10\% & 10\% & 6.35\%\\
\hline
PSNR & 66.55 & 63.48 & 61.86 & 64.65 & 67.08 \\
\hline
PSNR-HVS-M & 78.6 & 75.39 & 72.9 & 76.67 & 79.23 \\
\hline
BIQI & 28.3 & 28.28 & 28.4 & 28.28 & 28.28 \\
\hline
wPSNR & 86.43 & 80.59 & 77.47 & 83.03 & 87.8\\
\hline
\end{tabular}
\end{center}
\caption{Quality Measures of Steganography Approaches\label{table:quality}}
\end{table*}
Let us give an interpretation of these experiments.
First of all, the adaptive strategy produces images with lower distortion
Let us focus on the quality of HUGO images: with a given fixed
embedding rate (10\%),
HUGO always produces images whose quality is higher than STABYLO's.
However, with the \emph{adaptive+STC} strategy, our approach provides
results equivalent to those of HUGO with an average embedding rate set to
6.35\%, and it does so in a lightweight manner, as motivated in the introduction.
Let us now compare the STABYLO approach with other edge based steganography
executed with a 10\% embedding rate
has the same PSNR but a lower wPSNR than ours:
these two metrics are respectively equal to 61.9 and 68.9.
Next, both approaches~\cite{DBLP:journals/eswa/ChenCL10,Chang20101286}
focus on increasing the payload while the PSNR is acceptable, but do not
give quality metrics for fixed embedding rates from a large base of images.
Our approach outperforms the former thanks to the introduction of the STC
algorithm.
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Schemes & \multicolumn{3}{|c|}{STABYLO} & \multicolumn{2}{|c|}{HUGO}\\
\hline
Embedding & \multicolumn{2}{|c|}{Adaptive} & Fixed & \multicolumn{2}{|c|}{Fixed} \\
\hline
Rate & + STC & + sample & 10\% & 10\% & 6.35\%\\
\hline
AUMP & 0.39 & 0.33 & 0.22 & 0.50 & 0.50 \\
\hline
Ensemble Classifier & 0.47 & 0.44 & 0.35 & 0.48 & 0.49 \\
\hline
\end{tabular}
\end{center}
\caption{Steganalysing STABYLO\label{table:steganalyse}}
\end{table*}
Results show that our approach is more easily detectable than HUGO, which
is, as far as we know, the most secure steganographic tool. However, due to
its integration of a huge number of features, HUGO is not lightweight, which,
in the authors' opinion, justifies the consideration of the proposed method.
This research work takes place in the field of information hiding, which has
developed considerably over the last two decades. The proposed method for
steganography considers digital images as covers.
It belongs to the large, well-investigated category
of spatial least significant bit replacement (LSBR) schemes.
Let us recall that, in this LSBR category, a subset of all the LSBs of the cover image is modified
with a secret bit stream depending on a secret key, the cover, and the message to embed.
can be corrected considering the LSB matching (LSBM) subcategory, in which
the $+1$ or $-1$ is randomly added to the cover pixel LSB value
only if this one does not correspond to the secret bit.
Since it is possible to make the probabilities of increasing or decreasing the pixel value equal, for instance by considering well-encrypted hidden messages, usual statistical approaches
cannot be applied here to discover stego-contents in LSBM.
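The $\pm 1$ mechanism of LSB matching can be sketched as follows; this is a generic illustration (helper names are ours), not the code of any scheme cited here.

```python
import random

def lsbm_embed(pixels, bits, seed=42):
    """LSB matching: if a cover pixel's LSB already equals the secret
    bit, keep the pixel; otherwise randomly add or subtract 1 (staying
    inside [0, 255]), which flips the LSB without the deterministic
    parity bias of plain LSB replacement."""
    rng = random.Random(seed)
    stego = list(pixels)
    for i, bit in enumerate(bits):
        if stego[i] & 1 != bit:
            if stego[i] == 0:          # only +1 keeps the value valid
                stego[i] = 1
            elif stego[i] == 255:      # only -1 keeps the value valid
                stego[i] = 254
            else:
                stego[i] += rng.choice((-1, 1))
    return stego

cover = [10, 11, 0, 255]
secret = [1, 0, 1, 1]
stego = lsbm_embed(cover, secret)
recovered = [p & 1 for p in stego]   # extraction simply reads LSBs back
```

Because the $+1$ and $-1$ choices are equiprobable away from the boundaries, the pairs-of-values statistics exploited against LSBR do not apply, which is exactly why LSBM calls for feature-based steganalysers.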
The most accurate detectors for this matching are universal steganalysers such as~\cite{LHS08,DBLP:conf/ih/2005,FK12},
which classify images according to extracted features from neighboring elements of residual noise.
It takes into account so-called SPAM features
%(whose size is larger than $10^7$)
to avoid overfitting a particular
steganalyser. Thus a distortion measure for each pixel is individually determined as the sum of the differences between
the SPAM features computed from the cover and from the stego images.
Thanks to this feature set, HUGO enables the embedding of messages $7\times$ longer than LSB matching does, with the same level of
undetectability.
However, this improvement is time-consuming, mainly due to the distortion function
There remains a large gap between the random selection of LSBs and the feature-based modification of pixel values.
We argue that modifying edge pixels is an acceptable compromise.
Edges form the outline of an object: they are the boundaries between overlapping objects or between an object
and its background. When producing the stego-image, a small modification of some pixel values in such edges should not impact the image quality, which is a requirement when
attempting to be undetectable. Indeed,
in a cover image, edges already break the continuity of the pixel intensity map and thus already present large variations with their neighboring
pixels. In other words, minor changes in regular areas are more dramatic than larger modifications in edge ones.
Our first proposal is thus to embed message bits into edge shapes while preserving other smooth regions.
Edge based steganographic schemes have already been studied, the most interesting
approaches being detailed in~\cite{Luo:2010:EAI:1824719.1824720} and \cite{DBLP:journals/eswa/ChenCL10}.
In the former, the authors show how to select sharper edge regions with respect
to a given embedding rate: the larger the number of bits to be embedded, the coarser
message encryption stage, to be certain that,
even in the worst case scenario, the attacker
will not be able to obtain the message content.
Doing so makes our steganographic protocol, to a certain extent, an asymmetric one.
To sum up, in this research work, well studied and experimented
techniques of signal processing (adaptive edges detection),
coding theory (syndrome-trellis codes), and cryptography
(Blum-Goldwasser encryption protocol) are combined
to compute an efficient steganographic
scheme, whose principal characteristic is to take into
consideration the cover image and to be compatible with small computation resources.
The remainder of this document is organized as follows.
\begin{abstract}
A novel steganographic method called STABYLO is introduced in this research work.
Its main advantage is to be much lighter than the so-called
Highly Undetectable steGO (HUGO) method, a well known state of the art
steganographic process. In addition to this effectiveness, it achieves
quite comparable results through noise measures like PSNR-HVS-M,
all the existing edge detection methods, to see if the STABYLO evaluation scores can
be improved by replacing Canny with another edge filter. We will try
to take into account the least significant bits too during all the
stages of the algorithm, hoping by doing so to be closer to the HUGO scores against
steganalyzers. Other steganalyzers than the ones used in this document will be
examined for the sake of completeness. Finally, the
systematic replacement of all the LSBs of edges by binary digits provided
by the BBS generator will be investigated, and the consequences of such a
replacement, in terms of security, will be discussed.
The flowcharts given in Fig.~\ref{fig:sch} summarize our steganography scheme denoted by
STABYLO, which stands for STeganography with cAnny, Bbs, binarY embedding at LOw cost.
What follows successively details all the inner steps and flows inside
both the embedding stage (Fig.~\ref{fig:sch:emb})
and the extraction one (Fig.~\ref{fig:sch:ext}).
Many techniques have been proposed in the literature to detect
edges in images (whose noise has been initially reduced).
They can be separated into two categories: first and second order detection
methods on the one hand, and fuzzy detectors on the other hand~\cite{KF11}.
In first order methods like Sobel,
a first-order derivative (gradient magnitude, etc.) is computed
to search for local maxima, whereas in second order ones, zero crossings in a second-order derivative, like the Laplacian computed from the image,
are searched in order to find edges.
As far as fuzzy edge methods are concerned, they are obviously based on fuzzy logic to highlight
edges.
Canny filters, for their part, form an old family of algorithms that still remains a state-of-the-art edge detector. They can be well approximated by first-order derivatives of Gaussians.
As the Canny algorithm is well known and studied, fast, and implementable
on many kinds of architectures like FPGAs, smartphones, desktop machines, and
GPUs, we have chosen this edge detector for illustrative purpose.
Of course, other detectors like the fuzzy edge methods
deserve much further attention, which is why we intend
to investigate systematically all of these detectors in our next work.
that is based on the Blum Blum Shub~\cite{DBLP:conf/crypto/ShubBB82} pseudorandom number generator (PRNG)
for security reasons.
It has indeed been proven~\cite{DBLP:conf/crypto/ShubBB82} that this PRNG
is cryptographically secure, \textit{i.e.},
for any sequence of $L$ output bits $x_i$, $x_{i+1}$, \ldots, $x_{i+L-1}$,
there is no algorithm whose time complexity is polynomial in $L$ and
which allows one to find $x_{i-1}$ and $x_{i+L}$ with a probability greater
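The generator itself fits in a few lines of Python. The primes below are toy values chosen only to illustrate the $x_{i+1} = x_i^2 \bmod M$ recurrence; a real deployment needs large random primes congruent to $3 \bmod 4$.

```python
def bbs_bits(p, q, seed, count):
    """Blum Blum Shub: iterate x_{i+1} = x_i^2 mod M with M = p*q,
    where p and q are primes congruent to 3 mod 4; each state
    contributes its least significant bit.  Toy parameters only."""
    M = p * q
    x = seed * seed % M          # make the initial state a quadratic residue
    out = []
    for _ in range(count):
        x = x * x % M
        out.append(x & 1)
    return out

# Toy example: p = 11 and q = 23 are both congruent to 3 mod 4 (M = 253),
# and the seed must be coprime with M.
stream = bbs_bits(11, 23, seed=3, count=8)
```

In our scheme, such a stream drives the random selection of edge pixels in the \emph{adaptive+sample} method and the encryption-related steps.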
\subsection{Data Extraction}
The message extraction summarized in Fig.~\ref{fig:sch:ext} follows the data embedding
since there exists a reverse function for all its steps.
First of all, the same edge detection is applied (on the first 7 bits) to
get the set of LSBs,
To make this article self-contained, this section recalls
the basis of Syndrome-Trellis Codes (STC).
Let
$x=(x_1,\ldots,x_n)$ be the $n$-bit cover vector of the image $X$,
$m$ be the message to embed, and
$m$ for a given binary matrix $H$.
Let us explain this embedding on a small illustrative example where
$\rho_X(i,x,y)$ is identically equal to 1,
whereas $m$ and $x$ are respectively a 3-bit column
vector and a 7-bit column vector.
Let then $H$ be the binary Hamming matrix
First of all, Filler \emph{et al.} compute the matrix $H$
by placing copies of a small sub-matrix $\hat{H}$ of size $h \times w$ next
to each other, each copy shifted down by one row.
Thanks to this special form of $H$, one can represent
every solution of $m=Hy$ as a path through a trellis.
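The underlying matrix-embedding step can be sketched with the Hamming parity-check matrix of the illustrative example above. The sketch below assumes $\rho$ identically equal to 1 and flips at most one bit; the helper names are ours, and the full STC trellis generalizes this to non-uniform costs.

```python
# Parity-check matrix of the Hamming code: column j holds the 3-bit
# binary expansion of j + 1, so every non-zero syndrome equals a column.
H = [[(j + 1) >> k & 1 for j in range(7)] for k in (2, 1, 0)]

def syndrome(y):
    """Compute H y over GF(2)."""
    return [sum(h * v for h, v in zip(row, y)) % 2 for row in H]

def embed(x, m):
    """Find y with H y = m closest to x (rho identically 1): flipping
    the single bit indexed by the syndrome difference suffices."""
    s = [a ^ b for a, b in zip(syndrome(x), m)]
    if s == [0, 0, 0]:
        return list(x)                        # x already carries m
    j = (s[0] << 2 | s[1] << 1 | s[2]) - 1    # index of the column equal to s
    y = list(x)
    y[j] ^= 1
    return y

x = [1, 0, 1, 1, 0, 0, 1]   # 7-bit cover vector
m = [1, 1, 0]               # 3-bit message
y = embed(x, m)             # syndrome(y) == m, at most one bit changed
```

With a general cost profile $\rho$, several solutions of $m = Hy$ may exist, and the trellis described next selects the one minimizing the total distortion $D$.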
Next, the process of finding $y$ consists of two stages: a forward and a backward part.
\begin{enumerate}
\item Forward construction of the trellis that depends on $\hat{H}$, on $x$, on $m$, and on $\rho$.
\item Backward determination of $y$ that minimizes $D$, starting with