From: couchot
Date: Tue, 5 Feb 2013 07:56:08 +0000 (+0100)
Subject: after Christophe's review
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/canny.git/commitdiff_plain/3cb99fa4936f62fef8a0f24880e7d9bca9c31a9e?ds=sidebyside;hp=--cc

after Christophe's review
---

3cb99fa4936f62fef8a0f24880e7d9bca9c31a9e
diff --git a/intro.tex b/intro.tex
index b17515c..3ccf316 100644
--- a/intro.tex
+++ b/intro.tex
@@ -1,7 +1,8 @@
 This research work takes place into the field of information hiding, considerably
 developed these last two decades. The proposed method for
-steganography considers digital images as covers, it belongs in the well investigated large category
+steganography considers digital images as covers.
+It belongs to the large, well-investigated category
 of spatial least significant bits (LSBs) replacement schemes.
 Let us recall that, in this LSBR category, a subset of all the LSBs of the cover
 image is modified with a secret bit stream depending on: a secret key,
 the cover, and the message to embed.
@@ -45,7 +46,9 @@ modification minimizes a distortion function.
 This distortion may be computed thanks to feature vectors that are embedded
 for instance in steganalysers referenced above.
 Highly Undetectable steGO (HUGO) method~\cite{DBLP:conf/ih/PevnyFB10} is one of the most efficient instance of such a scheme.
-It takes into account so-called SPAM features (whose size is larger than $10^7$) to avoid overfitting a particular
+It takes into account so-called SPAM features
+%(whose size is larger than $10^7$)
+to avoid overfitting a particular
 steganalyser. Thus a distortion measure for each pixel is individually determined as
 the sum of differences between features of SPAM computed from the cover and from the stego images.
 Thanks to this features set, HUGO allows to embed $7\times$ longer messages with the same level of
diff --git a/main.tex b/main.tex
index fcf92c3..0d0e95b 100755
--- a/main.tex
+++ b/main.tex
@@ -1,4 +1,4 @@
-\documentclass[journal]{IEEEtran}
+\documentclass{acm_proc_article-sp}
 \usepackage{subfig}
 \usepackage{color}
 \usepackage{graphicx}
@@ -23,18 +23,19 @@ edge-based steganographic approach}
 \{jean-francois.couchot, raphael.couturier, christophe.guyeux\}@femto-st.fr\\
 $*:$ Authors in alphabetic order.\\
 }
 
-\newcommand{\JFC}[1]{\begin{color}{green}\textit{#1}\end{color}}
-\newcommand{\RC}[1]{\begin{color}{red}\textit{#1}\end{color}}
-\newcommand{\CG}[1]{\begin{color}{blue}\textit{#1}\end{color}}
+\newcommand{\JFC}[1]{\begin{color}{green}\textit{}\end{color}}
+\newcommand{\RC}[1]{\begin{color}{red}\textit{}\end{color}}
+\newcommand{\CG}[1]{\begin{color}{blue}\textit{}\end{color}}
 
 % make the title area
 \maketitle
 
-\begin{IEEEkeywords}
+
 %IEEEtran, journal, \LaTeX, paper, template.
-Steganography, least-significant-bit (LSB)-based steganography, edge detection, Canny filter, security, syndrome treillis code.
-\end{IEEEkeywords}
+\keywords{Steganography, least-significant-bit (LSB)-based steganography, edge detection, Canny filter, security, syndrome trellis code}
+
+\maketitle
 
 \begin{abstract}
 A novel steganographic method called STABYLO is introduced in this research work.
@@ -51,7 +52,7 @@ a scheme that can reasonably face up-to-date steganalysers.
 
 
 
-\IEEEpeerreviewmaketitle
+
 
 
 
@@ -69,7 +70,7 @@ a scheme that can reasonably face up-to-date steganalysers.
 
 \section{Conclusion}\label{sec:concl}
 The STABYLO algorithm, whose acronym means STeganography
-with Canny, Bbs, binarY embedding at LOw cost, has been introduced
+with cAnny, Bbs, binarY embedding at LOw cost, has been introduced
 in this document as an efficient method having comparable, though
 somewhat smaller, security than the well known
 Highly Undetectable steGO (HUGO) steganographic scheme.
diff --git a/ourapproach.tex b/ourapproach.tex
index 02d8735..3319b6a 100644
--- a/ourapproach.tex
+++ b/ourapproach.tex
@@ -1,5 +1,5 @@
 The flowcharts given in Fig.~\ref{fig:sch} summarize our steganography scheme denoted by
-STABYLO, which stands for STeganography with Canny, Bbs, binarY embedding at LOw cost.
+STABYLO, which stands for STeganography with cAnny, Bbs, binarY embedding at LOw cost.
 What follows successively details all the inner steps and flows inside
 both the embedding stage (Fig.~\ref{fig:sch:emb})
 and the extraction one (Fig.~\ref{fig:sch:ext}).
@@ -169,7 +169,8 @@ polynomial time.
 \subsection{Data Extraction}
 Message extraction summarized in Fig.~\ref{fig:sch:ext} follows data embedding
 since there exists a reverse function for all its steps.
-First of all, the same edge detection is applied (on the 7 first bits) to get set,
+First of all, the same edge detection is applied (on the first 7 bits) to
+get the set of LSBs,
 which is sufficiently large with respect to the message size given as a key.
 Then the STC reverse algorithm is applied to retrieve the encrypted message.
 Finally, the Blum-Goldwasser decryption function is executed and the original
diff --git a/stc.tex b/stc.tex
index a544278..f87104b 100644
--- a/stc.tex
+++ b/stc.tex
@@ -1,3 +1,5 @@
+To make this article self-contained, this section recalls
+the basics of Syndrome Trellis Codes (STC).
 Let $x=(x_1,\ldots,x_n)$ be the
 $n$-bits cover vector of the image $X$,
 $m$ be the message to embed, and
@@ -60,7 +62,7 @@ $2^n-1$ pixels needs $1-1/2^n$ average changes.
 Unfortunately, for any given $H$, finding $y$ that solves
 $Hy=m$ and that minimizes $D_X(x,y)$, has an exponential complexity
 with respect to $n$.
-The Syndrome-Trellis Codes (STC)
+The Syndrome-Trellis Codes
 presented by Filler \emph{et al.} in~\cite{DBLP:conf/mediaforensics/FillerJF10}
 is a practical solution to this complexity. Thanks to this contribution,
 the solving algorithm has a linear complexity with respect to $n$.
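
Editor's note: the stc.tex hunk above describes syndrome coding (find a stego vector y close to the cover LSB vector x such that Hy = m), and the ourapproach.tex hunk notes that extraction is the reverse operation. The toy Python sketch below only illustrates that idea; it is not the STC algorithm of Filler et al. (which replaces the brute-force search with a linear-time Viterbi pass over a trellis), and the matrix H, cover LSBs x, and message m are hypothetical values chosen for illustration, not taken from the paper.

# Toy syndrome-embedding sketch: among all y with H @ y == m (mod 2),
# keep the one flipping the fewest cover LSBs.  The exhaustive search is
# exponential in n, which is exactly the cost STC avoids; it is kept tiny
# here on purpose.
import itertools
import numpy as np

H = np.array([[1, 0, 1, 1],                   # hypothetical 2x4 parity-check matrix
              [0, 1, 1, 0]], dtype=np.uint8)
x = np.array([1, 0, 1, 1], dtype=np.uint8)    # cover LSBs (n = 4), illustrative
m = np.array([0, 1], dtype=np.uint8)          # 2-bit message to embed, illustrative

best_y, best_cost = None, None
for bits in itertools.product((0, 1), repeat=x.size):
    y = np.array(bits, dtype=np.uint8)
    if np.array_equal(H @ y % 2, m):          # syndrome must equal the message
        cost = int(np.sum(x ^ y))             # number of flipped LSBs (distortion)
        if best_cost is None or cost < best_cost:
            best_y, best_cost = y, cost

print("stego LSBs:", best_y, "LSB changes:", best_cost)
print("extracted message:", H @ best_y % 2)   # extraction is just the syndrome

Extraction needs no search at all: recomputing the syndrome H @ y mod 2 of the stego LSBs returns the message, which is consistent with the "STC reverse algorithm" step in the data-extraction hunk of ourapproach.tex.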