This research work takes place in the field of information hiding, which has developed considerably over the last two decades.
The proposed steganographic method considers digital images as covers.
It belongs to the well-known and large category of spatial least significant bit replacement (LSBR) schemes.
Let us recall that, in this LSBR category, a subset of all the LSBs of the cover image is replaced by a secret bit stream that depends on a secret key, the cover, and the message to embed.
In this well-studied steganographic approach, since the LSB is the last bit of each pixel value, pixels with an even value (resp. an odd value) are never decreased (resp. increased); such schemes may therefore break the structural symmetry of the host image.
These structural alterations can be detected by well-designed statistical investigations, leading to well-known steganalysis methods~\cite{DBLP:journals/tsp/DumitrescuWW03,DBLP:conf/mmsec/FridrichGD01,Dumitrescu:2005:LSB:1073170.1073176}.
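To make this asymmetry concrete, the following minimal Python sketch (the function name and the chosen pixel values are ours, purely illustrative) embeds a bit stream by overwriting the LSBs of selected pixels:
\begin{verbatim}
import numpy as np

def embed_lsbr(cover, bits, positions):
    """Overwrite the LSB of each selected cover pixel with a message bit."""
    stego = cover.copy()
    for (i, j), b in zip(positions, bits):
        stego[i, j] = (int(stego[i, j]) & 0xFE) | b   # replace the last bit
    return stego

cover = np.array([[128, 129], [130, 131]], dtype=np.uint8)
stego = embed_lsbr(cover, [1, 0], [(0, 0), (0, 1)])
# 128 (even) can only move up to 129, never down to 127, and 129 (odd) can
# only move down to 128: this one-way behaviour is the structural asymmetry
# exploited by the statistical detectors cited above.
print(cover.ravel(), stego.ravel())
\end{verbatim}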
Let us also recall that this drawback can be corrected by considering the LSB matching (LSBM) subcategory, in which $+1$ or $-1$ is randomly added to a cover pixel value
only when its LSB does not match the secret bit to be embedded.
For well-encrypted hidden messages, the probabilities of increasing or decreasing a pixel value are equal, so the usual statistical approaches cannot be applied here to discover stego-contents produced by LSBM.
The most accurate detectors for this matching are universal steganalysers such as~\cite{LS08,DBLP:conf/ih/Ker05,FK12}, which classify images according to features extracted from neighboring elements of the residual noise.
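A minimal LSBM embedding step can be sketched as follows (the helper name and the saturation handling at 0 and 255 are our own choices):
\begin{verbatim}
import random

def embed_lsbm(pixel, bit):
    """Return a pixel whose LSB equals `bit`, changed by +/-1 if needed."""
    if (pixel & 1) == bit:
        return pixel                     # nothing to do
    step = random.choice((-1, 1))        # random direction: no systematic bias
    if pixel == 0:                       # stay inside the 8-bit range
        step = 1
    elif pixel == 255:
        step = -1
    return pixel + step

# 128 may become 127 or 129 with equal probability, so increases and
# decreases are balanced, unlike LSB replacement.
print([embed_lsbm(128, 1) for _ in range(5)])
\end{verbatim}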
Finally, LSB matching revisited (LSBMR) has been introduced more recently in~\cite{Mielikainen06}.
It works as follows: for a given pair of pixels, the LSB of the first pixel carries a first bit of the secret message, while the parity relationship (odd/even combination) of the two pixel values carries a second bit of the message.
By doing so, the expected number of modifications per pixel decreases from 0.5 to 0.375 at the maximum embedding rate of 1 bit per pixel (bpp), meaning fewer changes in the cover image for the same payload compared to both LSBR and LSBM.
It is also shown in~\cite{Mielikainen06} that such a scheme avoids the LSBR-style asymmetry, and should thus make detection slightly more difficult than with the LSBM approach.
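The pairwise embedding rule can be sketched as follows (this follows our reading of~\cite{Mielikainen06}; the binary function is the LSB of $\lfloor x_1/2\rfloor + x_2$, and saturation at 0 and 255 is ignored for brevity):
\begin{verbatim}
import random

def f(a, b):
    """Binary function used by LSBMR: LSB of (a // 2 + b)."""
    return (a // 2 + b) & 1

def embed_lsbmr(x1, x2, m1, m2):
    """Hide (m1, m2) in the pair (x1, x2) with at most one +/-1 change."""
    if (x1 & 1) == m1:
        if f(x1, x2) != m2:
            x2 += random.choice((-1, 1))   # any +/-1 flips f(x1, x2)
    else:
        x1 = x1 - 1 if f(x1 - 1, x2) == m2 else x1 + 1
    return x1, x2

y1, y2 = embed_lsbmr(100, 200, 1, 0)
# Extraction: m1 is the LSB of y1, m2 is f(y1, y2).
print((y1 & 1, f(y1, y2)))   # -> (1, 0)
\end{verbatim}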
Instead of (efficiently) modifying LSBs, another line of work selects the pixels whose value modification minimizes a distortion function.
This distortion may be computed from feature vectors such as those used in the steganalysers referenced above.
The Highly Undetectable steGO (HUGO) method~\cite{DBLP:conf/ih/PevnyFB10} is one of the most efficient instances of such a scheme.
It takes into account so-called SPAM features to avoid overfitting a particular steganalyser.
A distortion measure is then determined individually for each pixel, as the sum of the differences between the SPAM features computed from the cover and from the stego image.
Thanks to this feature set, HUGO can embed messages that are $7\times$ longer than those of LSB matching for the same level of undetectability.
However, this improvement is time consuming, mainly due to the computation of the distortion function.
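The underlying principle, though not HUGO's actual feature set, can be illustrated by the following toy sketch, where a deliberately simplified feature vector (a histogram of horizontal pixel differences, a very rough stand-in for the SPAM features) scores the cost of flipping a single LSB; HUGO itself combines much richer features with an optimized embedder:
\begin{verbatim}
import numpy as np

def features(img):
    """Toy feature vector: histogram of horizontal pixel differences."""
    d = np.diff(img.astype(np.int16), axis=1).ravel()
    return np.bincount(d + 255, minlength=511)

def lsb_flip_cost(cover, i, j):
    """Distortion of flipping the LSB of pixel (i, j): L1 distance between
    the feature vectors of the cover and of the modified image."""
    stego = cover.copy()
    stego[i, j] ^= 1
    return np.abs(features(cover) - features(stego)).sum()

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
costs = [(lsb_flip_cost(cover, i, j), (i, j))
         for i in range(8) for j in range(8)]
# Pixels with the lowest cost are the preferred embedding locations.
print(sorted(costs)[:3])
\end{verbatim}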
There remains a large gap between the random selection of LSBs and the feature-based modification of pixel values.
We argue that modifying edge pixels is an acceptable compromise.
Edges form the outline of an object: they are the boundaries between overlapping objects or between an object and its background.
When producing the stego-image, a small modification of some pixel values along such edges should not impact the image quality, which is a requirement when attempting to be undetectable.
Indeed, in a cover image, edges already break the continuity of the intensity map and thus already present large variations with respect to their neighboring pixels.
In other words, minor changes in regular areas are more detectable than larger modifications in edge ones.
Our first proposal is thus to embed message bits into edge shapes while preserving the other smooth regions.
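This strategy can be sketched as follows, with an elementary horizontal-difference edge mask standing in, purely for illustration, for the edge detector used by the actual scheme:
\begin{verbatim}
import numpy as np

def edge_mask(img, threshold=20):
    """Flag pixels whose horizontal intensity jump exceeds the threshold."""
    g = img.astype(np.int16)
    jump = np.abs(np.diff(g, axis=1, append=g[:, -1:]))
    return jump > threshold

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
mask = edge_mask(cover)
print("embeddable pixels:", int(mask.sum()), "out of", cover.size)
\end{verbatim}
Message bits would then be hidden only at positions where the mask holds (for instance with the LSBM rule sketched previously), leaving smooth regions untouched; note that the resulting capacity directly depends on the cover content.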
Edge-based steganographic schemes have already been studied, the most interesting approaches being detailed in~\cite{Luo:2010:EAI:1824719.1824720} and in~\cite{DBLP:journals/eswa/ChenCL10}.
In the former, the authors present Edge Adaptive Image Steganography based on LSB Matching Revisited, further denoted as EAISLSBMR.
This approach selects sharper edge regions with respect to a given embedding rate: the larger the number of bits to be embedded, the coarser the edge regions are.
The data hiding algorithm is then achieved by applying LSBMR on some of the pixels of these regions.
Through extensive experiments, the authors show that their method is more efficient than the LSB, LSBM, and LSBMR approaches.
However, it has been shown that the distinguishing error with plain LSB embedding is lower than the one obtained with some optimized binary embeddings~\cite{DBLP:journals/tifs/FillerJF11}.
We thus propose to take advantage of these optimized embeddings, provided they are not too time consuming.
In the latter, a hybrid edge detector is presented, followed by an ad hoc embedding.
The edge detection is computed by combining the fuzzy logic~\cite{Tyan1993} and Canny~\cite{Canny:1986:CAE:11274.11275} approaches.
The goal of this combination is to enlarge the set of modified bits, so as to increase the payload of the data hiding scheme.
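Both families of approaches derive the set of embeddable pixels from an edge detector and, in the former case, from the requested payload. A minimal sketch of such a payload-adaptive selection is given below (the gradient-like detector and all names are ours, chosen only for illustration): the threshold is relaxed until the edge regions can host the requested number of bits.
\begin{verbatim}
import numpy as np

def edge_strength(img):
    """Crude edge strength: sum of absolute horizontal and vertical jumps."""
    g = img.astype(np.int16)
    gx = np.abs(np.diff(g, axis=1, append=g[:, -1:]))
    gy = np.abs(np.diff(g, axis=0, append=g[-1:, :]))
    return gx + gy

def select_edge_regions(cover, n_bits):
    """Relax the threshold until the edge regions can host n_bits bits."""
    strength = edge_strength(cover)
    for threshold in range(255, -1, -1):
        mask = strength > threshold
        if mask.sum() >= n_bits:
            return mask, threshold
    # A flat image yields no edge pixels, hence no capacity at all.
    raise ValueError("payload exceeds the capacity of this cover")

rng = np.random.default_rng(2)
cover = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
for payload in (64, 512):
    mask, t = select_edge_regions(cover, payload)
    print(payload, "bits -> threshold", t, "->", int(mask.sum()), "pixels")
\end{verbatim}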
One can notice that all the previously referenced schemes~\cite{Luo:2010:EAI:1824719.1824720,DBLP:journals/eswa/ChenCL10,DBLP:conf/ih/PevnyFB10} produce stego contents by considering only the payload, not the type of image signal: the higher the payload, the better the approach is said to be.
On the contrary, we argue that some images should not be chosen as covers because of the nature of their signal.
Consider for instance a uniformly black image: even a very tiny modification of its pixels is easily detectable.
The approach we propose is thus a self-adaptive algorithm with a high payload that depends on the cover signal.
Finally, even though the steganalysis discipline has seen great innovations over the last years, it is currently impossible to prove rigorously that a given hidden message cannot be recovered by an attacker.
This is why we add a reasonable message encryption stage to our scheme, to be certain that, even in the worst case scenario, the attacker will not be able to obtain the original message content.
Doing so makes our steganographic protocol, to a certain extent, an asymmetric one.
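As summarized below, the proposed scheme relies on the Blum-Goldwasser probabilistic encryption protocol for this stage. A minimal sketch of its principle is given here (one keystream bit per squaring, toy primes, and a fixed instead of random seed, all chosen only for illustration): encrypting only requires the public modulus, whereas decrypting requires its secret factors, which is what gives the protocol the asymmetric flavour mentioned above.
\begin{verbatim}
def keystream(x0, t, n):
    """Generate t Blum-Blum-Shub keystream bits from the seed x0."""
    bits, x = [], x0
    for _ in range(t):
        x = pow(x, 2, n)
        bits.append(x & 1)
    return bits, x                       # x is the state after t squarings

def encrypt(message_bits, n, r):
    x0 = pow(r, 2, n)                    # random quadratic residue as seed
    ks, x_t = keystream(x0, len(message_bits), n)
    cipher = [m ^ k for m, k in zip(message_bits, ks)]
    return cipher, pow(x_t, 2, n)        # transmit one extra squaring

def decrypt(cipher_bits, x_final, p, q):
    n, t = p * q, len(cipher_bits)
    # Undo t+1 squarings: raise to ((p+1)/4)^(t+1) mod (p-1) to recover
    # x0 mod p (same mod q), then recombine the two residues with the CRT.
    up = pow(x_final % p, pow((p + 1) // 4, t + 1, p - 1), p)
    uq = pow(x_final % q, pow((q + 1) // 4, t + 1, q - 1), q)
    x0 = (uq + q * ((up - uq) * pow(q, -1, p) % p)) % n
    ks, _ = keystream(x0, t, n)
    return [c ^ k for c, k in zip(cipher_bits, ks)]

p, q = 499, 547                          # toy primes, both = 3 (mod 4)
n = p * q                                # public key; (p, q) stays private
msg = [1, 0, 1, 1, 0, 0, 1, 0]
cipher, x_final = encrypt(msg, n, r=123456)
assert decrypt(cipher, x_final, p, q) == msg
print(cipher)
\end{verbatim}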
To sum up, in this research work, tried and tested techniques from signal processing (adaptive edge detection), coding theory (syndrome-trellis codes), and cryptography (the Blum-Goldwasser encryption protocol) are combined to obtain an efficient steganographic scheme, whose main characteristics are to take the cover image into consideration and to be compatible with small computation resources.
The remainder of this document is organized as follows.
Section~\ref{sec:ourapproach} presents the details of the proposed steganographic scheme and applies it to a running example, emphasizing its adaptive aspect.
Section~\ref{sub:complexity} presents the overall complexity of our approach and compares it with that of HUGO.
Section~\ref{sec:experiments} reports experiments on image quality and steganalysis evaluation, and compares the results with state-of-the-art steganographic schemes.
Finally, concluding notes and future work are given in Section~\ref{sec:concl}.