This section aims at justifying the lightweight attribute of our approach.
More precisely, we compare the complexity of our schemes to that of some
current state-of-the-art steganographic schemes, namely
HUGO~\cite{DBLP:conf/ih/PevnyFB10}, WOW~\cite{conf/wifs/HolubF12}, and
UNIWARD~\cite{HFD14}.
Each of these schemes starts by computing the distortion cost of each pixel
switch and is then followed by the STC algorithm.
Since this last step is common to all of them, its complexity is not
included in the comparison.
In the rest of this section, we consider an $n \times n$ square image.

First of all, HUGO starts by computing the second-order SPAM features.
This step is in $O(n^2 + 2\times 343^2)$ due to the calculation of the
difference arrays and then of the 686 features (of size 343).
Next, for each pixel, the distortion measure is calculated by modifying its
value by $+1$/$-1$ and recomputing the SPAM features.
Pixels are thus selected according to their ability to provide an image
whose SPAM features are close to the original ones.
The algorithm thus computes a distance between each recomputed feature and
the original one, which is at least in $O(343)$, and an overall distance
between these metrics, which is in $O(686)$.
Computing the distance is thus in $O(2\times 343^2)$, and evaluating this
modification for every pixel is thus in $O(2\times 343^2 \times n^2)$.
Ranking these results may be achieved with an insertion sort, which is in
$O(2 n^2 \ln(n))$.
The overall complexity of the pixel selection is finally
$O(n^2 + 2\times 343^2 + 2\times 343^2 \times n^2 + 2 n^2 \ln(n))$,
\textit{i.e.}, $O(2 n^2(343^2 + \ln(n)))$.
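
To make explicit which step each term of this bound comes from, it can be
regrouped as in the display below; this is only a restatement of the counts
given above, with labels naming the corresponding steps.
\[
\underbrace{n^2 + 2\times 343^2}_{\mbox{\scriptsize SPAM features}}
\;+\;
\underbrace{2\times 343^2 \times n^2}_{\mbox{\scriptsize per-pixel distortion}}
\;+\;
\underbrace{2 n^2 \ln(n)}_{\mbox{\scriptsize ranking}}
\;=\;
O\!\left(2 n^2\left(343^2+\ln(n)\right)\right).
\]
In practice the middle term dominates, since $343^2$ is far larger than
$\ln(n)$ for any realistic image size.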

Let us now focus on WOW.
This scheme starts by computing the residual of the cover as a convolution
product, which is in $O(n^2\ln(n^2))$.
The embedding suitability $\eta_{ij}$ is then computed for each pixel
$1 \le i,j \le n$ thanks to another convolution product.
We thus have a complexity of $O(n^2 \times n^2\ln(n^2))$.
Moreover, the suitability is computed for each wavelet detail subband
(LH, HL, HH).
This distortion computation step is thus in $O(6n^4\ln(n))$.
Finally, a norm of these three values is computed for each pixel, which
adds a further $O(n^2)$.
To summarize, the complexity is in $O(6n^4\ln(n) + n^2)$.

What follows details the complexity of the distortion evaluation of the
UNIWARD scheme.
It is based on a convolution product $W$ of two elements of size $n$,
which is again in $O(n^2 \times n^2\ln(n^2))$, and on a sum $D$ of these
$W$, which is in $O(n^2)$.
Since three directional subbands are considered, as in WOW, this
distortion computation step is thus in $O(6n^4\ln(n) + n^2)$.

Our edge selection is based on a Canny filter.
When applied to an $n \times n$ square image, the noise reduction step is
in $O(5^3 n^2)$.
Next, let $T$ be the size of the Canny mask.
Computing the gradients is in $O(4Tn^2)$ since the derivatives in each
direction (vertical or horizontal) are in $O(2Tn^2)$.
Finally, thresholding with hysteresis is in $O(n^2)$.
The overall complexity is thus in $O((5^3+4T+1)n^2)$.

We are then left with expressing the complexity of the STC algorithm.
According to~\cite{DBLP:journals/tifs/FillerJF11}, it is in
$O(2^h \times n)$, where $h$ is the size of the duplicated matrix.
Its complexity is thus negligible compared with that of the embedding map
construction.

To summarize, for the embedding map construction, the complexities of
HUGO, WOW, and UNIWARD are dramatically larger than that of our scheme:
STABYLO is in $O(n^2)$, whereas HUGO is in $O(n^2\ln(n))$, and WOW and
UNIWARD are in $O(n^4\ln(n))$.
Thanks to these complexity results, we claim that STABYLO is lightweight.
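
To give a rough idea of what these orders of magnitude mean in practice,
the dominant operation counts derived above can be instantiated
numerically. The short Python script below is only a back-of-the-envelope
sketch: the image size $n$ and the Canny mask size $T$ are arbitrary
illustrative values, not experimental settings, and the figures it prints
are term counts, not measured running times.
\begin{verbatim}
import math

n = 1024   # illustrative n x n image size (arbitrary)
T = 3      # illustrative Canny mask size (arbitrary)

# Dominant operation counts of the embedding map construction,
# taken from the bounds stated in this section.
costs = {
    "HUGO":    2 * n**2 * (343**2 + math.log(n)),
    "WOW":     6 * n**4 * math.log(n) + n**2,
    "UNIWARD": 6 * n**4 * math.log(n) + n**2,
    "STABYLO": (5**3 + 4 * T + 1) * n**2,
}

base = costs["STABYLO"]
for name, cost in costs.items():
    print(f"{name:8s} {cost:10.3e}   ratio to STABYLO: {cost / base:8.1e}")
\end{verbatim}
With these illustrative values, the gap between STABYLO and the other
schemes already spans several orders of magnitude, which is consistent
with the asymptotic comparison above.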