steganographic schemes, namely HUGO~\cite{DBLP:conf/ih/PevnyFB10},
WOW~\cite{conf/wifs/HolubF12}, and UNIWARD~\cite{HFD14}.
Each of these schemes starts with the computation of the distortion cost
for each pixel switch and is later followed by the STC algorithm.
Since this last step is shared by all of them,
we evaluate its complexity separately.
In the remainder of this section, we consider an $n \times n$ square image.

First of all, HUGO starts with computing the second-order SPAM features.
This step is in $\Theta(n^2 + 2\times 343^2)$ due to the calculation
of the difference arrays and next of the 686 features (of size 343).
Next, for each pixel, the distortion measure is calculated by modifying
its value by $+1$/$-1$ and computing the SPAM features again.
Pixels are thus selected according to their ability to provide an image whose
SPAM features are close to the original ones.
The algorithm thus computes a distance between each feature
and the original one,
which is at least in $\Theta(343)$, and an overall distance between these
metrics, which is in $\Theta(686)$. Computing the distance is thus in
$\Theta(2\times 343^2)$ and this modification step
is thus in $\Theta(2\times 343^2 \times n^2)$.
Ranking these results may be achieved with a sorting algorithm,
which is in $\Theta(2 \times n^2 \ln(n))$ for the $n^2$ pixels.
The overall complexity of the pixel selection is finally
$\Theta(n^2 + 2 \times 343^2 + 2\times 343^2 \times n^2 + 2 \times n^2 \ln(n))$, \textit{i.e.},
$\Theta(2 \times n^2(343^2 + \ln(n)))$.

Let us now focus on WOW.
This scheme starts by computing the residual
of the cover as a convolution product, which is in $\Theta(n^2\ln(n^2))$.
The embedding suitability $\eta_{ij}$ is then computed for each pixel
$1 \le i,j \le n$ thanks to a convolution product again.
We thus have a complexity of $\Theta(n^2 \times n^2\ln(n^2))$.
Moreover, the suitability is computed for each wavelet detail level
(HH, HL, LH).
This distortion computation step is thus in $\Theta(6n^4\ln(n))$.
Finally, a norm of these three values is computed for each pixel,
which adds a complexity of $\Theta(n^2)$.
To summarize, the complexity of WOW is in $\Theta(6n^4\ln(n) + n^2)$.
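For the sake of clarity, the constant $6$ in this last bound can be recovered
by simply unfolding the terms given above:
\[
3 \times \left( n^2 \times n^2\ln(n^2) \right)
= 3 \times 2\, n^4 \ln(n)
= 6\, n^4 \ln(n),
\]
where the factor $3$ accounts for the three detail levels and
$n^2 \times n^2\ln(n^2)$ is the cost of computing the suitability of the
$n^2$ pixels at one level.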
What follows details the complexity of the distortion evaluation
of the UNIWARD scheme.
This one is based on a convolution product $W$ of two elements
of size $n$, which is again in $\Theta(n^2 \times n^2\ln(n^2))$, and on a sum $D$ of
these $W$, which is in $\Theta(n^2)$.
This distortion computation step is thus in $\Theta(6n^4\ln(n) + n^2)$.

Our edge selection is based on a Canny filter. When applied to an
$n \times n$ square image, the noise reduction step is in $\Theta(5^3 n^2)$.
Next, let $T$ be the size of the Canny mask.
Computing the gradients is in $\Theta(4Tn^2)$ since the derivatives in each
direction (vertical or horizontal) are in $\Theta(2Tn^2)$.
Finally, thresholding with hysteresis is in $\Theta(n^2)$.
The overall complexity is thus in $\Theta((5^3+4T+1)n^2)$.

We are then left to express the complexity of the STC algorithm.
According to~\cite{DBLP:journals/tifs/FillerJF11}, it is
in $\Theta(2^h \times n)$ where $h$ is the height of the duplicated
matrix. Its complexity is thus negligible compared with
the embedding map construction.

Fig.~\ref{fig:compared}
summarizes the complexity of the embedding map construction for
WOW/UNIWARD, HUGO, and STABYLO. It considers square images
of size $n \times n$ with $n$ ranging from
512 to 4096. The $y$-axis is expressed on a logarithmic scale.
It shows that the complexity of all the other algorithms
is dramatically larger than that of the STABYLO scheme.
Thanks to these complexity results, we claim that our approach is lightweight.

\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{complexity}
\end{center}
\caption{Complexity evaluation of WOW/UNIWARD, HUGO, and STABYLO}
\label{fig:compared}
\end{figure}
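To give a rough idea of the orders of magnitude involved, these bounds can be
evaluated directly for $n = 512$, taking for the sole purpose of this
illustration a Canny mask of size $T = 3$:
\[
\begin{array}{ll}
\mbox{HUGO:} & 2 \times 512^2 \times (343^2 + \ln(512)) \approx 6.2 \times 10^{10},\\
\mbox{WOW/UNIWARD:} & 6 \times 512^4 \times \ln(512) + 512^2 \approx 2.6 \times 10^{12},\\
\mbox{STABYLO:} & (5^3 + 4 \times 3 + 1) \times 512^2 \approx 3.6 \times 10^{7}.
\end{array}
\]
The resulting gap of several orders of magnitude between STABYLO and the other
schemes is consistent with the curves of Fig.~\ref{fig:compared}.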