X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/canny.git/blobdiff_plain/b04e55fbbcdcbca04098f1bf05ec4473e47b5c51..HEAD:/complexity.tex?ds=inline

diff --git a/complexity.tex b/complexity.tex
index d19f6e2..390190f 100644
--- a/complexity.tex
+++ b/complexity.tex
@@ -2,16 +2,16 @@
 This section aims at justifying the lightweight nature of our approach.
 To be more precise, we compare the complexity of our scheme with some
 state-of-the-art
-steganographic scheme, namely HUGO~\cite{DBLP:conf/ih/PevnyFB10},
+steganographic schemes, namely HUGO~\cite{DBLP:conf/ih/PevnyFB10},
 WOW~\cite{conf/wifs/HolubF12}, and UNIWARD~\cite{HFD14}.
-Each of these scheme starts with the computation of the distortion cost
+Each of these schemes starts with the computation of the distortion cost
 for each pixel switch and is later followed by the STC algorithm.
 Since this last step is shared by all,
-we separately eavluate this complexity.
-In all the rest of this section, we consider a $n \times n$ square image.
+we evaluate its complexity separately.
+In the remainder of this section, we consider an $n \times n$ square image.

 First of all, HUGO starts with computing the second-order SPAM features.
-This steps is in $\theta(n^2 + 2\times 343^2)$ due to the calculation
+This step is in $\theta(n^2 + 2\times 343^2)$ due to the computation
 of the difference arrays and then of the 686 features (of size 343).
 Next, for each pixel, the distortion measure is calculated by modifying its value
 by +1/-1 and computing again the SPAM
@@ -23,8 +23,8 @@
 which is at least in $\theta(343)$, and an overall distance between
 these metrics, which is in $\theta(686)$. Computing the distance is thus in
 $\theta(2\times 343^2)$, and this modification step is in $\theta(2\times 343^2 \times n^2)$.
-Ranking these results may be achieved with an insertion sort, which is in
-$2 \times n^2 \ln(n)$.
+Ranking these results may be achieved with a quicksort, which is in
+$\theta(2 \times n^2 \ln(n))$ for data of size $n^2$.
 The overall complexity of the pixel selection is finally
 $\theta(n^2 + 2 \times 343^2 + 2\times 343^2 \times n^2 + 2 \times n^2 \ln(n))$, \textit{i.e.},
 $\theta(2 \times n^2(343^2 + \ln(n)))$.
@@ -47,16 +47,17 @@ To summarize, the complexity is in $\theta(6n^4\ln(n) +n^2)$

 What follows details the complexity of the distortion evaluation of the
 UNIWARD scheme. It is based on a convolution product $W$ of two elements
-of size $n$ and is again in $\theta(n^2 \times n^2\ln(n^2))$ and a sum $D$ of
+of size $n$, which is again in $\theta(n^2 \times n^2\ln(n^2))$,
+ and on a sum $D$ of
 these $W$, which is in $\theta(n^2)$.
 This distortion computation step is thus in $\theta(6n^4\ln(n) + n^2)$.

-Our edge selection is based on a Canny Filter. When applied on a
+Our edge selection is based on a Canny filter. When applied to an
 $n \times n$ square image, the noise reduction step is in $\theta(5^3 n^2)$.
-Next, let $T$ be the size of the canny mask.
-Computing gradients is in $\theta(4Tn)$ since derivatives of each direction (vertical or horizontal)
-are in $\theta(2Tn)$.
+Next, let $T$ be the size of the Canny mask.
+Computing gradients is in $\theta(4Tn^2)$ since derivatives in each direction (vertical or horizontal)
+are in $\theta(2Tn^2)$.
 Finally, thresholding with hysteresis is in $\theta(n^2)$.
 The overall complexity is thus in $\theta((5^3+4T+1)n^2)$.
@@ -71,18 +72,18 @@ matrix. Its complexity is thus negligible compared with the embedding map
 construction.
 Fig.~\ref{fig:compared}
-summarizes the complexity of the embedding map construction, for Hugo, Wow
-and Uniward. It deals with square images
+summarizes the complexity of the embedding map construction for
+WOW/UNIWARD, HUGO, and STABYLO. It deals with square images
 of size $n \times n$ when $n$ ranges from 512 to 4096.
 The $y$-axis is on a logarithmic scale.
-It shows that the complexity of all algorithms
+It shows that the complexity of all the other schemes
 is dramatically larger than that of the STABYLO scheme.
-Thanks to these complexity results, we claim that STABYLO is lightweight.
+These complexity results support the claim that our approach is lightweight.

 \begin{figure}
 \begin{center}
 \includegraphics[scale=0.4]{complexity}
 \end{center}
-\caption{Complexity evaluation of Wow/Uniward, Hugo, and Stabylo}
+\caption{Complexity evaluation of WOW/UNIWARD, HUGO, and STABYLO.}
 \label{fig:compared}
 \end{figure}
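
Editorial note (not part of complexity.tex): the gap claimed above can be checked by a rough evaluation of the dominant terms derived in the section. Assuming, for illustration only, a Canny mask of size $T = 3$ and an image with $n = 512$:

\[
\underbrace{2\, n^2 \times 343^2}_{\text{HUGO}} \approx 6.2 \times 10^{10}, \qquad
\underbrace{6\, n^4 \ln(n)}_{\text{WOW/UNIWARD}} \approx 2.6 \times 10^{12}, \qquad
\underbrace{(5^3 + 4T + 1)\, n^2}_{\text{STABYLO}} \approx 3.6 \times 10^{7},
\]

so the Canny-based embedding map is several orders of magnitude cheaper, which is consistent with the log-scale gap shown in Fig.~\ref{fig:compared}.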
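
The following Python sketch (not from the repository) evaluates the same three expressions over the range of $n$ used in Fig.~\ref{fig:compared}; the mask size T = 3 is an assumed value, since the section keeps $T$ symbolic.

import math

T = 3  # assumed Canny mask size; the text leaves T symbolic

def hugo(n):
    # theta(2 n^2 (343^2 + ln n)) -- HUGO embedding-map cost derived above
    return 2 * n**2 * (343**2 + math.log(n))

def wow_uniward(n):
    # theta(6 n^4 ln n + n^2) -- WOW/UNIWARD distortion-computation cost
    return 6 * n**4 * math.log(n) + n**2

def stabylo(n, t=T):
    # theta((5^3 + 4T + 1) n^2) -- Canny-based edge selection cost
    return (5**3 + 4 * t + 1) * n**2

for n in (512, 1024, 2048, 4096):
    print(f"n={n:4d}  HUGO={hugo(n):9.2e}  WOW/UNIWARD={wow_uniward(n):9.2e}  "
          f"STABYLO={stabylo(n):9.2e}")

Plotting these values on a log scale reproduces the ordering implied by the expressions (STABYLO lowest, then HUGO, then WOW/UNIWARD), in line with the comparison reported in the figure.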