This section considers functions $f: \Bool^{\mathsf{N}} \rightarrow \Bool^{\mathsf{N}}$
obtained from a hypercube where a Hamiltonian cycle has been removed,
as described in the previous section.
Notice that the iteration graph is always a subgraph of the
${\mathsf{N}}$-cube augmented with all the self-loops, \textit{i.e.}, all the
edges $(v,v)$ for any $v \in \Bool^{\mathsf{N}}$.
Next, if probabilities are added on the transition graph, iterations can be
interpreted as Markov chains.

\begin{xpl}
Let us consider for instance the graph $\Gamma(f)$ defined
in Figure~\ref{fig:iteration:f*} and
the probability function $p$ defined on the set of edges as follows:
$$
p(e) = \left\{
\begin{array}{ll}
\frac{2}{3} & \textrm{if $e=(v,v)$ with $v \in \Bool^{3}$,}\\
\frac{1}{6} & \textrm{otherwise.}
\end{array}
\right.
$$
The matrix $P$ of the Markov chain associated with $\Gamma(f)$ and $p$ is then
\[
P=\dfrac{1}{6} \left(
\begin{array}{cccccccc}
\multicolumn{8}{c}{\vdots} \\
0&0&0&0&1&0&4&1 \\
0&0&0&1&0&1&0&4
\end{array}
\right),
\]
of which only the last two rows are reproduced here.
\end{xpl}
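Notice that, in the example above, each row of $P$ sums to $1$: for instance, the last
row gives $\frac{1}{6}(1+1+4)=1$. Moreover, since exactly one incoming and one outgoing
arc are removed at each vertex (see the constraint recalled at the end of
Section~\ref{sub:stop:bound}), every vertex keeps two incoming non-loop edges of
probability $\frac{1}{6}$ together with its self-loop of probability $\frac{2}{3}$, so
that each column of $P$ also sums to $1$. The matrix $P$ is thus doubly stochastic, and
the uniform distribution over $\Bool^{3}$ is stationary for the associated chain.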
A specific random walk in this modified hypercube is first
introduced (see Section~\ref{sub:stop:formal}). We then
study this random walk theoretically in order to
provide an upper bound on the corresponding stopping time
(see Section~\ref{sub:stop:bound}).
We finally complete this study with experimental
results suggesting that this bound can be reduced (Section~\ref{sub:stop:exp}).
For a general reference on Markov chains,
see~\cite{LevinPeresWilmer2006},
and particularly Chapter~5 on stopping times.

\subsection{Formalizing the Random Walk}\label{sub:stop:formal}

First of all, let $\pi$ and $\mu$ be two distributions on $\Bool^{\mathsf{N}}$.
The total variation distance between $\pi$ and $\mu$ is denoted $\tv{\pi-\mu}$ and is
defined by
$$\tv{\pi-\mu}=\max_{A\subset \Bool^{\mathsf{N}}} |\pi(A)-\mu(A)|.$$
It is known that
$$\tv{\pi-\mu}=\frac{1}{2}\sum_{X\in\Bool^{\mathsf{N}}}|\pi(X)-\mu(X)|.$$
Moreover, if
$\nu$ is a distribution on $\Bool^{\mathsf{N}}$, one has
$$\tv{\pi-\mu}\leq \tv{\pi-\nu}+\tv{\nu-\mu}.$$

Let $P$ be the matrix of a Markov chain on $\Bool^{\mathsf{N}}$. For any
$X\in \Bool^{\mathsf{N}}$, let $P(X,\cdot)$ be the distribution induced by the
${\rm bin}(X)$-th row of $P$, where ${\rm bin}(X)$ is the integer whose
binary encoding is $X$. If the Markov chain induced by $P$ has a stationary
distribution $\pi$, then we define
$$d(t)=\max_{X\in\Bool^{\mathsf{N}}}\tv{P^t(X,\cdot)-\pi}$$
and
$$t_{\rm mix}(\varepsilon)=\min\{t \mid d(t)\leq \varepsilon\}.$$

Intuitively speaking, $t_{\rm mix}(\varepsilon)$ is the number of steps required
to be $\varepsilon$-close to the stationary distribution, wherever
the chain starts.
One can prove that
$$t_{\rm mix}(\varepsilon)\leq \lceil\log_2(\varepsilon^{-1})\rceil
t_{\rm mix}\left(\frac{1}{4}\right).$$

A \emph{stationary time} $\tau$ is a
randomized stopping time (possibly depending on the starting position $X$)
such that the distribution of $X_\tau$ is $\pi$:
$$\P_X(X_\tau=Y)=\pi(Y).$$

\subsection{Upper Bound on the Stopping Time}\label{sub:stop:bound}

A stopping time $\tau$ is a \emph{strong stationary time} if $X_{\tau}$ is
independent of $\tau$.
The following result will be useful~\cite[Proposition~6.10]{LevinPeresWilmer2006}.

\begin{thrm}\label{thm-sst}
If $\tau$ is a strong stationary time, then $d(t)\leq \max_{X\in\Bool^{\mathsf{N}}}
\P_X(\tau > t)$.
\end{thrm}

Let $E$ denote the set of all the edges of the classical
${\mathsf{N}}$-cube.
Let $h$ be a function from $\Bool^{\mathsf{N}}$ into
$\llbracket 1, {\mathsf{N}} \rrbracket$.
Intuitively speaking, $h$ memorizes, for each node
$X \in \Bool^{\mathsf{N}}$, which edge is removed in the Hamiltonian cycle,
\textit{i.e.}, which bit in $\llbracket 1, {\mathsf{N}} \rrbracket$
cannot be switched.
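For the sake of illustration (this toy example is ours and is independent of the
construction of Section~\ref{sec:hamilton}), assume ${\mathsf{N}}=3$, assume that the
removed Hamiltonian cycle is the standard reflected Gray code cycle
$000 \rightarrow 001 \rightarrow 011 \rightarrow 010 \rightarrow 110 \rightarrow 111
\rightarrow 101 \rightarrow 100 \rightarrow 000$, and assume that $h(X)$ records the
bit switched by the removed arc leaving $X$, bits being numbered from the least
significant position. One then gets
$$h(000)=1,\; h(001)=2,\; h(011)=1,\; h(010)=3,\; h(110)=1,\; h(111)=2,\;
h(101)=1,\; h(100)=3.$$
With the notation $\ov{h}$ introduced below, $\ov{h}(X)$ is here the successor of $X$
along the removed cycle; since this cycle has length $8>2$, one has
$\ov{h}(\ov{h}(X))\neq X$ for every $X$, \textit{i.e.}, this $\ov{h}$ is square-free
in the sense defined below.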
We denote by $\ov{h} : \Bool^{\mathsf{N}} \rightarrow \Bool^{\mathsf{N}}$ the function
such that for any $X \in \Bool^{\mathsf{N}}$,
$(X,\ov{h}(X)) \in E$ and $X\oplus\ov{h}(X)=0^{{\mathsf{N}}-h(X)}10^{h(X)-1}$.
The function $\ov{h}$ is said to be {\it square-free} if for every $X\in \Bool^{\mathsf{N}}$,
$\ov{h}(\ov{h}(X))\neq X$.

An integer $\ell\in \llbracket 1,{\mathsf{N}} \rrbracket$ is said to be {\it fair}
at time $t$ if there exists $0\leq j <t$ at which the position $\ell$ has been
selected and its value has effectively been switched.
Let $\ts$ be the first time at which all the elements of
$\llbracket 1,{\mathsf{N}} \rrbracket$ are fair.
By Markov's inequality, $\P_X(\ts > t)\leq \frac{E[\ts]}{t}$.
With $t_n=32{\mathsf{N}}^2+16{\mathsf{N}}\ln ({\mathsf{N}}+1)$ and the bound
$E[\ts]\leq 8{\mathsf{N}}^2+4{\mathsf{N}}\ln ({\mathsf{N}}+1)$ of
Theorem~\ref{prop:stop}, one obtains $\P_X(\ts > t_n)\leq \frac{1}{4}$.
Therefore, using the definition of $t_{\rm mix}$ and
Theorem~\ref{thm-sst}, it follows that
$t_{\rm mix}(\frac{1}{4})\leq 32{\mathsf{N}}^2+16{\mathsf{N}}\ln ({\mathsf{N}}+1)
=O({\mathsf{N}}^2)$.

Notice that this upper bound on the stationary time is obtained under the
following constraint only: for each vertex of the $\mathsf{N}$-cube,
exactly one incoming arc and one outgoing arc are removed.
The computation does not exploit the fact that the removed arcs form a (balanced)
Hamiltonian cycle, which is a more regular and more restrictive requirement
than this constraint.
Moreover, the bound is obtained using the coarse Markov inequality. For the
classical (lazy) random walk on the $\mathsf{N}$-cube, without removing any
Hamiltonian cycle, the mixing time is in $\Theta({\mathsf{N}}\ln {\mathsf{N}})$.
We conjecture that, in our context, the mixing time is also in
$\Theta({\mathsf{N}}\ln {\mathsf{N}})$, and we therefore claim that the upper bound
on the stopping time can be reduced. This is studied experimentally in the
next subsection.

\subsection{Practical Evaluation of Stopping Times}\label{sub:stop:exp}

Let a function $f: \Bool^{\mathsf{N}} \rightarrow \Bool^{\mathsf{N}}$
and an initial seed $x^0$ be given.
The pseudo-code given in Algorithm~\ref{algo:stop} returns the smallest
number of iterations such that all the elements
$\ell\in \llbracket 1,{\mathsf{N}} \rrbracket$ are fair. An approximation of $E[\ts]$
can then be deduced by calling this code many times, with many instances of the
function and many seeds.

\begin{algorithm}[ht]
\KwIn{a function $f$, an initial configuration $x^0$ ($\mathsf{N}$ bits)}
\KwOut{a number of iterations $\textit{nbit}$}

$\textit{nbit} \leftarrow 0$\;
$x\leftarrow x^0$\;
$\textit{fair}\leftarrow\emptyset$\;
\While{$\left\vert{\textit{fair}}\right\vert < \mathsf{N} $}
{
  $ s \leftarrow \textit{Random}(\mathsf{N})$ \;
  $\textit{image} \leftarrow f(x) $\;
  \If{$\textit{Random}(1) \neq 0$ and $x[s] \neq \textit{image}[s]$}{
    $\textit{fair} \leftarrow \textit{fair} \cup \{s\}$\;
    $x[s] \leftarrow \textit{image}[s]$\;
  }
  $\textit{nbit} \leftarrow \textit{nbit}+1$\;
}
\Return{$\textit{nbit}$}\;
\caption{Pseudo-code of the stopping time computation}
\label{algo:stop}
\end{algorithm}
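As an illustration only, the following Python sketch mirrors Algorithm~\ref{algo:stop}
and the averaging procedure described above; the function \texttt{f\_neg} is a mere
placeholder (bitwise negation) standing in for a function generated as in
Section~\ref{sec:hamilton}, and all the identifiers are ours.

\begin{verbatim}
import random

def stopping_time(f, x0):
    """Return the number of iterations until every position is fair."""
    x = list(x0)
    n = len(x)
    fair = set()                         # positions already marked as fair
    nbit = 0
    while len(fair) < n:
        s = random.randrange(n)          # draw a position uniformly
        image = f(x)                     # compute f(x)
        # with probability 1/2, try to switch the s-th bit
        if random.randrange(2) != 0 and x[s] != image[s]:
            fair.add(s)
            x[s] = image[s]
        nbit += 1
    return nbit

def f_neg(x):
    """Placeholder function: bitwise negation of the configuration."""
    return [1 - b for b in x]

if __name__ == "__main__":
    n, runs = 8, 10000
    seeds = ([random.randrange(2) for _ in range(n)] for _ in range(runs))
    print(sum(stopping_time(f_neg, x0) for x0 in seeds) / runs)
\end{verbatim}

The printed average is the empirical approximation of $E[\ts]$ for the chosen
function and size.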
Practically speaking, for each size $\mathsf{N}$, $3 \le \mathsf{N} \le 16$,
10 functions have been generated according to the method presented in
Section~\ref{sec:hamilton}. For each of them, the computation of the approximation
of $E[\ts]$ has been executed 10000 times with a random seed.
Figure~\ref{fig:stopping:moy} summarizes these results. A circle represents the
approximation of $E[\ts]$ for a given $\mathsf{N}$, and the line is the graph of the
function $x \mapsto 2x\ln(2x+8)$.
It can first be observed that the approximations are much smaller than the upper
bound given in Theorem~\ref{prop:stop}. Moreover, their good agreement with the graph
of $x \mapsto 2x\ln(2x+8)$ supports the conjecture of the previous subsection.

\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{complexity}
\caption{Average Stopping Time Approximation}\label{fig:stopping:moy}
\end{figure}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "main"
%%% ispell-dictionary: "american"
%%% mode: flyspell
%%% End: