This section considers functions $f: \Bool^{\mathsf{N}} \rightarrow \Bool^{\mathsf{N}} $
obtained from a hypercube from which a Hamiltonian cycle has been removed,
as described in the previous section.
Notice that the iteration graph is always a subgraph of the
${\mathsf{N}}$-cube augmented with all the self-loops, \textit{i.e.}, all the
edges $(v,v)$ for $v \in \Bool^{\mathsf{N}}$.
7 Next, if we add probabilities on the transition graph, iterations can be
8 interpreted as Markov chains.
11 Let us consider for instance
the graph $\Gamma(f^*)$ defined
in Figure~\ref{fig:iteration:f*} and
14 the probability function $p$ defined on the set of edges as follows:
$$
p(e) = \left\{
\begin{array}{ll}
\frac{2}{3} & \textrm{if $e=(v,v)$ with $v \in \Bool^3$,}\\
\frac{1}{6} & \textrm{otherwise.}
\end{array}
\right.
$$
The matrix $P$ of the Markov chain associated with the function $f^*$ and its
probability function $p$ is then the $8\times 8$ stochastic matrix whose
diagonal entries $P(v,v)$ are equal to $\frac{2}{3}$ and whose off-diagonal
entries $P(v,w)$ are equal to $\frac{1}{6}$ if $(v,w)$ is an edge of
$\Gamma(f^*)$ and to $0$ otherwise.
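To make this construction concrete, the following Python sketch builds such a matrix $P$ from an edge list of an iteration graph on $\Bool^3$, using the probability function $p$ above. The removed arcs below follow a hypothetical Gray-code Hamiltonian cycle, not necessarily the one of Figure~\ref{fig:iteration:f*}.
\begin{verbatim}
# Sketch: build the Markov matrix P of an iteration graph on Bool^3
# with p(v,v) = 2/3 and p(e) = 1/6 on the remaining edges.
# The removed arcs form a *hypothetical* Hamiltonian cycle (Gray code);
# replace them with the actual arcs removed in Gamma(f*).
import numpy as np

N = 3
edges = [(v, v ^ (1 << i)) for v in range(2 ** N) for i in range(N)]
removed = {(0, 1), (1, 3), (3, 2), (2, 6), (6, 7), (7, 5), (5, 4), (4, 0)}
edges = [e for e in edges if e not in removed]

P = np.zeros((2 ** N, 2 ** N))
for v in range(2 ** N):
    P[v, v] = 2 / 3                 # self-loop
for (v, w) in edges:
    P[v, w] = 1 / 6                 # remaining hypercube edges

assert np.allclose(P.sum(axis=1), 1.0)  # each row sums to 1
\end{verbatim}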
A specific random walk in this modified hypercube is first
introduced (see Section~\ref{sub:stop:formal}). We then
study this random walk theoretically to
provide an upper bound on the expected length of fair sequences
(see Section~\ref{sub:stop:bound}).
We finally complete this study with experimental
results that reduce this bound (Section~\ref{sub:stop:exp}).
Notice that for a general reference on Markov chains,
49 see~\cite{LevinPeresWilmer2006},
50 and particularly Chapter~5 on stopping times.
53 \subsection{Formalizing the Random Walk}\label{sub:stop:formal}
55 First of all, let $\pi$, $\mu$ be two distributions on $\Bool^{\mathsf{N}}$. The total
variation distance between $\pi$ and $\mu$ is denoted $\tv{\pi-\mu}$ and is defined by
58 $$\tv{\pi-\mu}=\max_{A\subset \Bool^{\mathsf{N}}} |\pi(A)-\mu(A)|.$$ It is known that
59 $$\tv{\pi-\mu}=\frac{1}{2}\sum_{X\in\Bool^{\mathsf{N}}}|\pi(X)-\mu(X)|.$$ Moreover, if
60 $\nu$ is a distribution on $\Bool^{\mathsf{N}}$, one has
$$\tv{\pi-\mu}\leq \tv{\pi-\nu}+\tv{\nu-\mu}.$$
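As an illustration, the total variation distance can be computed directly from this definition; the following Python sketch, with distributions indexed by the integers $0,\dots,2^{\mathsf{N}}-1$, is only meant to make the notation concrete.
\begin{verbatim}
# Total variation distance between two distributions on Bool^N,
# represented as vectors indexed by 0 .. 2^N - 1.
import numpy as np

def tv(pi, mu):
    """(1/2) * sum_X |pi(X) - mu(X)|"""
    return 0.5 * np.abs(np.asarray(pi) - np.asarray(mu)).sum()

# Example with N = 3: uniform distribution vs. Dirac mass at 000.
N = 3
pi = np.full(2 ** N, 1 / 2 ** N)
mu = np.zeros(2 ** N)
mu[0] = 1.0
print(tv(pi, mu))   # 1 - 1/2^N = 0.875
\end{verbatim}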
63 Let $P$ be the matrix of a Markov chain on $\Bool^{\mathsf{N}}$. $P(X,\cdot)$ is the
64 distribution induced by the $X$-th row of $P$. If the Markov chain induced by
65 $P$ has a stationary distribution $\pi$, then we define
66 $$d(t)=\max_{X\in\Bool^{\mathsf{N}}}\tv{P^t(X,\cdot)-\pi}.$$
The {\it mixing time} is then defined by
$$t_{\rm mix}(\varepsilon)=\min\{t \mid d(t)\leq \varepsilon\}.$$
Intuitively speaking, $t_{\rm mix}(\varepsilon)$ is the number of steps required
to be $\varepsilon$-close to the stationary distribution, wherever
the chain starts. It is known that
$$t_{\rm mix}(\varepsilon)\leq \lceil\log_2(\varepsilon^{-1})\rceil t_{\rm mix}(\frac{1}{4}).$$
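For small $\mathsf{N}$, $d(t)$ and $t_{\rm mix}(\varepsilon)$ can be evaluated by brute force. The sketch below assumes the chain described by $P$ is ergodic, approximates $\pi$ by a large power of $P$, and reuses the \texttt{tv} helper above.
\begin{verbatim}
# Brute-force evaluation of d(t) and t_mix(eps) for a stochastic
# matrix P (assumed ergodic); pi is approximated by a large power of P.
import numpy as np

def mixing_time(P, eps=0.25, t_max=10000):
    pi = np.linalg.matrix_power(P, 4096)[0]   # crude stationary distribution
    Pt = np.eye(P.shape[0])
    for t in range(1, t_max + 1):
        Pt = Pt @ P
        d_t = max(tv(Pt[x], pi) for x in range(P.shape[0]))
        if d_t <= eps:
            return t                          # t_mix(eps)
    return None
\end{verbatim}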
Let $(X_t)_{t\in \mathbb{N}}$ be a sequence of $\Bool^{\mathsf{N}}$-valued random
variables. An $\mathbb{N}$-valued random variable $\tau$ is a {\it stopping
103 time} for the sequence $(X_i)$ if for each $t$ there exists $B_t\subseteq
104 (\Bool^{\mathsf{N}})^{t+1}$ such that $\{\tau=t\}=\{(X_0,X_1,\ldots,X_t)\in B_t\}$.
105 In other words, the event $\{\tau = t \}$ only depends on the values of
106 $(X_0,X_1,\ldots,X_t)$, not on $X_k$ with $k > t$.
109 Let $(X_t)_{t\in \mathbb{N}}$ be a Markov chain and $f(X_{t-1},Z_t)$ a
110 random mapping representation of the Markov chain. A {\it randomized
111 stopping time} for the Markov chain is a stopping time for
112 $(Z_t)_{t\in\mathbb{N}}$. If the Markov chain is irreducible and has $\pi$
113 as stationary distribution, then a {\it stationary time} $\tau$ is a
114 randomized stopping time (possibly depending on the starting position $X$),
115 such that the distribution of $X_\tau$ is $\pi$:
116 $$\P_X(X_\tau=Y)=\pi(Y).$$
\subsection{Upper Bound on the Stopping Time}\label{sub:stop:bound}
A stationary time $\tau$ is a {\it strong stationary time} if $X_{\tau}$ is
independent of $\tau$. The following result will be useful~\cite[Proposition~6.10]{LevinPeresWilmer2006}.
125 \begin{thrm}\label{thm-sst}
If $\tau$ is a strong stationary time, then $d(t)\leq \max_{X\in\Bool^{\mathsf{N}}}
\P_X(\tau > t)$.
\end{thrm}
Let $E=\{(X,Y)\mid
X\in \Bool^{\mathsf{N}}, Y\in \Bool^{\mathsf{N}},\ X=Y \text{ or } X\oplus Y \in 0^*10^*\}$.
In other words, $E$ is the set of all the edges of the classical
${\mathsf{N}}$-cube augmented with the self-loops.
136 Let $h$ be a function from $\Bool^{\mathsf{N}}$ into $\llbracket 1, {\mathsf{N}} \rrbracket$.
Intuitively speaking, $h$ aims at memorizing, for each node
$X \in \Bool^{\mathsf{N}}$, which edge has been removed at $X$ in the Hamiltonian cycle,
\textit{i.e.}, which bit in $\llbracket 1, {\mathsf{N}} \rrbracket$
cannot be switched.
144 We denote by $E_h$ the set $E\setminus\{(X,Y)\mid X\oplus Y =
0^{{\mathsf{N}}-h(X)}10^{h(X)-1}\}$. This is the edge set of the modified hypercube,
\textit{i.e.}, of the ${\mathsf{N}}$-cube from which the Hamiltonian cycle encoded by $h$
has been removed.
We define the Markov matrix $P_h$ for each row $X$ and
each column $Y$ as follows:
\begin{equation}
\left\{
\begin{array}{ll}
P_h(X,X)=\frac{1}{2}+\frac{1}{2{\mathsf{N}}} & \\
P_h(X,Y)=0 & \textrm{if $(X,Y)\notin E_h$,}\\
P_h(X,Y)=\frac{1}{2{\mathsf{N}}} & \textrm{if $X\neq Y$ and $(X,Y) \in E_h$.}
\end{array}
\right.
\label{eq:Markov:rairo}
\end{equation}
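The following sketch builds the matrix $P_h$ of Equation~(\ref{eq:Markov:rairo}) for a given $h$; the constant function used in the example is a placeholder only, since in our setting $h$ comes from the removed Hamiltonian cycle.
\begin{verbatim}
# Sketch of the matrix P_h of the lazy walk on the N-cube deprived,
# at each vertex X, of the edge in direction h(X).
import numpy as np

def markov_matrix(N, h):
    """h maps a vertex (int in [0, 2^N)) to a direction in [1, N]."""
    size = 2 ** N
    P = np.zeros((size, size))
    for X in range(size):
        P[X, X] = 0.5 + 1 / (2 * N)
        for i in range(1, N + 1):
            if i != h(X):
                P[X, X ^ (1 << (i - 1))] = 1 / (2 * N)
    return P

P_h = markov_matrix(3, h=lambda X: 1)     # placeholder h, illustration only
assert np.allclose(P_h.sum(axis=1), 1.0)  # P_h is stochastic
\end{verbatim}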
162 We denote by $\ov{h} : \Bool^{\mathsf{N}} \rightarrow \Bool^{\mathsf{N}}$ the function
163 such that for any $X \in \Bool^{\mathsf{N}} $,
164 $(X,\ov{h}(X)) \in E$ and $X\oplus\ov{h}(X)=0^{{\mathsf{N}}-h(X)}10^{h(X)-1}$.
The function $\ov{h}$ is said to be {\it square-free} if, for every $X\in \Bool^{\mathsf{N}}$,
166 $\ov{h}(\ov{h}(X))\neq X$.
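Since $\ov{h}(X)$ is simply $X$ with bit $h(X)$ flipped, both hypotheses of the next lemma can be checked mechanically, as in the following sketch.
\begin{verbatim}
# ov_h(X) flips bit h(X) of X; check that ov_h is bijective and
# square-free (the hypotheses of Lemma lm:h).
def ov_h(X, h):
    return X ^ (1 << (h(X) - 1))

def bijective_and_square_free(N, h):
    images = {ov_h(X, h) for X in range(2 ** N)}
    bijective = len(images) == 2 ** N
    square_free = all(ov_h(ov_h(X, h), h) != X for X in range(2 ** N))
    return bijective and square_free
\end{verbatim}
For instance, a constant $h$ yields a bijective $\ov{h}$ that is not square-free.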
168 \begin{lmm}\label{lm:h}
If $\ov{h}$ is bijective and square-free, then, for every $X\in \Bool^{\mathsf{N}}$, $h(\ov{h}^{-1}(X))\neq h(X)$.
\end{lmm}
\begin{proof}
Let $\ov{h}$ be bijective.
Let $X\in \Bool^{\mathsf{N}}$ and let $k\in \llbracket 1, {\mathsf{N}} \rrbracket$ be such that $h(\ov{h}^{-1}(X))=k$.
175 Then $(\ov{h}^{-1}(X),X)$ belongs to $E$ and
176 $\ov{h}^{-1}(X)\oplus X = 0^{{\mathsf{N}}-k}10^{k-1}$.
177 Let us suppose $h(X) = h(\ov{h}^{-1}(X))$. In such a case, $h(X) =k$.
178 By definition of $\ov{h}$, $(X, \ov{h}(X)) \in E $ and
179 $X\oplus\ov{h}(X)=0^{{\mathsf{N}}-h(X)}10^{h(X)-1} = 0^{{\mathsf{N}}-k}10^{k-1}$.
180 Thus $\ov{h}(X)= \ov{h}^{-1}(X)$, which leads to $\ov{h}(\ov{h}(X))= X$.
This contradicts the square-freeness of $\ov{h}$.
\end{proof}
184 Let $Z$ be a random variable that is uniformly distributed over
185 $\llbracket 1, {\mathsf{N}} \rrbracket \times \Bool$.
For $X\in \Bool^{\mathsf{N}}$, we
define, with $Z=(i,b)$,
$$
f(X,Z)=\left\{
\begin{array}{ll}
X\oplus (0^{{\mathsf{N}}-i}10^{i-1}) & \text{if } b=1 \text{ and } i\neq h(X),\\
X & \text{otherwise.}
\end{array}
\right.
$$
The Markov chain is thus defined as
$$
X_t = f(X_{t-1},Z_t).
$$
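The random mapping representation above can be simulated directly; the sketch below draws $Z_t$ uniformly and applies $f$, which is enough to generate trajectories of $(X_t)$.
\begin{verbatim}
# Simulate the chain X_t = f(X_{t-1}, Z_t) with Z = (i, b) uniform
# over [1, N] x {0, 1}: bit i is flipped when b = 1 and i != h(X).
import random

def step(X, Z, h):
    i, b = Z
    if b == 1 and i != h(X):
        return X ^ (1 << (i - 1))
    return X

def trajectory(X0, h, N, t_max):
    X, path = X0, [X0]
    for _ in range(t_max):
        Z = (random.randint(1, N), random.randint(0, 1))
        X = step(X, Z, h)
        path.append(X)
    return path
\end{verbatim}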
An integer $\ell\in \llbracket 1,{\mathsf{N}} \rrbracket$ is said to be {\it fair}
at time $t$ if there
exists $0\leq j <t$ such that $Z_{j+1}=(\ell,\cdot)$ and $h(X_j)\neq \ell$.
In other words, there exists a time $j$ before $t$ at which
the first component of the random variable $Z_{j+1}$ is exactly $\ell$
(\textit{i.e.}, $\ell$ is the strategy at time $j$)
and at which the configuration $X_j$ allows the edge in direction $\ell$ to be traversed.
214 Let $\ts$ be the first time all the elements of $\llbracket 1, {\mathsf{N}} \rrbracket$
215 are fair. The integer $\ts$ is a randomized stopping time for
216 the Markov chain $(X_t)$.
\begin{lmm}
The integer $\ts$ is a strong stationary time.
\end{lmm}
\begin{proof}
Let $\tau_\ell$ be the first time that $\ell$ is fair. The random variable
$Z_{\tau_\ell}$ is of the form $(\ell,b)$ where
227 $b=1$ with probability $\frac{1}{2}$ and $b=0$ with probability
$\frac{1}{2}$. Since $h(X_{\tau_\ell-1})\neq\ell$, the value of the $\ell$-th
229 bit of $X_{\tau_\ell}$
230 is $0$ or $1$ with the same probability ($\frac{1}{2}$).
231 This probability is independent of the value of the other bits.
Moving forward in the chain, at each step
the $\ell$-th bit is switched from $0$ to $1$ or from $1$ to $0$ with
the same probability. Therefore, for $t\geq \tau_\ell$, the
$\ell$-th bit of $X_t$ is $0$ or $1$ with the same probability,
independently of the value of the other bits, which proves the lemma.
\end{proof}
242 \begin{thrm} \label{prop:stop}
243 If $\ov{h}$ is bijective and square-free, then
$E[\ts]\leq 8{\mathsf{N}}^2+ 4{\mathsf{N}}\ln ({\mathsf{N}}+1)$.
\end{thrm}
247 For each $X\in \Bool^{\mathsf{N}}$ and $\ell\in\llbracket 1,{\mathsf{N}}\rrbracket$,
248 let $S_{X,\ell}$ be the
249 random variable that counts the number of steps
250 from $X$ until we reach a configuration where
$\ell$ is fair. More formally,
$$S_{X,\ell}=\min \{t \geq 1\mid h(X_{t-1})\neq \ell\text{ and }Z_t=(\ell,\cdot)\text{ and } X_0=X\}.$$
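The expectation $E[S_{X,\ell}]$ can also be estimated empirically by sampling this hitting time, reusing the \texttt{step} function of the previous sketch.
\begin{verbatim}
# Sample S_{X,l}: the first t >= 1 with Z_t = (l, .) and h(X_{t-1}) != l,
# starting from X_0 = X.  Averaging many samples estimates E[S_{X,l}].
import random

def sample_S(X, ell, h, N):
    t = 0
    while True:
        t += 1
        Z = (random.randint(1, N), random.randint(0, 1))
        if Z[0] == ell and h(X) != ell:
            return t
        X = step(X, Z, h)
\end{verbatim}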
258 \begin{lmm}\label{prop:lambda}
Let $\ov{h}$ be a square-free bijective function. Then, for every
$X\in \Bool^{\mathsf{N}}$ and every $\ell\in\llbracket 1,{\mathsf{N}}\rrbracket$,
$E[S_{X,\ell}]\leq 8{\mathsf{N}}^2$.
\end{lmm}
\begin{proof}
Let us first prove that, for every $X$ and every $\ell$, one has
$\P(S_{X,\ell}\leq 2)\geq \frac{1}{4{\mathsf{N}}^2}$. Two cases may occur:
\begin{itemize}
272 \item if $h(X)\neq \ell$, then
273 $\P(S_{X,\ell}=1)=\frac{1}{2{\mathsf{N}}}\geq \frac{1}{4{\mathsf{N}}^2}$.
\item otherwise, $h(X)=\ell$ and then
$\P(S_{X,\ell}=1)=0$.
But in this case, intuitively, it is possible to move
from $X$ to $\ov{h}^{-1}(X)$ (with probability $\frac{1}{2{\mathsf{N}}}$), and in
$\ov{h}^{-1}(X)$ the $\ell$-th bit can be switched.
More formally, since $\ov{h}$ is square-free,
281 $\ov{h}(X)=\ov{h}(\ov{h}(\ov{h}^{-1}(X)))\neq \ov{h}^{-1}(X)$. It follows
282 that $(X,\ov{h}^{-1}(X))\in E_h$. We thus have
$\P(X_1=\ov{h}^{-1}(X))=\frac{1}{2{\mathsf{N}}}$. Now, by Lemma~\ref{lm:h},
$h(\ov{h}^{-1}(X))\neq h(X)$. Therefore $\P(S_{X,\ell}=2\mid
X_1=\ov{h}^{-1}(X))=\frac{1}{2{\mathsf{N}}}$, proving that $\P(S_{X,\ell}\leq 2)\geq
\frac{1}{4{\mathsf{N}}^2}$.
\end{itemize}
292 Therefore, $\P(S_{X,\ell}\geq 3)\leq 1-\frac{1}{4{\mathsf{N}}^2}$. By induction, one
293 has, for every $i$, $\P(S_{X,\ell}\geq 2i)\leq
294 \left(1-\frac{1}{4{\mathsf{N}}^2}\right)^i$.
Moreover, since $S_{X,\ell}$ is positive, it is known~\cite[Lemma~2.9]{proba} that
297 $$E[S_{X,\ell}]=\sum_{i=1}^{+\infty}\P(S_{X,\ell}\geq i).$$
298 Since $\P(S_{X,\ell}\geq i)\geq \P(S_{X,\ell}\geq i+1)$, one has
299 $$E[S_{X,\ell}]=\sum_{i=1}^{+\infty}\P(S_{X,\ell}\geq i)\leq
300 \P(S_{X,\ell}\geq 1)+\P(S_{X,\ell}\geq 2)+2 \sum_{i=1}^{+\infty}\P(S_{X,\ell}\geq 2i).$$
Consequently,
$$E[S_{X,\ell}]\leq 1+1+2
\sum_{i=1}^{+\infty}\left(1-\frac{1}{4{\mathsf{N}}^2}\right)^i=2+2(4{\mathsf{N}}^2-1)=8{\mathsf{N}}^2,$$
which concludes the proof.
\end{proof}
Let $\ts^\prime$ be the first time at which all the bits but one are fair.
\begin{lmm}\label{lm:stopprime}
One has $E[\ts^\prime]\leq 4{\mathsf{N}} \ln ({\mathsf{N}}+1)$.
\end{lmm}
\begin{proof}
This is a classical Coupon Collector-like problem. Let $W_i$ be the
315 random variable counting the number of moves done in the Markov chain while
316 we had exactly $i-1$ fair bits. One has $\ts^\prime=\sum_{i=1}^{{\mathsf{N}}-1}W_i$.
317 But when we are at position $X$ with $i-1$ fair bits, the probability of
318 obtaining a new fair bit is either $1-\frac{i-1}{{\mathsf{N}}}$ if $h(X)$ is fair,
319 or $1-\frac{i-2}{{\mathsf{N}}}$ if $h(X)$ is not fair.
It follows that
$\P (W_i=k)\leq \left(\frac{i-1}{{\mathsf{N}}}\right)^{k-1} \frac{{\mathsf{N}}-i+2}{{\mathsf{N}}}.$
323 Consequently, we have $\P(W_i\geq k)\leq \left(\frac{i-1}{{\mathsf{N}}}\right)^{k-1} \frac{{\mathsf{N}}-i+2}{{\mathsf{N}}-i+1}.$
324 It follows that $E[W_i]=\sum_{k=1}^{+\infty} \P (W_i\geq k)\leq {\mathsf{N}} \frac{{\mathsf{N}}-i+2}{({\mathsf{N}}-i+1)^2}\leq \frac{4{\mathsf{N}}}{{\mathsf{N}}-i+2}$.
Therefore
330 $$E[\ts^\prime]=\sum_{i=1}^{{\mathsf{N}}-1}E[W_i]\leq
331 4{\mathsf{N}}\sum_{i=1}^{{\mathsf{N}}-1} \frac{1}{{\mathsf{N}}-i+2}=
332 4{\mathsf{N}}\sum_{i=3}^{{\mathsf{N}}+1}\frac{1}{i}.$$
But $\sum_{i=1}^{{\mathsf{N}}+1}\frac{1}{i}\leq 1+\ln({\mathsf{N}}+1)$, so that
$\sum_{i=3}^{{\mathsf{N}}+1}\frac{1}{i}\leq \ln({\mathsf{N}}+1)-\frac{1}{2}$.
Consequently, $E[\ts^\prime]\leq
4{\mathsf{N}} \left(\ln({\mathsf{N}}+1)-\frac{1}{2}\right)\leq
4{\mathsf{N}}\ln({\mathsf{N}}+1)$.
\end{proof}
342 One can now prove Theorem~\ref{prop:stop}.
\begin{proof}
Recall that $\ts^\prime$ is the time needed to obtain $\mathsf{N}-1$ fair bits.
Assume that the last unfair bit is $\ell$. One has
$\ts=\ts^\prime+S_{X_{\ts^\prime},\ell}$, and therefore $E[\ts] =
E[\ts^\prime]+E[S_{X_{\ts^\prime},\ell}]$. Theorem~\ref{prop:stop} is thus a
direct consequence of Lemmas~\ref{prop:lambda} and~\ref{lm:stopprime}.
\end{proof}
Now, using the Markov inequality, one has $\P_X(\ts > t)\leq \frac{E[\ts]}{t}$.
With $t_n=32{\mathsf{N}}^2+16{\mathsf{N}}\ln ({\mathsf{N}}+1)$, one obtains $\P_X(\ts > t_n)\leq \frac{1}{4}$.
Therefore, using the definition of $t_{\rm mix}$ and
Theorem~\ref{thm-sst}, it follows that
$t_{\rm mix}(\frac{1}{4})\leq 32{\mathsf{N}}^2+16{\mathsf{N}}\ln ({\mathsf{N}}+1)=O({\mathsf{N}}^2)$.
Notice that this upper bound on the stationary time is obtained
under the following constraint only: for each vertex of the $\mathsf{N}$-cube,
one incoming arc and one outgoing arc are removed.
The computation does not exploit the fact that the removed arcs form a (balanced)
Hamiltonian cycle, which is a more regular and more restrictive structure.
Moreover, the bound is obtained using the coarse Markov inequality. For the
classical (lazy) random walk on the $\mathsf{N}$-cube, without removing any
Hamiltonian cycle, the mixing time is in $\Theta({\mathsf{N}}\ln {\mathsf{N}})$.
We conjecture that in our context the mixing time is also in
$\Theta({\mathsf{N}}\ln {\mathsf{N}})$.
In this latter context, we claim that the upper bound on the stopping time
should be reduced. This is studied in the next subsection.
375 \subsection{Practical Evaluation of Stopping Times}\label{sub:stop:exp}
Let us consider a function $f: \Bool^{\mathsf{N}} \rightarrow \Bool^{\mathsf{N}}$
and an initial seed $x^0$.
The pseudo-code given in Algorithm~\ref{algo:stop} returns the smallest
number of iterations such that all the elements $\ell\in \llbracket 1,{\mathsf{N}} \rrbracket$ are fair. An approximation of $E[\ts]$
is then obtained by calling this code many times, with many instances of
functions and many seeds.
\begin{algorithm}[ht]
\KwIn{a function $f$, an initial configuration $x^0$ ($\mathsf{N}$ bits)}
\KwOut{a number of iterations $\textit{nbit}$}
$\textit{nbit} \leftarrow 0$\;
$x \leftarrow x^0$\;
$\textit{fair}\leftarrow\emptyset$\;
\While{$\left\vert{\textit{fair}}\right\vert < \mathsf{N} $}{
$s \leftarrow \textit{Random}(\mathsf{N})$\;
$\textit{image} \leftarrow f(x)$\;
\If{$\textit{Random}(1) \neq 0$ and $x[s] \neq \textit{image}[s]$}{
$\textit{fair} \leftarrow \textit{fair} \cup \{s\}$\;
$x[s] \leftarrow \textit{image}[s]$\;
}
$\textit{nbit} \leftarrow \textit{nbit}+1$\;
}
\Return{$\textit{nbit}$}\;
\caption{Pseudo code for computing the stopping time}
\label{algo:stop}
\end{algorithm}
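A possible Python transcription of Algorithm~\ref{algo:stop} is sketched below; it assumes, as a mere convention of ours, that configurations are encoded as integers on $\mathsf{N}$ bits and that $f$ is a Python function acting on such integers. Averaging its output over many functions and many seeds yields the approximation of $E[\ts]$ used below.
\begin{verbatim}
# Python transcription (sketch) of the stopping-time computation.
# x and f(x) are integers on N bits; bit s is 1-indexed as in the text.
import random

def stopping_time(f, x0, N):
    x, fair, nbit = x0, set(), 0
    while len(fair) < N:
        s = random.randint(1, N)          # uniform direction in [1, N]
        image = f(x)
        if random.randint(0, 1) != 0 and \
           ((x >> (s - 1)) & 1) != ((image >> (s - 1)) & 1):
            fair.add(s)
            x ^= 1 << (s - 1)             # x[s] <- image[s]
        nbit += 1
    return nbit
\end{verbatim}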
Practically speaking, for each number $\mathsf{N}$, $3 \le \mathsf{N} \le 16$,
10 functions have been generated according to the method presented in
Section~\ref{sec:hamilton}. For each of them, the approximation of $E[\ts]$
is computed 10000 times with a random seed. Figure~\ref{fig:stopping:moy}
summarizes these results: each circle represents the
approximation of $E[\ts]$ for a given $\mathsf{N}$,
and the line is the graph of the function $x \mapsto 2x\ln(2x+8)$.
It can be observed that the approximation is largely
smaller than the upper bound given in Theorem~\ref{prop:stop}.
It can further be deduced that the conjecture of the previous subsection
is realistic, according to the graph of $x \mapsto 2x\ln(2x+8)$.
426 % \begin{array}{|*{14}{l|}}
428 % \mathsf{N} & 4 & 5 & 6 & 7& 8 & 9 & 10& 11 & 12 & 13 & 14 & 15 & 16 \\
430 % \mathsf{N} & 21.8 & 28.4 & 35.4 & 42.5 & 50 & 57.7 & 65.6& 73.5 & 81.6 & 90 & 98.3 & 107.1 & 115.7 \\
434 % \caption{Average Stopping Time}\label{table:stopping:moy}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{complexity}
\caption{Average Stopping Time Approximation}\label{fig:stopping:moy}
\end{figure}