introduced. We further detail
a theoretical study of the length of the path
that is sufficient to follow in order to obtain a uniform distribution.
Notice that for a general reference on Markov chains,
see~\cite{LevinPeresWilmer2006}, and particularly Chapter~5 on stopping
times.
such that the distribution of $X_\tau$ is $\pi$:
$$\P_X(X_\tau=Y)=\pi(Y).$$
A stopping time $\tau$ is a \emph{strong stationary time} if $X_{\tau}$ is
independent of $\tau$.

\begin{thrm}
If $\tau$ is a strong stationary time, then $d(t)\leq \max_{X\in\Bool^{\mathsf{N}}}
\P_X(\tau > t)$.
\end{thrm}
\begin{thrm} \label{prop:stop}
If $\ov{h}$ is bijective and square-free, then
$E[\ts]\leq 8{\mathsf{N}}^2+ 4{\mathsf{N}}\ln ({\mathsf{N}}+1)$.
\end{thrm}
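As an illustration of the order of magnitude given by this bound (and not a
tightness claim), for ${\mathsf{N}}=16$ it yields
$$E[\ts]\leq 8\cdot 16^2+4\cdot 16\ln (17)\approx 2048+181\approx 2229,$$
so that the quadratic term $8{\mathsf{N}}^2$ dominates as soon as
${\mathsf{N}}$ is moderately large.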
For each $X\in \Bool^{\mathsf{N}}$ and $\ell\in\llbracket 1,{\mathsf{N}}\rrbracket$,
$\ell$ is fair. More formally
$$S_{X,\ell}=\min \{t \geq 1\mid h(X_{t-1})\neq \ell\text{ and }Z_t=(\ell,.)\text{ and } X_0=X\}.$$
\begin{lmm}\label{prop:lambda}
Let $\ov{h}$ be a square-free bijective function. Then, for all $X$ and all
$\ell$, the inequality $E[S_{X,\ell}]\leq 8{\mathsf{N}}^2$ holds.
\end{lmm}
\begin{proof}
$\P(S_{X,\ell}=1)=\frac{1}{2{\mathsf{N}}}\geq \frac{1}{4{\mathsf{N}}^2}$.
\item otherwise, i.e., if $h(X)=\ell$, then
$\P(S_{X,\ell}=1)=0$.
But in this case, intuitively, it is possible to move
from $X$ to $\ov{h}^{-1}(X)$ (with probability $\frac{1}{2{\mathsf{N}}}$). And in
$\ov{h}^{-1}(X)$ the $\ell$-th bit can be switched.
More formally,
which concludes the proof.
\end{proof}
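For completeness, one way to make the conclusion of this proof explicit is
the following sketch, which only relies on the fact (common to both cases
above) that, from any state, the bit $\ell$ becomes fair within the next two
moves with probability at least $\frac{1}{4{\mathsf{N}}^2}$. By the Markov
property, $\P(S_{X,\ell}>2k)\leq \left(1-\frac{1}{4{\mathsf{N}}^2}\right)^{k}$
for every $k\geq 0$, and hence
$$E[S_{X,\ell}]=\sum_{t\geq 0}\P(S_{X,\ell}>t)
\leq 2\sum_{k\geq 0}\P(S_{X,\ell}>2k)
\leq 2\sum_{k\geq 0}\left(1-\frac{1}{4{\mathsf{N}}^2}\right)^{k}
=8{\mathsf{N}}^2.$$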
Let $\ts^\prime$ be the time needed to make all the bits but one fair.
\begin{lmm}\label{lm:stopprime}
One has $E[\ts^\prime]\leq 4{\mathsf{N}} \ln ({\mathsf{N}}+1).$
\end{lmm}
\begin{proof}
we had exactly $i-1$ fair bits. One has $\ts^\prime=\sum_{i=1}^{{\mathsf{N}}-1}W_i$.
But when we are at position $X$ with $i-1$ fair bits, the probability of
obtaining a new fair bit is either $1-\frac{i-1}{{\mathsf{N}}}$ if $h(X)$ is fair,
 or $1-\frac{i-2}{{\mathsf{N}}}$ if $h(X)$ is not fair.
In both cases, the probability of failing to obtain a new fair bit at a given
step is thus at most $\frac{i-1}{{\mathsf{N}}}$.

Therefore,
$\P (W_i=k)\leq \left(\frac{i-1}{{\mathsf{N}}}\right)^{k-1} \frac{{\mathsf{N}}-i+2}{{\mathsf{N}}}.$
Consequently, we have $\P(W_i\geq k)\leq \left(\frac{i-1}{{\mathsf{N}}}\right)^{k-1} \frac{{\mathsf{N}}-i+2}{{\mathsf{N}}-i+1}.$
It follows that $E[W_i]=\sum_{k=1}^{+\infty} \P (W_i\geq k)\leq {\mathsf{N}} \frac{{\mathsf{N}}-i+2}{({\mathsf{N}}-i+1)^2}\leq \frac{4{\mathsf{N}}}{{\mathsf{N}}-i+2}$,
the last inequality coming from ${\mathsf{N}}-i+2\leq 2({\mathsf{N}}-i+1)$ for $i\leq {\mathsf{N}}$.
Therefore
$$E[\ts^\prime]=\sum_{i=1}^{{\mathsf{N}}-1}E[W_i]\leq
4{\mathsf{N}}\sum_{i=1}^{{\mathsf{N}}-1} \frac{1}{{\mathsf{N}}-i+2}=
4{\mathsf{N}}\sum_{i=3}^{{\mathsf{N}}+1}\frac{1}{i}.$$
But $\sum_{i=1}^{{\mathsf{N}}+1}\frac{1}{i}=1+\frac{1}{2}+\sum_{i=3}^{{\mathsf{N}}+1}\frac{1}{i}\leq 1+\ln({\mathsf{N}}+1)$,
so that $\sum_{i=3}^{{\mathsf{N}}+1}\frac{1}{i}\leq \ln({\mathsf{N}}+1)-\frac{1}{2}$.
Consequently,
$E[\ts^\prime]\leq 4{\mathsf{N}} \left(\ln({\mathsf{N}}+1)-\frac{1}{2}\right)\leq 4{\mathsf{N}}\ln({\mathsf{N}}+1)$.
\end{proof}
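For intuition only (this comparison is not part of the proof), the waiting
times $W_i$ behave like those of a coupon-collector process in which the
$i$-th new fair bit is obtained with probability at least
$\frac{{\mathsf{N}}-i+1}{{\mathsf{N}}}$ per move, and whose expected
completion time is the classical
$$\sum_{i=1}^{{\mathsf{N}}}\frac{{\mathsf{N}}}{{\mathsf{N}}-i+1}
={\mathsf{N}}\sum_{j=1}^{{\mathsf{N}}}\frac{1}{j}\sim {\mathsf{N}}\ln{\mathsf{N}};$$
the extra factor $4$ in Lemma~\ref{lm:stopprime} only accounts for the
looseness of the successive bounds used in the proof.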
One can now prove Theorem~\ref{prop:stop}.
\begin{proof}
Recall that $\ts^\prime$ is the time needed to obtain ${\mathsf{N}}-1$ fair
bits, and let $\ell$ denote the bit that is still unfair at time
$\ts^\prime$. One has
$\ts=\ts^\prime+S_{X_{\ts^\prime},\ell}$, and therefore
$E[\ts] = E[\ts^\prime]+E[S_{X_{\ts^\prime},\ell}]$.
Theorem~\ref{prop:stop} is then a direct application of
Lemmas~\ref{prop:lambda} and~\ref{lm:stopprime}.
\end{proof}
Notice that the above upper bound on the stopping time is obtained under the
following constraint only: for each vertex of the $\mathsf{N}$-cube, one
incoming arc and one outgoing arc are removed.
The computation does not consider (balanced) Hamiltonian cycles, which
are more regular and more restrictive than this constraint.
In this latter context, we claim that the upper bound on the stopping time
should be reduced.