+
+This section considers functions $f: \Bool^{\mathsf{N}} \rightarrow \Bool^{\mathsf{N}}$
+obtained from a hypercube where a Hamiltonian path has been removed.
+A specific random walk in this modified hypercube is first
+introduced. We then give a theoretical study of the walk length
+that is sufficient to obtain a distribution close to the uniform one.
+For a general reference on Markov chains,
+see~\cite{LevinPeresWilmer2006},
+and particularly Chapter~5 on stopping times.
+
+
+
+
+First of all, let $\pi$ and $\mu$ be two distributions on
+$\Bool^{\mathsf{N}}$. The total variation distance between $\pi$ and $\mu$
+is denoted by $\tv{\pi-\mu}$ and is defined by
+$$\tv{\pi-\mu}=\max_{A\subset \Bool^{\mathsf{N}}} |\pi(A)-\mu(A)|.$$
+It is known that
+$$\tv{\pi-\mu}=\frac{1}{2}\sum_{X\in\Bool^{\mathsf{N}}}|\pi(X)-\mu(X)|.$$
+Moreover, if $\nu$ is a distribution on $\Bool^{\mathsf{N}}$, the
+triangle inequality holds:
+$$\tv{\pi-\mu}\leq \tv{\pi-\nu}+\tv{\nu-\mu}.$$
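+As a simple illustration of these definitions, consider the case
+$\mathsf{N}=1$, with $\pi$ the uniform distribution
+($\pi(0)=\pi(1)=\frac{1}{2}$) and $\mu$ given by $\mu(0)=\frac{3}{4}$,
+$\mu(1)=\frac{1}{4}$. Then
+$$\tv{\pi-\mu}=\frac{1}{2}\left(\left|\frac{1}{2}-\frac{3}{4}\right|+\left|\frac{1}{2}-\frac{1}{4}\right|\right)=\frac{1}{4},$$
+the maximum in the first definition being attained, e.g., for $A=\{0\}$.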
+
+Let now $P$ be the transition matrix of a Markov chain on
+$\Bool^{\mathsf{N}}$, and let $P(X,\cdot)$ denote the distribution induced
+by the $X$-th row of $P$. If the Markov chain induced by $P$ has a
+stationary distribution $\pi$, then we define
+$$d(t)=\max_{X\in\Bool^{\mathsf{N}}}\tv{P^t(X,\cdot)-\pi}.$$
+Intuitively, $d(t)$ is the largest deviation from $\pi$ that can be
+observed after $t$ steps of the chain, whatever its starting point.
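+As a toy example, take $\mathsf{N}=1$ and
+$$P=\begin{pmatrix} \frac{3}{4} & \frac{1}{4} \\ \frac{1}{4} & \frac{3}{4} \end{pmatrix},$$
+whose stationary distribution is the uniform one. A direct induction (or a
+diagonalization, the eigenvalues of $P$ being $1$ and $\frac{1}{2}$) yields
+$P^t(0,0)=P^t(1,1)=\frac{1}{2}\left(1+2^{-t}\right)$, so that
+$$d(t)=\left|P^t(0,0)-\frac{1}{2}\right|=2^{-(t+1)},$$
+which vanishes geometrically fast.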
+For $\varepsilon > 0$, the \emph{mixing time} with respect to
+$\varepsilon$ is then defined by
+$$t_{\rm mix}(\varepsilon)=\min\{t \mid d(t)\leq \varepsilon\}.$$
+It is the smallest number of iterations that is sufficient to obtain a
+deviation smaller than $\varepsilon$. Notice that upper and lower bounds
+on mixing times cannot be directly computed here with the eigenvalue
+formulae of~\cite[Chap.~12]{LevinPeresWilmer2006}: the authors of this
+work only consider reversible Markov matrices, whereas we do not restrict
+our matrices to such a form. One can nevertheless prove
+(see~\cite{LevinPeresWilmer2006}) that
+$$t_{\rm mix}(\varepsilon)\leq \lceil\log_2(\varepsilon^{-1})\rceil t_{\rm mix}\left(\frac{1}{4}\right).$$
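+This last bound follows from a classical sub-multiplicativity argument,
+which we only sketch here (it can be found
+in~\cite{LevinPeresWilmer2006}). Let
+$\bar{d}(t)=\max_{X,Y\in\Bool^{\mathsf{N}}}\tv{P^t(X,\cdot)-P^t(Y,\cdot)}$.
+One has $d(t)\leq \bar{d}(t)\leq 2d(t)$ and
+$\bar{d}(s+t)\leq \bar{d}(s)\bar{d}(t)$. Therefore, letting
+$t_0=t_{\rm mix}\left(\frac{1}{4}\right)$ and
+$\ell=\lceil\log_2(\varepsilon^{-1})\rceil$,
+$$d(\ell t_0)\leq \bar{d}(\ell t_0)\leq \bar{d}(t_0)^{\ell}\leq
+\left(2d(t_0)\right)^{\ell}\leq 2^{-\ell}\leq \varepsilon.$$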
+
+
+
+