-\documentclass{article}
-%\usepackage{prentcsmacro}
-%\sloppy
-\usepackage[a4paper]{geometry}
-\geometry{hmargin=3cm, vmargin=3cm }
-
-\usepackage[latin1]{inputenc}
-\usepackage[T1]{fontenc}
-\usepackage[english]{babel}
-\usepackage{amsmath,amssymb,latexsym,eufrak,euscript}
-\usepackage{subfigure,pstricks,pst-node,pst-coil}
-
-
-\usepackage{url,tikz}
-\usepackage{pgflibrarysnakes}
-
-\usepackage{multicol}
-
-\usetikzlibrary{arrows}
-\usetikzlibrary{automata}
-\usetikzlibrary{snakes}
-\usetikzlibrary{shapes}
-
-%% \setlength{\oddsidemargin}{15mm}
-%% \setlength{\evensidemargin}{15mm} \setlength{\textwidth}{140mm}
-%% \setlength{\textheight}{219mm} \setlength{\topmargin}{5mm}
-\newtheorem{theorem}{Theorem}
-%\newtheorem{definition}[theorem]{Definition}
-% %\newtheorem{defis}[thm]{D\'efinitions}
- \newtheorem{example}[theorem]{Example}
-% %\newtheorem{Exes}[thm]{Exemples}
-\newtheorem{lemma}[theorem]{Lemma}
-\newtheorem{proposition}[theorem]{Proposition}
-\newtheorem{construction}[theorem]{Construction}
-\newtheorem{corollary}[theorem]{Corollary}
-\newtheorem{algor}[theorem]{Algorithm}
-%\newtheorem{propdef}[thm]{Proposition-D\'efinition}
-\newcommand{\mlabel}[1]{\label{#1}\marginpar{\fbox{#1}}}
-\newcommand{\flsup}[1]{\stackrel{#1}{\longrightarrow}}
-
-\newcommand{\stirlingtwo}[2]{\genfrac{\lbrace}{\rbrace}{0pt}{}{#1}{#2}}
-\newcommand{\stirlingone}[2]{\genfrac{\lbrack}{\rbrack}{0pt}{}{#1}{#2}}
-
-\newenvironment{algo}
-{ \vspace{1em}
-\begin{algor}\mbox{}
-\newline \vspace{-0.1em}
-\begin{quote}\begin{rm}}
-{\end{rm}\end{quote}\end{algor}\vspace{-1.5em}\vspace{2em}}
-%\null \hfill $\diamondsuit$ \par\medskip \vspace{1em}}
-
-\newenvironment{exe}
-{\begin{example}\rm }
-{\end{example}
-%\vspace*{-1.5em}
-%\null \hfill $\triangledown$ \par\medskip}
-%\null \hfill $\triangledown$ \par\medskip \vspace{1em}}
-}
-
-
-\newenvironment{proof}
-{ \noindent {\sc Proof.\/} }
-{\null \hfill $\Box$ \par\medskip \vspace{1em}}
-
-
-
-\newcommand {\tv}[1] {\lVert #1 \rVert_{\rm TV}}
-\def \top {1.8}
-\def \topt {2.3}
-\def \P {\mathbb{P}}
-\def \ov {\overline}
-\def \ts {\tau_{\rm stop}}
-\begin{document}
-\label{firstpage}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\section{Mathematical Background}
-
-
-
-Let $\pi$, $\mu$ be two distributions on the same set $\Omega$. The total
+
+
+
+Let us thus consider such a map.
+This article focuses on studying its iterations according to
+equation~(\ref{eq:asyn}) with a given strategy.
+First of all, these iterations can be interpreted as a walk in the
+iteration graph, where the strategy decides which edge to follow at each step.
+Notice that the iteration graph is always a subgraph of the
+${\mathsf{N}}$-cube augmented with all the self-loops, \textit{i.e.}, all the
+edges $(v,v)$ for $v \in \Bool^{\mathsf{N}}$.
+Next, if probabilities are assigned to the edges of this graph, the iterations
+can be interpreted as a Markov chain.
+
+\begin{xpl}
+Let us consider for instance
+the graph $\Gamma(f^*)$ defined
+in Figure~\ref{fig:iteration:f*} and
+the probability function $p$ defined on the set of edges by
+$p(e) = \frac{1}{3}$ for every edge $e$: each vertex of $\Gamma(f^*)$
+has exactly three outgoing edges (its self-loop $(v,v)$ and two other
+edges), each followed with the same probability.
+The matrix $P$ of the Markov chain associated with the function $f^*$
+and with this probability function $p$ is
+\[
+P=\dfrac{1}{3} \left(
+\begin{array}{llllllll}
+1&1&1&0&0&0&0&0 \\
+1&1&0&0&0&1&0&0 \\
+0&0&1&1&0&0&1&0 \\
+0&1&1&1&0&0&0&0 \\
+1&0&0&0&1&0&1&0 \\
+0&0&0&0&1&1&0&1 \\
+0&0&0&0&1&0&1&1 \\
+0&0&0&1&0&1&0&1
+\end{array}
+\right)
+\]
+\end{xpl}
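As a sanity check (illustrative only, not part of the paper), the matrix above can be verified numerically: every row and every column sums to $1$, so $P$ is doubly stochastic and the uniform distribution on $\Bool^3$ is stationary; moreover, since the graph is strongly connected and every state has a self-loop, iterating $P$ from a point mass converges to that uniform distribution. The following Python sketch assumes nothing beyond the matrix printed above.

```python
# Sanity check (illustrative, not from the paper): the Markov matrix of
# the example is P = (1/3) * A, where A is the 0/1 matrix printed above.
from fractions import Fraction

A = [
    [1, 1, 1, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 1, 0, 0],
    [0, 0, 1, 1, 0, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 0, 0],
    [1, 0, 0, 0, 1, 0, 1, 0],
    [0, 0, 0, 0, 1, 1, 0, 1],
    [0, 0, 0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1, 0, 1],
]
P = [[Fraction(a, 3) for a in row] for row in A]
n = len(P)

# Rows sum to 1 (stochastic) and columns sum to 1 (doubly stochastic),
# hence the uniform distribution (1/8, ..., 1/8) is stationary.
assert all(sum(row) == 1 for row in P)
assert all(sum(P[i][j] for i in range(n)) == 1 for j in range(n))

# Iterate mu_{t+1} = mu_t P from a point mass.  The chain is irreducible
# (the underlying graph is strongly connected) and aperiodic (every
# state has a self-loop), so mu_t converges to the uniform distribution.
mu = [1.0] + [0.0] * (n - 1)
for _ in range(5000):
    mu = [sum(mu[i] * float(P[i][j]) for i in range(n)) for j in range(n)]
deviation = max(abs(x - 1.0 / n) for x in mu)
print(deviation)  # close to 0
```

This also illustrates the convergence claim made later for walks in the modified hypercube: a doubly stochastic, irreducible, aperiodic chain mixes toward the uniform distribution.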
+
+
+% % Let us first recall the \emph{Total Variation} distance $\tv{\pi-\mu}$,
+% % which is defined for two distributions $\pi$ and $\mu$ on the same set
+% % $\Bool^n$ by:
+% % $$\tv{\pi-\mu}=\max_{A\subset \Bool^n} |\pi(A)-\mu(A)|.$$
+% % It is known that
+% % $$\tv{\pi-\mu}=\frac{1}{2}\sum_{x\in\Bool^n}|\pi(x)-\mu(x)|.$$
+
+% % Let then $M(x,\cdot)$ be the
+% % distribution induced by the $x$-th row of $M$. If the Markov chain
+% % induced by
+% % $M$ has a stationary distribution $\pi$, then we define
+% % $$d(t)=\max_{x\in\Bool^n}\tv{M^t(x,\cdot)-\pi}.$$
+% Intuitively $d(t)$ is the largest deviation between
+% the distribution $\pi$ and $M^t(x,\cdot)$, which
+% is the result of iterating $t$ times the function.
+% Finally, let $\varepsilon$ be a positive number, the \emph{mixing time}
+% with respect to $\varepsilon$ is given by
+% $$t_{\rm mix}(\varepsilon)=\min\{t \mid d(t)\leq \varepsilon\}.$$
+% It defines the smallest iteration number
+% that is sufficient to obtain a deviation lesser than $\varepsilon$.
+% Notice that the upper and lower bounds of mixing times cannot
+% directly be computed with eigenvalues formulae as expressed
+% in~\cite[Chap. 12]{LevinPeresWilmer2006}. The authors of this latter work
+% only consider reversible Markov matrices whereas we do not restrict our
+% matrices to such a form.
+
+
+
+
+
+
+
+This section considers functions $f: \Bool^{\mathsf{N}} \rightarrow \Bool^{\mathsf{N}}$
+obtained from a hypercube from which a Hamiltonian path has been removed.
+A specific random walk in this modified hypercube is first
+introduced. We then provide a theoretical study of the walk length
+that is sufficient to reach a uniform distribution.
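The specific walk in the modified hypercube is defined later in the paper; as a baseline illustration of the kind of convergence at stake, one can take the standard lazy random walk on the full $\mathsf{N}$-cube (an assumption for this sketch, not the paper's walk): at each step, stay put with probability $1/2$, otherwise flip one of the $\mathsf{N}$ bits chosen uniformly. The Python sketch below computes the exact distribution after $t$ steps for $\mathsf{N}=3$ and its total variation distance to the uniform distribution.

```python
# Illustrative sketch (not the paper's specific walk): lazy random walk
# on the full 3-cube.  At each step: stay with probability 1/2, else
# flip a uniformly chosen bit (probability 1/(2N) per neighbor).
N = 3
M = 1 << N                      # number of states: 2^N = 8

# Transition matrix of the lazy walk, with self-loops of weight 1/2.
P = [[0.0] * M for _ in range(M)]
for v in range(M):
    P[v][v] = 0.5
    for i in range(N):
        P[v][v ^ (1 << i)] = 1.0 / (2 * N)

# Exact distribution after t steps, starting from vertex 0.
mu = [1.0] + [0.0] * (M - 1)
t = 50
for _ in range(t):
    mu = [sum(mu[u] * P[u][v] for u in range(M)) for v in range(M)]

# Total variation distance to the uniform distribution.  For this walk
# the eigenvalues are 1 - k/N (k = 0..N), so the distance decays like
# (2/3)^t and is negligible after 50 steps.
tv = 0.5 * sum(abs(x - 1.0 / M) for x in mu)
print(tv)
```

Removing a Hamiltonian path changes the transition structure, which is precisely why the paper needs a dedicated bound on the sufficient walk length rather than the classical hypercube mixing-time result.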
+
+
+
+
+
+First of all, let $\pi$, $\mu$ be two distributions on $\Bool^{\mathsf{N}}$. The total