+% \vspace{-0.5em}
+% It is easy to associate a Markov Matrix $M$ to such a graph $G(f)$
+% as follows:
+
+% $M_{ij} = \frac{1}{n}$ if there is an edge from $i$ to $j$ in $\Gamma(f)$ and $i \neq j$; $M_{ii} = 1 - \sum\limits_{j=1, j\neq i}^n M_{ij}$; and $M_{ij} = 0$ otherwise.
+
+% \begin{xpl}
+% The Markov matrix associated to the function $f^*$ is
+
+% \[
+% M=\dfrac{1}{3} \left(
+% \begin{array}{llllllll}
+% 1&1&1&0&0&0&0&0 \\
+% 1&1&0&0&0&1&0&0 \\
+% 0&0&1&1&0&0&1&0 \\
+% 0&1&1&1&0&0&0&0 \\
+% 1&0&0&0&1&0&1&0 \\
+% 0&0&0&0&1&1&0&1 \\
+% 0&0&0&0&1&0&1&1 \\
+% 0&0&0&1&0&1&0&1
+% \end{array}
+% \right)
+% \]
+%\end{xpl}
+
+Let such a map be given.
+This article focuses on studying its iterations according to
+Equation~(\ref{eq:asyn}) with a given strategy.
+First of all, this can be interpreted as a walk in its iteration graph,
+where the edge to follow at each step is chosen by the strategy.
+Notice that the iteration graph is always a subgraph of the
+$n$-cube augmented with all the self-loops, \textit{i.e.}, all the
+edges $(v,v)$ for any $v \in \Bool^n$.
+Next, if probabilities are added to the edges of this graph, the iterations can be
+interpreted as a Markov chain.
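+As an illustration (a minimal sketch, not part of the formal development), the
+following Python fragment builds this Markov matrix for a map $f$ given as a
+function on $n$-bit tuples, assuming the strategy selects each of the $n$
+components uniformly at random; the names \texttt{markov\_matrix} and
+\texttt{neg} are purely illustrative.
+\begin{verbatim}
+from itertools import product
+
+def markov_matrix(f, n):
+    states = list(product((0, 1), repeat=n))  # the 2^n vertices of the n-cube
+    index = {x: k for k, x in enumerate(states)}
+    M = [[0.0] * len(states) for _ in states]
+    for x in states:
+        fx = f(x)
+        for i in range(n):            # switch component i only (prob. 1/n)
+            y = list(x)
+            y[i] = fx[i]
+            M[index[x]][index[tuple(y)]] += 1.0 / n
+    return M
+
+# Example: the negation map on 2 bits; every row of M sums to 1.
+neg = lambda x: tuple(1 - b for b in x)
+print(markov_matrix(neg, 2))
+\end{verbatim}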
+
+
+
+
+Let $\pi$ and $\mu$ be two distributions on the same set $\Omega$. The total
+variation distance between $\pi$ and $\mu$ is denoted $\tv{\pi-\mu}$ and is
+defined by
+$$\tv{\pi-\mu}=\max_{A\subset \Omega} |\pi(A)-\mu(A)|.$$ It is known that
+$$\tv{\pi-\mu}=\frac{1}{2}\sum_{x\in\Omega}|\pi(x)-\mu(x)|.$$ Moreover, if
+$\nu$ is a distribution on $\Omega$, one has
+$$\tv{\pi-\mu}\leq \tv{\pi-\nu}+\tv{\nu-\mu}.$$
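+The following sketch (illustrative only, with distributions represented as
+Python dictionaries mapping elements of $\Omega$ to probabilities; the name
+\texttt{total\_variation} is ours) computes this distance through the
+half-sum formula above.
+\begin{verbatim}
+def total_variation(pi, mu):
+    support = set(pi) | set(mu)
+    return 0.5 * sum(abs(pi.get(x, 0.0) - mu.get(x, 0.0)) for x in support)
+
+# Example on Omega = {0, 1}: the distance is |0.9 - 0.5| = 0.4.
+print(total_variation({0: 0.9, 1: 0.1}, {0: 0.5, 1: 0.5}))
+\end{verbatim}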
+
+Let $P$ be the matrix of a Markov chain on $\Omega$. $P(x,\cdot)$ is the
+distribution induced by the $x$-th row of $P$. If the Markov chain induced by
+$P$ has a stationary distribution $\pi$, then we define
+$$d(t)=\max_{x\in\Omega}\tv{P^t(x,\cdot)-\pi},$$
+and