In what follows, we consider the Boolean algebra on the set
$\Bool=\{0,1\}$ with the classical operators of conjunction '.',
of disjunction '+', of negation '$\overline{~}$', and of
disjunctive union '$\oplus$'.
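For the sake of completeness, recall that the disjunctive union (exclusive or) can be expressed with the other operators: for all $x,y \in \Bool$, $x \oplus y = x\overline{y} + \overline{x}y$.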
Let $n$ be a positive integer. A \emph{Boolean map} $f$ is
a function from $\Bool^n$ to itself such that
$x=(x_1,\dots,x_n)$ maps to $f(x)=(f_1(x),\dots,f_n(x))$.
Functions are iterated as follows.
At the $t^{\text{th}}$ iteration, only the $s_{t}$-th component is said to be
``iterated'', where $s = \left(s_t\right)_{t \in \mathds{N}}$ is a sequence of indices taken in $\llbracket 1;n \rrbracket$, called a ``strategy''.
More formally, let $F_f: \llbracket1;n\rrbracket\times \Bool^{n} \rightarrow \Bool^n$ be defined by
$$F_f(i,x)=(x_1,\dots,x_{i-1},f_i(x),x_{i+1},\dots,x_n).$$
Then, let $x^0\in\Bool^n$ be an initial configuration
and $s\in
\llbracket1;n\rrbracket^\Nats$ be a strategy;
the dynamics are described by the recurrence
\begin{equation}\label{eq:asyn}
x^{t+1}=F_f(s_t,x^t).
\end{equation}
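In other words, the first iterations unfold as $x^{1}=F_f(s_0,x^{0})$, $x^{2}=F_f(s_1,x^{1})$, and so on: at each step, only the component selected by the strategy may be updated, all the other components being left unchanged.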
Let $f$ be a Boolean map. Its associated
\emph{iteration graph} $\Gamma(f)$ is the
directed graph whose set of vertices is
$\Bool^n$ and such that, for all $x\in\Bool^n$ and $i\in \llbracket1;n\rrbracket$,
the graph $\Gamma(f)$ contains an arc from $x$ to $F_f(i,x)$.
Let us consider for instance $n=3$.
Let $f^*: \Bool^3 \rightarrow \Bool^3$ be defined by
$f^*(x_1,x_2,x_3) =
(x_2 \oplus x_3, \overline{x_1}\overline{x_3} + x_1\overline{x_2},
\overline{x_1}\overline{x_3} + x_1x_2)$.
The iteration graph $\Gamma(f^*)$ of this function is given in
Figure~\ref{fig:iteration:f*}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{images/iter_f0c}
\caption{Iteration graph $\Gamma(f^*)$ of the function $f^*$.}\label{fig:iteration:f*}
\end{figure}
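To illustrate the construction of the arcs, consider the configuration $x=(0,0,0)$. The definition of $f^*$ gives $f^*(0,0,0)=(0,1,1)$, hence $F_{f^*}(1,x)=(0,0,0)$, $F_{f^*}(2,x)=(0,1,0)$, and $F_{f^*}(3,x)=(0,0,1)$: the graph $\Gamma(f^*)$ thus contains a self-loop on $000$ together with arcs from $000$ to $010$ and to $001$.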
From now on, let $f$ be such a map.
This article focuses on the study of its iterations according to
Equation~(\ref{eq:asyn}) for a given strategy.
First of all, such an iteration can be interpreted as a walk in the iteration graph,
where the edge to follow at each step is chosen by the strategy.
Notice that the iteration graph is always a subgraph of the
$n$-cube augmented with all the self-loops, \textit{i.e.}, all the
edges $(v,v)$ for $v \in \Bool^n$.
Next, if probabilities are assigned to the edges of this graph, the iterations can be
interpreted as a Markov chain.
Let us consider for instance
the graph $\Gamma(f^*)$ defined
in Figure~\ref{fig:iteration:f*} and
the probability function $p$ defined on the set of edges as follows:
$$p(e)=\left\{
\begin{array}{ll}
\frac{2}{3} & \textrm{if $e=(v,v)$ with $v \in \Bool^3$,}\\
\frac{1}{6} & \textrm{otherwise.}
\end{array}
\right.$$
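One can check on Figure~\ref{fig:iteration:f*} that every vertex of $\Gamma(f^*)$ has exactly one self-loop and two outgoing arcs towards distinct configurations, so that $p$ indeed induces a probability distribution on the outgoing edges of each vertex: $\frac{2}{3}+\frac{1}{6}+\frac{1}{6}=1$.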
The matrix $P$ of the Markov chain associated to the function $f^*$ and to its probability function $p$ is
$$P=\dfrac{1}{6}
\left(
\begin{array}{llllllll}
4&1&1&0&0&0&0&0\\
1&4&0&0&0&1&0&0\\
0&0&4&1&0&0&1&0\\
0&1&1&4&0&0&0&0\\
1&0&0&0&4&0&1&0\\
0&0&0&0&1&4&0&1\\
0&0&0&0&1&0&4&1\\
0&0&0&1&0&1&0&4
\end{array}
\right),$$
where rows and columns are indexed by the configurations of $\Bool^3$ ordered lexicographically from $000$ to $111$.
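For instance, the first row corresponds to the configuration $000$: the self-loop $(000,000)$ has probability $p((000,000))=\frac{2}{3}=\frac{4}{6}$, the arcs towards $001$ and $010$ both have probability $\frac{1}{6}$, and all the other entries of this row are null.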
More generally, let $\pi$ and $\mu$ be two distributions on $\Bool^n$. The \emph{total
variation distance} between $\pi$ and $\mu$ is denoted by $\tv{\pi-\mu}$ and is
defined by
$$\tv{\pi-\mu}=\max_{A\subset \Bool^n} |\pi(A)-\mu(A)|.$$ It is known that
$$\tv{\pi-\mu}=\frac{1}{2}\sum_{x\in\Bool^n}|\pi(x)-\mu(x)|.$$ Moreover, if
$\nu$ is a distribution on $\Bool^n$, one has the triangle inequality
$$\tv{\pi-\mu}\leq \tv{\pi-\nu}+\tv{\nu-\mu}.$$
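As a toy example with $n=1$, let $\pi$ be the uniform distribution on $\Bool$ and let $\mu$ be defined by $\mu(0)=\frac{1}{4}$ and $\mu(1)=\frac{3}{4}$. Then $\tv{\pi-\mu}=\frac{1}{2}\left(\left|\frac{1}{2}-\frac{1}{4}\right|+\left|\frac{1}{2}-\frac{3}{4}\right|\right)=\frac{1}{4}$, the maximum in the first formulation being reached for instance with $A=\{1\}$.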
Let $P$ be the matrix of a Markov chain on $\Bool^n$. We denote by $P(x,\cdot)$ the
distribution induced by the $x$-th row of $P$. If the Markov chain induced by
$P$ has a stationary distribution $\pi$, then we define
$$d(t)=\max_{x\in\Bool^n}\tv{P^t(x,\cdot)-\pi}.$$
It is known that $d(t)$ is non-increasing, \textit{i.e.}, $d(t+1)\leq d(t)$ for all $t$ (see, \textit{e.g.},~\cite{LevinPeresWilmer2006}).
Let $(X_t)_{t\in \mathbb{N}}$ be a sequence of $\Bool^n$-valued random
variables. An $\mathbb{N}$-valued random variable $\tau$ is a {\it stopping
time} for the sequence $(X_t)$ if for each $t$ there exists $B_t\subseteq
(\Bool^n)^{t+1}$ such that $\{\tau=t\}=\{(X_0,X_1,\ldots,X_t)\in B_t\}$.
In other words, the event $\{\tau = t \}$ only depends on the values of
$(X_0,X_1,\ldots,X_t)$, not on any $X_k$ with $k > t$.
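A classical example is the hitting time of a fixed configuration $y\in\Bool^n$, namely $\tau_y=\min\{t\in\mathbb{N} \mid X_t=y\}$: whether $\tau_y=t$ holds can be decided by inspecting $(X_0,\ldots,X_t)$ only, so $\tau_y$ is a stopping time. Conversely, the last time the sequence visits $y$ is not a stopping time in general, since deciding it requires knowledge of the future of the sequence.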
Let $(X_t)_{t\in \mathbb{N}}$ be a Markov chain and $f(X_{t-1},Z_t)$ a
random mapping representation of the Markov chain, that is, a function $f$
together with an i.i.d.\ sequence $(Z_t)_{t\in\mathbb{N}}$ of random variables
such that $X_t=f(X_{t-1},Z_t)$ for all $t\geq 1$. A {\it randomized
stopping time} for the Markov chain is a stopping time for
$(Z_t)_{t\in\mathbb{N}}$. If the Markov chain is irreducible and has $\pi$
as stationary distribution, then a {\it stationary time} $\tau$ is a
randomized stopping time (possibly depending on the starting position $x$)
such that the distribution of $X_\tau$ is $\pi$:
$$\P_x(X_\tau=y)=\pi(y).$$
A {\it strong stationary time} is a stationary time $\tau$ such that, in addition, $X_\tau$ is independent of $\tau$.
If $\tau$ is a strong stationary time, then $d(t)\leq \max_{x\in\Bool^n}
\P_x(\tau > t)$ (see, \textit{e.g.},~\cite{LevinPeresWilmer2006}).
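As a classical illustration, independent of the maps studied here, consider the chain on $\Bool^n$ that, at each step, picks a coordinate $i\in\llbracket 1;n\rrbracket$ uniformly at random and replaces $x_i$ by an independent uniform bit. The first time $\tau$ at which every coordinate has been picked at least once is a strong stationary time: at that moment each coordinate is an independent uniform bit whatever the value of $\tau$, so $X_\tau$ is uniformly distributed over $\Bool^n$ and independent of $\tau$. The above bound, combined with a coupon collector argument, then yields $d(t)\leq \max_{x\in\Bool^n}\P_x(\tau>t)\leq n\left(1-\frac{1}{n}\right)^{t}$.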