In what follows, we consider the Boolean algebra on the set
$\Bool=\{0,1\}$ with the classical operators of conjunction '.',
of disjunction '+', of negation '$\overline{~}$', and of
disjunctive union '$\oplus$'.
Let $n$ be a positive integer. A \emph{Boolean map} $f$ is
a function from $\Bool^n$ to itself such that
$x=(x_1,\dots,x_n)$ maps to $f(x)=(f_1(x),\dots,f_n(x))$.
Functions are iterated as follows:
at the $t$-th iteration, only the $s_t$-th component is said to be
``iterated'', where $s = \left(s_t\right)_{t \in \mathds{N}}$ is a sequence of indices taken in $\llbracket 1;n \rrbracket$, called a ``strategy''.
More formally, let $F_f: \llbracket1;n\rrbracket\times \Bool^{n} \rightarrow \Bool^n$ be defined by
\[
F_f(i,x)=(x_1,\dots,x_{i-1},f_i(x),x_{i+1},\dots,x_n).
\]
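To fix ideas, here is a minimal Python sketch of this single-component update (the names \texttt{F} and \texttt{f} are ours; configurations are tuples of $0/1$ integers and $f$ is assumed to return a tuple of $n$ bits):
\begin{verbatim}
# Update only the i-th component of x (i is 1-based, as above).
def F(f, i, x):
    y = list(x)
    y[i - 1] = f(x)[i - 1]  # f_i(x)
    return tuple(y)
\end{verbatim}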
Then, let $x^0\in\Bool^n$ be an initial configuration and $s\in
\llbracket1;n\rrbracket^\Nats$ be a strategy;
the dynamics are described by the recurrence
\begin{equation}\label{eq:asyn}
x^{t+1}=F_f(s_t,x^t).
\end{equation}
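The recurrence itself can be transcribed directly (again a sketch with our names, reusing \texttt{F} above; the strategy is given as a finite list of $1$-based indices):
\begin{verbatim}
# Iterate x^{t+1} = F_f(s_t, x^t) along a finite prefix of a strategy.
def iterate(f, x0, s):
    x = x0
    for s_t in s:
        x = F(f, s_t, x)
    return x
\end{verbatim}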
Let a Boolean map $f$ be given. Its associated
\emph{iteration graph} $\Gamma(f)$ is the
directed graph whose set of vertices is
$\Bool^n$ and such that, for all $x\in\Bool^n$ and $i\in \llbracket1;n\rrbracket$,
the graph $\Gamma(f)$ contains an arc from $x$ to $F_f(i,x)$.
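For small $n$, this graph can be enumerated exhaustively, as in the following sketch (helper names are ours), which collects one arc $(x, F_f(i,x))$ per configuration and per component:
\begin{verbatim}
from itertools import product

# Arcs of Gamma(f): (x, F_f(i, x)) for all x in B^n, i in [1, n].
def iteration_graph(f, n):
    arcs = set()
    for x in product((0, 1), repeat=n):
        for i in range(1, n + 1):
            arcs.add((x, F(f, i, x)))
    return arcs
\end{verbatim}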
Let us consider for instance $n=3$. Let
$f^*: \Bool^3 \rightarrow \Bool^3$ be defined by
$f^*(x_1,x_2,x_3) =
(x_2 \oplus x_3, \overline{x_1}\,\overline{x_3} + x_1\overline{x_2},
\overline{x_1}\,\overline{x_3} + x_1x_2)$.
The iteration graph $\Gamma(f^*)$ of this function is given in
Figure~\ref{fig:iteration:f*}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{images/iter_f0c}
\caption{Iteration graph $\Gamma(f^*)$ of the function $f^*$.}\label{fig:iteration:f*}
\end{figure}
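In the same spirit, $f^*$ can be transcribed as follows (a sketch with our names; \texttt{\^{}}, \texttt{\&}, \texttt{|}, and \texttt{1 - b} play the roles of $\oplus$, conjunction, disjunction, and negation on $0/1$ integers); feeding it to the \texttt{iteration\_graph} sketch above should reproduce the arcs drawn in Figure~\ref{fig:iteration:f*}:
\begin{verbatim}
# Transcription of f*(x1, x2, x3) for n = 3.
def f_star(x):
    x1, x2, x3 = x
    return (x2 ^ x3,
            (1 - x1) & (1 - x3) | x1 & (1 - x2),
            (1 - x1) & (1 - x3) | x1 & x2)

# Example: iteration_graph(f_star, 3) enumerates Gamma(f*).
\end{verbatim}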
It is easy to associate a Markov matrix $M$ to such a graph $\Gamma(f)$
as follows:
$M_{ij} = \frac{1}{n}$ if there is an edge from $i$ to $j$ in $\Gamma(f)$ and $i \neq j$;
$M_{ii} = 1 - \sum_{j \neq i} M_{ij}$; and $M_{ij} = 0$ otherwise.
Let thus such a map be given.
This article focuses on studying its iterations according to
equation~(\ref{eq:asyn}) with a given strategy.
First of all, these iterations can be interpreted as a walk in the iteration graph,
where the choice of the edge to follow is decided by the strategy.
Notice that the iteration graph is always a subgraph of the
$n$-cube augmented with all the self-loops, \textit{i.e.}, all the
edges $(v,v)$ for $v \in \Bool^n$.
Next, if we add probabilities on the transition graph, iterations can be
interpreted as Markov chains.
Let $\pi$ and $\mu$ be two distributions on the same set $\Omega$. The total
variation distance between $\pi$ and $\mu$ is denoted $\tv{\pi-\mu}$ and is
defined by
$$\tv{\pi-\mu}=\max_{A\subset \Omega} |\pi(A)-\mu(A)|.$$ It is known that
$$\tv{\pi-\mu}=\frac{1}{2}\sum_{x\in\Omega}|\pi(x)-\mu(x)|.$$ Moreover, if
$\nu$ is a distribution on $\Omega$, one has the triangle inequality
$$\tv{\pi-\mu}\leq \tv{\pi-\nu}+\tv{\nu-\mu}.$$
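On a finite set $\Omega$, the summation formula above gives a direct way to compute this distance; here is a minimal Python sketch (names are ours; distributions are dictionaries mapping elements of $\Omega$ to probabilities):
\begin{verbatim}
# Total variation distance via 1/2 * sum over x of |pi(x) - mu(x)|.
def tv(pi, mu):
    support = set(pi) | set(mu)
    return 0.5 * sum(abs(pi.get(x, 0.0) - mu.get(x, 0.0))
                     for x in support)
\end{verbatim}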
Let $P$ be the matrix of a Markov chain on $\Omega$; $P(x,\cdot)$ is the
distribution induced by the $x$-th row of $P$. If the Markov chain induced by
$P$ has a stationary distribution $\pi$, then we define
$$d(t)=\max_{x\in\Omega}\tv{P^t(x,\cdot)-\pi},$$
and, for $\varepsilon > 0$, the \emph{mixing time}
$$t_{\rm mix}(\varepsilon)=\min\{t \mid d(t)\leq \varepsilon\}.$$
One can prove that
$$t_{\rm mix}(\varepsilon)\leq \lceil\log_2(\varepsilon^{-1})\rceil t_{\rm mix}\left(\frac{1}{4}\right).$$
It is known that $d(t+1)\leq d(t)$.
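For a small state space, $d(t)$ and $t_{\rm mix}(\varepsilon)$ can be evaluated by brute force, as in the sketch below (our names; it assumes \texttt{numpy} and that $P$ and $\pi$ are given as a stochastic matrix and a vector):
\begin{verbatim}
import numpy as np

# d(t) = max over x of tv(P^t(x, .), pi), row by row.
def d(P, pi, t):
    Pt = np.linalg.matrix_power(P, t)
    return max(0.5 * np.abs(Pt[x] - pi).sum()
               for x in range(P.shape[0]))

# Smallest t with d(t) <= eps, searched up to t_max.
def t_mix(P, pi, eps, t_max=10000):
    for t in range(t_max + 1):
        if d(P, pi, t) <= eps:
            return t
    return None
\end{verbatim}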
Let $(X_t)_{t\in \mathbb{N}}$ be a sequence of $\Omega$-valued random
variables. A $\mathbb{N}$-valued random variable $\tau$ is a {\it stopping
time} for the sequence $(X_i)$ if for each $t$ there exists $B_t\subseteq
\Omega^{t+1}$ such that $\{\tau=t\}=\{(X_0,X_1,\ldots,X_t)\in B_t\}$.
Let $(X_t)_{t\in \mathbb{N}}$ be a Markov chain and $f(X_{t-1},Z_t)$ a
random mapping representation of the Markov chain. A {\it randomized
stopping time} for the Markov chain is a stopping time for
$(Z_t)_{t\in\mathbb{N}}$. If the Markov chain is irreducible and has $\pi$
as stationary distribution, then a {\it stationary time} $\tau$ is a
randomized stopping time (possibly depending on the starting position $x$)
such that the distribution of $X_\tau$ is $\pi$:
$$\P_x(X_\tau=y)=\pi(y).$$
If moreover $X_\tau$ is independent of $\tau$, then $\tau$ is called a
{\it strong stationary time}.
If $\tau$ is a strong stationary time, then
$$d(t)\leq \max_{x\in\Omega} \P_x(\tau > t).$$
Intuitively, $d(t)$ is the largest deviation between
the distribution $\pi$ and $P^t(x,\cdot)$, which
is the distribution obtained after iterating the chain $t$ times from $x$.
The mixing time $t_{\rm mix}(\varepsilon)$ is thus the smallest number of iterations
that is sufficient to obtain a deviation smaller than $\varepsilon$.
% Notice that the upper and lower bounds of mixing times cannot
% directly be computed with eigenvalues formulae as expressed
% in~\cite[Chap. 12]{LevinPeresWilmer2006}. The authors of this latter work
% only consider reversible Markov matrices whereas we do not restrict our
% matrices to such a form.
Let us finally present the pseudorandom number generator $\chi_{\textit{14Secrypt}}$,
which is based on random walks in $\Gamma(f)$.
More precisely, let there be given a Boolean map $f:\Bool^n \rightarrow \Bool^n$,
a PRNG \textit{Random},
an integer $b$ that corresponds to an expected mixing time, and
an initial configuration $x^0$.
Starting from $x^0$, the algorithm repeats $b$ times
a random choice of which edge to follow and traverses this edge.
The final configuration is then output.
This PRNG is formalized in Algorithm~\ref{CI Algorithm}.
\begin{algorithm}[ht]
\KwIn{a function $f$, an iteration number $b$, an initial configuration $x^0$ ($n$ bits)}
\KwOut{a configuration $x$ ($n$ bits)}
$x\leftarrow x^0$\;
\For{$i=0,\dots,b-1$}
{
$s\leftarrow{\textit{Random}(n)}$\;
$x\leftarrow{F_f(s,x)}$\;
}
return $x$\;
\caption{Pseudo code of the $\chi_{\textit{14Secrypt}}$ PRNG}
\label{CI Algorithm}
\end{algorithm}
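For reference, a direct Python transcription of this algorithm could look as follows (a sketch with our names, reusing \texttt{F} above; \texttt{random.randrange} plays the role of \textit{Random}):
\begin{verbatim}
import random

# One n-bit output after a random walk of length b in Gamma(f).
def chi_14secrypt(f, n, b, x0):
    x = x0
    for _ in range(b):
        s = random.randrange(1, n + 1)  # uniform component choice
        x = F(f, s, x)
    return x
\end{verbatim}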
This PRNG is a particularized version of the algorithm given in~\cite{BCGR11}.
Compared to the latter, the length of the random
walk of our algorithm is always constant (and equal to $b$), whereas it
was given by a second PRNG in the original version.
However, all the theoretical results given in~\cite{BCGR11} remain
true since the proofs do not rely on this fact.
Let $f: \Bool^{n} \rightarrow \Bool^{n}$.
It has been shown~\cite[Th. 4, p. 135]{BCGR11} that,
if its iteration graph is strongly connected, then
the output of $\chi_{\textit{14Secrypt}}$ follows
a law that tends to the uniform distribution
if and only if its Markov matrix is a doubly stochastic matrix.
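For small $n$, this condition can be checked mechanically, as in the sketch below (our helper names; following the definition of $M$ above, each arc of $\Gamma(f)$ receives probability $1/n$ and self-loops accumulate on the diagonal):
\begin{verbatim}
import numpy as np
from itertools import product

# Markov matrix of Gamma(f), one 1/n contribution per component.
def markov_matrix(f, n):
    size = 2 ** n
    P = np.zeros((size, size))
    idx = lambda x: int("".join(map(str, x)), 2)
    for x in product((0, 1), repeat=n):
        for i in range(1, n + 1):
            P[idx(x), idx(F(f, i, x))] += 1.0 / n
    return P

# Doubly stochastic: columns sum to 1 (rows do by construction).
# Strongly connected: (I + A)^(m-1) positive everywhere, where A is
# the adjacency pattern of P and m its dimension.
def is_dssc(P, tol=1e-9):
    doubly = np.allclose(P.sum(axis=0), 1.0, atol=tol)
    A = (P > 0).astype(float)
    m = A.shape[0]
    R = np.linalg.matrix_power(A + np.eye(m), m - 1)
    return doubly and bool((R > 0).all())
\end{verbatim}
For instance, \texttt{is\_dssc(markov\_matrix(f\_star, 3))} should return \texttt{True}, in accordance with the theorem above.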
Let us now present a method to generate functions
with a Doubly Stochastic matrix and a Strongly Connected iteration graph,
further denoted as DSSC matrix.