\end{figure}
\end{xpl}
Thus, let such a map be given; this article focuses on studying its iterations.
If we add probabilities on the transition graph, these iterations can be
interpreted as a Markov chain.
\begin{xpl}
Let us consider for instance
the graph $\Gamma(f)$ defined
in \textsc{Figure}~\ref{fig:iteration:f*} and
the probability function $p$ defined on the set of edges as follows:
$$
p(e) = \left\{
\begin{array}{ll}
\frac{2}{3} & \textrm{if $e=(v,v)$ with $v \in \Bool^3$,}\\
\frac{1}{6} & \textrm{otherwise.}
\end{array}
\right.
$$
The matrix $P$ of the Markov chain associated to the function $f^*$ and to the probability function $p$ is
\[
P=\dfrac{1}{6} \left(
\begin{array}{llllllll}
4&1&1&0&0&0&0&0 \\
1&4&0&0&0&1&0&0 \\
0&0&4&1&0&0&1&0 \\
0&1&1&4&0&0&0&0 \\
1&0&0&0&4&0&1&0 \\
0&0&0&0&1&4&0&1 \\
0&0&0&0&1&0&4&1 \\
0&0&0&1&0&1&0&4
\end{array}
\right)
\]
\end{xpl}
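
The construction of $P$ from $\Gamma(f)$ and $p$ can also be checked
programmatically. The following Python sketch is an illustration only: the
successor lists are simply read off the non-diagonal entries of the matrix
above, and the script rebuilds $P$ and verifies that it is doubly stochastic.
\begin{verbatim}
import numpy as np

# the two non-loop successors of each vertex of Gamma(f*) (0-indexed),
# read off the non-diagonal entries of the matrix P above
succ = {0: (1, 2), 1: (0, 5), 2: (3, 6), 3: (1, 2),
        4: (0, 6), 5: (4, 7), 6: (4, 7), 7: (3, 5)}

P = np.zeros((8, 8))
for v, (a, b) in succ.items():
    P[v, a] = P[v, b] = 1 / 6    # p(e) = 1/6 on each non-loop edge
    P[v, v] = 2 / 3              # p(e) = 2/3 on the self-loop (v, v)

# rows and columns all sum to 1: P is doubly stochastic
assert np.allclose(P.sum(axis=0), 1) and np.allclose(P.sum(axis=1), 1)
\end{verbatim}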
More generally, let $\pi$ and $\mu$ be two distributions on $\Bool^n$. The total
variation distance between $\pi$ and $\mu$ is denoted $\tv{\pi-\mu}$ and is
defined by
$$\tv{\pi-\mu}=\max_{A\subset \Bool^n} |\pi(A)-\mu(A)|.$$ It is known that
$$\tv{\pi-\mu}=\frac{1}{2}\sum_{x\in\Bool^n}|\pi(x)-\mu(x)|.$$ Moreover, if
$\nu$ is a distribution on $\Bool^n$, one has
$$\tv{\pi-\mu}\leq \tv{\pi-\nu}+\tv{\nu-\mu}.$$
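For instance, for $n=1$, if $\pi$ is the uniform distribution on $\Bool$ and
$\mu$ is given by $\mu(0)=\frac{3}{4}$ and $\mu(1)=\frac{1}{4}$, then
$$\tv{\pi-\mu}=\frac{1}{2}\left(\left|\frac{1}{2}-\frac{3}{4}\right|
+\left|\frac{1}{2}-\frac{1}{4}\right|\right)=\frac{1}{4},$$
the maximum in the first formulation being attained for $A=\{0\}$.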
Let $P$ be the matrix of a Markov chain on $\Bool^n$. $P(x,\cdot)$ is the
distribution induced by the $x$-th row of $P$. If the Markov chain induced by
$P$ has a stationary distribution $\pi$, then we define
$$d(t)=\max_{x\in\Bool^n}\tv{P^t(x,\cdot)-\pi}.$$
Intuitively, $d(t)$ is the largest deviation between the distribution $\pi$
and $P^t(x,\cdot)$, which is the result of iterating $t$ times the function
from the configuration $x$.
It is known that $d(t+1)\leq d(t)$. \JFC{Reference? Is this actually needed
in the proof below?}
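On the example above, $d(t)$ can be evaluated numerically from its
definition. The following sketch reuses the matrix \texttt{P} built in the
previous sketch; since $P$ is doubly stochastic, its stationary distribution
$\pi$ is uniform. It also lets one observe the monotonicity of $d$:
\begin{verbatim}
import numpy as np

def d(P, t):
    """Largest total variation distance between P^t(x, .) and
    the uniform distribution, over all starting points x."""
    pi = np.full(len(P), 1 / len(P))     # uniform stationary distribution
    Pt = np.linalg.matrix_power(P, t)
    return max(0.5 * abs(row - pi).sum() for row in Pt)

# with P from the previous sketch: d(0) >= d(1) >= ... decreases to 0
print([round(d(P, t), 4) for t in range(6)])
\end{verbatim}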
Let $(X_t)_{t\in \mathbb{N}}$ be a sequence of $\Bool^n$-valued random
variables. An $\mathbb{N}$-valued random variable $\tau$ is a {\it stopping
time} for the sequence $(X_i)$ if for each $t$ there exists $B_t\subseteq
(\Bool^n)^{t+1}$ such that $\{\tau=t\}=\{(X_0,X_1,\ldots,X_t)\in B_t\}$.
In other words, the event $\{\tau = t \}$ only depends on the values of
$(X_0,X_1,\ldots,X_t)$, not on $X_k$ with $k > t$.
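For instance, the first hitting time of a fixed configuration $y\in\Bool^n$,
namely $\tau_y=\min\{t\in\mathbb{N} \mid X_t=y\}$, is a stopping time, whereas
the time of the last visit to $y$ is not, since deciding it requires knowing
the future of the sequence.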


\JFC{I do not understand the definition of a randomized stopping time. Can we expand it?}
Let $(X_t)_{t\in \mathbb{N}}$ be a Markov chain and $f(X_{t-1},Z_t)$ a
random mapping representation of the Markov chain. A {\it randomized
stopping time} for the Markov chain is a stopping time for
$(Z_t)_{t\in\mathbb{N}}$: the event $\{\tau=t\}$ only depends on the random
inputs $Z_1,\ldots,Z_t$ that drive the chain. If the Markov chain is
irreducible and has $\pi$
as stationary distribution, then a {\it stationary time} $\tau$ is a
randomized stopping time (possibly depending on the starting position $x$)
such that the distribution of $X_\tau$ is $\pi$:
$$\P_x(X_\tau=y)=\pi(y).$$
If moreover $X_\tau$ is independent of $\tau$, then $\tau$ is called a
{\it strong stationary time}.
\JFC{Where has this been proved?}
\begin{Theo}
If $\tau$ is a strong stationary time, then $d(t)\leq \max_{x\in\Bool^n}
\P_x(\tau > t)$.
\end{Theo}
Let us finally present the pseudorandom number generator $\chi_{\textit{15Rairo}}$
which is based on random walks in $\Gamma(f)$.
More precisely, the generator takes as input a Boolean map $f:\Bool^n \rightarrow \Bool^n$,
a PRNG \textit{Random},
an integer $b$ that corresponds to an iteration number (\textit{i.e.}, the length of the walk), and
an initial configuration $x^0$.
Starting from $x^0$, the algorithm repeats $b$ times the following step:
if $\textit{Random}(1)$ is not $0$, it randomly chooses which edge to follow
and traverses it; otherwise the current configuration is kept.
\begin{algorithm}[ht]
%\begin{scriptsize}
\KwIn{a function $f$, an iteration number $b$, an initial configuration $x^0$ ($n$ bits)}
$x\leftarrow x^0$\;
\For{$i=0,\dots,b-1$}
{
\If{$\textit{Random}(1) \neq 0$}{
$s\leftarrow{\textit{Random}(n)}$\;
$x\leftarrow{F_f(s,x)}$\;
}
}
return $x$\;
%\end{scriptsize}
\caption{Pseudo Code of the $\chi_{\textit{15Rairo}}$ PRNG}
\label{CI Algorithm}
\end{algorithm}
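For illustration, here is a minimal Python sketch of this algorithm. It
assumes that configurations are encoded as $n$-bit integers, that $f$ is
supplied as a Python function on such integers, that $F_f(s,x)$ replaces the
$s$-th bit of $x$ by the $s$-th bit of $f(x)$, and that $\textit{Random}(1)$
returns a uniform random bit while $\textit{Random}(n)$ returns a uniform
coordinate in $\{0,\ldots,n-1\}$:
\begin{verbatim}
import random

def chi_15rairo(f, n, b, x0):
    """b-step lazy random walk in Gamma(f), starting from x0."""
    x = x0
    for _ in range(b):
        if random.randrange(2) != 0:   # Random(1) != 0: attempt a move
            s = random.randrange(n)    # Random(n): pick a coordinate s
            f_s = (f(x) >> s) & 1      # s-th bit of f(x)
            x = (x & ~(1 << s)) | (f_s << s)   # F_f(s, x)
    return x
\end{verbatim}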


This PRNG is a particularized version of the algorithm given in~\cite{DBLP:conf/secrypt/CouchotHGWB14}.
As in the latter, the length of the random
walk of our algorithm is always constant (and is equal to $b$).
However, in the current version, we add the constraint that a move along
an edge is attempted only when $\textit{Random}(1)$ is not $0$, so that
every configuration of the induced Markov chain has a positive self-loop
probability.

Let $f: \Bool^{n} \rightarrow \Bool^{n}$.
It has been shown~\cite[Th. 4, p. 135]{BCGR11} that
if its iteration graph $\Gamma(f)$ is strongly connected, then the output of
this PRNG follows a law that tends to the uniform distribution if and only if
its Markov matrix is a doubly stochastic matrix.
Let us now present a method to
generate functions
with a Doubly Stochastic matrix and a Strongly Connected iteration graph,
denoted as DSSC matrix.