X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/rairo15.git/blobdiff_plain/49ca2ad778013ab14ba1e6f9ab70697a97557e26..e878014111cef8e935255d892d3d132429d03469:/preliminaries.tex?ds=inline

diff --git a/preliminaries.tex b/preliminaries.tex
index 3b559b1..9441aab 100644
--- a/preliminaries.tex
+++ b/preliminaries.tex
@@ -51,109 +51,69 @@ Figure~\ref{fig:iteration:f*}.
 \end{figure}
 \end{xpl}
 
-% \vspace{-0.5em}
-% It is easy to associate a Markov Matrix $M$ to such a graph $G(f)$
-% as follows:
-
-% $M_{ij} = \frac{1}{n}$ if there is an edge from $i$ to $j$ in $\Gamma(f)$ and $i \neq j$; $M_{ii} = 1 - \sum\limits_{j=1, j\neq i}^n M_{ij}$; and $M_{ij} = 0$ otherwise.
-
-% \begin{xpl}
-% The Markov matrix associated to the function $f^*$ is
-
-% \[
-% M=\dfrac{1}{3} \left(
-% \begin{array}{llllllll}
-% 1&1&1&0&0&0&0&0 \\
-% 1&1&0&0&0&1&0&0 \\
-% 0&0&1&1&0&0&1&0 \\
-% 0&1&1&1&0&0&0&0 \\
-% 1&0&0&0&1&0&1&0 \\
-% 0&0&0&0&1&1&0&1 \\
-% 0&0&0&0&1&0&1&1 \\
-% 0&0&0&1&0&1&0&1
-% \end{array}
-% \right)
-% \]
-%\end{xpl}
 
 Let such a map be given.
-This article focusses on studying its iterations according to
+This article focuses on studying its iterations according to
 equation~(\ref{eq:asyn}) with a given strategy.
 First of all, this can be interpreted as a walk in its iteration graph,
 where the choice of which edge to follow is decided by the strategy.
 Notice that the iteration graph is always a subgraph of the
-$n$-cube augemented with all the self-loop, \textit{i.e.}, all the
+$n$-cube augmented with all the self-loops, \textit{i.e.}, all the
 edges $(v,v)$ for any $v \in \Bool^n$.
 Next, if we add probabilities to the transition graph, iterations can be
 interpreted as Markov chains.
 
+\begin{xpl}
+Let us consider for instance
+the graph $\Gamma(f^*)$ defined
+in Figure~\ref{fig:iteration:f*} and
+the probability function $p$ defined on the set of edges as follows:
+$$
+p(e) = \left\{
+\begin{array}{ll}
+\frac{2}{3} & \textrm{if $e=(v,v)$ with $v \in \Bool^3$,}\\
+\frac{1}{6} & \textrm{otherwise.}
+\end{array}
+\right.
+$$
+The matrix $P$ of the Markov chain associated with the function $f^*$ and with its probability function $p$ is
+\[
+P=\dfrac{1}{6} \left(
+\begin{array}{llllllll}
+4&1&1&0&0&0&0&0 \\
+1&4&0&0&0&1&0&0 \\
+0&0&4&1&0&0&1&0 \\
+0&1&1&4&0&0&0&0 \\
+1&0&0&0&4&0&1&0 \\
+0&0&0&0&1&4&0&1 \\
+0&0&0&0&1&0&4&1 \\
+0&0&0&1&0&1&0&4
+\end{array}
+\right)
+\]
+\end{xpl}
 
-Let $\pi$, $\mu$ be two distributions on the same set $\Omega$. The total
-variation distance between $\pi$ and $\mu$ is denoted $\tv{\pi-\mu}$ and is
-defined by
-$$\tv{\pi-\mu}=\max_{A\subset \Omega} |\pi(A)-\mu(A)|.$$ It is known that
-$$\tv{\pi-\mu}=\frac{1}{2}\sum_{x\in\Omega}|\pi(x)-\mu(x)|.$$ Moreover, if
-$\nu$ is a distribution on $\Omega$, one has
-$$\tv{\pi-\mu}\leq \tv{\pi-\nu}+\tv{\nu-\mu}.$$
-
-Let $P$ be the matrix of a Markov chain on $\Omega$. $P(x,\cdot)$ is the
-distribution induced by the $x$-th row of $P$. If the Markov chain induced by
-$P$ has a stationary distribution $\pi$, then we define
-$$d(t)=\max_{x\in\Omega}\tv{P^t(x,\cdot)-\pi},$$
-and
-$$t_{\rm mix}(\varepsilon)=\min\{t \mid d(t)\leq \varepsilon\}.$$
-One can prove that
-$$t_{\rm mix}(\varepsilon)\leq \lceil\log_2(\varepsilon^{-1})\rceil t_{\rm mix}(\frac{1}{4}).$$
-It is known that $d(t+1)\leq d(t)$.
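To make these definitions concrete, here is a minimal numerical sketch (our illustration, not part of the original source; it assumes NumPy is available, and the helper name \texttt{tv} is ours). It rebuilds the matrix $P$ of the example above, checks that it is doubly stochastic, and scans $d(t)$ against the uniform distribution until it drops below $1/4$, which yields $t_{\rm mix}(1/4)$:

\begin{verbatim}
import numpy as np

# Markov matrix P of the example above: p(v,v) = 2/3 on self-loops,
# p(e) = 1/6 on the other edges of Gamma(f*).
P = np.array([[4, 1, 1, 0, 0, 0, 0, 0],
              [1, 4, 0, 0, 0, 1, 0, 0],
              [0, 0, 4, 1, 0, 0, 1, 0],
              [0, 1, 1, 4, 0, 0, 0, 0],
              [1, 0, 0, 0, 4, 0, 1, 0],
              [0, 0, 0, 0, 1, 4, 0, 1],
              [0, 0, 0, 0, 1, 0, 4, 1],
              [0, 0, 0, 1, 0, 1, 0, 4]]) / 6.0

def tv(pi, mu):
    # Total variation distance: tv(pi - mu) = (1/2) sum_x |pi(x) - mu(x)|.
    return 0.5 * np.abs(pi - mu).sum()

# Rows and columns all sum to 1: P is doubly stochastic,
# so the uniform distribution is stationary.
assert np.allclose(P.sum(axis=0), 1) and np.allclose(P.sum(axis=1), 1)

pi = np.full(8, 1 / 8)
Pt = np.eye(8)
for t in range(1, 100):
    Pt = Pt @ P               # Pt = P^t
    d = max(tv(Pt[x], pi) for x in range(8))
    if d <= 0.25:             # first t with d(t) <= 1/4, i.e. t_mix(1/4)
        print("t_mix(1/4) =", t)
        break
\end{verbatim}

Since $\Gamma(f^*)$ is strongly connected and every vertex carries a self-loop, the chain is irreducible and aperiodic, so the loop is guaranteed to terminate.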
-
-Let $(X_t)_{t\in \mathbb{N}}$ be a sequence of $\Omega$-valued random
-variables. An $\mathbb{N}$-valued random variable $\tau$ is a {\it stopping
-  time} for the sequence $(X_i)$ if for each $t$ there exists $B_t\subseteq
-\Omega^{t+1}$ such that $\{\tau=t\}=\{(X_0,X_1,\ldots,X_t)\in B_t\}$.
-
-Let $(X_t)_{t\in \mathbb{N}}$ be a Markov chain and $f(X_{t-1},Z_t)$ a
-random mapping representation of the Markov chain. A {\it randomized
-  stopping time} for the Markov chain is a stopping time for
-$(Z_t)_{t\in\mathbb{N}}$. If the Markov chain is irreducible and has $\pi$
-as its stationary distribution, then a {\it stationary time} $\tau$ is a
-randomized stopping time (possibly depending on the starting position $x$)
-such that the distribution of $X_\tau$ is $\pi$:
-$$\P_x(X_\tau=y)=\pi(y).$$
-Such a stationary time is {\it strong} if, moreover, $X_\tau$ is
-independent of $\tau$.
-
-\JFC{Where has this been proven?}
-\begin{Theo}
-If $\tau$ is a strong stationary time, then $d(t)\leq \max_{x\in\Omega}
-\P_x(\tau > t)$.
-\end{Theo}
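As an illustration of these notions (a classical example in the spirit of~\cite{LevinPeresWilmer2006}, not a construction from this paper), consider the walk on $\Bool^n$ whose random mapping representation draws $Z_t=(i_t,b_t)$, a uniform coordinate and a uniform bit, and sets coordinate $i_t$ of $X_{t-1}$ to $b_t$. The first time $\tau$ at which every coordinate has been drawn at least once is a strong stationary time: at that moment all $n$ bits are independent uniform bits, whatever the value of $\tau$. A minimal simulation sketch (the function name is ours):

\begin{verbatim}
import random

def refresh_walk(n, x0, rng=random):
    # Z_t = (i, b): a uniform coordinate and a uniform bit;
    # X_t sets coordinate i of X_{t-1} to b.  tau = first time every
    # coordinate has been refreshed; it is a stopping time for (Z_t).
    x, refreshed, t = list(x0), set(), 0
    while len(refreshed) < n:
        i, b = rng.randrange(n), rng.randint(0, 1)
        x[i] = b
        refreshed.add(i)
        t += 1
    return tuple(x), t   # X_tau is uniform on Bool^n, independently of tau

# By the theorem above, d(t) <= P(tau > t); here tau is a
# coupon-collector time, so E[tau] = n * (1 + 1/2 + ... + 1/n).
\end{verbatim}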
-
-% Let us first recall the \emph{Total Variation} distance $\tv{\pi-\mu}$,
-% which is defined for two distributions $\pi$ and $\mu$ on the same set
-% $\Omega$ by:
-% $$\tv{\pi-\mu}=\max_{A\subset \Omega} |\pi(A)-\mu(A)|.$$
-% It is known that
-% $$\tv{\pi-\mu}=\frac{1}{2}\sum_{x\in\Omega}|\pi(x)-\mu(x)|.$$
-
-% Let then $M(x,\cdot)$ be the
-% distribution induced by the $x$-th row of $M$. If the Markov chain
-% induced by
-% $M$ has a stationary distribution $\pi$, then we define
-% $$d(t)=\max_{x\in\Omega}\tv{M^t(x,\cdot)-\pi}.$$
-Intuitively $d(t)$ is the largest deviation between
-the distribution $\pi$ and $M^t(x,\cdot)$, which
-is the result of iterating $t$ times the function.
-Finally, let $\varepsilon$ be a positive number, the \emph{mixing time}
-with respect to $\varepsilon$ is given by
-$$t_{\rm mix}(\varepsilon)=\min\{t \mid d(t)\leq \varepsilon\}.$$
-It defines the smallest iteration number
-that is sufficient to obtain a deviation lesser than $\varepsilon$.
+% % Let us first recall the \emph{Total Variation} distance $\tv{\pi-\mu}$,
+% % which is defined for two distributions $\pi$ and $\mu$ on the same set
+% % $\Bool^n$ by:
+% % $$\tv{\pi-\mu}=\max_{A\subset \Bool^n} |\pi(A)-\mu(A)|.$$
+% % It is known that
+% % $$\tv{\pi-\mu}=\frac{1}{2}\sum_{x\in\Bool^n}|\pi(x)-\mu(x)|.$$
+
+% % Let then $M(x,\cdot)$ be the
+% % distribution induced by the $x$-th row of $M$. If the Markov chain
+% % induced by
+% % $M$ has a stationary distribution $\pi$, then we define
+% % $$d(t)=\max_{x\in\Bool^n}\tv{M^t(x,\cdot)-\pi}.$$
+% Intuitively $d(t)$ is the largest deviation between
+% the distribution $\pi$ and $M^t(x,\cdot)$, which
+% is the result of iterating $t$ times the function.
+% Finally, let $\varepsilon$ be a positive number, the \emph{mixing time}
+% with respect to $\varepsilon$ is given by
+% $$t_{\rm mix}(\varepsilon)=\min\{t \mid d(t)\leq \varepsilon\}.$$
+% It defines the smallest iteration number
+% that is sufficient to obtain a deviation lesser than $\varepsilon$.
 % Notice that the upper and lower bounds of mixing times cannot
 % directly be computed with eigenvalues formulae as expressed
 % in~\cite[Chap. 12]{LevinPeresWilmer2006}. The authors of this latter work
@@ -162,52 +122,4 @@ that is sufficient to obtain a deviation lesser than $\varepsilon$.
 
 
-Let us finally present the pseudorandom number generator $\chi_{\textit{14Secrypt}}$
-which is based on random walks in $\Gamma(f)$.
-More precisely, let there be given a Boolean map $f:\Bool^n \rightarrow \Bool^n$,
-a PRNG \textit{Random},
-an integer $b$ that corresponds to an expected mixing time, and
-an initial configuration $x^0$.
-Starting from $x^0$, the algorithm repeats $b$ times
-a random choice of which edge to follow and traverses this edge.
-The final configuration is then output.
-This PRNG is formalized in Algorithm~\ref{CI Algorithm}.
-
-\vspace{-1em}
-\begin{algorithm}[ht]
-%\begin{scriptsize}
-\KwIn{a function $f$, an iteration number $b$, an initial configuration $x^0$ ($n$ bits)}
-\KwOut{a configuration $x$ ($n$ bits)}
-$x\leftarrow x^0$\;
-\For{$i=0,\dots,b-1$}
-{
-$s\leftarrow{\textit{Random}(n)}$\;
-$x\leftarrow{F_f(s,x)}$\;
-}
-return $x$\;
-%\end{scriptsize}
-\caption{Pseudo-code of the $\chi_{\textit{14Secrypt}}$ PRNG}
-\label{CI Algorithm}
-\end{algorithm}
-\vspace{-0.5em}
-This PRNG is a particularized version of the algorithm given in~\cite{BCGR11}.
-Compared to the latter, the length of the random
-walk in our algorithm is always constant (and equal to $b$), whereas it
-was provided by a second PRNG in the original algorithm.
-However, all the theoretical results given in~\cite{BCGR11} remain
-true since the proofs do not rely on this fact.
-
-Let $f: \Bool^{n} \rightarrow \Bool^{n}$.
-It has been shown~\cite[Th. 4, p. 135]{BCGR11} that
-if its iteration graph is strongly connected, then
-the output of $\chi_{\textit{14Secrypt}}$ follows
-a law that tends to the uniform distribution
-if and only if its Markov matrix is a doubly stochastic matrix.
-
-Let us now present a method to
-generate functions
-with a Doubly Stochastic matrix and a Strongly Connected iteration graph,
-denoted as DSSC in what follows.
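To fix ideas, here is a minimal sketch of Algorithm~\ref{CI Algorithm} above (our illustration, not the authors' reference implementation). It assumes the unary setting of equation~(\ref{eq:asyn}), in which $F_f(s,x)$ replaces coordinate $s$ of $x$ by $f(x)_s$; the function names are ours.

\begin{verbatim}
import random

def chi_14secrypt(f, n, b, x0, rng=random):
    # Walk b steps in the iteration graph Gamma(f), starting from x0;
    # each step draws a coordinate s and applies F_f(s, x).
    x = list(x0)
    for _ in range(b):
        s = rng.randrange(n)      # s <- Random(n)
        x[s] = f(tuple(x))[s]     # x <- F_f(s, x)
    return tuple(x)

# Hypothetical usage with the negation map: its iteration graph is the
# full n-cube (strongly connected) and its Markov matrix is doubly
# stochastic, so the output law tends to the uniform distribution.
neg = lambda x: tuple(1 - v for v in x)
print(chi_14secrypt(neg, 3, 100, (0, 0, 0)))
\end{verbatim}

A map whose Markov matrix is not doubly stochastic would bias the output; generating DSSC functions, as announced above, is precisely the goal of the next section.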