X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/rairo15.git/blobdiff_plain/f18e5003b9da99d6565d366a1a9d56f8d893e5d9..3820ec6286eab552baad3b3d7c0451e1bbc42482:/preliminaries.tex?ds=inline

diff --git a/preliminaries.tex b/preliminaries.tex
index 8f7774a..9441aab 100644
--- a/preliminaries.tex
+++ b/preliminaries.tex
@@ -9,8 +9,9 @@ to itself such that
 $x=(x_1,\dots,x_n)$ maps to $f(x)=(f_1(x),\dots,f_n(x))$.
 Functions are iterated as follows.
-At the $t^{th}$ iteration, only the $s_{t}-$th component is
-``iterated'', where $s = \left(s_t\right)_{t \in \mathds{N}}$ is a sequence of indices taken in $\llbracket 1;n \rrbracket$ called ``strategy''. Formally,
+At the $t^{th}$ iteration, only the $s_{t}$-th component is said to be
+``iterated'', where $s = \left(s_t\right)_{t \in \mathds{N}}$ is a sequence of indices taken in $\llbracket 1;n \rrbracket$ called a ``strategy''.
+Formally,
 let $F_f: \llbracket1;n\rrbracket\times \Bool^{n} \to \Bool^n$ be defined by
 \[
 F_f(i,x)=(x_1,\dots,x_{i-1},f_i(x),x_{i+1},\dots,x_n).
@@ -50,55 +51,69 @@ Figure~\ref{fig:iteration:f*}.
 \end{figure}
 \end{xpl}
-% \vspace{-0.5em}
-% It is easy to associate a Markov Matrix $M$ to such a graph $G(f)$
-% as follows:
-% $M_{ij} = \frac{1}{n}$ if there is an edge from $i$ to $j$ in $\Gamma(f)$ and $i \neq j$; $M_{ii} = 1 - \sum\limits_{j=1, j\neq i}^n M_{ij}$; and $M_{ij} = 0$ otherwise.
-
-% \begin{xpl}
-% The Markov matrix associated to the function $f^*$ is
-
-% \[
-% M=\dfrac{1}{3} \left(
-% \begin{array}{llllllll}
-% 1&1&1&0&0&0&0&0 \\
-% 1&1&0&0&0&1&0&0 \\
-% 0&0&1&1&0&0&1&0 \\
-% 0&1&1&1&0&0&0&0 \\
-% 1&0&0&0&1&0&1&0 \\
-% 0&0&0&0&1&1&0&1 \\
-% 0&0&0&0&1&0&1&1 \\
-% 0&0&0&1&0&1&0&1
-% \end{array}
-% \right)
-% \]
-%\end{xpl}
+Let such a map be given.
+This article focuses on studying its iterations according to
+equation~(\ref{eq:asyn}) for a given strategy.
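The single-component update $F_f$ and its strategy-driven iteration can be sketched in Python. This is a minimal sketch: the toy map `neg` (bitwise negation) and the example strategy are illustrative assumptions, not the article's $f^*$.

```python
def F(f, i, x):
    """F_f(i, x): replace only the i-th component (1-based) of x by f_i(x)."""
    y = list(x)
    y[i - 1] = f(x)[i - 1]
    return tuple(y)

def iterate(f, x0, strategy):
    """x^{t+1} = F_f(s_t, x^t): follow the strategy, one component per step."""
    x = x0
    for s_t in strategy:
        x = F(f, s_t, x)
    return x

# Toy map (an assumption, not the article's f*): negate every bit.
# Only the component selected by the strategy actually changes per step.
neg = lambda x: tuple(1 - b for b in x)
print(iterate(neg, (0, 0, 0), [1, 3, 1]))  # -> (0, 0, 1)
```

Starting from $(0,0,0)$, bit 1 is flipped twice and bit 3 once, hence the final configuration $(0,0,1)$.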
+First of all, these iterations can be interpreted as a walk in the iteration graph,
+where the strategy decides which edge is followed at each step.
+Notice that the iteration graph is always a subgraph of the
+$n$-cube augmented with all the self-loops, \textit{i.e.}, all the
+edges $(v,v)$ for any $v \in \Bool^n$.
+Next, if probabilities are assigned to the edges of this graph, the
+iterations can be interpreted as a Markov chain.
+\begin{xpl}
+Let us consider for instance
+the graph $\Gamma(f)$ defined
+in Figure~\ref{fig:iteration:f*} and
+the probability function $p$ defined on the set of edges as follows:
+$$
+p(e) = \left\{
+\begin{array}{ll}
+\frac{2}{3} & \textrm{if $e=(v,v)$ with $v \in \Bool^3$,}\\
+\frac{1}{6} & \textrm{otherwise.}
+\end{array}
+\right.
+$$
+The matrix $P$ of the Markov chain associated with the function $f^*$ and with its probability function $p$ is
+\[
+P=\dfrac{1}{6} \left(
+\begin{array}{llllllll}
+4&1&1&0&0&0&0&0 \\
+1&4&0&0&0&1&0&0 \\
+0&0&4&1&0&0&1&0 \\
+0&1&1&4&0&0&0&0 \\
+1&0&0&0&4&0&1&0 \\
+0&0&0&0&1&4&0&1 \\
+0&0&0&0&1&0&4&1 \\
+0&0&0&1&0&1&0&4
+\end{array}
+\right)
+\]
+\end{xpl}
-It is usual to check whether rows of such kind of matrices
-converge to a specific
-distribution.
-Let us first recall the \emph{Total Variation} distance $\tv{\pi-\mu}$,
-which is defined for two distributions $\pi$ and $\mu$ on the same set
-$\Omega$ by:
-$$\tv{\pi-\mu}=\max_{A\subset \Omega} |\pi(A)-\mu(A)|.$$
-% It is known that
-% $$\tv{\pi-\mu}=\frac{1}{2}\sum_{x\in\Omega}|\pi(x)-\mu(x)|.$$
-Let then $M(x,\cdot)$ be the
-distribution induced by the $x$-th row of $M$. If the Markov chain
-induced by
-$M$ has a stationary distribution $\pi$, then we define
-$$d(t)=\max_{x\in\Omega}\tv{M^t(x,\cdot)-\pi}.$$
-Intuitively $d(t)$ is the largest deviation between
-the distribution $\pi$ and $M^t(x,\cdot)$, which
-is the result of iterating $t$ times the function. 
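One can check numerically that the matrix $P$ of the example is stochastic in both rows and columns (double stochasticity is the property the article relies on later). A sketch, with the entries copied from the example above:

```python
from fractions import Fraction

# Matrix P of the example, entries given as multiples of 1/6
P6 = [
    [4, 1, 1, 0, 0, 0, 0, 0],
    [1, 4, 0, 0, 0, 1, 0, 0],
    [0, 0, 4, 1, 0, 0, 1, 0],
    [0, 1, 1, 4, 0, 0, 0, 0],
    [1, 0, 0, 0, 4, 0, 1, 0],
    [0, 0, 0, 0, 1, 4, 0, 1],
    [0, 0, 0, 0, 1, 0, 4, 1],
    [0, 0, 0, 1, 0, 1, 0, 4],
]
P = [[Fraction(v, 6) for v in row] for row in P6]

# Every row and every column sums to 1: P is doubly stochastic.
assert all(sum(row) == 1 for row in P)
assert all(sum(col) == 1 for col in zip(*P))
print("P is doubly stochastic")
```

Exact rational arithmetic (`Fraction`) avoids any floating-point tolerance in the check.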
-Finally, let $\varepsilon$ be a positive number, the \emph{mixing time} -with respect to $\varepsilon$ is given by -$$t_{\rm mix}(\varepsilon)=\min\{t \mid d(t)\leq \varepsilon\}.$$ -It defines the smallest iteration number -that is sufficient to obtain a deviation lesser than $\varepsilon$. +% % Let us first recall the \emph{Total Variation} distance $\tv{\pi-\mu}$, +% % which is defined for two distributions $\pi$ and $\mu$ on the same set +% % $\Bool^n$ by: +% % $$\tv{\pi-\mu}=\max_{A\subset \Bool^n} |\pi(A)-\mu(A)|.$$ +% % It is known that +% % $$\tv{\pi-\mu}=\frac{1}{2}\sum_{x\in\Bool^n}|\pi(x)-\mu(x)|.$$ + +% % Let then $M(x,\cdot)$ be the +% % distribution induced by the $x$-th row of $M$. If the Markov chain +% % induced by +% % $M$ has a stationary distribution $\pi$, then we define +% % $$d(t)=\max_{x\in\Bool^n}\tv{M^t(x,\cdot)-\pi}.$$ +% Intuitively $d(t)$ is the largest deviation between +% the distribution $\pi$ and $M^t(x,\cdot)$, which +% is the result of iterating $t$ times the function. +% Finally, let $\varepsilon$ be a positive number, the \emph{mixing time} +% with respect to $\varepsilon$ is given by +% $$t_{\rm mix}(\varepsilon)=\min\{t \mid d(t)\leq \varepsilon\}.$$ +% It defines the smallest iteration number +% that is sufficient to obtain a deviation lesser than $\varepsilon$. % Notice that the upper and lower bounds of mixing times cannot % directly be computed with eigenvalues formulae as expressed % in~\cite[Chap. 12]{LevinPeresWilmer2006}. The authors of this latter work @@ -107,52 +122,4 @@ that is sufficient to obtain a deviation lesser than $\varepsilon$. -Let us finally present the pseudorandom number generator $\chi_{\textit{14Secrypt}}$ -which is based on random walks in $\Gamma(f)$. -More precisely, let be given a Boolean map $f:\Bool^n \rightarrow \Bool^n$, -a PRNG \textit{Random}, -an integer $b$ that corresponds to an awaited mixing time, and -an initial configuration $x^0$. 
-Starting from $x^0$, the algorithm repeats $b$ times -a random choice of which edge to follow and traverses this edge. -The final configuration is thus outputted. -This PRNG is formalized in Algorithm~\ref{CI Algorithm}. - - - -\vspace{-1em} -\begin{algorithm}[ht] -%\begin{scriptsize} -\KwIn{a function $f$, an iteration number $b$, an initial configuration $x^0$ ($n$ bits)} -\KwOut{a configuration $x$ ($n$ bits)} -$x\leftarrow x^0$\; -\For{$i=0,\dots,b-1$} -{ -$s\leftarrow{\textit{Random}(n)}$\; -$x\leftarrow{F_f(s,x)}$\; -} -return $x$\; -%\end{scriptsize} -\caption{Pseudo Code of the $\chi_{\textit{14Secrypt}}$ PRNG} -\label{CI Algorithm} -\end{algorithm} -\vspace{-0.5em} -This PRNG is a particularized version of Algorithm given in~\cite{BCGR11}. -Compared to this latter, the length of the random -walk of our algorithm is always constant (and is equal to $b$) whereas it -was given by a second PRNG in this latter. -However, all the theoretical results that are given in~\cite{BCGR11} remain -true since the proofs do not rely on this fact. - -Let $f: \Bool^{n} \rightarrow \Bool^{n}$. -It has been shown~\cite[Th. 4, p. 135]{BCGR11}} that -if its iteration graph is strongly connected, then -the output of $\chi_{\textit{14Secrypt}}$ follows -a law that tends to the uniform distribution -if and only if its Markov matrix is a doubly stochastic matrix. - -Let us now present a method to -generate functions -with Doubly Stochastic matrix and Strongly Connected iteration graph, - denoted as DSSC matrix.
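As a companion to the pseudocode removed above, the constant-length random walk of $\chi_{\textit{14Secrypt}}$ can be sketched in Python. This is a sketch under assumptions: the placeholder map `neg` stands in for a DSSC function (which the article constructs later), and Python's `random` stands in for the PRNG \textit{Random}.

```python
import random

def F(f, i, x):
    """F_f(i, x): replace the i-th component (1-based) of x by f_i(x)."""
    y = list(x)
    y[i - 1] = f(x)[i - 1]
    return tuple(y)

def chi_14secrypt(f, n, b, x0, rng=random):
    """Random walk of constant length b in Gamma(f), starting from x0."""
    x = x0
    for _ in range(b):
        s = rng.randint(1, n)  # Random(n): uniform index in [1, n]
        x = F(f, s, x)
    return x

# Placeholder map (an assumption): bitwise negation; not a DSSC function.
neg = lambda x: tuple(1 - v for v in x)
random.seed(0)
out = chi_14secrypt(neg, 3, 10, (0, 0, 0))
print(out)  # a 3-bit configuration
```

Unlike the algorithm of~\cite{BCGR11}, the walk length here is the constant $b$, matching the removed description.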