From: Jean-François Couchot
Date: Mon, 16 Feb 2015 14:04:20 +0000 (+0100)
Subject: addition of stopping
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/rairo15.git/commitdiff_plain/586298b57c6e1cec5566bbe692ac21c4c701fd51?ds=inline

addition of stopping
---

diff --git a/main.tex b/main.tex
index 233d448..340e8d3 100644
--- a/main.tex
+++ b/main.tex
@@ -8,6 +8,22 @@
 \usepackage{algorithm2e}
+\usepackage[latin1]{inputenc}
+\usepackage[T1]{fontenc}
+\usepackage[english]{babel}
+\usepackage{amsmath,amssymb,latexsym,eufrak,euscript}
+\usepackage{pstricks,pst-node,pst-coil}
+
+
+\usepackage{url,tikz}
+\usepackage{pgflibrarysnakes}
+
+\usepackage{multicol}
+
+\usetikzlibrary{arrows}
+\usetikzlibrary{automata}
+\usetikzlibrary{snakes}
+\usetikzlibrary{shapes}
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@@ -28,6 +44,16 @@
 }
 }
 
+
+
+\newcommand {\tv}[1] {\lVert #1 \rVert_{\rm TV}}
+\def \top {1.8}
+\def \topt {2.3}
+\def \P {\mathbb{P}}
+\def \ov {\overline}
+\def \ts {\tau_{\rm stop}}
+
+
 % \theoremstyle{plain}
 % \theoremheaderfont{\normalfont\bfseries\sc}
 % \theorembodyfont{\slshape}
@@ -95,9 +121,7 @@
 may be updated at each iteration. At the theoretical level, we show that
 \input{preliminaries}
 \section{Stopping Time}
-% Give the bound on the stopping time when one walks inside it (new).
-% State the problem of the size of this bound (it is admittedly finite, but large).
-
+\input{stopping}
 \section{Jumping in a specific $n$-cube}
 % Then propose jumps in this n-cube (new)
diff --git a/preliminaries.tex b/preliminaries.tex
new file mode 100644
index 0000000..6d837a1
--- /dev/null
+++ b/preliminaries.tex
@@ -0,0 +1,167 @@
+
+
+
+In what follows, we consider the Boolean algebra on the set
+$\Bool=\{0,1\}$ with the classical operators of conjunction '.',
+of disjunction '+', of negation '$\overline{~}$', and of
+disjunctive union $\oplus$.
+
+Let $n$ be a positive integer. A \emph{Boolean map} $f$ is
+a function from the Boolean domain $\Bool^n$ to itself
+such that
+$x=(x_1,\dots,x_n)$ maps to $f(x)=(f_1(x),\dots,f_n(x))$.
+Functions are iterated as follows.
+At the $t^{th}$ iteration, only the $s_{t}$-th component is
+``iterated'', where $s = \left(s_t\right)_{t \in \mathds{N}}$ is a sequence of indices taken in $\llbracket 1;n \rrbracket$ called a ``strategy''. Formally,
+let $F_f: \llbracket1;n\rrbracket\times \Bool^{n} \rightarrow \Bool^n$ be defined by
+\[
+F_f(i,x)=(x_1,\dots,x_{i-1},f_i(x),x_{i+1},\dots,x_n).
+\]
+Then, given an initial configuration $x^0\in\Bool^n$ and a strategy $s\in
+\llbracket1;n\rrbracket^\Nats$,
+the dynamics are described by the recurrence
+\begin{equation}\label{eq:asyn}
+x^{t+1}=F_f(s_t,x^t).
+\end{equation}
+
+
+Let a Boolean map $f$ be given. Its associated
+\emph{iteration graph} $\Gamma(f)$ is the
+directed graph whose set of vertices is
+$\Bool^n$ and that contains, for all $x\in\Bool^n$ and $i\in \llbracket1;n\rrbracket$,
+an arc from $x$ to $F_f(i,x)$.
+
+\begin{xpl}
+Let us consider for instance $n=3$.
+Let
+$f^*: \Bool^3 \rightarrow \Bool^3$ be defined by
+\[
+f^*(x_1,x_2,x_3) =
+(x_2 \oplus x_3, \overline{x_1}\overline{x_3} + x_1\overline{x_2},
+\overline{x_1}\overline{x_3} + x_1x_2).
+\]
+The iteration graph $\Gamma(f^*)$ of this function is given in
+Figure~\ref{fig:iteration:f*}.
+
+\vspace{-1em}
+\begin{figure}[ht]
+\begin{center}
+\includegraphics[scale=0.5]{images/iter_f0b}
+\end{center}
+\vspace{-0.5em}
+\caption{Iteration graph $\Gamma(f^*)$ of the function $f^*$}\label{fig:iteration:f*}
+\end{figure}
+\end{xpl}
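+As an illustration only (a minimal Python sketch, not part of the formal
+development; the names \texttt{f\_star} and \texttt{F} are ours), the map
+$F_f$ and the arcs of $\Gamma(f^*)$ can be enumerated as follows:
+\begin{verbatim}
+from itertools import product
+
+def f_star(x):
+    # f*(x1, x2, x3) from the example above, with bits as 0/1 integers
+    x1, x2, x3 = x
+    return (x2 ^ x3,
+            ((1 - x1) & (1 - x3)) | (x1 & (1 - x2)),
+            ((1 - x1) & (1 - x3)) | (x1 & x2))
+
+def F(f, i, x):
+    # F_f(i, x): update only the i-th component (1-based), keep the rest
+    y = list(x)
+    y[i - 1] = f(x)[i - 1]
+    return tuple(y)
+
+# arcs of Gamma(f*): one arc x -> F_f(i, x) for every x and i
+arcs = [(x, F(f_star, i, x))
+        for x in product((0, 1), repeat=3) for i in (1, 2, 3)]
+\end{verbatim}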
+
+\vspace{-0.5em}
+It is easy to associate a Markov matrix $M$ to such a graph $\Gamma(f)$
+as follows:
+
+$M_{ij} = \frac{1}{n}$ if there is an edge from $i$ to $j$ in $\Gamma(f)$ and $i \neq j$; $M_{ii} = 1 - \sum\limits_{j=1, j\neq i}^n M_{ij}$; and $M_{ij} = 0$ otherwise.
+
+\begin{xpl}
+The Markov matrix associated with the function $f^*$ is
+
+\[
+M=\dfrac{1}{3} \left(
+\begin{array}{llllllll}
+1&1&1&0&0&0&0&0 \\
+1&1&0&0&0&1&0&0 \\
+0&0&1&1&0&0&1&0 \\
+0&1&1&1&0&0&0&0 \\
+1&0&0&0&1&0&1&0 \\
+0&0&0&0&1&1&0&1 \\
+0&0&0&0&1&0&1&1 \\
+0&0&0&1&0&1&0&1
+\end{array}
+\right)
+\]
+\end{xpl}
+
+It is usual to check whether the rows of such matrices
+converge to a specific
+distribution.
+Let us first recall the \emph{total variation} distance $\tv{\pi-\mu}$,
+which is defined for two distributions $\pi$ and $\mu$ on the same set
+$\Omega$ by:
+$$\tv{\pi-\mu}=\max_{A\subset \Omega} |\pi(A)-\mu(A)|.$$
+% It is known that
+% $$\tv{\pi-\mu}=\frac{1}{2}\sum_{x\in\Omega}|\pi(x)-\mu(x)|.$$
+
+Let then $M(x,\cdot)$ be the
+distribution induced by the $x$-th row of $M$. If the Markov chain
+induced by
+$M$ has a stationary distribution $\pi$, then we define
+$$d(t)=\max_{x\in\Omega}\tv{M^t(x,\cdot)-\pi}.$$
+Intuitively, $d(t)$ is the largest deviation between
+the distribution $\pi$ and $M^t(x,\cdot)$, which
+is the distribution obtained after iterating the function $t$ times.
+Finally, for a positive number $\varepsilon$, the \emph{mixing time}
+with respect to $\varepsilon$ is given by
+$$t_{\rm mix}(\varepsilon)=\min\{t \mid d(t)\leq \varepsilon\}.$$
+It defines the smallest number of iterations
+that is sufficient to obtain a deviation smaller than $\varepsilon$.
+% Notice that the upper and lower bounds of mixing times cannot
+% directly be computed with eigenvalue formulae as expressed
+% in~\cite[Chap. 12]{LevinPeresWilmer2006}. The authors of this latter work
+% only consider reversible Markov matrices whereas we do not restrict our
+% matrices to such a form.
+
+
+
+Let us finally present the pseudorandom number generator $\chi_{\textit{14Secrypt}}$,
+which is based on random walks in $\Gamma(f)$.
+More precisely, we are given a Boolean map $f:\Bool^n \rightarrow \Bool^n$,
+a PRNG \textit{Random},
+an integer $b$ that corresponds to the expected mixing time, and
+an initial configuration $x^0$.
+Starting from $x^0$, the algorithm repeats $b$ times
+a random choice of which edge to follow, and traverses this edge.
+The final configuration is then output.
+This PRNG is formalized in Algorithm~\ref{CI Algorithm}.
+
+
+
+\vspace{-1em}
+\begin{algorithm}[ht]
+%\begin{scriptsize}
+\KwIn{a function $f$, an iteration number $b$, an initial configuration $x^0$ ($n$ bits)}
+\KwOut{a configuration $x$ ($n$ bits)}
+$x\leftarrow x^0$\;
+\For{$i=0,\dots,b-1$}
+{
+$s\leftarrow{\textit{Random}(n)}$\;
+$x\leftarrow{F_f(s,x)}$\;
+}
+return $x$\;
+%\end{scriptsize}
+\caption{Pseudo code of the $\chi_{\textit{14Secrypt}}$ PRNG}
+\label{CI Algorithm}
+\end{algorithm}
+\vspace{-0.5em}
+This PRNG is a specialized version of the algorithm given in~\cite{BCGR11}.
+Compared with the latter, the length of the random
+walk in our algorithm is always constant (and equal to $b$), whereas it
+was given by a second PRNG in~\cite{BCGR11}.
+However, all the theoretical results given in~\cite{BCGR11} remain
+true since the proofs do not rely on this fact.
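+As an illustration only (a minimal Python sketch, reusing the hypothetical
+helpers \texttt{F} and \texttt{f\_star} introduced earlier), the random walk
+of Algorithm~\ref{CI Algorithm} reads:
+\begin{verbatim}
+import random
+
+def chi_14secrypt(f, b, x0, n):
+    # b steps of the random walk in Gamma(f), started at x0
+    x = x0
+    for _ in range(b):
+        s = random.randrange(1, n + 1)  # the call Random(n)
+        x = F(f, s, x)                  # follow one edge of Gamma(f)
+    return x
+
+# e.g. chi_14secrypt(f_star, b=64, x0=(0, 0, 0), n=3)
+\end{verbatim}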
+
+Let $f: \Bool^{n} \rightarrow \Bool^{n}$.
+It has been shown~\cite[Th. 4, p. 135]{BCGR11} that
+if its iteration graph is strongly connected, then
+the output of $\chi_{\textit{14Secrypt}}$ follows
+a law that tends to the uniform distribution
+if and only if its Markov matrix is a doubly stochastic matrix.
+
+Let us now present a method to
+generate functions
+with a Doubly Stochastic matrix and a Strongly Connected iteration graph,
+denoted as DSSC matrix.
+
diff --git a/stopping.tex b/stopping.tex
new file mode 100644
index 0000000..5bf7425
--- /dev/null
+++ b/stopping.tex
@@ -0,0 +1,280 @@
+\documentclass{article}
+%\usepackage{prentcsmacro}
+%\sloppy
+\usepackage[a4paper]{geometry}
+\geometry{hmargin=3cm, vmargin=3cm }
+
+\usepackage[latin1]{inputenc}
+\usepackage[T1]{fontenc}
+\usepackage[english]{babel}
+\usepackage{amsmath,amssymb,latexsym,eufrak,euscript}
+\usepackage{subfigure,pstricks,pst-node,pst-coil}
+
+
+\usepackage{url,tikz}
+\usepackage{pgflibrarysnakes}
+
+\usepackage{multicol}
+
+\usetikzlibrary{arrows}
+\usetikzlibrary{automata}
+\usetikzlibrary{snakes}
+\usetikzlibrary{shapes}
+
+%% \setlength{\oddsidemargin}{15mm}
+%% \setlength{\evensidemargin}{15mm} \setlength{\textwidth}{140mm}
+%% \setlength{\textheight}{219mm} \setlength{\topmargin}{5mm}
+\newtheorem{theorem}{Theorem}
+%\newtheorem{definition}[theorem]{Definition}
+% %\newtheorem{defis}[thm]{D\'efinitions}
+\newtheorem{example}[theorem]{Example}
+% %\newtheorem{Exes}[thm]{Exemples}
+\newtheorem{lemma}[theorem]{Lemma}
+\newtheorem{proposition}[theorem]{Proposition}
+\newtheorem{construction}[theorem]{Construction}
+\newtheorem{corollary}[theorem]{Corollary}
+% \newtheorem{algor}[thm]{Algorithm}
+%\newtheorem{propdef}[thm]{Proposition-D\'efinition}
+\newcommand{\mlabel}[1]{\label{#1}\marginpar{\fbox{#1}}}
+\newcommand{\flsup}[1]{\stackrel{#1}{\longrightarrow}}
+
+\newcommand{\stirlingtwo}[2]{\genfrac{\lbrace}{\rbrace}{0pt}{}{#1}{#2}}
+\newcommand{\stirlingone}[2]{\genfrac{\lbrack}{\rbrack}{0pt}{}{#1}{#2}}
+
+\newenvironment{algo}
+{ \vspace{1em}
+\begin{algor}\mbox
+\newline \vspace{-0.1em}
+\begin{quote}\begin{rm}}
+{\end{rm}\end{quote}\end{algor}\vspace{-1.5em}\vspace{2em}}
+%\null \hfill $\diamondsuit$ \par\medskip \vspace{1em}}
+
+\newenvironment{exe}
+{\begin{example}\rm }
+{\end{example}
+%\vspace*{-1.5em}
+%\null \hfill $\triangledown$ \par\medskip}
+%\null \hfill $\triangledown$ \par\medskip \vspace{1em}}
+}
+
+
+\newenvironment{proof}
+{ \noindent {\sc Proof.\/} }
+{\null \hfill $\Box$ \par\medskip \vspace{1em}}
+
+
+
+\newcommand {\tv}[1] {\lVert #1 \rVert_{\rm TV}}
+\def \top {1.8}
+\def \topt {2.3}
+\def \P {\mathbb{P}}
+\def \ov {\overline}
+\def \ts {\tau_{\rm stop}}
+\begin{document}
+\label{firstpage}
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\section{Mathematical Background}
+
+
+
+Let $\pi$ and $\mu$ be two distributions on the same set $\Omega$. The total
+variation distance between $\pi$ and $\mu$ is denoted $\tv{\pi-\mu}$ and is
+defined by
+$$\tv{\pi-\mu}=\max_{A\subset \Omega} |\pi(A)-\mu(A)|.$$ It is known that
+$$\tv{\pi-\mu}=\frac{1}{2}\sum_{x\in\Omega}|\pi(x)-\mu(x)|.$$ Moreover, if
+$\nu$ is a distribution on $\Omega$, one has
+$$\tv{\pi-\mu}\leq \tv{\pi-\nu}+\tv{\nu-\mu}.$$
+
+Let $P$ be the matrix of a Markov chain on $\Omega$. $P(x,\cdot)$ is the
+distribution induced by the $x$-th row of $P$. If the Markov chain induced by
+$P$ has a stationary distribution $\pi$, then we define
+$$d(t)=\max_{x\in\Omega}\tv{P^t(x,\cdot)-\pi},$$
+and
+
+$$t_{\rm mix}(\varepsilon)=\min\{t \mid d(t)\leq \varepsilon\}.$$
+One can prove that
+
+$$t_{\rm mix}(\varepsilon)\leq \lceil\log_2(\varepsilon^{-1})\rceil t_{\rm mix}\left(\frac{1}{4}\right).$$
+
+It is known that $d(t+1)\leq d(t)$.
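+As an illustration only (a minimal Python/NumPy sketch; the function names
+are ours), $\tv{\cdot}$, $d(t)$, and $t_{\rm mix}(\varepsilon)$ can be
+evaluated numerically for a given stochastic matrix $P$:
+\begin{verbatim}
+import numpy as np
+
+def tv(pi, mu):
+    # ||pi - mu||_TV = 1/2 * sum_x |pi(x) - mu(x)|
+    return 0.5 * np.abs(pi - mu).sum()
+
+def d(P, pi, t):
+    # largest deviation from pi over all starting states x
+    Pt = np.linalg.matrix_power(P, t)
+    return max(tv(row, pi) for row in Pt)
+
+def t_mix(P, pi, eps):
+    # smallest t with d(t) <= eps; d is non-increasing in t
+    t = 1
+    while d(P, pi, t) > eps:
+        t += 1
+    return t
+\end{verbatim}
+For the doubly stochastic matrix $M$ of the previous section, $\pi$ would be
+the uniform distribution over the $2^3$ configurations.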
+
+
+%% A {\it coupling} with transition matrix $P$ is a process $(X_t,Y_t)_{t\geq 0}$
+%% such that both $(X_t)$ and $(Y_t)$ are Markov chains of matrix $P$; moreover
+%% it is required that if $X_s=Y_s$, then for any $t\geq s$, $X_t=Y_t$.
+%% A result states that if $(X_t,Y_t)_{t\geq 0}$ is a coupling, then
+%% $$d(t)\leq \max_{x,y} P_{x,y}(\{\tau_{\rm couple} \geq t\}),$$
+%% with $\tau_{\rm couple}=\min_t\{X_t=Y_t\}$.
+
+
+Let $(X_t)_{t\in \mathbb{N}}$ be a sequence of $\Omega$-valued random
+variables. An $\mathbb{N}$-valued random variable $\tau$ is a {\it stopping
+time} for the sequence $(X_i)$ if for each $t$ there exists $B_t\subseteq
+\Omega^{t+1}$ such that $\{\tau=t\}=\{(X_0,X_1,\ldots,X_t)\in B_t\}$.
+
+Let $(X_t)_{t\in \mathbb{N}}$ be a Markov chain and $f(X_{t-1},Z_t)$ a
+random mapping representation of the Markov chain. A {\it randomized
+stopping time} for the Markov chain is a stopping time for
+$(Z_t)_{t\in\mathbb{N}}$. If the Markov chain is irreducible and has $\pi$
+as stationary distribution, then a {\it stationary time} $\tau$ is a
+randomized stopping time (possibly depending on the starting position $x$)
+such that the distribution of $X_\tau$ is $\pi$:
+$$\P_x(X_\tau=y)=\pi(y).$$
+A stationary time $\tau$ is said to be {\it strong} if $X_\tau$ is
+independent of $\tau$.
+
+\begin{proposition}
+If $\tau$ is a strong stationary time, then $d(t)\leq \max_{x\in\Omega}
+\P_x(\tau > t)$.
+\end{proposition}
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\section{Random walk on the modified Hypercube}
+
+
+Let $\Omega=\{0,1\}^N$ be the set of words of length $N$. Let $E=\{(x,y)\mid
+x\in \Omega, y\in \Omega,\ x=y \text{ or } x\oplus y \in 0^*10^*\}$. Let $h$
+be a function from $\Omega$ into $\{1,\ldots,N\}$.
+
+We denote by $E_h$ the set $E\setminus\{(x,y)\mid x\oplus y =
+0^{N-h(x)}10^{h(x)-1}\}$. We define the matrix $P_h$ as follows:
+$$\left\{
+\begin{array}{ll}
+P_h(x,y)=0 & \text{ if } (x,y)\notin E_h,\\
+P_h(x,x)=\frac{1}{2}+\frac{1}{2N}, & \\
+P_h(x,y)=\frac{1}{2N} & \text{ otherwise.}
+\end{array}
+\right.
+$$
+
+We denote by $\ov{h}$ the function from $\Omega$ into $\Omega$ defined
+by $x\oplus\ov{h}(x)=0^{N-h(x)}10^{h(x)-1}.$
+The function $\ov{h}$ is said to be {\it square-free} if for every $x\in \Omega$,
+$\ov{h}(\ov{h}(x))\neq x$.
+
+\begin{lemma}\label{lm:h}
+If $\ov{h}$ is bijective and square-free, then $h(\ov{h}^{-1}(x))\neq h(x)$.
+\end{lemma}
+
+\begin{proof}
+Let $y=\ov{h}^{-1}(x)$ and assume that $h(y)=h(x)$. Since $x=\ov{h}(y)$,
+we have $x\oplus y=0^{N-h(y)}10^{h(y)-1}=0^{N-h(x)}10^{h(x)-1}$, and thus
+$\ov{h}(x)=x\oplus 0^{N-h(x)}10^{h(x)-1}=y$. Hence
+$\ov{h}(\ov{h}(x))=\ov{h}(y)=x$, which contradicts the square-freeness
+of $\ov{h}$.
+\end{proof}
+
+Let $Z$ be a random variable uniformly distributed over
+$\{1,\ldots,N\}\times\{0,1\}$. For $X\in \Omega$, we
+define, with $Z=(i,x)$,
+$$
+\left\{
+\begin{array}{ll}
+f(X,Z)=X\oplus (0^{N-i}10^{i-1}) & \text{if } x=1 \text{ and } i\neq h(X),\\
+f(X,Z)=X& \text{otherwise.}
+\end{array}\right.
+$$
+
+The pair $(f,Z)$ is a random mapping representation of $P_h$.
+
+
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\section{Stopping time}
+
+An integer $\ell\in\{1,\ldots,N\}$ is said to be {\it fair} at time $t$ if there
+exists $0\leq j
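+Going back to the random mapping representation above, an illustration-only
+Python sketch of one step $X_t=f(X_{t-1},Z_t)$ of the walk driven by $P_h$
+follows; the function \texttt{h} below is a placeholder, and any map into
+$\{1,\ldots,N\}$ could be plugged in:
+\begin{verbatim}
+import random
+
+N = 4
+def h(X):
+    # hypothetical choice of the removed direction for state X
+    return (X % N) + 1
+
+def step(X):
+    # draw Z = (i, x) uniformly over {1,...,N} x {0,1}
+    i = random.randrange(1, N + 1)
+    x = random.randrange(2)
+    if x == 1 and i != h(X):
+        return X ^ (1 << (i - 1))  # flip bit i: X xor 0^(N-i) 1 0^(i-1)
+    return X                       # stay put, probability 1/2 + 1/(2N)
+
+X = 0                              # a word of {0,1}^N encoded as an integer
+for _ in range(100):
+    X = step(X)
+\end{verbatim}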