Let $f$ be a map from $\Bool^n$ to itself such that $x=(x_1,\dots,x_n)$
maps to $f(x)=(f_1(x),\dots,f_n(x))$.
Functions are iterated as follows.
At the $t^{th}$ iteration, only the $s_t$-th component is said to be
``iterated'', where $s = \left(s_t\right)_{t \in \mathds{N}}$ is a sequence
of indices taken in $\llbracket 1;n \rrbracket$ called the ``strategy''.
Formally, let $F_f: \llbracket 1;n \rrbracket \times \Bool^{n} \rightarrow
\Bool^n$ be defined by
\[
F_f(i,x)=(x_1,\dots,x_{i-1},f_i(x),x_{i+1},\dots,x_n).
\]
The iterations of $f$ induced by a strategy $s$ are then defined by an
initial configuration $x^0 \in \Bool^n$ and the recurrence
\begin{equation}
x^{t+1} = F_f(s_t, x^t).
\label{eq:asyn}
\end{equation}

The \emph{iteration graph} $\Gamma(f)$ is the directed graph whose vertices
are the elements of $\Bool^n$ and such that, for all $x \in \Bool^n$ and
all $i \in \llbracket 1;n \rrbracket$, the graph $\Gamma(f)$ contains an
arc from $x$ to $F_f(i,x)$.

\begin{xpl}
Let us consider for instance $n=3$.
Let $f^*: \Bool^3 \rightarrow \Bool^3$ be defined by
$f^*(x_1,x_2,x_3) = (x_2 \oplus x_3, \overline{x_1}\overline{x_3} +
x_1\overline{x_2}, \overline{x_1}\overline{x_3} + x_1x_2)$.
The iteration graph $\Gamma(f^*)$ of this function is given in
Figure~\ref{fig:iteration:f*}.

\vspace{-1em}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.5]{images/iter_f0c}
\end{center}
\vspace{-0.5em}
\caption{Iteration graph $\Gamma(f^*)$ of the function
$f^*$}\label{fig:iteration:f*}
\end{figure}
\end{xpl}

Let such a map $f$ be given.
This article focuses on the study of its iterations according to
equation~(\ref{eq:asyn}) with a given strategy.
First of all, these iterations can be interpreted as a walk in the
iteration graph, where the choice of the edge to follow is decided by the
strategy.
Notice that the iteration graph is always a subgraph of the $n$-cube
augmented with all the self-loops, \textit{i.e.}, all the edges $(v,v)$
for $v \in \Bool^n$.
Next, if probabilities are added on the transition graph, the iterations
can be interpreted as a Markov chain.

Let $\pi$ and $\mu$ be two distributions on the same set $\Omega$. The
total variation distance between $\pi$ and $\mu$ is denoted by
$\tv{\pi-\mu}$ and is defined by
$$\tv{\pi-\mu}=\max_{A\subset \Omega} |\pi(A)-\mu(A)|.$$
It is known that
$$\tv{\pi-\mu}=\frac{1}{2}\sum_{x\in\Omega}|\pi(x)-\mu(x)|.$$
Moreover, if $\nu$ is a distribution on $\Omega$, the triangle inequality
holds:
$$\tv{\pi-\mu}\leq \tv{\pi-\nu}+\tv{\nu-\mu}.$$
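Both formulas can be checked on a small example; the two distributions
below are chosen only for illustration.

\begin{xpl}
Let $\Omega = \Bool^2$, let $\pi$ be the uniform distribution on $\Omega$
(so $\pi(x)=\frac{1}{4}$ for each $x$), and let $\mu$ be the Dirac
distribution concentrated on $(0,0)$. The maximum of $|\pi(A)-\mu(A)|$ is
reached at $A=\{(0,0)\}$ (or at its complement), so that
$\tv{\pi-\mu} = \left|1-\frac{1}{4}\right| = \frac{3}{4}$.
The summation formula gives the same value:
$$\frac{1}{2}\sum_{x\in\Omega}|\pi(x)-\mu(x)|
 = \frac{1}{2}\left(\left|\frac{1}{4}-1\right|
 + 3\times\left|\frac{1}{4}-0\right|\right)
 = \frac{3}{4}.$$
In general, the maximum is attained at $A=\{x \mid \pi(x)\geq\mu(x)\}$.
\end{xpl}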
Let $P$ be the transition matrix of a Markov chain on $\Omega$, and let
$P(x,\cdot)$ denote the distribution induced by the $x$-th row of $P$. If
the Markov chain induced by $P$ has a stationary distribution $\pi$, then
we define
$$d(t)=\max_{x\in\Omega}\tv{P^t(x,\cdot)-\pi},$$
and
$$t_{\rm mix}(\varepsilon)=\min\{t \mid d(t)\leq \varepsilon\}.$$
One can prove that
$$t_{\rm mix}(\varepsilon)\leq \lceil\log_2(\varepsilon^{-1})\rceil
t_{\rm mix}\left(\frac{1}{4}\right).$$
It is also known that $d(t+1)\leq d(t)$.

Let $(X_t)_{t\in \mathbb{N}}$ be a sequence of $\Omega$-valued random
variables. An $\mathbb{N}$-valued random variable $\tau$ is a {\it stopping
  time} for the sequence $(X_t)$ if for each $t$ there exists
$B_t\subseteq \Omega^{t+1}$ such that
$\{\tau=t\}=\{(X_0,X_1,\ldots,X_t)\in B_t\}$.

Let $(X_t)_{t\in \mathbb{N}}$ be a Markov chain and $f(X_{t-1},Z_t)$ a
random mapping representation of this Markov chain. A {\it randomized
  stopping time} for the Markov chain is a stopping time for
$(Z_t)_{t\in\mathbb{N}}$. If the Markov chain is irreducible and has $\pi$
as stationary distribution, then a {\it stationary time} $\tau$ is a
randomized stopping time (possibly depending on the starting position $x$)
such that the distribution of $X_\tau$ is $\pi$:
$$\P_x(X_\tau=y)=\pi(y).$$
A stationary time $\tau$ is a {\it strong stationary time} if, moreover,
$X_\tau$ is independent of $\tau$.

\JFC{Where has this been proved?}
\begin{Theo}
If $\tau$ is a strong stationary time, then $d(t)\leq \max_{x\in\Omega}
\P_x(\tau > t)$.
\end{Theo}

Intuitively, $d(t)$ is the largest deviation between the distribution
$\pi$ and $P^t(x,\cdot)$, which is the distribution obtained after
iterating the function $t$ times.
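To relate these notions to the iterations studied in this article, the
following example instantiates them on the function $f^*$ introduced
above. The weighting chosen below, in which each of the $n=3$ arcs leaving
a vertex of $\Gamma(f^*)$ is followed with probability $\frac{1}{3}$
(self-loops included), is only one natural possibility.

\begin{xpl}
With this weighting, the Markov chain on $\Bool^3$ associated with
$\Gamma(f^*)$ has transition matrix
\[
P=\dfrac{1}{3} \left(
\begin{array}{cccccccc}
1&1&1&0&0&0&0&0 \\
1&1&0&0&0&1&0&0 \\
0&0&1&1&0&0&1&0 \\
0&1&1&1&0&0&0&0 \\
1&0&0&0&1&0&1&0 \\
0&0&0&0&1&1&0&1 \\
0&0&0&0&1&0&1&1 \\
0&0&0&1&0&1&0&1
\end{array}
\right),
\]
rows and columns being indexed by the elements of $\Bool^3$ in
lexicographic order ($000, 001, \ldots, 111$). Every column of $P$ sums to
$1$, so $P$ is doubly stochastic and the uniform distribution on $\Bool^3$
is stationary for $P$. Here, $d(t)$ thus measures how far the chain, after
$t$ iterations from the worst-case starting configuration, remains from
the uniform distribution.
\end{xpl}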