Let such a map be given.
This article focuses on studying its iterations according to
equation~(\ref{eq:asyn}) with a given strategy.
First of all, iterating the map can be interpreted as walking in its
iteration graph, where the choice of the edge to follow is decided by the
strategy.
Notice that the iteration graph is always a subgraph of the
${\mathsf{N}}$-cube augmented with all the self-loops, \textit{i.e.}, all
the edges $(v,v)$ for $v \in \Bool^{\mathsf{N}}$.
Next, if probabilities are added on the transition graph, iterations can
be interpreted as Markov chains.

\begin{xpl}
Let us consider for instance the graph $\Gamma(f^*)$ defined in
Figure~\ref{fig:iteration:f*} and the probability function $p$ defined on
the set of edges as follows:
$$
p(e) = \left\{
\begin{array}{ll}
\frac{2}{3} & \textrm{if $e=(v,v)$ with $v \in \Bool^3$,}\\[1mm]
\frac{1}{6} & \textrm{otherwise.}
\end{array}
\right.
$$
The matrix $P$ of the Markov chain associated with the function $f^*$ and
with this probability function $p$ is
\[
P=\dfrac{1}{6} \left(
\begin{array}{cccccccc}
4&1&1&0&0&0&0&0 \\
1&4&0&0&0&1&0&0 \\
0&0&4&1&0&0&1&0 \\
0&1&1&4&0&0&0&0 \\
1&0&0&0&4&0&1&0 \\
0&0&0&0&1&4&0&1 \\
0&0&0&0&1&0&4&1 \\
0&0&0&1&0&1&0&4
\end{array}
\right)
\]
\end{xpl}
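One can check on this example that each column of $P$, and not only each
row, sums to $1$: the matrix $P$ is doubly stochastic. Consequently, the
uniform distribution $\pi(X)=\frac{1}{8}$ on $\Bool^3$ is stationary for
this chain, since for every $Y\in\Bool^3$,
$$
(\pi P)(Y)=\sum_{X\in\Bool^3}\pi(X)P(X,Y)
=\frac{1}{8}\sum_{X\in\Bool^3}P(X,Y)=\frac{1}{8}=\pi(Y).
$$
This uniform distribution is precisely the limit behavior targeted in the
study below.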
This section considers functions $f: \Bool^n \rightarrow \Bool^n$
obtained from a hypercube where a Hamiltonian path has been removed.
A specific random walk in this modified hypercube is first introduced; an
upper bound on the number of steps needed to get close to the uniform
distribution is then established.

Let us first recall the \emph{Total Variation} distance, which is defined
for two distributions $\pi$ and $\mu$ on the same set $\Bool^n$ by
$$\tv{\pi-\mu}=\max_{A\subset \Bool^n} |\pi(A)-\mu(A)|.$$
It is known that
$$\tv{\pi-\mu}=\frac{1}{2}\sum_{X\in\Bool^n}|\pi(X)-\mu(X)|.$$
Moreover, if
$\nu$ is a distribution on $\Bool^n$, one has
$$\tv{\pi-\mu}\leq \tv{\pi-\nu}+\tv{\nu-\mu}.$$

Let $P$ be the matrix of a Markov chain on $\Bool^n$. $P(X,\cdot)$ is the
distribution induced by the $X$-th row of $P$. If the Markov chain
induced by $P$ has a stationary distribution $\pi$, then we define
$$d(t)=\max_{X\in\Bool^n}\tv{P^t(X,\cdot)-\pi}$$
and the \emph{mixing time} with respect to $\varepsilon$ by
$$t_{\rm mix}(\varepsilon)=\min\{t \mid d(t)\leq \varepsilon\}.$$
Intuitively, $d(t)$ is the largest deviation between the stationary
distribution $\pi$ and the distribution of the chain after $t$
iterations, and $t_{\rm mix}(\varepsilon)$ is the smallest number of
iterations that is sufficient to obtain a deviation smaller than
$\varepsilon$. One can prove that
$$t_{\rm mix}(\varepsilon)\leq \lceil\log_2(\varepsilon^{-1})\rceil
t_{\rm mix}\left(\frac{1}{4}\right).$$
Notice that the upper and lower bounds on mixing times that are expressed
with eigenvalue formulae, as in~\cite[Chap.~12]{LevinPeresWilmer2006},
cannot be directly applied here: the authors of this latter work only
consider reversible Markov matrices, whereas we do not restrict our
matrices to such a form.

A \emph{stopping time} for the Markov chain $(X_t)_{t\in\mathbb{N}}$ is a
random variable $\tau$ with values in $\mathbb{N}$ such that, for every
$t$, the event $\{\tau = t\}$ belongs to the $\sigma$-algebra generated
by $(X_0,X_1,\ldots,X_t)$.
In other words, the event $\{\tau = t \}$ only depends on the values of
$(X_0,X_1,\ldots,X_t)$, not on $X_k$ with $k > t$.

Let $(X_t)_{t\in \mathbb{N}}$ be a Markov chain and $f(X_{t-1},Z_t)$ a
random mapping representation of the Markov chain. A {\it randomized
stopping time} for the Markov chain is a stopping time for the sequence
$(Z_t)_{t\in\mathbb{N}}$. A {\it strong stationary time} is a randomized
stopping time $\tau$ such that the distribution of $X_\tau$ is $\pi$,
independently of the value of $\tau$:
$$\P_X(X_\tau=Y)=\pi(Y).$$

\begin{Theo}
If $\tau$ is a strong stationary time, then $d(t)\leq \max_{X\in\Bool^n}
\P_X(\tau > t)$.
\end{Theo}
A proof of this classical result can be found, \textit{e.g.},
in~\cite[Chap.~6]{LevinPeresWilmer2006}.
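The following classical example (which deals with the full hypercube,
without any removed edge, and is thus not the walk studied in the sequel)
illustrates how a strong stationary time is typically exhibited and how
the previous theorem is applied.

\begin{xpl}
Consider the random walk on $\Bool^n$ in which, at each step, an index
$i$ is drawn uniformly in $[n]$, a bit $b$ is drawn uniformly in $\Bool$,
and the $i$-th bit of the current configuration is replaced by $b$. Let
$\tau$ be the first time at which every index of $[n]$ has been drawn at
least once. At time $\tau$, every bit of the configuration has been
replaced by an independent uniform bit: $X_\tau$ is thus uniformly
distributed, whatever the value of $\tau$, so that $\tau$ is a strong
stationary time. The coupon collector argument shows that
$\P(\tau > \lceil n\ln n + cn\rceil)\leq e^{-c}$ for every $c>0$; the
previous theorem then yields $d(\lceil n\ln n + cn\rceil)\leq e^{-c}$.
\end{xpl}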
In the sequel, the random walk on the modified hypercube is thus written
$$ X_t= f(X_{t-1},Z_t),$$
where $Z_t=(i,b)$ is drawn uniformly at random: $i$ is uniform in $[n]$
and $b$ is uniform in $\Bool$.

%%%%%%%%%%%%%%%%%%%%%%%%%%%

$b=1$ with probability $\frac{1}{2}$ and $b=0$ with probability
$\frac{1}{2}$. Since $h(X_{\tau_\ell-1})\neq\ell$, the value of the
$\ell$-th bit of $X_{\tau_\ell}$ is $0$ or $1$ with the same probability
($\frac{1}{2}$). Moving forward in the chain, at each subsequent step the
$\ell$-th bit is switched from $0$ to $1$ or from $1$ to $0$ with the
same probability. Therefore, for $t\geq \tau_\ell$, the $\ell$-th bit of
$X_t$ is $0$ or $1$ with the same probability, which proves the
lemma.\end{proof}

\begin{Theo} \label{prop:stop}
If $\ov{h}$ is bijective and square-free, then
$E[\ts]\leq 8n^2+n\ln(n+1)$.
\end{Theo}

For every $X\in\Bool^n$ and every $\ell\in[n]$,
let $S_{X,\ell}$ be the random variable that counts the number of steps
from $X$ until we reach a configuration where $\ell$ is fair. More
formally,
$$S_{X,\ell}=\min \{t \geq 1\mid h(X_{t-1})\neq \ell\text{ and
}Z_t=(\ell,\cdot)\text{ and } X_0=X\}.$$
We then set
$$\lambda_h=\max_{X,\ell} S_{X,\ell}.$$

\begin{Lemma}\label{prop:lambda}
If $\ov{h}$ is a square-free bijective function, then
$E[\lambda_h]\leq 8n^2$.
\end{Lemma}

\begin{proof}
For every $X$ and every $\ell$, one has $\P(S_{X,\ell}\leq 2)\geq
\frac{1}{4n^2}$. Indeed, let $X_0=X$ and consider the two following
cases.
\begin{itemize}
\item If $h(X)\neq \ell$, then
$\P(S_{X,\ell}=1)=\frac{1}{2n}\geq \frac{1}{4n^2}$.
\item Otherwise, $h(X)=\ell$ and thus $\P(S_{X,\ell}=1)=0$. But in this
case the chain can move from $X$ to $\ov{h}^{-1}(X)$ (with probability
$\frac{1}{2n}$), and in $\ov{h}^{-1}(X)$ the $\ell$-th bit can be
switched. More formally, since $\ov{h}$ is square-free,
$\ov{h}(X)=\ov{h}(\ov{h}(\ov{h}^{-1}(X)))\neq \ov{h}^{-1}(X)$. It follows
that $(X,\ov{h}^{-1}(X))\in E_h$. We thus have
$\P(X_1=\ov{h}^{-1}(X))=\frac{1}{2n}$. Now, by Lemma~\ref{lm:h},
$h(\ov{h}^{-1}(X))\neq h(X)$. Therefore
$\P(S_{X,\ell}=2\mid X_1=\ov{h}^{-1}(X))=\frac{1}{2n}$, proving that
$\P(S_{X,\ell}\leq 2)\geq \frac{1}{4n^2}$.
\end{itemize}

Therefore, $\P(S_{X,\ell}\geq 3)\leq 1-\frac{1}{4n^2}$. Since, from any
configuration, $\ell$ becomes fair within two steps with probability at
least $\frac{1}{4n^2}$, one obtains by induction that, for every $i$,
$\P(S_{X,\ell}\geq 2i+1)\leq \left(1-\frac{1}{4n^2}\right)^i$. Moreover,
since $S_{X,\ell}$ is positive, it is known~\cite[Lemma~2.9]{proba} that
$$E[S_{X,\ell}]=\sum_{i=1}^{+\infty}\P(S_{X,\ell}\geq i).$$
Since $\P(S_{X,\ell}\geq i)\geq \P(S_{X,\ell}\geq i+1)$, one has
$$E[S_{X,\ell}]=\sum_{i=1}^{+\infty}\P(S_{X,\ell}\geq i)\leq
\P(S_{X,\ell}\geq 1)+\P(S_{X,\ell}\geq 2)+2
\sum_{i=1}^{+\infty}\P(S_{X,\ell}\geq 2i+1).$$
Consequently,
$$E[S_{X,\ell}]\leq 1+1+2
\sum_{i=1}^{+\infty}\left(1-\frac{1}{4n^2}\right)^i=2+2(4n^2-1)=8n^2,$$
which concludes the proof.
\end{proof}

\begin{Lemma}\label{lm:stopprime}
Let $\ts^\prime$ be the first time at which all the elements of $[n]$
are fair. Then $E[\ts^\prime]\leq n\ln(n+1)$.
\end{Lemma}

\begin{proof}
A coupon collector argument over the $n$ elements of $[n]$ yields
$E[\ts^\prime]\leq n\left(\ln(n+1)-\frac{1}{2}\right)\leq n\ln(n+1)$.
\end{proof}

One can now prove Theorem~\ref{prop:stop}.

\begin{proof}
One has $\ts\leq \ts^\prime+\lambda_h$. Therefore,
Theorem~\ref{prop:stop} is a direct application of
Lemmas~\ref{prop:lambda} and~\ref{lm:stopprime}.
\end{proof}

Notice that the upper bound on the stationary time has been obtained
under the following constraint only: for each vertex of the
$\mathsf{N}$-cube, one incoming arc and one outgoing arc are removed.
The computation does not take advantage of (balanced) Hamiltonian cycles,
which are more regular and more constrained than this general setting.
In this latter context, we claim that the upper bound on the stopping
time can be reduced.
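To give an idea of the orders of magnitude involved, a direct numerical
application of the bound $E[\ts]\leq 8n^2+n\ln(n+1)$ of
Theorem~\ref{prop:stop} gives
$$
n=3:\quad E[\ts]\leq 72+3\ln 4\approx 76.2,
\qquad
n=10:\quad E[\ts]\leq 800+10\ln 11\approx 824.
$$
The quadratic term $8n^2$, inherited from Lemma~\ref{prop:lambda},
clearly dominates; a finer analysis in the Hamiltonian cycle case would
thus mainly have to improve this term.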