X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/rairo15.git/blobdiff_plain/6537ad6b39c8648e4c3597b469e79e505193e358..ac6bb3190276a11caaed94b7443f95f697328bcd:/stopping.tex

diff --git a/stopping.tex b/stopping.tex
index 9664549..0ad3e8a 100644
--- a/stopping.tex
+++ b/stopping.tex
@@ -1,3 +1,80 @@
+
+
+
+Let us thus consider such a map.
+This article focuses on studying its iterations according to
+equation~(\ref{eq:asyn}) with a given strategy.
+First of all, iterating the map can be interpreted as walking in its
+iteration graph, where the choice of the edge to follow is decided by the strategy.
+Notice that the iteration graph is always a subgraph of the
+${\mathsf{N}}$-cube augmented with all the self-loops, \textit{i.e.}, all the
+edges $(v,v)$ for any $v \in \Bool^{\mathsf{N}}$.
+Next, if we add probabilities to the transition graph, iterations can be
+interpreted as Markov chains.
+
+\begin{xpl}
+Let us consider for instance
+the graph $\Gamma(f^*)$ defined
+in \textsc{Figure~\ref{fig:iteration:f*}} and
+the probability function $p$ defined on the set of edges as follows:
+$$
+p(e) = \left\{
+\begin{array}{ll}
+\frac{2}{3} & \textrm{if $e=(v,v)$ with $v \in \Bool^3$,}\\
+\frac{1}{6} & \textrm{otherwise.}
+\end{array}
+\right.
+$$
+The matrix $P$ of the Markov chain associated with the function $f^*$ and
+with this probability function $p$ is
+\[
+P=\dfrac{1}{6} \left(
+\begin{array}{llllllll}
+4&1&1&0&0&0&0&0 \\
+1&4&0&0&0&1&0&0 \\
+0&0&4&1&0&0&1&0 \\
+0&1&1&4&0&0&0&0 \\
+1&0&0&0&4&0&1&0 \\
+0&0&0&0&1&4&0&1 \\
+0&0&0&0&1&0&4&1 \\
+0&0&0&1&0&1&0&4
+\end{array}
+\right)
+\]
+\end{xpl}
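As a quick cross-check of the example above (a minimal numerical sketch using numpy, not part of the paper's source), the matrix $P$ is doubly stochastic, so the uniform distribution over $\Bool^3$ is stationary for it:

import numpy as np

# Transition matrix of the example above: the self-loop of each vertex has
# probability 2/3 (diagonal entries 4/6) and each of its two remaining
# out-edges has probability 1/6.
P = np.array([
    [4, 1, 1, 0, 0, 0, 0, 0],
    [1, 4, 0, 0, 0, 1, 0, 0],
    [0, 0, 4, 1, 0, 0, 1, 0],
    [0, 1, 1, 4, 0, 0, 0, 0],
    [1, 0, 0, 0, 4, 0, 1, 0],
    [0, 0, 0, 0, 1, 4, 0, 1],
    [0, 0, 0, 0, 1, 0, 4, 1],
    [0, 0, 0, 1, 0, 1, 0, 4],
]) / 6

# Rows and columns all sum to 1: P is doubly stochastic, hence the uniform
# distribution pi on Bool^3 satisfies pi P = pi.
assert np.allclose(P.sum(axis=1), 1)
assert np.allclose(P.sum(axis=0), 1)

pi = np.full(8, 1 / 8)
assert np.allclose(pi @ P, pi)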
+
+
+% % Let us first recall the \emph{Total Variation} distance $\tv{\pi-\mu}$,
+% % which is defined for two distributions $\pi$ and $\mu$ on the same set
+% % $\Bool^n$ by:
+% % $$\tv{\pi-\mu}=\max_{A\subset \Bool^n} |\pi(A)-\mu(A)|.$$
+% % It is known that
+% % $$\tv{\pi-\mu}=\frac{1}{2}\sum_{x\in\Bool^n}|\pi(x)-\mu(x)|.$$
+
+% % Let then $M(x,\cdot)$ be the
+% % distribution induced by the $x$-th row of $M$. If the Markov chain
+% % induced by
+% % $M$ has a stationary distribution $\pi$, then we define
+% % $$d(t)=\max_{x\in\Bool^n}\tv{M^t(x,\cdot)-\pi}.$$
+% Intuitively $d(t)$ is the largest deviation between
+% the distribution $\pi$ and $M^t(x,\cdot)$, which
+% is the result of iterating $t$ times the function.
+% Finally, let $\varepsilon$ be a positive number, the \emph{mixing time}
+% with respect to $\varepsilon$ is given by
+% $$t_{\rm mix}(\varepsilon)=\min\{t \mid d(t)\leq \varepsilon\}.$$
+% It defines the smallest iteration number
+% that is sufficient to obtain a deviation lesser than $\varepsilon$.
+% Notice that the upper and lower bounds of mixing times cannot
+% directly be computed with eigenvalues formulae as expressed
+% in~\cite[Chap. 12]{LevinPeresWilmer2006}. The authors of this latter work
+% only consider reversible Markov matrices whereas we do not restrict our
+% matrices to such a form.
+
+
+
+
+
+
 This section considers functions $f: \Bool^n \rightarrow \Bool^n$
 obtained from a hypercube where a Hamiltonian path has been removed.
 A specific random walk in this modified hypercube is first
@@ -268,4 +345,10 @@
 Theorem~\ref{prop:stop} is a direct application of
 lemmas~\ref{prop:lambda} and~\ref{lm:stopprime}.
 \end{proof}
-
+Notice that the upper bound on the stationary time is computed
+under the following constraint: for each vertex of the $\mathsf{N}$-cube,
+exactly one ingoing arc and one outgoing arc are removed.
+The computation does not consider (balanced) Hamiltonian cycles, which
+are more regular and more restrictive than this constraint.
+In this latter context, we claim that the upper bound on the stopping time
+should be reduced.
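To make this constraint concrete, the following sketch removes a directed Hamiltonian cycle from the 3-cube and checks that every vertex loses exactly one ingoing and one outgoing arc. The reflected Gray code cycle is used purely as an illustration; it is not necessarily one of the balanced cycles the paper refers to.

from itertools import product

N = 3
vertices = [''.join(bits) for bits in product('01', repeat=N)]

# Directed arcs of the N-cube: (u, v) whenever u and v differ in exactly one bit.
cube = {(u, v) for u in vertices for v in vertices
        if sum(a != b for a, b in zip(u, v)) == 1}

# A Hamiltonian cycle of the 3-cube (reflected Gray code); an illustrative
# choice only, not necessarily balanced in the sense used in the paper.
cycle = ['000', '001', '011', '010', '110', '111', '101', '100']
removed = {(cycle[i], cycle[(i + 1) % len(cycle)]) for i in range(len(cycle))}

assert removed <= cube  # the cycle only uses arcs of the cube

# Each vertex loses exactly one outgoing and one ingoing arc, so it keeps
# N - 1 = 2 outgoing non-loop arcs besides its self-loop.
for v in vertices:
    assert sum(1 for (u, w) in removed if u == v) == 1
    assert sum(1 for (u, w) in removed if w == v) == 1
    assert sum(1 for (u, w) in cube - removed if u == v) == N - 1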