+
+Before studying the topological behavior of the general chaotic
+iterations, we must first establish the following result:
+
+\begin{proposition}
+ For all $f:\mathds{B}^\mathsf{N} \longrightarrow \mathds{B}^\mathsf{N} $, the function $G_f$ is continuous on
+$\left( \mathcal{X},d\right)$.
+\end{proposition}
+
+
+\begin{proof}
+We use sequential continuity.
+Let $(S^n,E^n)_{n\in \mathds{N}}$ be a sequence of points of the phase space $%
+\mathcal{X}$ that converges to $(S,E)$. We will prove that $\left(
+G_{f}(S^n,E^n)\right) _{n\in \mathds{N}}$ converges to $\left(
+G_{f}(S,E)\right) $. Let us remark that for all $n$, $S^n$ is a strategy;
+thus we consider a sequence of strategies (\emph{i.e.}, a sequence of
+sequences).\newline
+As $d((S^n,E^n);(S,E))$ converges to 0, each distance $d_{e}(E^n,E)$ and $d_{s}(S^n,S)$ converges
+to 0. But $d_{e}(E^n,E)$ is an integer, so $\exists n_{0}\in \mathds{N},$ $%
+d_{e}(E^n,E)=0$ for any $n\geqslant n_{0}$.\newline
+In other words, there exists a threshold $n_{0}\in \mathds{N}$ after which no
+cell will change its state:
+$\exists n_{0}\in \mathds{N},n\geqslant n_{0}\Rightarrow E^n = E.$
+
+In addition, $d_{s}(S^n,S)\longrightarrow 0,$ so $\exists n_{1}\in %
+\mathds{N},d_{s}(S^n,S)<10^{-1}$ for all indices greater than or equal to $%
+n_{1}$. This means that for $n\geqslant n_{1}$, all the $S^n$ have the same
+first term, which is $S_0$: $\forall n\geqslant n_{1},S_0^n=S_0.$
+
+Thus, after the $\max(n_{0},n_{1})^{\textrm{th}}$ term, the states of $E^n$ and $E$ are
+identical and the strategies $S^n$ and $S$ start with the same first term.\newline
+Consequently, the states of $G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ are equal,
+so, after the $\max(n_0, n_1)^{\textrm{th}}$ term, the distance $d$ between these two points is strictly less than 1.\newline
+\noindent We now prove that the distance between $G_{f}(S^n,E^n)$ and
+$G_{f}(S,E)$ converges to 0. Let $\varepsilon >0$. \medskip
+\begin{itemize}
+\item If $\varepsilon \geqslant 1$, we have seen that the distance
+between $G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ is
+strictly less than 1 after the $\max(n_{0},n_{1})^{\textrm{th}}$ term (same state), hence it is less than $\varepsilon$.
+\medskip
+\item If $\varepsilon <1$, then $\exists k\in \mathds{N},10^{-k}\geqslant
+\varepsilon > 10^{-(k+1)}$. But $d_{s}(S^n,S)$ converges to 0, so
+\begin{equation*}
+\exists n_{2}\in \mathds{N},\forall n\geqslant
+n_{2},d_{s}(S^n,S)<10^{-(k+2)},
+\end{equation*}%
+thus after $n_{2}$, the first $k+2$ terms of $S^n$ and $S$ are equal.
+\end{itemize}
+\noindent As a consequence, the first $k+1$ terms of the strategies of $%
+G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ are the same ($G_{f}$ shifts the strategies) and, due to the definition of $d_{s}$, the fractional part of
+the distance between $G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ is strictly less than $%
+10^{-(k+1)}\leqslant \varepsilon $.\bigskip \newline
+In conclusion,
+$$
+\forall \varepsilon >0,\exists N_{0}=\max(n_{0},n_{1},n_{2})\in \mathds{N}%
+,\forall n\geqslant N_{0},
+ d\left( G_{f}(S^n,E^n);G_{f}(S,E)\right)
+\leqslant \varepsilon .
+$$
+$G_{f}$ is consequently continuous.
+\end{proof}
+
+
+It is now possible to study the topological behavior of the general chaotic
+iterations. We will prove that:
+
+\begin{theorem}
+\label{t:chaos des general}
+ The general chaotic iterations defined in Equation~\ref{general CIs} satisfy
+Devaney's property of chaos.
+\end{theorem}
+
+Let us first prove the following lemma.
+
+\begin{lemma}[Strong transitivity]
+\label{strongTrans}
+ For every couple $X,Y \in \mathcal{X}$ and any neighborhood $V$ of $X$, we can
+find $n \in \mathds{N}^*$ and $X' \in V$ such that $G_f^n(X')=Y$.
+\end{lemma}
+
+\begin{proof}
+ Let $X=(S,E)$, $\varepsilon>0$, and $k_0 = \lfloor -\log_{10}(\varepsilon) \rfloor + 1$.
+Any point $X'=(S',E')$ such that $E'=E$ and $\forall k \leqslant k_0, S'^k=S^k$,
+is in the open ball $\mathcal{B}\left(X,\varepsilon\right)$. Let us define
+$\check{X} = \left(\check{S},\check{E}\right)$, where $\check{X}= G_f^{k_0}(X)$.
+We denote by $s\subset \llbracket 1; \mathsf{N} \rrbracket$ the set of coordinates
+that are different between $\check{E}$ and the state of $Y$. Thus any point $X'$ of
+the form $(S',E')$, where $E'=E$ and $S'$ starts with
+$(S^0, S^1, \hdots, S^{k_0},s,\hdots)$, satisfies the following properties:
+\begin{itemize}
+ \item $X'$ is in $\mathcal{B}\left(X,\varepsilon\right)$,
+ \item the state of $G_f^{k_0+1}(X')$ is the state of $Y$.
+\end{itemize}
+Finally the point $\left(\left(S^0, S^1, \hdots, S^{k_0},s,s^0, s^1, \hdots\right); E\right)$,
+where $(s^0,s^1, \hdots)$ is the strategy of $Y$, satisfies the properties
+claimed in the lemma.
+\end{proof}
+
+We can now prove Theorem~\ref{t:chaos des general}.
+
+\begin{proof}[Proof of Theorem~\ref{t:chaos des general}]
+Firstly, strong transitivity implies transitivity.
+
+Let $(S,E) \in\mathcal{X}$ and $\varepsilon >0$. To
+prove that $G_f$ is regular, it is sufficient to prove that
+there exists a strategy $\tilde S$ such that the distance between
+$(\tilde S,E)$ and $(S,E)$ is less than $\varepsilon$, and such that
+$(\tilde S,E)$ is a periodic point.
+
+Let $t_1=\lfloor-\log_{10}(\varepsilon)\rfloor$, and let $E'$ be the
+configuration that we obtain from $(S,E)$ after $t_1$ iterations of
+$G_f$. As $G_f$ is strongly transitive, there exists a strategy $S'$
+and $t_2\in\mathds{N}$ such
+that $E$ is reached from $(S',E')$ after $t_2$ iterations of $G_f$.
+
+Consider the strategy $\tilde S$ that alternates the first $t_1$ terms
+of $S$ and the first $t_2$ terms of $S'$: $$\tilde
+S=(S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots).$$ It
+is clear that $(\tilde S,E)$ is mapped onto itself after
+$t_1+t_2$ iterations of $G_f$. So $(\tilde S,E)$ is a periodic
+point. Since $\tilde S_t=S_t$ for $t<t_1$, by the choice of $t_1$, we
+have $d((S,E),(\tilde S,E))<\varepsilon$. Consequently $G_f$ is regular and
+transitive, and thus chaotic according to Devaney, the sensitive dependence on
+initial conditions being a consequence of these two properties.
+\end{proof}
+
+
+
+\section{Efficient PRNG based on Chaotic Iterations}
+
+In order to implement a PRNG based on chaotic iterations efficiently, it is
+possible to improve on previous works [ref]. One solution consists in considering
+that the strategy contains all the bits whose negation must be
+applied. Then, to negate these bits, we can simply
+apply the xor operator between the current number and the strategy. In
+order to obtain the strategy, we also use a classical PRNG.
+
+Here is an example with 16-bit numbers showing how the bitwise operations are
+applied. Suppose that $x$ and the strategy $S^i$ are given in binary.
+Then the following table shows the result of $x \oplus S^i$.
+$$
+\begin{array}{|cc|cccccccccccccccc|}
+\hline
+x &=&1&0&1&1&1&0&1&0&1&0&0&1&0&0&1&0\\
+\hline
+S^i &=&0&1&1&0&0&1&1&0&1&1&1&0&0&1&1&1\\
+\hline
+x \oplus S^i&=&1&1&0&1&1&1&0&0&0&1&1&1&0&1&0&1\\
+\hline
+ \end{array}
+$$
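+
+The same operation can be written directly in C. The following minimal sketch
+simply reuses the values of the table above (the variable names and the
+\texttt{main} wrapper are ours) and shows that negating exactly the bits selected
+by the strategy amounts to a single xor instruction:
+
+\lstset{language=C,caption={Bitwise negation through xor (16-bit example)},label=lst:xorexample}
+\begin{lstlisting}
+#include <stdio.h>
+#include <stdint.h>
+
+int main(void) {
+  /* values of x and S^i taken from the table above */
+  uint16_t x = 0xBA92;   /* 1011101010010010 */
+  uint16_t S = 0x66E7;   /* 0110011011100111 */
+  /* negating exactly the bits selected by S is one xor */
+  x ^= S;
+  printf("%04X\n", (unsigned int)x);  /* prints DC75 = 1101110001110101 */
+  return 0;
+}
+\end{lstlisting}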
+
+
+
+
+\lstset{language=C,caption={C code of the sequential chaotic iterations based
+PRNG},label=algo:seqCIprng}
+\begin{lstlisting}
+/* Sequential chaotic iterations based PRNG.
+   unsigned long is assumed to be 64 bits wide (e.g., LP64 platforms). */
+unsigned int CIprng() {
+  static unsigned int x = 123123123;  // 32-bit internal state (initial seed)
+  unsigned long t1 = xorshift();      // three 64-bit xor-like PRNGs
+  unsigned long t2 = xor128();
+  unsigned long t3 = xorwow();
+  x = x^(unsigned int)t1;             // 32 least significant bits of t1
+  x = x^(unsigned int)(t2>>32);       // 32 most significant bits of t2
+  x = x^(unsigned int)(t3>>32);
+  x = x^(unsigned int)t2;
+  x = x^(unsigned int)(t1>>32);
+  x = x^(unsigned int)t3;
+  return x;                           // new 32-bit pseudorandom number
+}
+\end{lstlisting}
+
+
+
+
+
+Listing~\ref{algo:seqCIprng} presents a sequential version of our chaotic
+iterations based PRNG. The xor operator is represented by \textasciicircum.
+This function uses three classical 64-bit PRNGs: \texttt{xorshift},
+\texttt{xor128}, and \texttt{xorwow}. In the following, we call them
+xor-like PRNGs. These three PRNGs are presented in~\cite{Marsaglia2003}. As
+each xor-like PRNG works with 64 bits whereas our PRNG works with 32 bits,
+the cast \texttt{(unsigned int)} selects the 32 least significant bits, whereas
+\texttt{(unsigned int)(t3$>>$32)} selects the 32 most significant bits of the
+variable \texttt{t3}. Producing one random number thus requires 6 xor operations
+on six 32-bit numbers obtained from three 64-bit PRNGs. This version passes the
+BigCrush battery of TestU01~\cite{LEcuyerS07}.
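+
+For completeness, we recall the general shape of these xor-like generators: a few
+shifts and xors applied to a small internal state. As an illustration only, the
+sketch below reproduces Marsaglia's \texttt{xor128}~\cite{Marsaglia2003} with its
+original 32-bit-oriented parameters; the 64-bit variants called in
+Listing~\ref{algo:seqCIprng} follow the same pattern but keep a wider internal
+state.
+
+\lstset{language=C,caption={Sketch of Marsaglia's xor128 generator},label=lst:xorlike}
+\begin{lstlisting}
+/* Marsaglia's xor128: four words of internal state,
+   three shifts and four xors per call. */
+unsigned long xor128(void) {
+  static unsigned long x = 123456789, y = 362436069,
+                       z = 521288629, w = 88675123;
+  unsigned long t = x ^ (x << 11);
+  x = y; y = z; z = w;
+  w = w ^ (w >> 19) ^ (t ^ (t >> 8));
+  return w;
+}
+\end{lstlisting}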
+
+\section{Efficient PRNG based on Chaotic Iterations on GPU}
+
+In order to benefit from the computing power of GPUs, a program needs to define
+independent blocks of threads that can be computed simultaneously. In general,
+the best performance is obtained on GPU when a large number of threads is used,
+when little local memory is consumed, and when few branching instructions
+(if, while, ...) are executed. Starting from Listing~\ref{algo:seqCIprng} presented
+in the previous section, it is therefore possible to build a similar program that
+computes pseudorandom numbers on GPU. In the CUDA [ref] environment, threads have a
+local identifier, called \texttt{ThreadIdx}, relative to the block containing them.
+
+
+\subsection{Naive version for GPU}
+
+From the CPU version, it is possible to obtain a quite similar version for GPU.
+The principle consists in assigning to each thread of the GPU the computation of
+one PRNG as in the sequential case. Of course, it is essential that the three
+xor-like PRNGs used for this computation have different parameters, so we choose
+them randomly with another PRNG. As the initialization is performed by the CPU, we
+have chosen to use the ISAAC PRNG [ref] to initialize all the parameters of the
+GPU version of our PRNG. The implementation of the three xor-like PRNGs is
+straightforward as soon as their parameters have been allocated in the GPU
+memory. Each xor-like PRNG works with an internal number $x$ that stores the
+last generated pseudorandom number. Other internal variables are also used by the
+xor-like PRNGs. More precisely, the implementations of the xor128, the xorshift,
+and the xorwow respectively require 4, 5, and 6 unsigned longs as internal
+variables.
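+
+As a sketch, the per-thread state stored in global memory could therefore be
+organized as follows; the structure and its field names are ours, and only the
+sizes come from the counts given above.
+
+\lstset{language=C,caption={A possible layout of the per-thread internal variables},label=lst:threadstate}
+\begin{lstlisting}
+/* Hypothetical layout; only the sizes follow the text:
+   xor128, xorshift and xorwow keep 4, 5 and 6 unsigned
+   longs, plus the 32-bit internal number x of our PRNG. */
+typedef struct {
+  unsigned long xor128_state[4];
+  unsigned long xorshift_state[5];
+  unsigned long xorwow_state[6];
+  unsigned int  x;   /* last generated 32-bit number */
+} thread_state_t;
+\end{lstlisting}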
+
+\begin{algorithm}
+
+\KwIn{InternalVarXorLikeArray: array with internal variables of the 3 xor-like
+PRNGs in global memory\;
+NumThreads: Number of threads\;}
+\KwOut{NewNb: array containing random numbers in global memory}
+\If{threadIdx is concerned by the computation} {
+ retrieve data from InternalVarXorLikeArray[threadIdx] in local variables\;
+ \For{i=1 to n} {
+ compute a new pseudorandom number as in Listing~\ref{algo:seqCIprng}\;
+ store it in NewNb[NumThreads*threadIdx+i]\;
+ }
+ store internal variables in InternalVarXorLikeArray[threadIdx]\;
+}
+
+\caption{Main kernel of the naive GPU version of the chaotic iterations based PRNG}
+\label{algo:gpu_kernel}
+\end{algorithm}
+
+Algorithm~\ref{algo:gpu_kernel} presents a naive implementation of this PRNG on
+GPU. Depending on the memory available in the GPU and on the number of threads
+used simultaneously, the number of random numbers that a thread can generate
+inside a kernel is limited; this is the variable \texttt{n} in
+Algorithm~\ref{algo:gpu_kernel}. For example, if $100,000$ threads are used and
+if $n=100$\footnote{In fact, we also need to store the initial seed (a 32-bit number).},
+then the memory required to store the internal variables of the xor-like
+PRNGs\footnote{We multiply this number by $2$ in order to count 32-bit numbers.}
+and the random numbers produced by our PRNG is equal to $100,000\times ((4+5+6)\times
+2+(1+100))=13,100,000$ 32-bit numbers, that is, about $52$MB.
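+
+As an illustration only, such a kernel could be written in CUDA as sketched
+below. The kernel name, the launch geometry, the indexing of \texttt{NewNb}, and
+the device functions \texttt{xorshift\_d}, \texttt{xor128\_d}, and
+\texttt{xorwow\_d} (assumed to update the state they receive) are ours; the
+hypothetical \texttt{thread\_state\_t} structure is the one sketched above.
+
+\lstset{language=C,caption={Sketch of a CUDA implementation of the naive kernel},label=lst:gpunaive}
+\begin{lstlisting}
+__global__ void ci_prng_naive(thread_state_t *states,
+                              unsigned int *NewNb,
+                              int nb_threads, int n) {
+  int tid = blockIdx.x * blockDim.x + threadIdx.x;
+  if (tid < nb_threads) {            // thread concerned by the computation
+    thread_state_t s = states[tid];  // retrieve data in local variables
+    for (int i = 0; i < n; i++) {
+      unsigned long t1 = xorshift_d(&s);
+      unsigned long t2 = xor128_d(&s);
+      unsigned long t3 = xorwow_d(&s);
+      s.x ^= (unsigned int)t1;            // same six xors as in the
+      s.x ^= (unsigned int)(t2 >> 32);    // sequential CIprng function
+      s.x ^= (unsigned int)(t3 >> 32);
+      s.x ^= (unsigned int)t2;
+      s.x ^= (unsigned int)(t1 >> 32);
+      s.x ^= (unsigned int)t3;
+      NewNb[(size_t)n * tid + i] = s.x;   // contiguous slice per thread
+    }
+    states[tid] = s;                      // store internal variables back
+  }
+}
+\end{lstlisting}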
+
+All the tests performed with the BigCrush battery of TestU01 were successful.
+Different numbers of threads, denoted by \texttt{NumThreads} in our algorithm,
+have been tested, up to $10$ million.
+
+\begin{remark}
+Algorithm~\ref{algo:gpu_kernel} has the advantage of manipulating independent
+PRNGs, so this version can easily be used on a cluster of computers. The only
+requirement is to use a single ISAAC PRNG. For this, a simple solution consists in
+using a master node for the initialization; this node computes the initial
+parameters for all the different nodes involved in the computation.
+\end{remark}
+
+\subsection{Improved version for GPU}
+
+As GPU cards using CUDA have shared memory between the threads of the same block,
+it is possible to use this feature in order to simplify the previous algorithm,
+i.e., to use fewer than 3 xor-like PRNGs. The solution consists in computing only
+one xor-like PRNG per thread, saving it into the shared memory, and using the
+results of some other threads of the same block. In order to define which
+thread uses the result of which other one, we use a permutation array that
+contains the indexes of all the threads and on which a permutation has been
+applied. In Algorithm~\ref{algo:gpu_kernel2}, two permutation arrays are used.
+The variable \texttt{offset} is computed using the value of
+\texttt{permutation\_size}. Then \texttt{o1} and \texttt{o2} represent the
+indexes of the two other threads whose results are used
+by the current thread. In this algorithm, we consider that a 64-bit xor-like
+PRNG is used; that is why both of its 32-bit parts are exploited.
+
+This version also passes the BigCrush battery of tests.
+
+\begin{algorithm}
+
+\KwIn{InternalVarXorLikeArray: array with internal variables of 1 xor-like PRNGs
+in global memory\;
+NumThreads: Number of threads\;
+tab1, tab2: Arrays containing permutations of size permutation\_size\;}
+
+\KwOut{NewNb: array containing random numbers in global memory}
+\If{threadId is concerned} {
+ retrieve data from InternalVarXorLikeArray[threadId] in local variables\;
+ offset = threadId\%permutation\_size\;
+ o1 = threadId-offset+tab1[offset]\;
+ o2 = threadId-offset+tab2[offset]\;
+ \For{i=1 to n} {
+ t=xor-like()\;
+ shared\_mem[threadId]=(unsigned int)t\;
+ x = x $\oplus$ (unsigned int) t\;
+ x = x $\oplus$ (unsigned int) (t>>32)\;
+ x = x $\oplus$ shared\_mem[o1]\;
+ x = x $\oplus$ shared\_mem[o2]\;
+
+ store the new number x in NewNb[NumThreads*threadId+i]\;
+ }
+ store internal variables in InternalVarXorLikeArray[threadId]\;
+}
+
+\caption{Main kernel of the improved GPU version of the chaotic iterations based
+PRNG}
+\label{algo:gpu_kernel2}
+\end{algorithm}
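+
+Again purely as an illustration, Algorithm~\ref{algo:gpu_kernel2} could be
+realized in CUDA as sketched below. The kernel name, the hypothetical
+\texttt{xorlike\_state\_t} structure and \texttt{xorlike\_d} device function, and
+the indexing of \texttt{NewNb} are ours; we assume that the kernel is launched
+with one 32-bit cell of dynamic shared memory per thread of the block and that
+every launched thread is active, so that the block-wide synchronizations (left
+implicit in the pseudocode) are safe.
+
+\lstset{language=C,caption={Sketch of a CUDA implementation of the improved kernel},label=lst:gpushared}
+\begin{lstlisting}
+__global__ void ci_prng_shared(xorlike_state_t *states,
+                               unsigned int *NewNb,
+                               const int *tab1, const int *tab2,
+                               int permutation_size, int n) {
+  extern __shared__ unsigned int shared_mem[];  // one cell per thread
+  int tid = blockIdx.x * blockDim.x + threadIdx.x;
+  xorlike_state_t s = states[tid];   // retrieve data in local variables
+  unsigned int x = s.x;
+  int offset = threadIdx.x % permutation_size;
+  int o1 = threadIdx.x - offset + tab1[offset];
+  int o2 = threadIdx.x - offset + tab2[offset];
+  for (int i = 0; i < n; i++) {
+    unsigned long t = xorlike_d(&s); // one 64-bit xor-like PRNG per thread
+    shared_mem[threadIdx.x] = (unsigned int)t;
+    __syncthreads();                 // make t visible to threads o1 and o2
+    x ^= (unsigned int)t;            // low 32 bits
+    x ^= (unsigned int)(t >> 32);    // high 32 bits
+    x ^= shared_mem[o1];             // values produced by two other threads
+    x ^= shared_mem[o2];
+    NewNb[(size_t)n * tid + i] = x;  // store the produced number
+    __syncthreads();                 // before shared_mem is overwritten
+  }
+  s.x = x;
+  states[tid] = s;                   // store internal variables back
+}
+\end{lstlisting}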
+
+\subsection{Theoretical Evaluation of the Improved Version}
+
+A run of Algorithm~\ref{algo:gpu_kernel2} consists of four operations having
+the form of Equation~\ref{equation Oplus}, which is equivalent to the iterative
+system of Eq.~\ref{eq:generalIC}. That is, four iterations of the general chaotic
+iterations are realized between two stored values of the PRNG.
+To be certain that we are in the framework of Theorem~\ref{t:chaos des general},
+we must guarantee that this dynamical system iterates on the space
+$\mathcal{X} = \mathcal{P}\left(\llbracket 1, \mathsf{N} \rrbracket\right)^\mathds{N}\times\mathds{B}^\mathsf{N}$.
+The left term $x$ obviously belongs to $\mathds{B}^\mathsf{N}$.
+To prevent any flaw in the chaotic properties, we must check that each right
+term, corresponding to a term of the strategy, can be equal to any
+integer of $\llbracket 1, \mathsf{N} \rrbracket$.
+
+Such a result is obvious for the first two lines: for the xor-like(), all the
+integers belonging to its interval of definition can occur at each iteration.
+It can easily be established for the last two lines by an immediate mathematical
+induction.
+
+Thus Algorithm~\ref{algo:gpu_kernel2} is a concrete realization of the general
+chaotic iterations presented previously, and for this reason it satisfies
+Devaney's formulation of chaotic behavior.
+
+\section{Experiments}
+
+Different experiments have been performed in order to measure the generation
+speed.
+\begin{figure}[t]
+\begin{center}
+ \includegraphics[scale=.7]{curve_time_gpu.pdf}
+\end{center}
+\caption{Number of random numbers generated per second}
+\label{fig:time_naive_gpu}
+\end{figure}
+
+
+First of all, we have compared the time required to generate X random numbers with
+both the CPU version and the GPU version.
+
+A curve of the number of random numbers generated as a function of the number of
+threads, and possibly of the number of threads per block, remains to be produced.