+Random numbers are used in many scientific applications and simulations. On
+finite state machines, such as computers, it is not possible to generate truly
+random numbers, but only pseudo-random ones. In practice, a good pseudo-random
+number generator (PRNG) must satisfy several requirements to be usable by
+scientists: the generation of pseudo-random numbers has to be efficient and
+reproducible, and the generator must pass the usual statistical tests. Finally,
+from our point of view, it is essential to prove that a PRNG is chaotic.
+Concerning the statistical tests, TestU01 is the best-known public-domain
+statistical testing package, so we use it for all our PRNGs, especially its
+{\it BigCrush} battery, which provides the largest series of tests. Concerning
+the chaotic properties, Devaney~\cite{Devaney} proposed a commonly used
+mathematical formulation of chaotic dynamical systems.
+
+In a previous work~\cite{bgw09:ip} we have proposed a new family of chaotic
+PRNGs based on chaotic iterations (IC). We have proven that these PRNGs are
+chaotic in the sense of Devaney. In this paper we propose a faster version
+which is also proven to be chaotic.
+
+Although graphics processing units (GPU) were initially designed to accelerate
+the manipulation of images, they are nowadays commonly used in many scientific
+applications. Therefore, it is important to be able to generate pseudo-random
+numbers inside a GPU when a scientific application runs on it. That is why we
+also provide an efficient GPU implementation of a PRNG based on IC. Such
+devices allow us to generate almost 20 billion random numbers per second.
+
+In order to establish that our PRNGs are chaotic according to Devaney's
+formulation, we extend what we have proposed in~\cite{guyeux10}. Moreover, we
+define a new distance to measure the disorder within chaos and we prove some
+interesting properties of this distance.
+
+The rest of this paper is organised as follows. In Section~\ref{section:related
+ works} we review some GPU implementations of PRNGs.
+Section~\ref{section:BASIC RECALLS} recalls basic notions on Devaney's
+formulation of chaos and on chaotic iterations. In
+Section~\ref{sec:pseudo-random} the chaotic nature of our PRNGs is proven.
+Section~\ref{sec:efficient prng} presents an efficient implementation of our
+chaotic PRNG on a CPU. Section~\ref{sec:efficient prng gpu} describes its GPU
+implementation. In Section~\ref{sec:experiments} some experiments are
+presented. Section~\ref{sec:de la relativité du désordre} discusses the
+relativity of disorder. In Section~\ref{sec: chaos order topology} we prove
+that chaotic iterations can be described by iterations on a real interval.
+Finally, we give a conclusion and some perspectives.
+
+
+
+
+\section{Related works on GPU based PRNGs}
+\label{section:related works}
+In the literature many authors have worked on defining GPU based PRNGs. We do
+not aim at being exhaustive here and only present the works that are, from our
+point of view, the most significant. When authors report the number of random
+numbers generated per second, we mention it. We consider that a million numbers
+per second corresponds to 1MSample/s and that a billion numbers per second
+corresponds to 1GSample/s.
+
+In \cite{Pang:2008:cec}, the authors define a PRNG based on cellular automata
+which requires neither high precision integer arithmetic nor bitwise
+operations. There is no mention of statistical tests, nor of a proof that this
+PRNG is chaotic. Concerning the speed of generation, they can generate about
+3.2MSample/s on a GeForce 7800 GTX GPU (which is quite old now).
+
+In \cite{ZRKB10}, the authors propose different versions of efficient GPU PRNGs
+based on Lagged Fibonacci or Hybrid Taus generators. They have used these
+PRNGs for Langevin simulations of biomolecules fully implemented on GPU. The
+performance of the GPU versions is far better than the one obtained with a
+CPU, and these PRNGs pass the {\it BigCrush} battery of TestU01. However, there
+is no mention that their PRNGs have mathematical chaos properties.
+
+
+The authors of~\cite{conf/fpga/ThomasHL09} have studied the implementation of
+some PRNGs on different computing architectures: CPU, field-programmable gate
+array (FPGA), GPU and massively parallel processors. This study is interesting
+because it shows the performance of the same PRNGs on different architectures.
+For example, the FPGA is globally the fastest architecture and it is also the
+most efficient one, because it provides the largest number of generated random
+numbers per joule. Concerning the GPU, the authors can generate between 11 and
+16GSample/s with a GTX 280 GPU. The drawback of this work is that those PRNGs
+only pass the {\it Crush} battery, which is easier than the {\it BigCrush} one.
+\newline
+\newline
+To the best of our knowledge, no GPU implementation has been proven to have
+chaotic properties.
+
+\section{Basic Recalls}
+\label{section:BASIC RECALLS}
+This section is devoted to basic definitions and terminologies in the fields of
+topological chaos and chaotic iterations.
+\subsection{Devaney's Chaotic Dynamical Systems}
+
+In the sequel $S^{n}$ denotes the $n^{th}$ term of a sequence $S$ and $V_{i}$
+denotes the $i^{th}$ component of a vector $V$. $f^{k}=f\circ ...\circ f$
+denotes the $k^{th}$ composition of a function $f$. Finally, the following
+notation is used: $\llbracket1;N\rrbracket=\{1,2,\hdots,N\}$.
+
+
+Consider a topological space $(\mathcal{X},\tau)$ and a continuous function $f :
+\mathcal{X} \rightarrow \mathcal{X}$.
+
+\begin{definition}
+$f$ is said to be \emph{topologically transitive} if, for any pair of open sets
+$U,V \subset \mathcal{X}$, there exists $k>0$ such that $f^k(U) \cap V \neq
+\varnothing$.
+\end{definition}
+
+\begin{definition}
+An element $x$ is a \emph{periodic point} for $f$ of period $n\in \mathds{N}^*$
+if $f^{n}(x)=x$.% The set of periodic points of $f$ is denoted $Per(f).$
+\end{definition}
+
+\begin{definition}
+$f$ is said to be \emph{regular} on $(\mathcal{X}, \tau)$ if the set of periodic
+points for $f$ is dense in $\mathcal{X}$: for any point $x$ in $\mathcal{X}$,
+any neighborhood of $x$ contains at least one periodic point (not necessarily
+of the same period).
+\end{definition}
+
+
+\begin{definition}[Devaney's formulation of chaos~\cite{Devaney}]
+$f$ is said to be \emph{chaotic} on $(\mathcal{X},\tau)$ if $f$ is regular and
+topologically transitive.
+\end{definition}
+
+The chaos property is strongly linked to the notion of ``sensitivity'', defined
+on a metric space $(\mathcal{X},d)$ by:
+
+\begin{definition}
+\label{sensitivity} $f$ has \emph{sensitive dependence on initial conditions}
+if there exists $\delta >0$ such that, for any $x\in \mathcal{X}$ and any
+neighborhood $V$ of $x$, there exist $y\in V$ and $n > 0$ such that
+$d\left(f^{n}(x), f^{n}(y)\right) >\delta $.
+
+$\delta$ is called the \emph{constant of sensitivity} of $f$.
+\end{definition}
+
+Indeed, Banks \emph{et al.} have proven in~\cite{Banks92} that when $f$ is
+chaotic and $(\mathcal{X}, d)$ is a metric space, then $f$ has the property of
+sensitive dependence on initial conditions (this property was formerly an
+element of the definition of chaos). To sum up, quoting Devaney
+in~\cite{Devaney}, a chaotic dynamical system ``is unpredictable because of the
+sensitive dependence on initial conditions. It cannot be broken down or
+simplified into two subsystems which do not interact because of topological
+transitivity. And in the midst of this random behavior, we nevertheless have an
+element of regularity''. Fundamentally different behaviors are consequently
+possible and occur in an unpredictable way.
+
+
+
+\subsection{Chaotic Iterations}
+\label{sec:chaotic iterations}
+
+
+Let us consider a \emph{system} with a finite number $\mathsf{N} \in
+\mathds{N}^*$ of elements (or \emph{cells}), so that each cell has a
+Boolean \emph{state}. Having $\mathsf{N}$ Boolean values for these
+ cells leads to the definition of a particular \emph{state of the
+system}. A sequence whose elements belong to $\llbracket 1;\mathsf{N}
+\rrbracket $ is called a \emph{strategy}. The set of all strategies is
+denoted by $\llbracket 1, \mathsf{N} \rrbracket^\mathds{N}.$
+
+\begin{definition}
+\label{Def:chaotic iterations}
+The set $\mathds{B}$ denoting $\{0,1\}$, let
+$f:\mathds{B}^{\mathsf{N}}\longrightarrow \mathds{B}^{\mathsf{N}}$ be
+a function and $S\in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}$ be a ``strategy''. The so-called
+\emph{chaotic iterations} are defined by $x^0\in
+\mathds{B}^{\mathsf{N}}$ and
+\begin{equation}
+\forall n\in \mathds{N}^{\ast }, \forall i\in
+\llbracket1;\mathsf{N}\rrbracket ,x_i^n=\left\{
+\begin{array}{ll}
+ x_i^{n-1} & \text{ if }S^n\neq i \\
+ \left(f(x^{n-1})\right)_{S^n} & \text{ if }S^n=i.
+\end{array}\right.
+\end{equation}
+\end{definition}
+
+In other words, at the $n^{th}$ iteration, only the $S^{n}-$th cell is
+\textquotedblleft iterated\textquotedblright . Note that in a more
+general formulation, $S^n$ can be a subset of components and
+$\left(f(x^{n-1})\right)_{S^{n}}$ can be replaced by
+$\left(f(x^{k})\right)_{S^{n}}$, where $k<n$, describing, for example,
+transmission delays~\cite{Robert1986,guyeux10}. Finally, let us remark that
+the term ``chaotic'', in the name of these iterations, has \emph{a
+priori} no link with the mathematical theory of chaos, presented above.
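+
+For instance, take $\mathsf{N}=2$, let $f$ be the vectorial negation
+$f(x_1,x_2)=(\overline{x_1},\overline{x_2})$, $x^0=(0,1)$, and let the strategy
+satisfy $S^1=1$, $S^2=2$, $S^3=1$. Then
+\[
+x^1=(1,1), \qquad x^2=(1,0), \qquad x^3=(0,0):
+\]
+at each iteration, only the component designated by the current term of the
+strategy is replaced by the corresponding component of $f$ applied to the
+previous state.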
+
+
+Let us now recall how to define a suitable metric space where chaotic iterations
+are continuous. For further explanations, see, e.g., \cite{guyeux10}.
+
+Let $\delta $ be the \emph{discrete Boolean metric}, $\delta
+(x,y)=0\Leftrightarrow x=y.$ Given a function $f$, define the function:
+\begin{equation}
+\begin{array}{lrll}
+F_{f}: & \llbracket1;\mathsf{N}\rrbracket\times \mathds{B}^{\mathsf{N}} &
+\longrightarrow & \mathds{B}^{\mathsf{N}} \\
+& (k,E) & \longmapsto & \left( E_{j}.\delta (k,j)+f(E)_{k}.\overline{\delta
+(k,j)}\right) _{j\in \llbracket1;\mathsf{N}\rrbracket},%
+\end{array}%
+\end{equation}%
+\noindent where + and . are the Boolean addition and product operations.
+Consider the phase space:
+\begin{equation}
+\mathcal{X} = \llbracket 1 ; \mathsf{N} \rrbracket^\mathds{N} \times
+\mathds{B}^\mathsf{N},
+\end{equation}
+\noindent and the map defined on $\mathcal{X}$:
+\begin{equation}
+G_f\left(S,E\right) = \left(\sigma(S), F_f(i(S),E)\right), \label{Gf}
+\end{equation}
+\noindent where $\sigma$ is the \emph{shift} function defined by $\sigma
+(S^{n})_{n\in \mathds{N}}\in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}\longrightarrow (S^{n+1})_{n\in
+\mathds{N}}\in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}$ and $i$ is the \emph{initial function}
+$i:(S^{n})_{n\in \mathds{N}} \in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}\longrightarrow S^{0}\in \llbracket
+1;\mathsf{N}\rrbracket$. Then the chaotic iterations proposed in
+Definition \ref{Def:chaotic iterations} can be described by the following iterations:
+\begin{equation}
+\left\{
+\begin{array}{l}
+X^0 \in \mathcal{X} \\
+X^{k+1}=G_{f}(X^k).%
+\end{array}%
+\right.
+\end{equation}%
+
+With this formulation, a shift function appears as a component of chaotic
+iterations. The shift function is a famous example of a chaotic
+map~\cite{Devaney} but its presence is not sufficient to claim that $G_f$ is
+chaotic.
+To study this claim, a new distance between two points $X = (S,E), Y =
+(\check{S},\check{E})\in
+\mathcal{X}$ has been introduced in \cite{guyeux10} as follows:
+\begin{equation}
+d(X,Y)=d_{e}(E,\check{E})+d_{s}(S,\check{S}),
+\end{equation}
+\noindent where
+\begin{equation}
+\left\{
+\begin{array}{lll}
+\displaystyle{d_{e}(E,\check{E})} & = & \displaystyle{\sum_{k=1}^{\mathsf{N}%
+}\delta (E_{k},\check{E}_{k})}, \\
+\displaystyle{d_{s}(S,\check{S})} & = & \displaystyle{\dfrac{9}{\mathsf{N}}%
+\sum_{k=1}^{\infty }\dfrac{|S^k-\check{S}^k|}{10^{k}}}.%
+\end{array}%
+\right.
+\end{equation}
+
+
+This new distance has been introduced to satisfy the following requirements.
+\begin{itemize}
+\item When the number of cells on which two systems differ increases, their
+distance should increase too.
+\item In addition, if two systems present the same cells and their respective
+strategies start with the same terms, then the distance between these two points
+must be small because the evolution of the two systems will be the same for a
+while. Indeed, the two dynamical systems start with the same initial condition,
+use the same update function, and as strategies are the same for a while, then
+components that are updated are the same too.
+\end{itemize}
+The distance presented above follows these recommendations. Indeed, if the floor
+value $\lfloor d(X,Y)\rfloor $ is equal to $n$, then the systems $E, \check{E}$
+differ in $n$ cells ($d_e$ is indeed the Hamming distance). In addition,
+$d(X,Y) - \lfloor d(X,Y) \rfloor $ is a measure of the differences between the
+strategies $S$ and $\check{S}$. More precisely, this fractional part is less
+than $10^{-k}$ if and only if the first $k$ terms of the two strategies are
+equal. Moreover, if the $k^{th}$ digit is nonzero, then the $k^{th}$ terms of
+the two strategies are different. The impact of this choice of a distance will
+be investigated at the end of this document.
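+
+For instance, with $\mathsf{N}=4$, take two states $E$ and $\check{E}$ that
+differ in exactly two cells, together with strategies that coincide except on
+their third terms, say $S^3=3$ and $\check{S}^3=4$. Then
+\[
+d(X,Y) = 2 + \dfrac{9}{4}\cdot\dfrac{|3-4|}{10^{3}} = 2.00225 ,
+\]
+whose integral part counts the differing cells, and whose fractional part is
+less than $10^{-2}$ (the first two terms of the strategies agree) but not less
+than $10^{-3}$ (the third ones differ).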
+
+Finally, it has been established in \cite{guyeux10} that,
+
+\begin{proposition}
+Let $f$ be a map from $\mathds{B}^\mathsf{N}$ to itself. Then $G_{f}$ is continuous in
+the metric space $(\mathcal{X},d)$.
+\end{proposition}
+
+The chaotic property of $G_f$ was first established for the vectorial
+Boolean negation $f(x_1,\hdots, x_\mathsf{N}) = (\overline{x_1},\hdots,
+\overline{x_\mathsf{N}})$ \cite{guyeux10}. To obtain a characterization, we
+then introduced the notion of asynchronous iteration graph, recalled below.
+
+Let $f$ be a map from $\mathds{B}^\mathsf{N}$ to itself. The
+{\emph{asynchronous iteration graph}} associated with $f$ is the
+directed graph $\Gamma(f)$ defined by: the set of vertices is
+$\mathds{B}^\mathsf{N}$; for all $x\in\mathds{B}^\mathsf{N}$ and
+$i\in \llbracket1;\mathsf{N}\rrbracket$,
+the graph $\Gamma(f)$ contains an arc from $x$ to $F_f(i,x)$.
+The relation between $\Gamma(f)$ and $G_f$ is clear: there exists a
+path from $x$ to $x'$ in $\Gamma(f)$ if and only if there exists a
+strategy $s$ such that the parallel iteration of $G_f$ from the
+initial point $(s,x)$ reaches the point $x'$.
+
+We have finally proven in \cite{bcgr11:ip} that,
+
+
+\begin{theorem}
+\label{Th:Caractérisation des IC chaotiques}
+Let $f:\mathds{B}^\mathsf{N}\to\mathds{B}^\mathsf{N}$. $G_f$ is chaotic (according to Devaney)
+if and only if $\Gamma(f)$ is strongly connected.
+\end{theorem}
+
+This chaos result has led us to study the possibility of building a
+pseudo-random number generator (PRNG) based on chaotic iterations.
+As $G_f$, defined on the domain $\llbracket 1 ; \mathsf{N} \rrbracket^{\mathds{N}}
+\times \mathds{B}^\mathsf{N}$, is built from Boolean networks $f : \mathds{B}^\mathsf{N}
+\rightarrow \mathds{B}^\mathsf{N}$, we can preserve the theoretical properties of $G_f$
+during implementations (due to the discrete nature of $f$). It is as if
+$\mathds{B}^\mathsf{N}$ represents the memory of the computer whereas $\llbracket 1 ; \mathsf{N}
+\rrbracket^{\mathds{N}}$ is its input stream (the seeds, for instance).
+
+\section{Application to Pseudo-Randomness}
+\label{sec:pseudo-random}
+\subsection{A First Pseudo-Random Number Generator}
+
+We have proposed in~\cite{bgw09:ip} a new family of generators that receives
+two PRNGs as inputs. These two generators are mixed with chaotic iterations,
+leading thus to a new PRNG that improves the statistical properties of each
+generator taken alone. Furthermore, our generator
+possesses various chaos properties that none of the generators used as input
+present.
+
+\begin{algorithm}[h!]
+%\begin{scriptsize}
+\KwIn{a function $f$, an iteration number $b$, an initial configuration $x^0$
+($n$ bits)}
+\KwOut{a configuration $x$ ($n$ bits)}
+$x\leftarrow x^0$\;
+$k\leftarrow b + \textit{XORshift}(b)$\;
+\For{$i=0,\dots,k$}
+{
+$s\leftarrow{\textit{XORshift}(n)}$\;
+$x\leftarrow{F_f(s,x)}$\;
+}
+return $x$\;
+%\end{scriptsize}
+\caption{PRNG with chaotic functions}
+\label{CI Algorithm}
+\end{algorithm}
+
+\begin{algorithm}[h!]
+\KwIn{the internal configuration $z$ (a 32-bit word)}
+\KwOut{$y$ (a 32-bit word)}
+$z\leftarrow{z\oplus{(z\ll13)}}$\;
+$z\leftarrow{z\oplus{(z\gg17)}}$\;
+$z\leftarrow{z\oplus{(z\ll5)}}$\;
+$y\leftarrow{z}$\;
+return $y$\;
+\medskip
+\caption{An arbitrary round of \textit{XORshift} algorithm}
+\label{XORshift}
+\end{algorithm}
+
+
+
+
+
+This generator is synthesized in Algorithm~\ref{CI Algorithm}.
+It takes as input: a function $f$;
+an integer $b$, ensuring that the number of executed iterations is at least $b$
+and at most $2b+1$; and an initial configuration $x^0$.
+It returns the new generated configuration $x$. Internally, it embeds two
+\textit{XORshift}$(k)$ PRNGs \cite{Marsaglia2003} that return integers
+uniformly distributed in $\llbracket 1 ; k \rrbracket$.
+\textit{XORshift} is a category of very fast PRNGs designed by George Marsaglia,
+which repeatedly applies the exclusive or (XOR, $\oplus$) operation to a number
+and a bit-shifted version of it. This PRNG, which has a period of
+$2^{32}-1\approx 4.29\times10^9$, is summed up in Algorithm~\ref{XORshift}. It is used
+in our PRNG to compute the strategy length and the strategy elements.
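+
+To fix ideas, the following C sketch illustrates Algorithm~\ref{CI Algorithm}
+for $n=32$ cells and the vectorial negation as iteration function. It is only
+an illustration: the seed, the use of a single \textit{XORshift} stream for
+both the length and the elements of the strategy, and the modulo reductions
+(which are not perfectly uniform) are simplifying assumptions of ours, not part
+of the original algorithm.
+
+\begin{lstlisting}[language=C,caption={Illustrative C sketch of the PRNG with chaotic functions (for $n=32$)},label=lst:ci_sketch]
+#include <stdint.h>
+
+/* One round of the 32-bit XORshift generator (shifts 13, 17, 5). */
+static uint32_t xs_state = 123456789u;        /* arbitrary nonzero seed */
+
+static uint32_t xorshift32(void) {
+  uint32_t z = xs_state;
+  z ^= z << 13;
+  z ^= z >> 17;
+  z ^= z << 5;
+  return xs_state = z;
+}
+
+/* Sketch of the CI PRNG for n = 32 cells and f = vectorial negation:
+   each strategy element selects one bit of x to flip (assumes b > 0). */
+uint32_t ci_prng(uint32_t x0, uint32_t b) {
+  uint32_t x = x0;
+  uint32_t k = b + (1u + xorshift32() % b);   /* b + XORshift(b), XORshift(b) in [1,b] */
+  for (uint32_t i = 0; i <= k; i++) {
+    uint32_t s = xorshift32() % 32u;          /* strategy element (bit index) */
+    x ^= (uint32_t)1u << s;                   /* F_f(s,x) for the negation    */
+  }
+  return x;
+}
+\end{lstlisting}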
+
+
+We have proven in \cite{bcgr11:ip} that,
+\begin{theorem}
+ Let $f: \mathds{B}^{n} \rightarrow \mathds{B}^{n}$, $\Gamma(f)$ its
+ iteration graph, $\check{M}$ its adjacency
+  matrix, and $M$ a $n\times n$ matrix defined from $\check{M}$ as in~\cite{bcgr11:ip}.
+ If $\Gamma(f)$ is strongly connected, then
+ the output of the PRNG detailed in Algorithm~\ref{CI Algorithm} follows
+ a law that tends to the uniform distribution
+  if and only if $M$ is a doubly stochastic matrix.
+\end{theorem}
+
+This former generator has successfully passed various batteries of statistical
+tests, such as the NIST tests~\cite{bcgr11:ip}.
+
+\subsection{Improving the Speed of the Former Generator}
+
+Instead of updating only one cell at each iteration, we can try to choose a
+subset of components and to update them together. Such an attempt leads
+to a kind of merger of the two sequences used in Algorithm
+\ref{CI Algorithm}. When the updating function is the vectorial negation,
+this algorithm can be rewritten as follows:
+
+\begin{equation}
+\left\{
+\begin{array}{l}
+x^0 \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket, S \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket^\mathds{N} \\
+\forall n \in \mathds{N}^*, x^n = x^{n-1} \oplus S^n,
+\end{array}
+\right.
+\label{equation Oplus}
+\end{equation}
+where $\oplus$ is the bitwise exclusive or between two integers.
+This rewriting can be understood as follows. The $n-$th term $S^n$ of the
+sequence $S$, which is an integer of $\mathsf{N}$ binary digits, gives
+the list of cells to update in the state $x^n$ of the system (represented
+as an integer having $\mathsf{N}$ bits too). More precisely, the $k-$th
+component of this state (a binary digit) changes if and only if the $k-$th
+digit in the binary decomposition of $S^n$ is 1.
+
+The single basic component presented in Eq.~\ref{equation Oplus} is commonly
+used as an elementary building block in various PRNGs. It corresponds
+to the following discrete dynamical system in chaotic iterations:
+
+\begin{equation}
+\forall n\in \mathds{N}^{\ast }, \forall i\in
+\llbracket1;\mathsf{N}\rrbracket ,x_i^n=\left\{
+\begin{array}{ll}
+ x_i^{n-1} & \text{ if } i \notin \mathcal{S}^n \\
+  \left(f(x^{n-1})\right)_{i} & \text{ if }i \in \mathcal{S}^n.
+\end{array}\right.
+\label{eq:generalIC}
+\end{equation}
+where $f$ is the vectorial negation and $\forall n \in \mathds{N}$,
+$\mathcal{S}^n \subset \llbracket 1, \mathsf{N} \rrbracket$ is such that
+$k \in \mathcal{S}^n$ if and only if the $k-$th digit in the binary
+decomposition of $S^n$ is 1. Such chaotic iterations are more general
+than the ones presented in Definition \ref{Def:chaotic iterations}, in the
+sense that, instead of updating only one term at each iteration,
+we select a whole subset of components to change.
+
+
+Obviously, replacing Algorithm~\ref{CI Algorithm} by
+Equation~\ref{equation Oplus}, possible when the iteration function is
+the vectorial negation, leads to a speed improvement. However, proofs
+of chaos obtained in~\cite{bg10:ij} have been established
+only for chaotic iterations of the form presented in Definition
+\ref{Def:chaotic iterations}. The question is now to determine whether using
+these more general chaotic iterations to generate pseudo-random numbers
+faster preserves their topological chaos properties.
+
+\subsection{Proofs of Chaos of the General Formulation of the Chaotic Iterations}
+\label{deuxième def}
+Let us consider the discrete dynamical systems in chaotic iterations having
+the general form:
+
+\begin{equation}
+\forall n\in \mathds{N}^{\ast }, \forall i\in
+\llbracket1;\mathsf{N}\rrbracket ,x_i^n=\left\{
+\begin{array}{ll}
+ x_i^{n-1} & \text{ if } i \notin \mathcal{S}^n \\
+  \left(f(x^{n-1})\right)_{i} & \text{ if }i \in \mathcal{S}^n.
+\end{array}\right.
+\label{general CIs}
+\end{equation}
+
+In other words, at the $n^{th}$ iteration, only the cells whose indices
+belong to the set $\mathcal{S}^{n}$ are iterated.
+
+Let us now rewrite these general chaotic iterations as a usual discrete
+dynamical system of the form $X^{n+1}=f(X^n)$ on an ad hoc metric space. Such a
+formulation is required in order to study the topological behavior of the
+system.
+
+Let us introduce the following function:
+\begin{equation}
+\begin{array}{cccc}
+ \chi: & \llbracket 1; \mathsf{N} \rrbracket \times \mathcal{P}\left(\llbracket 1; \mathsf{N} \rrbracket\right) & \longrightarrow & \mathds{B}\\
+ & (i,X) & \longmapsto & \left\{ \begin{array}{ll} 0 & \textrm{if }i \notin X, \\ 1 & \textrm{if }i \in X, \end{array}\right.
+\end{array}
+\end{equation}
+where $\mathcal{P}\left(X\right)$ denotes the powerset of the set $X$, that is, $Y \in \mathcal{P}\left(X\right) \Longleftrightarrow Y \subset X$.
+
+Given a function $f:\mathds{B}^\mathsf{N} \longrightarrow \mathds{B}^\mathsf{N} $, define the function:
+\begin{equation}
+\begin{array}{lrll}
+F_{f}: & \mathcal{P}\left(\llbracket1;\mathsf{N}\rrbracket \right) \times \mathds{B}^{\mathsf{N}} &
+\longrightarrow & \mathds{B}^{\mathsf{N}} \\
+& (P,E) & \longmapsto & \left( E_{j}.\chi (j,P)+f(E)_{j}.\overline{\chi
+(j,P)}\right) _{j\in \llbracket1;\mathsf{N}\rrbracket},%
+\end{array}%
+\end{equation}%
+where + and . are the Boolean addition and product operations, and $\overline{x}$
+is the negation of the Boolean $x$.
+Consider the phase space:
+\begin{equation}
+\mathcal{X} = \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N} \times
+\mathds{B}^\mathsf{N},
+\end{equation}
+\noindent and the map defined on $\mathcal{X}$:
+\begin{equation}
+G_f\left(S,E\right) = \left(\sigma(S), F_f(i(S),E)\right), \label{Gf}
+\end{equation}
+\noindent where $\sigma$ is the \emph{shift} function defined by $\sigma
+(S^{n})_{n\in \mathds{N}}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}\longrightarrow (S^{n+1})_{n\in
+\mathds{N}}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}$ and $i$ is the \emph{initial function}
+$i:(S^{n})_{n\in \mathds{N}} \in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}\longrightarrow S^{0}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)$.
+Then the general chaotic iterations defined in Equation \ref{general CIs} can
+be described by the following discrete dynamical system:
+\begin{equation}
+\left\{
+\begin{array}{l}
+X^0 \in \mathcal{X} \\
+X^{k+1}=G_{f}(X^k).%
+\end{array}%
+\right.
+\end{equation}%
+
+Once again, a shift function appears as a component of these general chaotic
+iterations.
+
+To study Devaney's chaos property, a distance between two points
+$X = (S,E), Y = (\check{S},\check{E})$ of $\mathcal{X}$ must be defined.
+Let us introduce:
+\begin{equation}
+d(X,Y)=d_{e}(E,\check{E})+d_{s}(S,\check{S}),
+\label{nouveau d}
+\end{equation}
+\noindent where
+\begin{equation}
+\left\{
+\begin{array}{lll}
+\displaystyle{d_{e}(E,\check{E})} & = & \displaystyle{\sum_{k=1}^{\mathsf{N}%
+}\delta (E_{k},\check{E}_{k})}\textrm{ is once again the Hamming distance}, \\
+\displaystyle{d_{s}(S,\check{S})} & = & \displaystyle{\dfrac{9}{\mathsf{N}}%
+\sum_{k=1}^{\infty }\dfrac{|S^k\Delta \check{S}^k|}{10^{k}}},%
+\end{array}%
+\right.
+\end{equation}
+where $|X|$ is the cardinality of a set $X$ and $A\Delta B$ is the symmetric
+difference, defined for sets $A, B$ as
+$A\,\Delta\,B = (A \setminus B) \cup (B \setminus A)$.
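+
+In an implementation where subsets of $\llbracket 1 ; \mathsf{N} \rrbracket$
+are encoded as bitmasks (as in the programs of the next sections),
+$|S^k\Delta \check{S}^k|$ is simply the Hamming weight of the bitwise exclusive
+or of the two masks. The following C sketch, which truncates the series after a
+fixed number of terms and assumes $\mathsf{N}\leqslant 32$ as well as the GCC
+built-in \texttt{\_\_builtin\_popcount}, illustrates the computation of $d_s$.
+
+\begin{lstlisting}[language=C,caption={Illustrative computation of the distance $d_s$ on bitmask-encoded strategies},label=lst:ds_sketch]
+#include <stdint.h>
+
+/* |A delta B| for subsets of [1;N] encoded as 32-bit masks. */
+static int sym_diff_card(uint32_t a, uint32_t b) {
+  return __builtin_popcount(a ^ b);
+}
+
+/* d_s(S, S') truncated after `depth` terms, for n cells; s[k] and sc[k]
+   hold the (k+1)-th strategy terms as bitmasks. */
+static double ds(const uint32_t *s, const uint32_t *sc, int depth, int n) {
+  double sum = 0.0, p = 10.0;
+  for (int k = 0; k < depth; k++, p *= 10.0)
+    sum += sym_diff_card(s[k], sc[k]) / p;
+  return 9.0 / n * sum;
+}
+\end{lstlisting}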
+
+
+\begin{proposition}
+The function $d$ defined in Eq.~\ref{nouveau d} is a metric on $\mathcal{X}$.
+\end{proposition}
+
+\begin{proof}
+ $d_e$ is the Hamming distance. We will prove that $d_s$ is a distance
+too, thus $d$ will be a distance as sum of two distances.
+ \begin{itemize}
+\item Obviously, $d_s(S,\check{S})\geqslant 0$, and if $S=\check{S}$, then
+$d_s(S,\check{S})=0$. Conversely, if $d_s(S,\check{S})=0$, then
+$\forall k \in \mathds{N}, |S^k\Delta \check{S}^k|=0$, and so $\forall k, S^k=\check{S}^k$.
+ \item $d_s$ is symmetric
+($d_s(S,\check{S})=d_s(\check{S},S)$) due to the commutative property
+of the symmetric difference.
+\item Finally, for all subsets $A, A'$, and $A''$ of $\llbracket 1, \mathsf{N} \rrbracket$,
+$|A \Delta A''| = |(A \Delta \varnothing) \Delta A''|= |A \Delta (A'\Delta A') \Delta A''|= |(A \Delta A') \Delta (A' \Delta A'')|\leqslant |A \Delta A'| + |A' \Delta A''|$.
+Applying this inequality to each term of the series defining $d_s$, we obtain
+$d_s(S,S'') \leqslant d_s(S,S')+d_s(S',S'')$ for all strategies $S$, $S'$, and
+$S''$, and the triangle inequality is obtained.
+ \end{itemize}
+\end{proof}
+
+
+Before being able to study the topological behavior of the general
+chaotic iterations, we must first establish that:
+
+\begin{proposition}
+ For all $f:\mathds{B}^\mathsf{N} \longrightarrow \mathds{B}^\mathsf{N} $, the function $G_f$ is continuous on
+$\left( \mathcal{X},d\right)$.
+\end{proposition}
+
+
+\begin{proof}
+We use the sequential continuity.
+Let $(S^n,E^n)_{n\in \mathds{N}}$ be a sequence of the phase space $%
+\mathcal{X}$, which converges to $(S,E)$. We will prove that $\left(
+G_{f}(S^n,E^n)\right) _{n\in \mathds{N}}$ converges to $\left(
+G_{f}(S,E)\right) $. Let us remark that for all $n$, $S^n$ is a strategy,
+thus, we consider a sequence of strategies (\emph{i.e.}, a sequence of
+sequences).\newline
+As $d((S^n,E^n);(S,E))$ converges to 0, each distance $d_{e}(E^n,E)$ and $d_{s}(S^n,S)$ converges
+to 0. But $d_{e}(E^n,E)$ is an integer, so $\exists n_{0}\in \mathds{N},$ $%
+d_{e}(E^n,E)=0$ for any $n\geqslant n_{0}$.\newline
+In other words, there exists a threshold $n_{0}\in \mathds{N}$ after which no
+cell will change its state:
+$\exists n_{0}\in \mathds{N},n\geqslant n_{0}\Rightarrow E^n = E.$
+
+In addition, $d_{s}(S^n,S)\longrightarrow 0,$ so $\exists n_{1}\in %
+\mathds{N},d_{s}(S^n,S)<10^{-1}$ for all indexes greater than or equal to $%
+n_{1}$. This means that for $n\geqslant n_{1}$, all the $S^n$ have the same
+first term, which is $S^0$: $\forall n\geqslant n_{1},S_0^n=S_0.$
+
+Thus, after the $max(n_{0},n_{1})^{th}$ term, states of $E^n$ and $E$ are
+identical and strategies $S^n$ and $S$ start with the same first term.\newline
+Consequently, states of $G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ are equal,
+so, after the $max(n_0, n_1)^{th}$ term, the distance $d$ between these two points is strictly less than 1.\newline
+\noindent We now prove that the distance between $\left(
+G_{f}(S^n,E^n)\right) $ and $\left( G_{f}(S,E)\right) $ is convergent to
+0. Let $\varepsilon >0$. \medskip
+\begin{itemize}
+\item If $\varepsilon \geqslant 1$, we see that the distance
+between $\left( G_{f}(S^n,E^n)\right) $ and $\left( G_{f}(S,E)\right) $ is
+strictly less than 1 after the $max(n_{0},n_{1})^{th}$ term (same state).
+\medskip
+\item If $\varepsilon <1$, then $\exists k\in \mathds{N},10^{-k}\geqslant
+\varepsilon > 10^{-(k+1)}$. But $d_{s}(S^n,S)$ converges to 0, so
+\begin{equation*}
+\exists n_{2}\in \mathds{N},\forall n\geqslant
+n_{2},d_{s}(S^n,S)<10^{-(k+2)},
+\end{equation*}%
+thus after $n_{2}$, the $k+2$ first terms of $S^n$ and $S$ are equal.
+\end{itemize}
+\noindent As a consequence, the $k+1$ first entries of the strategies of $%
+G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ are the same ($G_{f}$ is a shift of strategies) and, due to the definition of $d_{s}$, the fractional part of
+the distance between $G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ is strictly less than $%
+10^{-(k+1)}\leqslant \varepsilon $.\bigskip \newline
+In conclusion,
+$$
+\forall \varepsilon >0,\exists N_{0}=max(n_{0},n_{1},n_{2})\in \mathds{N}%
+,\forall n\geqslant N_{0},
+ d\left( G_{f}(S^n,E^n);G_{f}(S,E)\right)
+\leqslant \varepsilon .
+$$
+$G_{f}$ is consequently continuous.
+\end{proof}
+
+
+It is now possible to study the topological behavior of the general chaotic
+iterations. We will prove that,
+
+\begin{theorem}
+\label{t:chaos des general}
+ The general chaotic iterations defined in Equation~\ref{general CIs} satisfy
+Devaney's property of chaos.
+\end{theorem}
+
+Let us firstly prove the following lemma.
+
+\begin{lemma}[Strong transitivity]
+\label{strongTrans}
+ For all pairs of points $X,Y \in \mathcal{X}$ and any neighborhood $V$ of $X$, we can
+find $n \in \mathds{N}^*$ and $X' \in V$ such that $G_f^n(X')=Y$.
+\end{lemma}
+
+\begin{proof}
+ Let $X=(S,E)$, $\varepsilon>0$, and $k_0 = \lfloor -\log_{10}(\varepsilon) \rfloor + 1$.
+Any point $X'=(S',E')$ such that $E'=E$ and $\forall k \leqslant k_0, S'^k=S^k$,
+is in the open ball $\mathcal{B}\left(X,\varepsilon\right)$. Let us define
+$\check{X} = \left(\check{S},\check{E}\right)$, where $\check{X}= G_f^{k_0}(X)$.
+We denote by $s\subset \llbracket 1; \mathsf{N} \rrbracket$ the set of coordinates
+that are different between $\check{E}$ and the state of $Y$. Thus each point $X'$ of
+the form $(S',E')$ where $E'=E$ and $S'$ starts with
+$(S^0, S^1, \hdots, S^{k_0},s,\hdots)$, verifies the following properties:
+\begin{itemize}
+ \item $X'$ is in $\mathcal{B}\left(X,\varepsilon\right)$,
+ \item the state of $G_f^{k_0+1}(X')$ is the state of $Y$.
+\end{itemize}
+Finally the point $\left(\left(S^0, S^1, \hdots, S^{k_0},s,s^0, s^1, \hdots\right); E\right)$,
+where $(s^0,s^1, \hdots)$ is the strategy of $Y$, satisfies the properties
+claimed in the lemma.
+\end{proof}
+
+We can now prove Theorem~\ref{t:chaos des general}.
+
+\begin{proof}[Theorem~\ref{t:chaos des general}]
+Firstly, strong transitivity implies transitivity.
+
+Let $(S,E) \in\mathcal{X}$ and $\varepsilon >0$. To
+prove that $G_f$ is regular, it is sufficient to prove that
+there exists a strategy $\tilde S$ such that the distance between
+$(\tilde S,E)$ and $(S,E)$ is less than $\varepsilon$, and such that
+$(\tilde S,E)$ is a periodic point.
+
+Let $t_1=\lfloor-\log_{10}(\varepsilon)\rfloor$, and let $E'$ be the
+configuration that we obtain from $(S,E)$ after $t_1$ iterations of
+$G_f$. As $G_f$ is strongly transitive, there exists a strategy $S'$
+and $t_2\in\mathds{N}$ such
+that $E$ is reached from $(S',E')$ after $t_2$ iterations of $G_f$.
+
+Consider the strategy $\tilde S$ that alternates the first $t_1$ terms
+of $S$ and the first $t_2$ terms of $S'$: $$\tilde
+S=(S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots).$$ It
+is clear that $(\tilde S,E)$ is reached from $(\tilde S,E)$ after
+$t_1+t_2$ iterations of $G_f$, so $(\tilde S,E)$ is a periodic
+point. Since $\tilde S_t=S_t$ for $t<t_1$, by the choice of $t_1$, we
+have $d((S,E),(\tilde S,E))<\varepsilon$.
+\end{proof}
+
+
+
+\section{Efficient PRNG based on Chaotic Iterations}
+\label{sec:efficient prng}
+
+In order to implement efficiently a PRNG based on chaotic iterations it is
+possible to improve previous works [ref]. One solution consists in considering
+that the strategy contains all the bits on which the negation has to be
+applied. Then, in order to apply the negation to these bits, we can simply
+compute the xor between the current number and the strategy. In
+order to obtain the strategy, we also use a classical PRNG.
+
+Here is an example with 16-bit numbers showing how the bitwise operations
+are applied. Suppose that $x$ and the strategy $S^i$ are given in binary.
+Then the following table shows the result of $x \oplus S^i$.
+$$
+\begin{array}{|cc|cccccccccccccccc|}
+\hline
+x &=&1&0&1&1&1&0&1&0&1&0&0&1&0&0&1&0\\
+\hline
+S^i &=&0&1&1&0&0&1&1&0&1&1&1&0&0&1&1&1\\
+\hline
+x \oplus S^i&=&1&1&0&1&1&1&0&0&0&1&1&1&0&1&0&1\\
+\hline
+ \end{array}
+$$
+
+
+
+
+\lstset{language=C,caption={C code of the sequential chaotic iterations based
+PRNG},label=algo:seqCIprng}
+\begin{lstlisting}
+unsigned int CIprng() {
+  /* internal state of the chaotic iterations (32 cells) */
+  static unsigned int x = 123123123;
+  /* three 64-bit xor-like PRNGs providing the strategies */
+  unsigned long t1 = xorshift();
+  unsigned long t2 = xor128();
+  unsigned long t3 = xorwow();
+  /* six xors: each one is an iteration of the form x = x xor S^n */
+  x = x^(unsigned int)t1;
+  x = x^(unsigned int)(t2>>32);
+  x = x^(unsigned int)(t3>>32);
+  x = x^(unsigned int)t2;
+  x = x^(unsigned int)(t1>>32);
+  x = x^(unsigned int)t3;
+  return x;
+}
+\end{lstlisting}
+
+
+
+
+
+In Listing~\ref{algo:seqCIprng} a sequential version of our chaotic iterations
+based PRNG is presented. The xor operator is represented by \textasciicircum.
+This function uses three classical 64-bit PRNGs: \texttt{xorshift},
+\texttt{xor128} and \texttt{xorwow}. In the following, we call them
+xor-like PRNGs. These three PRNGs are presented in~\cite{Marsaglia2003}. As
+each xor-like PRNG works with 64 bits and as our PRNG works with 32 bits,
+the cast \texttt{(unsigned int)} selects the 32 least significant bits, whereas
+\texttt{(unsigned int)(t3$>>$32)}, for instance, selects the 32 most significant
+bits of the variable \texttt{t3}. So producing one random number requires 6 xor
+operations with 6 32-bit numbers produced by the 3 64-bit PRNGs. This version
+passes the BigCrush battery of TestU01~\cite{LEcuyerS07}.
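+
+As a small self-contained illustration (with an arbitrary constant of our own),
+the following code makes this splitting of a 64-bit word into its two 32-bit
+halves explicit:
+
+\begin{lstlisting}[language=C,caption={Extracting the two 32-bit halves of a 64-bit word},label=lst:halves]
+#include <stdint.h>
+#include <stdio.h>
+
+int main(void) {
+  uint64_t t = 0x0123456789ABCDEFULL;
+  uint32_t low  = (uint32_t)t;          /* 0x89ABCDEF: 32 least significant bits */
+  uint32_t high = (uint32_t)(t >> 32);  /* 0x01234567: 32 most significant bits  */
+  printf("%08X %08X\n", (unsigned)high, (unsigned)low);
+  return 0;
+}
+\end{lstlisting}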
+
+\section{Efficient PRNGs based on chaotic iterations on GPU}
+\label{sec:efficient prng gpu}
+
+In order to benefit from the computing power of a GPU, a program needs to
+define independent blocks of threads which can be computed simultaneously. In
+general, the larger the number of threads is, the more local memory is used,
+and the fewer branching instructions (if, while, ...) are used, the better the
+performance obtained on the GPU is. So, starting from the code of
+Listing~\ref{algo:seqCIprng} presented in the previous section, it is possible
+to build a similar program which computes pseudo-random numbers on GPU. In the
+CUDA~\cite{Nvid10} environment, threads have a local identifier, called
+\texttt{ThreadIdx}, relative to the block containing them.
+
+
+\subsection{Naive version for GPU}
+
+From the CPU version, it is possible to obtain a quite similar version for GPU.
+The principle consists in assigning to each thread of the GPU the computation
+of the sequential PRNG. Of course, it is essential that the three xor-like
+PRNGs used for our computation have different parameters in each thread. So we
+choose them randomly with another PRNG. As the initialisation is performed by
+the CPU, we have chosen to use the ISAAC PRNG~\cite{Jenkins96} to initialize
+all the parameters for the GPU version of our PRNG. The implementation of the
+three xor-like PRNGs is straightforward as soon as their parameters have been
+allocated in the GPU memory. Each xor-like PRNG works with an internal
+number $x$ which stores the last generated random number. Other internal
+variables are also used by the xor-like PRNGs. More precisely, the
+implementations of the xor128, the xorshift and the xorwow respectively require
+4, 5 and 6 unsigned longs as internal variables.
+
+\begin{algorithm}
+
+\KwIn{InternalVarXorLikeArray: array with internal variables of the 3 xor-like
+PRNGs in global memory\;
+NumThreads: Number of threads\;}
+\KwOut{NewNb: array containing random numbers in global memory}
+\If{threadIdx is concerned by the computation} {
+ retrieve data from InternalVarXorLikeArray[threadIdx] in local variables\;
+ \For{i=1 to n} {
+    compute a new pseudo-random number as in Listing~\ref{algo:seqCIprng}\;
+    store this new number in NewNb[NumThreads*threadIdx+i]\;
+ }
+ store internal variables in InternalVarXorLikeArray[threadIdx]\;
+}
+
+\caption{main kernel for the chaotic iterations based PRNG GPU naive version}
+\label{algo:gpu_kernel}
+\end{algorithm}
+
+Algorithm~\ref{algo:gpu_kernel} presents a naive implementation of our PRNG on
+GPU. Depending on the memory available in the GPU and on the number of threads
+used simultaneously, the number of random numbers that a thread can generate
+inside a kernel is limited: it is the variable \texttt{n} in
+Algorithm~\ref{algo:gpu_kernel}. For example, if $100,000$ threads are used and
+if $n=100$\footnote{in fact, we need to add the initial seed (a 32-bit number)},
+then the memory required to store the internal variables of the xor-like
+PRNGs\footnote{we multiply this number by $2$ in order to count 32-bit numbers}
+and the random numbers of our PRNG is equal to $100,000\times ((4+5+6)\times
+2+(1+100))=13,100,000$ 32-bit numbers, i.e., about $52$MB.
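+
+For concreteness, a possible CUDA realization of this naive kernel is sketched
+below. It is only an illustration of Algorithm~\ref{algo:gpu_kernel}: for
+brevity the three xor-like PRNGs are replaced by three independent 64-bit
+xorshift streams, and the names, array layouts and launch parameters are our
+own assumptions, not the exact code used for the experiments.
+
+\begin{lstlisting}[language=C,caption={A possible CUDA sketch of the naive kernel (illustration only)},label=lst:gpu_naive_sketch]
+#include <stdint.h>
+
+__device__ uint64_t xorshift64(uint64_t *s) {    /* one xor-like PRNG */
+  uint64_t z = *s;
+  z ^= z << 13;  z ^= z >> 7;  z ^= z << 17;
+  return *s = z;
+}
+
+__global__ void ci_prng_naive(uint64_t *state,   /* 3 seeds per thread   */
+                              uint32_t *x,       /* CI state per thread  */
+                              uint32_t *NewNb,   /* generated numbers    */
+                              int n, int numThreads) {
+  int tid = blockIdx.x * blockDim.x + threadIdx.x;
+  if (tid >= numThreads) return;
+  uint64_t s0 = state[3*tid], s1 = state[3*tid+1], s2 = state[3*tid+2];
+  uint32_t xi = x[tid];
+  for (int i = 0; i < n; i++) {
+    uint64_t t1 = xorshift64(&s0), t2 = xorshift64(&s1), t3 = xorshift64(&s2);
+    /* six xors, as in the sequential version */
+    xi ^= (uint32_t)t1;  xi ^= (uint32_t)(t2 >> 32);  xi ^= (uint32_t)(t3 >> 32);
+    xi ^= (uint32_t)t2;  xi ^= (uint32_t)(t1 >> 32);  xi ^= (uint32_t)t3;
+    NewNb[(size_t)tid * n + i] = xi;             /* one slice per thread */
+  }
+  state[3*tid] = s0;  state[3*tid+1] = s1;  state[3*tid+2] = s2;
+  x[tid] = xi;
+}
+\end{lstlisting}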
+
+All the tests performed with the BigCrush battery of TestU01 succeeded.
+Different numbers of threads, called \texttt{NumThreads} in our algorithm, have
+been tested, up to $10$ million.
+
+\begin{remark}
+Algorithm~\ref{algo:gpu_kernel} has the advantage to manipulate independent
+PRNGs, so this version is easily usable on a cluster of computer. The only thing
+to ensure is to use a single ISAAC PRNG. For this, a simple solution consists in
+using a master node for the initialization which computes the initial parameters
+for all the differents nodes involves in the computation.
+\end{remark}
+
+\subsection{Improved version for GPU}
+
+As GPU cards using CUDA have shared memory between the threads of the same
+block, it is possible to use this feature in order to simplify the previous
+algorithm, i.e., to use fewer than 3 xor-like PRNGs. The solution consists in
+computing only one xor-like PRNG per thread, saving it into the shared memory,
+and using the results of some other threads of the same block. In order to
+define which thread uses the result of which other one, we can use a
+permutation array containing the indexes of all the threads, on which a
+permutation has been performed. In Algorithm~\ref{algo:gpu_kernel2}, two
+permutation arrays are used.
+The variable \texttt{offset} is computed using the value of
+\texttt{permutation\_size}. Then we can compute \texttt{o1} and \texttt{o2},
+which represent the indexes of the other threads whose results are used
+by the current thread. In the algorithm, we consider that a 64-bit xor-like
+PRNG is used, that is why both 32-bit parts are exploited.
+
+This version also passes the {\it BigCrush} battery of tests.
+
+\begin{algorithm}
+
+\KwIn{InternalVarXorLikeArray: array with internal variables of 1 xor-like PRNGs
+in global memory\;
+NumThreads: Number of threads\;
+tab1, tab2: Arrays containing permutations of size permutation\_size\;}
+
+\KwOut{NewNb: array containing random numbers in global memory}
+\If{threadId is concerned} {
+ retrieve data from InternalVarXorLikeArray[threadId] in local variables including shared memory\;
+ offset = threadIdx\%permutation\_size\;
+ o1 = threadIdx-offset+tab1[offset]\;
+ o2 = threadIdx-offset+tab2[offset]\;
+ \For{i=1 to n} {
+ t=xor-like()\;
+ t=t$\oplus$shmem[o1]$\oplus$shmem[o2]\;
+ shared\_mem[threadId]=t\;
+ x = x $\oplus$ t\;
+
+    store this new number in NewNb[NumThreads*threadId+i]\;
+ }
+ store internal variables in InternalVarXorLikeArray[threadId]\;
+}
+
+\caption{main kernel for the chaotic iterations based PRNG GPU efficient
+version}
+\label{algo:gpu_kernel2}
+\end{algorithm}
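+
+Again for illustration only, a possible CUDA rendering of
+Algorithm~\ref{algo:gpu_kernel2} is given below. It assumes one 64-bit
+xor-like PRNG per thread (the \texttt{xorshift64} device function of the
+previous sketch), one shared-memory slot per thread, and adds explicit
+barriers to order the shared-memory accesses; these choices, as well as the
+names and array layouts, are our own assumptions.
+
+\begin{lstlisting}[language=C,caption={A possible CUDA sketch of the improved kernel (illustration only)},label=lst:gpu_improved_sketch]
+__global__ void ci_prng_improved(uint64_t *state, uint32_t *x,
+                                 const int *tab1, const int *tab2,
+                                 int permutation_size,
+                                 uint32_t *NewNb, int n) {
+  extern __shared__ uint64_t shmem[];           /* one slot per thread */
+  int tid    = blockIdx.x * blockDim.x + threadIdx.x;
+  int lid    = threadIdx.x;
+  int offset = lid % permutation_size;
+  int o1     = lid - offset + tab1[offset];
+  int o2     = lid - offset + tab2[offset];
+  uint64_t s  = state[tid];
+  uint32_t xi = x[tid];
+  shmem[lid] = s;                               /* seed the exchange buffer */
+  __syncthreads();
+  for (int i = 0; i < n; i++) {
+    uint64_t t = xorshift64(&s);                /* one xor-like PRNG call     */
+    t ^= shmem[o1] ^ shmem[o2];                 /* mix with two other threads */
+    __syncthreads();
+    shmem[lid] = t;
+    __syncthreads();
+    xi ^= (uint32_t)t;                          /* both 32-bit parts are used */
+    xi ^= (uint32_t)(t >> 32);
+    NewNb[(size_t)tid * n + i] = xi;
+  }
+  state[tid] = s;
+  x[tid] = xi;
+}
+\end{lstlisting}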
+
+\subsection{Theoretical Evaluation of the Improved Version}
+
+A run of Algorithm~\ref{algo:gpu_kernel2} consists in three operations having
+the form of Equation~\ref{equation Oplus}, which is equivalent to the iterative
+system of Eq.~\ref{eq:generalIC}. That is, three iterations of the general chaotic
+iterations are realized between two stored values of the PRNG.
+To be certain that we are in the framework of Theorem~\ref{t:chaos des general},
+we must guarantee that this dynamical system iterates on the space
+$\mathcal{X} = \mathcal{P}\left(\llbracket 1, \mathsf{N} \rrbracket\right)^\mathds{N}\times\mathds{B}^\mathsf{N}$.
+The left term $x$ obviously belongs to $\mathds{B}^\mathsf{N}$.
+To avoid any flaw in the chaotic properties, we must check that each right
+term, corresponding to a term of the strategy, can possibly be equal to any
+subset of $\llbracket 1, \mathsf{N} \rrbracket$, that is, to any integer of
+$\llbracket 0, 2^{\mathsf{N}}-1 \rrbracket$ in its binary encoding.
+
+Such a result is obvious for the first two lines, since for the xor-like()
+all the integers belonging to its range can occur at each iteration.
+It can easily be established for the last two lines by an immediate
+mathematical induction.
+
+Thus Algorithm~\ref{algo:gpu_kernel2} is a concrete realization of the general
+chaotic iterations presented previously, and for this reason, it satisfies
+Devaney's formulation of a chaotic behavior.
+
+\section{Experiments}
+\label{sec:experiments}
+
+Different experiments have been performed in order to measure the generation
+speed. For our experiments we have used a computer equipped with a Tesla C1060
+NVidia GPU card and an Intel Xeon E5530 running at 2.40 GHz, and another one
+equipped with a less powerful CPU and a GeForce GTX 280 card. Both
+cards have 240 cores.
+
+In Figure~\ref{fig:time_gpu} we compare the number of random numbers generated
+per second. The xor-like PRNG used is the xor64 described
+in~\cite{Marsaglia2003}. In order to obtain the optimal performance we removed
+the storage of the random numbers in the GPU memory. This step is time
+consuming and slows down the generation. Moreover, for applications that
+consume random numbers directly as they are generated, this storage is
+completely useless. In this figure we can see that when the number of threads
+is between approximately 30,000 and 5 million, the number of random numbers
+generated per second is almost constant. With the naive version, it is between
+2.5 and 3GSample/s. With the optimized version, it is approximately equal to
+20GSample/s. Finally we can remark that both GPU cards give quite similar
+results. In practice, the Tesla C1060 has more memory than the GTX 280, and
+this memory should be of better quality.
+
+\begin{figure}[htbp]
+\begin{center}
+ \includegraphics[scale=.7]{curve_time_gpu.pdf}
+\end{center}
+\caption{Number of random numbers generated per second}
+\label{fig:time_gpu}
+\end{figure}
+
+
+In comparison, Listing~\ref{algo:seqCIprng} allows us to generate about
+138MSample/s with only one core of the Xeon E5530.
+
+