also provide an efficient PRNG for GPU based on IC. Such devices
allow us to generate almost 20 billion random numbers per second.
In order to establish that our PRNGs are chaotic according to Devaney's
formulation, we extend what we have proposed in~\cite{guyeux10}. Moreover, we
define a new distance to measure the disorder in the chaos, and we prove some
interesting properties of this distance.
The rest of this paper is organised as follows. In Section~\ref{section:related works}
we review some GPU implementations of PRNGs. Section~\ref{section:BASIC RECALLS}
recalls some basics of Devaney's formulation of chaos and of chaotic iterations.
In Section~\ref{sec:pseudo-random} the proof of chaos of our PRNGs is given.
Section~\ref{sec:efficient prng} presents an efficient implementation of our
chaotic PRNG on a CPU. Section~\ref{sec:efficient prng
branching instructions are used (\texttt{if}, \texttt{while}, ...), the better
the performance obtained on GPU. So, with Algorithm~\ref{algo:seqCIprng}
presented in the previous section, it is possible to build a similar program
which computes pseudorandom numbers
on GPU. In the CUDA~\cite{Nvid10} environment, threads have a local
identifier, called \texttt{ThreadIdx}, relative to the block containing them.
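For instance, a global thread number can be derived from this local identifier
and the coordinates of the enclosing block. The following minimal sketch uses
only standard CUDA built-ins; the kernel name is hypothetical:
\begin{lstlisting}
__global__ void example_kernel(unsigned int *out) {
  // Global thread number, built from the local identifier
  // threadIdx.x and the coordinates of the enclosing block.
  int id = threadIdx.x + blockIdx.x * blockDim.x;
  out[id] = id;  // each thread works at its own position
}
\end{lstlisting}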
\subsection{Naive version for GPU}
each thread of the GPU. Of course, it is essential that the three xor-like
PRNGs used for our computation have different parameters, so we chose them
randomly with another PRNG. As the initialisation is performed by the CPU, we
have chosen to use the ISAAC PRNG~\cite{Jenkins96} to initialise all the
parameters of the GPU version of our PRNG. The implementation of the three
xor-like PRNGs is straightforward as soon as their parameters have been
allocated in the GPU memory. Each xor-like PRNG works with an internal number
$x$ which stores the last generated random number. Other internal variables are
also used by the xor-like PRNGs. More precisely, the implementations of xor128,
xorshift, and xorwow require 4, 5, and 6 \texttt{unsigned long} variables as
internal state, respectively.
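To fix ideas, one of these xor-like generators can be written as a device
function. The following is a minimal sketch of Marsaglia's xor128, whose four
words correspond to the 4 internal variables mentioned above; the state layout
and the function name are assumptions of this sketch, not necessarily the exact
code of our implementation:
\begin{lstlisting}
// Sketch of Marsaglia's xor128: the four words s[0..3] are the
// internal variables of this xor-like PRNG; each thread owns its
// copy, initialised on the CPU with ISAAC.
__device__ unsigned long xor128(unsigned long *s) {
  unsigned long t = s[0] ^ (s[0] << 11);
  s[0] = s[1]; s[1] = s[2]; s[2] = s[3];
  s[3] = (s[3] ^ (s[3] >> 19)) ^ (t ^ (t >> 8));
  return s[3];
}
\end{lstlisting}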
\begin{algorithm}
by the current thread. In the algorithm, we consider that a 64-bit xor-like
PRNG is used, which is why both 32-bit parts are used.
This version also passes the {\it BigCrush} battery of tests.
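The way both 32-bit parts of a 64-bit output are consumed can be sketched as
follows (the function name is hypothetical; only the splitting technique is the
point):
\begin{lstlisting}
// Injecting both 32-bit halves of a 64-bit xor-like output t
// into the 32-bit internal state x of the chaotic iterations.
__device__ unsigned int use_both_halves(unsigned int x,
                                        unsigned long t) {
  x ^= (unsigned int) t;          // low 32 bits
  x ^= (unsigned int) (t >> 32);  // high 32 bits
  return x;
}
\end{lstlisting}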
\begin{algorithm}
\section{Experiments}
\label{sec:experiments}
Different experiments have been performed in order to measure the generation
speed. For these experiments we used a computer equipped with a Tesla C1060
NVidia GPU card and an Intel Xeon E5530 clocked at 2.40~GHz, and another one
equipped with a less powerful CPU and a GeForce GTX 280. Both cards have 240
cores.

In Figure~\ref{fig:time_gpu} we compare the numbers of random numbers generated
per second. In order to obtain optimal performance, we removed the storage of
the random numbers in the GPU memory: this step is time consuming and slows
down the generation. Moreover, for applications that consume random numbers
directly as they are generated, this storage is completely useless. In this
figure we can see that when the number of threads ranges from approximately
30,000 up to 5 million, the number of random numbers generated per second is
almost constant. With the naive version, it lies between 2.5 and
3~GSample/s. With the optimized version, it is approximately equal to
20~GSample/s. Finally, we can remark that both GPU cards give quite similar
results. In practice, the Tesla C1060 has more memory than the GTX 280, and
this memory should be of better quality.
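As a methodological note, such throughputs can be obtained by timing the kernel
with CUDA events on the host side. A minimal sketch, in which the kernel name,
the launch parameters, and \texttt{total\_numbers} are hypothetical:
\begin{lstlisting}
// Timing the generation kernel with CUDA events.
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);
cudaEventRecord(start, 0);
generate_kernel<<<blocks, threads>>>(d_state);  // hypothetical
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
float ms;
cudaEventElapsedTime(&ms, start, stop);
// total_numbers random numbers in ms milliseconds -> GSample/s
double gsample_per_s = (double) total_numbers / (ms * 1e6);
\end{lstlisting}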
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=.7]{curve_time_gpu.pdf}
\end{center}
\caption{Number of random numbers generated per second}
\label{fig:time_gpu}
\end{figure}
In comparison, Listing~\ref{algo:seqCIprng} allows us to generate about
138~MSample/s with only one core of the Xeon E5530.