X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/prng_gpu.git/blobdiff_plain/bce6fe373543bc9037b3de8504d9599882257bf5..559dbc1685c4f30e9ff30ebb4609965f421c697c:/prng_gpu.tex
diff --git a/prng_gpu.tex b/prng_gpu.tex
index 11dd246..5b07118 100644
--- a/prng_gpu.tex
+++ b/prng_gpu.tex
@@ -34,7 +34,7 @@
 \newcommand{\alert}[1]{\begin{color}{blue}\textit{#1}\end{color}}
 
-\title{Efficient Generation of Pseudo-Random Bumbers based on Chaotic Iterations
+\title{Efficient Generation of Pseudo-Random Numbers based on Chaotic Iterations
 on GPU}
 \begin{document}
@@ -44,16 +44,99 @@ Guyeux\thanks{Authors in alphabetic order}}
 \maketitle
 
 \begin{abstract}
-This is the abstract
+In this paper we present a new pseudo-random number generator (PRNG) for
+graphics processing units (GPU). This PRNG is based on chaotic iterations and
+is proven to be chaotic in Devaney's formulation. We propose an efficient
+implementation for GPU which passes the {\it BigCrush}, the hardest battery of
+tests of TestU01. Experiments show that this PRNG can generate about 20 billion
+random numbers per second on Tesla C1060 and NVidia GTX 280 cards.
+
+
 \end{abstract}
 
 \section{Introduction}
 
-Interet des itérations chaotiques pour générer des nombre alea\\
-Interet de générer des nombres alea sur GPU
-\alert{RC, un petit state-of-the-art sur les PRNGs sur GPU ?}
-...
-
+Random numbers are used in many scientific applications and simulations. On
+finite state machines, such as computers, it is not possible to generate truly
+random numbers, only pseudo-random numbers. In practice, a good pseudo-random
+number generator (PRNG) needs to satisfy several requirements to be used by
+scientists: it is important to be able to generate pseudo-random numbers
+efficiently, the generation needs to be reproducible, and a PRNG needs to
+satisfy many usual statistical properties. Finally, from our point of view, it
+is essential to prove that a PRNG is chaotic. Concerning the statistical tests,
+TestU01 is the best-known public-domain statistical testing package, so we use
+it for all our PRNGs, especially the {\it BigCrush}, which provides the largest
+series of tests. Concerning the chaotic properties, Devaney~\cite{Devaney}
+proposed a commonly used mathematical formulation of chaotic dynamical systems.
+
+In a previous work~\cite{bgw09:ip} we proposed a new family of chaotic PRNGs
+based on chaotic iterations. We have proven that these PRNGs are chaotic in
+Devaney's sense. In this paper we propose a faster version which is also proven
+to be chaotic.
+
+Although graphics processing units (GPUs) were initially designed to accelerate
+the manipulation of images, they are nowadays commonly used in many scientific
+applications. Therefore, it is important to be able to generate pseudo-random
+numbers inside a GPU when a scientific application runs on it. That is why we
+also provide an efficient PRNG for GPU based on chaotic iterations. Such
+devices allow us to generate almost 20 billion random numbers per second.
+
+In order to establish that our PRNGs are chaotic according to Devaney's
+formulation, we extend what we have proposed in~\cite{guyeux10}. Moreover, we
+define a new distance to measure the disorder in the chaos and we prove some
+interesting properties of this distance.
+
+The rest of this paper is organised as follows. In
+Section~\ref{section:related works} we review some GPU implementations of
+PRNGs.
+Section~\ref{section:BASIC RECALLS} gives some basic recalls on Devaney's
+formulation of chaos and on chaotic iterations. In
+Section~\ref{sec:pseudo-random} the chaos of our PRNGs is proven.
+Section~\ref{sec:efficient prng} presents an efficient implementation of our
+chaotic PRNG on a CPU. Section~\ref{sec:efficient prng gpu} describes the GPU
+implementation of our chaotic PRNG. In Section~\ref{sec:experiments} some
+experiments are presented. Section~\ref{sec:de la relativité du désordre}
+describes the relativity of disorder. In
+Section~\ref{sec: chaos order topology} it is established that chaotic
+iterations can be described by iterations on a real interval. Finally, we give
+a conclusion and some perspectives.
+
+
+\section{Related works on GPU-based PRNGs}
+\label{section:related works}
+In the literature, many authors have worked on defining GPU-based PRNGs. We do
+not aim to be exhaustive; we only present the most significant works from our
+point of view. When authors report the number of random numbers generated per
+second, we mention it. We consider that a million numbers per second
+corresponds to 1MSample/s and that a billion numbers per second corresponds to
+1GSample/s.
+
+In \cite{Pang:2008:cec}, the authors define a PRNG based on cellular automata
+which requires neither high-precision integer arithmetic nor bitwise
+operations. There is no mention of statistical tests, nor any proof that this
+PRNG is chaotic. Concerning the speed of generation, they can generate about
+3.2MSample/s on a GeForce 7800 GTX GPU (which is quite old now).
+
+In \cite{ZRKB10}, the authors propose different versions of efficient GPU PRNGs
+based on Lagged Fibonacci and Hybrid Taus generators. They have used these
+PRNGs for Langevin simulations of biomolecules fully implemented on GPU. The
+performance of the GPU versions is far better than that obtained with a CPU,
+and these PRNGs pass the {\it BigCrush} test of TestU01. There is no mention
+that their PRNGs have any mathematical chaos properties.
+
+
+The authors of~\cite{conf/fpga/ThomasHL09} have studied the implementation of
+some PRNGs on different computing architectures: CPU, field-programmable gate
+array (FPGA), GPU and massively parallel processors. This study is interesting
+because it shows the performance of the same PRNGs on different architectures.
+For example, the FPGA is globally the fastest architecture and it is also the
+most efficient one, because it provides the largest number of generated random
+numbers per joule. Concerning the GPU, the authors can generate between 11 and
+16GSample/s with a GTX 280 GPU. The drawback of this work is that those PRNGs
+only pass the {\it Crush} test, which is easier than the {\it BigCrush} test.
+\newline
+\newline
+To the best of our knowledge, no GPU implementation has been proven to have chaotic properties.
 
 \section{Basic Recalls}
 \label{section:BASIC RECALLS}
@@ -280,7 +363,7 @@ $\mathds{B}^\mathsf{N}$ represents the memory of the computer whereas $\llbracke
 \rrbracket^{\mathds{N}}$ is its input stream (the seeds, for instance).
 
 \section{Application to Pseudo-Randomness}
-
+\label{sec:pseudo-random}
 
 \subsection{A First Pseudo-Random Number Generator}
 
 We have proposed in~\cite{bgw09:ip} a new family of generators that receives
@@ -389,6 +472,7 @@ to the following discrete dynamical system in chaotic iterations:
   x_i^{n-1} &  \text{ if  } i \notin \mathcal{S}^n \\
   \left(f(x^{n-1})\right)_{S^n} & \text{ if  }i \in \mathcal{S}^n.
 \end{array}\right.
+\label{eq:generalIC}
 \end{equation}
 where $f$ is the vectorial negation and $\forall n \in \mathds{N}$,
 $\mathcal{S}^n \subset \llbracket 1, \mathsf{N} \rrbracket$ is such that
@@ -409,7 +493,7 @@ use of more general chaotic iterations to generate pseudo-random numbers
 faster, does not deflate their topological chaos properties.
 
 \subsection{Proofs of Chaos of the General Formulation of the Chaotic Iterations}
-
+\label{deuxième def}
 Let us consider the discrete dynamical systems in chaotic iterations having
 the general form:
 
@@ -623,15 +707,33 @@ claimed in the lemma.
 
 We can now prove the Theorem~\ref{t:chaos des general}...
 
 \begin{proof}[Theorem~\ref{t:chaos des general}]
-  On the one hand, strong transitivity implies transitivity. On the other hand,
-the regularity is exactly Lemma~\ref{strongTrans} with $Y=X$. As the sensitivity
-to the initial condition is implied by these two properties, we thus have
-the theorem.
+Firstly, strong transitivity implies transitivity.
+
+Let $(S,E) \in\mathcal{X}$ and $\varepsilon >0$. To
+prove that $G_f$ is regular, it is sufficient to prove that
+there exists a strategy $\tilde S$ such that the distance between
+$(\tilde S,E)$ and $(S,E)$ is less than $\varepsilon$, and such that
+$(\tilde S,E)$ is a periodic point.
+
+Let $t_1=\lfloor-\log_{10}(\varepsilon)\rfloor$, and let $E'$ be the
+configuration that we obtain from $(S,E)$ after $t_1$ iterations of
+$G_f$. As $G_f$ is strongly transitive, there exists a strategy $S'$
+and $t_2\in\mathds{N}$ such
+that $E$ is reached from $(S',E')$ after $t_2$ iterations of $G_f$.
+
+Consider the strategy $\tilde S$ that alternates the first $t_1$ terms
+of $S$ and the first $t_2$ terms of $S'$: $$\tilde
+S=(S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots).$$ It
+is clear that $(\tilde S,E)$ is reached again from $(\tilde S,E)$ after
+$t_1+t_2$ iterations of $G_f$. So $(\tilde S,E)$ is a periodic
+point. Since $\tilde S_t=S_t$ for $t<t_1$, the distance between $(\tilde S,E)$
+and $(S,E)$ is less than $\varepsilon$, so $G_f$ is regular. As the sensitivity
+to the initial conditions is implied by transitivity and regularity, we thus
+have the theorem.
-%% x = x\textasciicircum (unsigned int)(t3$>>$32);\\
-%% x = x\textasciicircum (unsigned int)t2;\\
-%% x = x\textasciicircum (unsigned int)(t1$>>$32);\\
-%% x = x\textasciicircum (unsigned int)t3;\\
-%% return x;\\
-%% \}
-%% \end{minipage}
-%% }
-%% \end{center}
-%% \caption{sequential Chaotic Iteration PRNG}
-%% \label{algo:seqCIprng}
-%% \end{figure}
+
 
 
@@ -707,19 +787,19 @@ unsigned int CIprng() {
 
 In listing~\ref{algo:seqCIprng} a sequential version of our chaotic iterations
-based PRNG is presented. The xor operator is represented by
-\textasciicircum. This function uses three classical 64-bits PRNG: the
-\texttt{xorshift}, the \texttt{xor128} and the \texttt{xorwow}. In the
-following, we call them xor-like PRNGSs. These three PRNGs are presented
-in~\cite{Marsaglia2003}. As each xor-like PRNG used works with 64-bits and as
-our PRNG works with 32-bits, the use of \texttt{(unsigned int)} selects the 32
-least significant bits whereas \texttt{(unsigned int)(t3$>>$32)} selects the 32
-most significants bits of the variable \texttt{t}. So to produce a random
-number realizes 6 xor operations with 6 32-bits numbers produced by 3 64-bits
-PRNG. This version successes the BigCrush of the TestU01 battery [P. L’ecuyer
- and R. Simard. Testu01].
-
-\section{Efficient prng based on chaotic iterations on GPU}
+based PRNG is presented. The xor operator is represented by \textasciicircum.
+This function uses three classical 64-bit PRNGs: the \texttt{xorshift}, the
+\texttt{xor128} and the \texttt{xorwow}. In the following, we call them
+xor-like PRNGs. These three PRNGs are presented in~\cite{Marsaglia2003}. As
+each xor-like PRNG works with 64 bits whereas our PRNG works with 32 bits, the
+use of \texttt{(unsigned int)} selects the 32 least significant bits whereas
+\texttt{(unsigned int)(t3$>>$32)} selects the 32 most significant bits of the
+variable \texttt{t3}. So producing a random number requires 6 xor operations
+with six 32-bit numbers produced by the three 64-bit PRNGs. This version passes
+the {\it BigCrush} battery of TestU01~\cite{LEcuyerS07}.
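+
+For illustration, a minimal self-contained C sketch of such a combination is
+given below. It is not the actual code of Listing~\ref{algo:seqCIprng}: the
+three distinct xor-like generators are replaced by three differently seeded
+64-bit xorshift instances, and the seed values as well as the interleaving of
+the six xor operations are illustrative only.
+
+\begin{verbatim}
+#include <stdio.h>
+
+/* Stand-in for the three xor-like generators: the real generator combines
+   xorshift, xor128 and xorwow; here three differently seeded 64-bit xorshift
+   instances keep the sketch self-contained. */
+static unsigned long long xorshift64(unsigned long long *s)
+{
+    unsigned long long v = *s;
+    v ^= v << 13;
+    v ^= v >> 7;
+    v ^= v << 17;
+    return *s = v;
+}
+
+/* One 32-bit output of the chaotic-iterations PRNG: the internal state x is
+   xored with the low and high 32-bit halves of three 64-bit numbers, i.e.
+   six xor operations per produced value. */
+static unsigned int ci_prng(void)
+{
+    static unsigned int x = 123123u;                 /* illustrative seed */
+    static unsigned long long s1 = 88172645463325252ULL,
+                              s2 = 362436069362436069ULL,
+                              s3 = 521288629521288629ULL;
+    unsigned long long t1 = xorshift64(&s1);
+    unsigned long long t2 = xorshift64(&s2);
+    unsigned long long t3 = xorshift64(&s3);
+    x ^= (unsigned int)t1;  x ^= (unsigned int)(t1 >> 32);
+    x ^= (unsigned int)t2;  x ^= (unsigned int)(t2 >> 32);
+    x ^= (unsigned int)t3;  x ^= (unsigned int)(t3 >> 32);
+    return x;
+}
+
+int main(void)
+{
+    for (int i = 0; i < 4; i++)
+        printf("%u\n", ci_prng());
+    return 0;
+}
+\end{verbatim}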
+
+\section{Efficient PRNGs based on chaotic iterations on GPU}
+\label{sec:efficient prng gpu}
 
 In order to benefit from computing power of GPU, a program needs to define
 independent blocks of threads which can be computed simultaneously. In general,
@@ -727,8 +807,8 @@ the larger the number of threads is, the more local memory is used and the less
 branching instructions are used (if, while, ...), the better performance is
 obtained on GPU. So with algorithm \ref{algo:seqCIprng} presented in the
 previous section, it is possible to build a similar program which computes PRNG
-on GPU. In the CUDA [ref] environment, threads have a local identificator,
-called \texttt{ThreadIdx} relative to the block containing them.
+on GPU. In the CUDA~\cite{Nvid10} environment, threads have a local
+identifier, called \texttt{ThreadIdx}, relative to the block containing them.
 
 
 \subsection{Naive version for GPU}
 
@@ -738,14 +818,14 @@ The principe consists in assigning the computation of a PRNG as in sequential
 to each thread of the GPU. Of course, it is essential that the three xor-like
 PRNGs used for our computation have different parameters. So we chose them
 randomly with another PRNG. As the initialisation is performed by the CPU, we
-have chosen to use the ISAAC PRNG [ref] to initalize all the parameters for the
-GPU version of our PRNG. The implementation of the three xor-like PRNGs is
-straightforward as soon as their parameters have been allocated in the GPU
-memory. Each xor-like PRNGs used works with an internal number $x$ which keeps
-the last generated random numbers. Other internal variables are also used by the
-xor-like PRNGs. More precisely, the implementation of the xor128, the xorshift
-and the xorwow respectively require 4, 5 and 6 unsigned long as internal
-variables.
+have chosen to use the ISAAC PRNG~\cite{Jenkins96} to initialize all the
+parameters for the GPU version of our PRNG. The implementation of the three
+xor-like PRNGs is straightforward as soon as their parameters have been
+allocated in the GPU memory. Each xor-like PRNG works with an internal number
+$x$ which keeps the last generated random number. Other internal variables are
+also used by the xor-like PRNGs. More precisely, the implementations of the
+xor128, the xorshift and the xorwow respectively require 4, 5, and 6 unsigned
+longs as internal variables.
 
 
 \begin{algorithm}
@@ -793,7 +873,7 @@ for all the differents nodes involves in the computation.
 
 As GPU cards using CUDA have shared memory between threads of the same block,
 it is possible to use this feature in order to simplify the previous algorithm,
-i.e. using less than 3 xor-like PRNGs. The solution consists in computing only
+i.e., using less than 3 xor-like PRNGs. The solution consists in computing only
 one xor-like PRNG by thread, saving it into shared memory and using the results
 of some other threads in the same block of threads.
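+
+A possible CUDA rendering of this idea is sketched below. Kernel and variable
+names, the block size, the permutation size and the stand-in xor-like function
+are illustrative only; the complete procedure actually used is given in
+Algorithm~\ref{algo:gpu_kernel2}.
+
+\begin{verbatim}
+#define BLOCK_SIZE 512   /* threads per block (illustrative)            */
+#define PERM_SIZE   32   /* size of the permutation arrays (illustrative) */
+
+/* Stand-in for the 64-bit xor-like PRNG owned by each thread. */
+__device__ unsigned long long xorlike(unsigned long long *s)
+{
+    unsigned long long v = *s;
+    v ^= v << 13; v ^= v >> 7; v ^= v << 17;
+    return *s = v;
+}
+
+/* Assumes blockDim.x == BLOCK_SIZE and BLOCK_SIZE is a multiple of PERM_SIZE. */
+__global__ void ci_prng_improved(unsigned long long *state, /* one per thread */
+                                 unsigned int *x_state,     /* internal x     */
+                                 const int *perm1, const int *perm2,
+                                 unsigned int *out, int n)
+{
+    __shared__ unsigned int shared_mem[BLOCK_SIZE];
+    int tid    = blockIdx.x * blockDim.x + threadIdx.x;
+    int offset = threadIdx.x % PERM_SIZE;
+    int o1     = threadIdx.x - offset + perm1[offset];
+    int o2     = threadIdx.x - offset + perm2[offset];
+    unsigned int x = x_state[tid];
+
+    shared_mem[threadIdx.x] = x;   /* initial value published to the block */
+    __syncthreads();
+
+    for (int i = 0; i < n; i++) {
+        /* one xor-like PRNG per thread, both 32-bit halves are used */
+        unsigned long long t = xorlike(&state[tid]);
+        unsigned int v = (unsigned int)t ^ (unsigned int)(t >> 32);
+        v ^= shared_mem[o1] ^ shared_mem[o2];  /* values of two other threads */
+        shared_mem[threadIdx.x] = v;           /* publish the new value       */
+        x ^= v;                                /* update the internal state   */
+        out[(size_t)n * tid + i] = x;          /* store the produced number   */
+    }
+    x_state[tid] = x;
+}
+\end{verbatim}
+
+As in Algorithm~\ref{algo:gpu_kernel2}, no synchronization is performed inside
+the loop, so each thread reads whatever values its two partner threads have
+published last.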
 In order to define which thread uses the result of which other one, we can use
 a permutation array which
@@ -805,7 +885,7 @@ which represent the indexes of the other threads for which the results are used
 by the current thread. In the algorithm, we consider that a 64-bits xor-like
 PRNG is used, that is why both 32-bits parts are used.
 
-This version also succeed to the BigCrush batteries of tests.
+This version also passes the {\it BigCrush} battery of tests.
 
 
 \begin{algorithm}
@@ -816,17 +896,15 @@ tab1, tab2: Arrays containing permutations of size permutation\_size\;}
 \KwOut{NewNb: array containing random numbers in global memory}
 \If{threadId is concerned} {
-  retrieve data from InternalVarXorLikeArray[threadId] in local variables\;
+  retrieve data from InternalVarXorLikeArray[threadId] in local variables including shared memory\;
   offset = threadIdx\%permutation\_size\;
   o1 = threadIdx-offset+tab1[offset]\;
   o2 = threadIdx-offset+tab2[offset]\;
   \For{i=1 to n} {
     t=xor-like()\;
-    shared\_mem[threadId]=(unsigned int)t\;
-    x = x $\oplus$ (unsigned int) t\;
-    x = x $\oplus$ (unsigned int) (t>>32)\;
-    x = x $\oplus$ shared[o1]\;
-    x = x $\oplus$ shared[o2]\;
+    t=t$\oplus$shared\_mem[o1]$\oplus$shared\_mem[o2]\;
+    shared\_mem[threadId]=t\;
+    x = x $\oplus$ t\;
 
     store the new PRNG in NewNb[NumThreads*threadId+i]\;
   }
@@ -838,32 +916,74 @@ version}
 \label{algo:gpu_kernel2}
 \end{algorithm}
 
-
+\subsection{Theoretical Evaluation of the Improved Version}
+
+A run of Algorithm~\ref{algo:gpu_kernel2} consists of three operations of the
+form of Equation~\ref{equation Oplus}, which is equivalent to the iterative
+system of Eq.~\ref{eq:generalIC}. That is, three iterations of the general
+chaotic iterations are realized between two stored values of the PRNG.
+To be certain that we are in the framework of Theorem~\ref{t:chaos des general},
+we must guarantee that this dynamical system iterates on the space
+$\mathcal{X} = \mathcal{P}\left(\llbracket 1, \mathsf{N} \rrbracket\right)^\mathds{N}\times\mathds{B}^\mathsf{N}$.
+The left term $x$ obviously belongs to $\mathds{B}^\mathsf{N}$.
+To prevent any flaw in the chaotic properties, we must check that each right
+term, corresponding to a term of the strategies, can be equal to any integer
+of $\llbracket 1, \mathsf{N} \rrbracket$.
+
+Such a result is obvious for the first two lines: for the xor-like(), all the
+integers belonging to its interval of definition can occur at each iteration.
+It can easily be established for the last two lines by an immediate
+mathematical induction.
+
+Thus Algorithm~\ref{algo:gpu_kernel2} is a concrete realization of the general
+chaotic iterations presented previously, and for this reason, it satisfies
+Devaney's formulation of a chaotic behavior.
 
 \section{Experiments}
-
-Differents experiments have been performed in order to measure the generation
-speed.
-\begin{figure}[t]
+\label{sec:experiments}
+
+Different experiments have been performed in order to measure the generation
+speed. We have used a computer equipped with a Tesla C1060 NVidia GPU card and
+an Intel Xeon E5530 clocked at 2.40 GHz, and another one equipped with a less
+powerful CPU and a GeForce GTX 280. Both cards have 240 cores.
+
+In Figure~\ref{fig:time_gpu} we compare the number of random numbers generated
+per second. The xor-like PRNG is the xor64 generator described
+in~\cite{Marsaglia2003}. In order to obtain optimal performance, we removed the
+storage of the random numbers in the GPU memory.
+This step is time consuming and slows down random number generation. Moreover,
+for applications that consume random numbers directly as they are generated,
+this storage is completely useless. In this figure we can see that when the
+number of threads is between approximately 30,000 and 5 million, the number of
+random numbers generated per second is almost constant. With the naive version,
+it is between 2.5 and 3GSample/s. With the optimized version, it is
+approximately equal to 20GSample/s. Finally, we can remark that both GPU cards
+give quite similar results. In practice, the Tesla C1060 has more memory than
+the GTX 280, and this memory should be of better quality.
+
+\begin{figure}[htbp]
 \begin{center}
 \includegraphics[scale=.7]{curve_time_gpu.pdf}
 \end{center}
 \caption{Number of random numbers generated per second}
-\label{fig:time_naive_gpu}
+\label{fig:time_gpu}
 \end{figure}
 
-First of all we have compared the time to generate X random numbers with both
-the CPU version and the GPU version.
+In comparison, Listing~\ref{algo:seqCIprng} allows us to generate about
+138MSample/s with only one core of the Xeon E5530.
+
 
-Faire une courbe du nombre de random en fonction du nombre de threads,
-éventuellement en fonction du nombres de threads par bloc.
 
 
 \section{The relativity of disorder}
 \label{sec:de la relativité du désordre}
 
+In the next two sections, we investigate the impact of the choices that have
+led to the definitions of measures in Sections~\ref{sec:chaotic iterations}
+and~\ref{deuxième def}.
+
 \subsection{Impact of the topology's finenesse}
 
 Let us firstly introduce the following notations.
 
@@ -969,7 +1089,7 @@ sets $A$ and $B$, an integer $n$ must satisfy $f^{(n)}(A) \cap B \neq
 
 
 \section{Chaos on the order topology}
-
+\label{sec: chaos order topology}
 
 \subsection{The phase space is an interval of the real line}
 
 \subsubsection{Toward a topological semiconjugacy}