\section{Efficient PRNG based on chaotic iterations}

In order to implement efficiently a PRNG based on chaotic iterations, it is
possible to improve on previous works [ref]. One solution consists in
considering that the strategy contains all the bits on which the negation
must be applied. Then, in order to apply the negation on these bits, we can
simply apply the xor operator between the current number and the strategy.
The strategy itself is obtained with a classical PRNG.

Here is an example with 16-bit numbers showing how the bit operations are
applied. Suppose that $x$ and the strategy $S^i$ are written in binary.
Then the following table shows the result of $x \oplus S^i$.
$$
\begin{array}{|cc|cccccccccccccccc|}
\hline
x &=&1&0&1&1&1&0&1&0&1&0&0&1&0&0&1&0\\
\hline
S^i &=&0&1&1&0&0&1&1&0&1&1&1&0&0&1&1&1\\
\hline
x \oplus S^i&=&1&1&0&1&1&1&0&0&0&1&1&1&0&1&0&1\\
\hline
\end{array}
$$

In Listing~\ref{algo:seqCIprng} a sequential version of our chaotic
iterations based PRNG is presented. The xor operator is represented by
\textasciicircum. This function uses three classical 64-bit PRNGs: the
\texttt{xorshift}, the \texttt{xor128} and the \texttt{xorwow}. In the
following, we call them xor-like PRNGs. These three PRNGs are presented
in~\cite{Marsaglia2003}. As each xor-like PRNG works with 64-bit numbers
whereas our PRNG produces 32-bit numbers, each generated 64-bit value can be
used to obtain two 32-bit numbers.

The larger the number of threads is, the more local memory is used, and the
less branching instructions are used (if, while, ...), the better performance
is obtained on GPU. So, with Algorithm~\ref{algo:seqCIprng} presented in the
previous section, it is possible to build a similar program which computes
pseudorandom numbers on GPU.


\subsection{Naive version}

From the CPU version, it is possible to obtain a quite similar version for
the GPU. The principle consists in assigning to each thread of the GPU the
computation of a PRNG, as in the sequential version. Of course, it is
essential that the three xor-like PRNGs used in each thread have different
parameters from one thread to another. So we choose them randomly with
another PRNG. As the initialization is performed by the CPU, we have chosen
to use the ISAAC PRNG [ref] to initialize all the parameters of the GPU
version of our PRNG.
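To fix ideas, the following sketch illustrates this naive scheme; it is not
our actual kernel, the kernel and variable names are purely illustrative, and
a single 64-bit xorshift stands in for the three xor-like PRNGs. Each thread
owns its own generator state, seeded from the CPU, and mixes every generated
value into its 32-bit internal number with the xor operator.

\begin{verbatim}
#include <stdint.h>

/* One xor-like PRNG (Marsaglia's xorshift64); the actual implementation
   uses three of them (xorshift, xor128 and xorwow). Seeds must be
   nonzero; they are drawn on the CPU, e.g. with ISAAC. */
__device__ uint64_t xorshift64(uint64_t *state) {
  uint64_t s = *state;
  s ^= s << 13;
  s ^= s >> 7;
  s ^= s << 17;
  *state = s;
  return s;
}

/* Each thread produces `per_thread` 32-bit numbers. `states` and `x`
   hold the per-thread internal variables, allocated and initialized
   from the CPU; `out` is indexed as out[tid * per_thread + i]. */
__global__ void ci_prng_naive(uint64_t *states, uint32_t *x,
                              uint32_t *out, int per_thread) {
  int tid = blockIdx.x * blockDim.x + threadIdx.x;
  uint64_t s = states[tid];
  uint32_t v = x[tid];
  for (int i = 0; i < per_thread; i++) {
    uint64_t t = xorshift64(&s); /* strategy given by the xor-like PRNG */
    v ^= (uint32_t)t;            /* xor the current number with both    */
    v ^= (uint32_t)(t >> 32);    /* 32-bit halves of the strategy       */
    out[tid * per_thread + i] = v;
  }
  states[tid] = s;               /* save the state for the next call */
  x[tid] = v;
}
\end{verbatim}

In the real version, each thread embeds the three xor-like PRNGs with the
parameters chosen for it during the initialization, as explained above.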
The implementation of the three xor-like PRNGs is straightforward as soon as
their parameters have been allocated in the GPU memory. Each xor-like PRNG
works with an internal number $x$ which keeps the last generated random
number. Other internal variables are also used by these PRNGs.

The total number of 32-bit values used by the internal variables of the
xor-like PRNGs\footnote{we multiply this number by $2$ in order to count
32-bit numbers} and by the random numbers of our PRNG is equal to
$100,000\times ((4+5+6)\times 2+(1+100))=13,100,000$ 32-bit numbers, i.e.
about $52$MB.

All the tests performed to pass the BigCrush battery of TestU01 succeeded.
Different numbers of threads, up to $10$ million, have been tested.

\begin{remark}
Algorithm~\ref{algo:gpu_kernel} has the advantage of manipulating independent
PRNGs, so this version can easily be used on a cluster of computers. The only
thing to ensure is that a single ISAAC PRNG is used for the initialization.
For this, a simple solution consists in using a master node which computes
the initial parameters of all the different nodes involved in the
computation.
\end{remark}

\subsection{Version more suited to GPU}

As the GPU offers a shared memory mechanism between the threads of the same
block, it is possible to use it in order to simplify the previous algorithm,
i.e. to use fewer than $3$ xor-like PRNGs per thread. The solution consists
in having each thread of a block compute a random number and use, through the
shared memory, the random numbers produced by the other threads of the block.

\section{Experiments}

Different experiments have been performed in order to measure the generation
speed.

First of all, we have compared the time needed to generate X random numbers
with both the CPU version and the GPU version.

Produce a curve of the number of random numbers generated as a function of
the number of threads, and possibly as a function of the number of threads
per block.

\section{Conclusion}