+\KwOut{NewNb: array containing random numbers in global memory}
+\If{threadIdx is concerned by the computation} {
+ retrieve data from InternalVarXorLikeArray[threadIdx] into local variables\;
+ \For{i=1 to n} {
+ compute a new random number as in Listing~\ref{algo:seqCIprng}\;
+ store this new random number in NewNb[NumThreads*threadIdx+i]\;
+ }
+ store internal variables in InternalVarXorLikeArray[threadIdx]\;
+}
+
+\caption{Main kernel of the chaotic iterations based PRNG, naive GPU version}
+\label{algo:gpu_kernel}
+\end{algorithm}
+
+Algorithm~\ref{algo:gpu_kernel} presents a naive GPU implementation of our
+PRNG. Depending on the memory available on the GPU and on the number of threads
+used simultaneously, the number of random numbers that a thread can generate
+inside a kernel, i.e., the variable \texttt{n} in
+Algorithm~\ref{algo:gpu_kernel}, is limited. For example, if $100,000$ threads
+are used and if $n=100$\footnote{In fact, we also need to add the initial seed
+(a 32-bit number).}, then the memory required to store the internal variables
+of the xor-like PRNGs\footnote{We multiply this number by $2$ in order to count
+32-bit numbers.} and the random numbers produced by our PRNG is equal to
+$100,000\times ((4+5+6)\times 2+(1+100))=13,100,000$ 32-bit numbers, i.e.,
+about $52$MB.
+
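+A minimal CUDA sketch of the naive kernel of Algorithm~\ref{algo:gpu_kernel},
+assuming a simplified internal state (one 64-bit xorshift seed per thread
+instead of the three xor-like generators of the actual implementation); the
+function names and the interleaved output layout, chosen here as one consistent
+reading of the indexing of Algorithm~\ref{algo:gpu_kernel}, are illustrative
+assumptions.
+\begin{verbatim}
+// Simplified stand-in for the xor-like generators (state must be nonzero).
+__device__ unsigned long long xorshift64(unsigned long long *s) {
+  unsigned long long x = *s;
+  x ^= x << 13;  x ^= x >> 7;  x ^= x << 17;
+  return *s = x;
+}
+
+__global__ void ci_prng_naive(unsigned long long *state,  // internal variables
+                              unsigned int *NewNb,        // NumThreads*n outputs
+                              int NumThreads, int n) {
+  int tid = blockIdx.x * blockDim.x + threadIdx.x;
+  if (tid < NumThreads) {               // thread concerned by the computation
+    unsigned long long s = state[tid];  // retrieve the state into registers
+    unsigned int x = (unsigned int)s;   // stands for the chaotic variable
+    for (int i = 0; i < n; i++) {
+      // the real kernel xors the outputs of three xor-like PRNGs (chaotic
+      // iterations); a single xorshift output is used here as a stand-in
+      x ^= (unsigned int)xorshift64(&s);
+      // interleaved layout: iteration i of all threads is stored contiguously
+      NewNb[(size_t)NumThreads * i + tid] = x;
+    }
+    state[tid] = s;                     // write the state back to global memory
+  }
+}
+\end{verbatim}
+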
+All the tests performed with the BigCrush battery of TestU01 have been
+successfully passed. Various numbers of threads, denoted by \texttt{NumThreads}
+in our algorithm, have been tested, up to $10$ million.
+
+\begin{remark}
+Algorithm~\ref{algo:gpu_kernel} has the advantage of manipulating independent
+PRNGs, so this version is easily usable on a cluster of computers. The only
+point to ensure is to use a single ISAAC PRNG. For this, a simple solution
+consists in using a master node for the initialization, which computes the
+initial parameters for all the different nodes involved in the computation.
+\end{remark}
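+
+As an illustration of this remark, here is a minimal sketch of such a
+master-node initialization, assuming MPI as the message-passing layer (the text
+does not prescribe one); the constant \texttt{PARAMS\_PER\_NODE} and the
+\texttt{next\_param} generator are placeholders, the latter standing in for the
+single ISAAC instance of the master node.
+\begin{verbatim}
+#include <mpi.h>
+#include <stdint.h>
+#include <stdlib.h>
+
+#define PARAMS_PER_NODE 16      /* illustrative: 32-bit parameters per node */
+
+/* stand-in for the single ISAAC PRNG of the master node */
+static uint32_t seed_state = 0x9E3779B9u;
+static uint32_t next_param(void) {
+  seed_state ^= seed_state << 13;
+  seed_state ^= seed_state >> 17;
+  seed_state ^= seed_state << 5;
+  return seed_state;
+}
+
+int main(int argc, char **argv) {
+  int rank, size;
+  MPI_Init(&argc, &argv);
+  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
+  MPI_Comm_size(MPI_COMM_WORLD, &size);
+
+  uint32_t *all = NULL;
+  uint32_t mine[PARAMS_PER_NODE];
+  if (rank == 0) {              /* master node draws every node's parameters */
+    all = malloc((size_t)size * PARAMS_PER_NODE * sizeof *all);
+    for (int i = 0; i < size * PARAMS_PER_NODE; i++) all[i] = next_param();
+  }
+  /* each node receives its own block of initial parameters */
+  MPI_Scatter(all, PARAMS_PER_NODE, MPI_UINT32_T,
+              mine, PARAMS_PER_NODE, MPI_UINT32_T, 0, MPI_COMM_WORLD);
+  /* ... mine[] now seeds the xor-like PRNGs of the local GPU threads ... */
+  if (rank == 0) free(all);
+  MPI_Finalize();
+  return 0;
+}
+\end{verbatim}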
+
+\subsection{Improved version for GPU}
+
+As GPU cards using CUDA have shared memory between the threads of the same
+block, it is possible to use this feature in order to simplify the previous
+algorithm, i.e., to use fewer than 3 xor-like PRNGs per thread. The solution
+consists in computing only one xor-like PRNG per thread, saving its result into
+shared memory, and using the results of some other threads of the same block.
+In order to define which thread uses the result of which other one, we can use
+a permutation array that contains the indexes of all the threads and on which a
+permutation has been performed. In Algorithm~\ref{algo:gpu_kernel2}, two
+permutation arrays are used. The variable \texttt{offset} is computed using the
+value of \texttt{permutation\_size}. Then we can compute \texttt{o1} and
+\texttt{o2}, which are the indexes of the other threads whose results are used
+by the current thread. In this algorithm, we consider that a 64-bit xor-like
+PRNG is used, which is why both of its 32-bit parts are exploited.
+
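+One plausible CUDA realization of this scheme (reusing the \texttt{xorshift64}
+helper of the previous sketch); the exact computation of \texttt{offset},
+\texttt{o1}, and \texttt{o2} is the one given in
+Algorithm~\ref{algo:gpu_kernel2}, so the formulas below, the block size, and
+the assumption that \texttt{permutation\_size} divides the block size are only
+illustrative.
+\begin{verbatim}
+#define BLOCK_SIZE 256   // threads per block (assumed); NumThreads is assumed
+                         // to be a multiple of BLOCK_SIZE
+
+__global__ void ci_prng_shared(unsigned long long *state, unsigned int *NewNb,
+                               int *tab1, int *tab2, int permutation_size,
+                               int NumThreads, int n) {
+  // two 32-bit halves per thread of the block
+  __shared__ unsigned int shmem[2 * BLOCK_SIZE];
+  int t = blockIdx.x * blockDim.x + threadIdx.x;
+  unsigned long long s = state[t];
+  unsigned int x = (unsigned int)s;              // chaotic variable (assumed)
+  // partner threads inside the group of permutation_size threads
+  int offset = threadIdx.x % permutation_size;
+  int base   = threadIdx.x - offset;
+  int o1 = base + tab1[offset];                  // 1st partner (low part used)
+  int o2 = base + tab2[offset];                  // 2nd partner (high part used)
+  for (int i = 0; i < n; i++) {
+    unsigned long long r = xorshift64(&s);       // one xor-like PRNG per thread
+    shmem[threadIdx.x]              = (unsigned int)r;          // low 32 bits
+    shmem[threadIdx.x + blockDim.x] = (unsigned int)(r >> 32);  // high 32 bits
+    __syncthreads();                             // both halves are now visible
+    x ^= shmem[o1] ^ shmem[o2 + blockDim.x];     // combine the partners' parts
+    NewNb[(size_t)NumThreads * i + t] = x;
+    __syncthreads();                             // before overwriting shmem
+  }
+  state[t] = s;                                  // save the state back
+}
+\end{verbatim}
+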
+This version also succeeds in passing the BigCrush battery of tests.
+
+\begin{algorithm}
+
+\KwIn{InternalVarXorLikeArray: array with the internal variables of 1 xor-like PRNG per thread in global memory\;
+NumThreads: Number of threads\;
+tab1, tab2: Arrays containing permutations of size permutation\_size\;}
+