X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/prng_gpu.git/blobdiff_plain/26aa4e0cf2939580eae07e8bbad140d527f47234..cdea906ee17a7138b88a76de59946faec4d948ce:/prng_gpu.tex

diff --git a/prng_gpu.tex b/prng_gpu.tex
index 0279f03..5ebe0ef 100644
--- a/prng_gpu.tex
+++ b/prng_gpu.tex
@@ -884,16 +884,19 @@ stringent BigCrush battery of tests~\cite{LEcuyerS07}.
 \section{Efficient PRNGs based on Chaotic Iterations on GPU}
 \label{sec:efficient PRNG gpu}
 
-In order to take benefits from the computing power of GPU, a program needs to have
-independent blocks of threads that can be computed simultaneously. In general,
-the larger the number of threads is, the more local memory is used, and the less
-branching instructions are used (if, while, ...), the better the performances on GPU is.
-Obviously, having these requirements in mind, it is possible to build a program similar to
-the one presented in Algorithm \ref{algo:seqCIPRNG}, which computes pseudorandom numbers
-on GPU.
-To do so, we must firstly recall that in
- the CUDA~\cite{Nvid10} environment, threads have a local
-identifier called \texttt{ThreadIdx}, which is relative to the block containing them.
+In order to benefit from the computing power of GPUs, a program needs
+independent blocks of threads that can be computed simultaneously. In
+general, the larger the number of threads is, the more local memory is
+used, and the fewer branching instructions (if, while, ...) are used,
+the better the performance on GPU is. With these requirements in mind,
+it is possible to build a program similar to the one presented in
+Algorithm~\ref{algo:seqCIPRNG}, which computes pseudorandom numbers on
+GPU. To do so, we must first recall that in the CUDA~\cite{Nvid10}
+environment, threads have a local identifier called
+\texttt{ThreadIdx}, which is relative to the block containing
+them. With CUDA, the parts of the code that are executed by the GPU
+are called {\it kernels}.
 
 \subsection{Naive Version for GPU}
 
@@ -965,13 +968,13 @@ is possible to use this feature in order to simplify the previous
 algorithm, i.e., to use less than 3 xor-like PRNGs. The solution consists in
 computing only one xor-like PRNG per thread, saving it into the shared memory,
 and then using the results of some other threads in the same block of
 threads. In order to define which
-thread uses the result of which other one, we can use a permutation array that
-contains the indexes of all threads and for which a permutation has been
+thread uses the result of which other one, we can use a combination array that
+contains the indexes of all the threads and on which a combination has been
 performed.
 
-In Algorithm~\ref{algo:gpu_kernel2}, two permutations arrays are used.
+In Algorithm~\ref{algo:gpu_kernel2}, two combination arrays are used.
 The variable \texttt{offset} is computed using the value of
-\texttt{permutation\_size}. Then we can compute \texttt{o1} and \texttt{o2}
+\texttt{combination\_size}. Then we can compute \texttt{o1} and \texttt{o2},
 which represent the indexes of the other threads whose results are used by the
 current one. In this algorithm, we consider that a 64-bit xor-like PRNG has
 been chosen, and so its two 32-bit parts are used.
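+As an illustration, the following CUDA sketch shows one possible way
+to implement this scheme, which is detailed in
+Algorithm~\ref{algo:gpu_kernel2} below. It is only a minimal sketch:
+the stand-in \texttt{xorlike()} device function, the block size of 256
+threads, and the synchronization points are assumptions made for the
+example, not the exact code used for the benchmarks.
+
+\begin{lstlisting}[language=C]
+// Stand-in for the 64-bit xor-like generator (plain xorshift64 here);
+// the state s must be nonzero.
+__device__ unsigned int xorlike(unsigned long long *s) {
+  *s ^= *s << 13;  *s ^= *s >> 7;  *s ^= *s << 17;
+  return (unsigned int)(*s);
+}
+
+// Minimal sketch of the improved kernel: each thread owns one xor-like
+// generator and mixes its output with the values produced by two other
+// threads of the same block, chosen through the combination arrays.
+__global__ void ci_prng_improved(unsigned long long *xorlike_state,
+                                 unsigned int *x_state,
+                                 const int *tab1, const int *tab2,
+                                 int combination_size, int n,
+                                 unsigned int *NewNb, int NumThreads)
+{
+  __shared__ unsigned int shmem[256];          // one cell per thread (block size 256 assumed)
+  int tid    = blockIdx.x * blockDim.x + threadIdx.x;
+  int offset = threadIdx.x % combination_size;
+  int o1     = threadIdx.x - offset + tab1[offset];
+  int o2     = threadIdx.x - offset + tab2[offset];
+  unsigned long long s = xorlike_state[tid];   // state of the xor-like PRNG
+  unsigned int x       = x_state[tid];         // state of the chaotic iterations
+  shmem[threadIdx.x]   = (unsigned int)s;      // give the first reads a defined value
+  __syncthreads();
+
+  for (int i = 0; i < n; i++) {
+    unsigned int t = xorlike(&s);              // one 32-bit xor-like output
+    t ^= shmem[o1] ^ shmem[o2];                // use the results of two other threads
+    __syncthreads();                           // all reads done before overwriting
+    shmem[threadIdx.x] = t;
+    __syncthreads();                           // all writes visible for the next round
+    x ^= t;                                    // chaotic iteration on x
+    NewNb[NumThreads * tid + i] = x;           // store, following the algorithm's indexing
+  }
+  xorlike_state[tid] = s;                      // save the internal states
+  x_state[tid]       = x;
+}
+\end{lstlisting}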
@@ -983,19 +986,19 @@ This version also can pass the whole {\it BigCrush} battery of tests.
 \KwIn{InternalVarXorLikeArray: array with the internal variables of 1 xor-like PRNG in global memory\;
 NumThreads: Number of threads\;
-tab1, tab2: Arrays containing permutations of size permutation\_size\;}
+tab1, tab2: Arrays containing combinations of size combination\_size\;}
 \KwOut{NewNb: array containing random numbers in global memory}
 \If{threadId is concerned} {
 	retrieve data from InternalVarXorLikeArray[threadId] in local variables including shared memory and x\;
-	offset = threadIdx\%permutation\_size\;
+	offset = threadIdx\%combination\_size\;
 	o1 = threadIdx-offset+tab1[offset]\;
 	o2 = threadIdx-offset+tab2[offset]\;
 	\For{i=1 to n} {
 		t=xor-like()\;
-		t=t$\oplus$shmem[o1]$\oplus$shmem[o2]\;
+		t=t $\hat{ }$ shmem[o1] $\hat{ }$ shmem[o2]\;
 		shmem[threadId]=t\;
-		x = x $\oplus$ t\;
+		x = x $\hat{ }$ t\;
 		store the new PRNG in NewNb[NumThreads*threadId+i]\;
 	}
 
@@ -1079,7 +1082,7 @@ As a comparison, Listing~\ref{algo:seqCIPRNG} leads to the generation of
 
 In Figure~\ref{fig:time_bbs_gpu} we highlight the performance of the optimized
 BBS-based PRNG on GPU. On the Tesla C1060 we
-obtain approximately 1.8GSample/s and on the GTX 280 about 1.6GSample/s, which is
+obtain approximately 700MSample/s and on the GTX 280 about 670MSample/s, which is
 obviously slower than the xorlike-based PRNG on GPU. However, we will
 show in the next sections that this new PRNG has a strong level of
 security, which is necessarily paid by a speed
 
@@ -1232,22 +1235,142 @@ prime numbers need to be congruent to 3 modulus 4.
 
 BBS is very slow and only usable for cryptographic applications.
 
-The modulus operation is the most time consuming operation for
-current GPU cards.
-So in order to obtain quite reasonable performances, it is required
-to use only modulus on 32 bits integer numbers. Consequently $x_n^2$ need to be
-less than $2^{32}$ and the number $M$ need to be less than $2^{16}$. So in
-practice we can choose prime numbers around 256 that are congruent to 3 modulus
-4. With 32 bits numbers, only the 4 least significant bits of $x_n$ can be
-chosen (the maximum number of indistinguishable bits is lesser than or equals to
-$log_2(log_2(x_n))$). So to generate a 32 bits number, we need to use 8 times
-the BBS algorithm with different combinations of $M$.
+The modulus operation is the most time consuming operation on current
+GPU cards. So, in order to obtain reasonable performances, it is
+required to compute the modulus only on 32-bit integer numbers.
+Consequently, $x_n^2$ needs to be less than $2^{32}$ and the number
+$M$ needs to be less than $2^{16}$. So in practice we can choose prime
+numbers around 256 that are congruent to 3 modulo 4. With 32-bit
+numbers, only the 4 least significant bits of $x_n$ can be used (the
+maximum number of indistinguishable bits is less than or equal to
+$\log_2(\log_2(x_n))$). So, to generate a 32-bit number, we need to
+apply the BBS algorithm 8 times with 8 different values of $M$.
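+To make this size constraint concrete, the following sketch shows one
+such BBS step and the packing of the eight 4-bit outputs into a 32-bit
+word. The function names and the packing order are only assumptions
+made for the illustration; the actual kernel is given in
+Algorithm~\ref{algo:bbs_gpu} below.
+
+\begin{lstlisting}[language=C]
+// One BBS step with a modulus M = p*q < 2^16 (p and q congruent to 3
+// modulo 4): since x < M < 2^16, the square x*x always fits in an
+// unsigned 32-bit integer, so the costly modulus remains a 32-bit
+// operation.
+__device__ unsigned int bbs_step(unsigned int *x, unsigned int M) {
+  *x = ((*x) * (*x)) % M;   // 32-bit modular squaring
+  return (*x) & 15u;        // keep only the 4 least significant bits
+}
+
+// Packing eight such 4-bit outputs, produced by eight independent BBS
+// states with eight different moduli, into one 32-bit number.
+__device__ unsigned int bbs_32bits(unsigned int bbs[8],
+                                   const unsigned int M[8]) {
+  unsigned int t = 0;
+  for (int i = 0; i < 8; i++)
+    t = (t << 4) | bbs_step(&bbs[i], M[i]);
+  return t;
+}
+\end{lstlisting}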
+However, this approach is not sufficient to pass all the tests of
+TestU01, because choosing such small values of $M$ for the BBS leads
+to a small period. So, in order to add randomness, we proceed with the
+following modifications.
+\begin{itemize}
+\item
+First, we define 16 combination arrays instead of 2 (as described in
+Algorithm~\ref{algo:gpu_kernel2}), but only 2 of them are used at each
+call of the PRNG kernel. In practice, the selection of which
+combination arrays are used is different for all the threads and is
+determined by using the last three bits of two internal variables used
+by BBS. This approach adds more randomness. In
+Algorithm~\ref{algo:bbs_gpu}, the character \& performs the bitwise
+AND, so AND-ing a number with 7 keeps its last 3 bits, which provides
+a number between 0 and 7.
+\item
+Second, after the generation of the 8 BBS numbers for each thread, we
+have a 32-bit number whose period is possibly quite small. So, to add
+randomness, we generate 4 more BBS numbers, which allows us to shift
+the 32-bit number and to add up to 6 new bits. This part is described
+in Algorithm~\ref{algo:bbs_gpu}. In practice, if we call {\it
+strategy} the number being built (the variable t in
+Algorithm~\ref{algo:bbs_gpu}), the last 2 bits of the first new BBS
+number are used to perform a left shift of at most 3 bits. The last 3
+bits of the second new BBS number are added to the strategy regardless
+of the value of the first left shift. The third and the fourth new BBS
+numbers are used similarly to apply a new left shift and to add 3 new
+bits.
+\item
+Finally, as we use 8 BBS numbers for each thread, these numbers are
+stored at the end of the kernel using a rotation: the internal
+variable for BBS number 1 is stored in place 2, the internal variable
+for BBS number 2 is stored in place 3, ..., and the internal variable
+for BBS number 8 is stored in place 1.
+\end{itemize}
+
+
+\begin{algorithm}
+
+\KwIn{InternalVarBBSArray: array with the internal variables of the 8 BBS
+in global memory\;
+NumThreads: Number of threads\;
+tab: 2D array containing 16 combinations (in the first dimension) of size combination\_size (in the second dimension)\;}
+
+\KwOut{NewNb: array containing random numbers in global memory}
+\If{threadId is concerned} {
+	retrieve data from InternalVarBBSArray[threadId] in local variables including shared memory and x\;
+	we consider that bbs1 ... bbs8 represent the internal states of the 8 BBS numbers\;
+	offset = threadIdx\%combination\_size\;
+	o1 = threadIdx-offset+tab[bbs1\&7][offset]\;
+	o2 = threadIdx-offset+tab[8+bbs2\&7][offset]\;
+	\For{i=1 to n} {
+		t<<=4\;
+		t|=BBS1(bbs1)\&15\;
+		...\;
+		t<<=4\;
+		t|=BBS8(bbs8)\&15\;
+		//two new shifts\;
+		t<<=BBS3(bbs3)\&3\;
+		t|=BBS1(bbs1)\&7\;
+		t<<=BBS7(bbs7)\&3\;
+		t|=BBS2(bbs2)\&7\;
+		t=t $\hat{ }$ shmem[o1] $\hat{ }$ shmem[o2]\;
+		shmem[threadId]=t\;
+		x = x $\hat{ }$ t\;
+
+		store the new PRNG in NewNb[NumThreads*threadId+i]\;
+	}
+	store the internal variables in InternalVarBBSArray[threadId] using a rotation\;
+}
+
+\caption{main kernel for the BBS-based PRNG on GPU}
+\label{algo:bbs_gpu}
+\end{algorithm}
+
+In Algorithm~\ref{algo:bbs_gpu}, t<<=4 performs a left shift of 4 bits
+on the variable t and stores the result in t, while BBS1(bbs1)\&15
+selects the last four bits of the result of BBS1. It should be noticed
+that, for the two new shifts, we arbitrarily use 4 of the BBSs that
+have previously been used.
+
+
+
+\subsection{A Cryptographically Secure and Chaotic Asymmetric Cryptosystem}
+
+\subsubsection{Recall of the Blum-Goldwasser Probabilistic Cryptosystem}
+
+The Blum-Goldwasser cryptosystem is a cryptographically secure
+asymmetric key encryption algorithm proposed in
+1984~\cite{Blum:1985:EPP:19478.19501}. The encryption algorithm
+implements an XOR-based stream cipher using the BBS PRNG in order to
+generate the keystream. Decryption is done by obtaining the initial
+seed thanks to the final state of the BBS generator and the secret
+key, thus leading to the reconstruction of the keystream.
+
+The key generation consists in generating two prime numbers $(p,q)$,
+randomly and independently of each other, that are congruent to 3 mod
+4, and in computing the modulus $N=pq$. The public key is $N$, whereas
+the secret key is the factorization $(p,q)$.
+
+
+Suppose Bob wishes to send a string $m=(m_0, \dots, m_{L-1})$ of $L$ bits to Alice:
+\begin{enumerate}
+\item Bob picks an integer $r$ randomly in the interval $[1,N]$ and computes $x_0 = r^2~mod~N$.
+\item He uses the BBS to generate the keystream of $L$ pseudorandom bits $(b_0, \dots, b_{L-1})$, as follows:
+\begin{itemize}
+\item $i=0$.
+\item While $i \leqslant L-1$:
+\begin{itemize}
+\item Set $b_i$ equal to the least-significant\footnote{BBS can securely output up to $O(\log \log N)$ of the least-significant bits of $x_i$ during each round.} bit of $x_i$,
+\item $i=i+1$,
+\item $x_i = (x_{i-1})^2~mod~N$.
+\end{itemize}
+\end{itemize}
+\item The ciphertext is computed by XORing the plaintext bits $m$ with the keystream: $c = (c_0, \dots, c_{L-1}) = m \oplus b$.
+\end{enumerate}
+The complete ciphertext is $(c, y)$, where $y=x_{0}^{2^{L}}~mod~N$.
 
-Currently this PRNG does not succeed to pass all the tests of TestU01.
+When Alice receives $(c, y)$, she can recover $m$ as follows:
+\begin{enumerate}
+\item Using the secret key $(p,q)$, she computes $r_p = y^{((p+1)/4)^{L}}~mod~p$ and $r_q = y^{((q+1)/4)^{L}}~mod~q$.
+\item The initial seed can be obtained using the following procedure: $x_0=q(q^{-1}~{mod}~p)r_p + p(p^{-1}~{mod}~q)r_q~{mod}~N$.
+\item Recompute the bit-vector $b$ by using BBS and $x_0$.
+\item Compute finally the plaintext by XORing the keystream with the ciphertext: $m = c \oplus b$.
+\end{enumerate}
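+As a toy illustration of this scheme (with parameters that are of
+course far too small to be secure), take $p=7$ and $q=11$ (both
+congruent to 3 modulo 4), hence $N=77$, and suppose Bob wants to send
+$m=1010$ (so $L=4$):
+\begin{itemize}
+\item With $r=3$, Bob computes $x_0 = 3^2~mod~77 = 9$. The BBS iterates
+are $x_1 = 9^2~mod~77 = 4$, $x_2 = 16$, and $x_3 = 25$, so the
+keystream made of their least significant bits is $b = 1001$.
+\item The ciphertext is $c = m \oplus b = 1010 \oplus 1001 = 0011$,
+together with $y = x_0^{2^4}~mod~77 = 25^2~mod~77 = 9$.
+\item Alice computes $r_p = 9^{((7+1)/4)^4}~mod~7 = 9^{16}~mod~7 = 2$
+and $r_q = 9^{((11+1)/4)^4}~mod~11 = 9^{81}~mod~11 = 9$. Since
+$11^{-1}~mod~7 = 2$ and $7^{-1}~mod~11 = 8$, she recovers
+$x_0 = (11 \times 2 \times 2 + 7 \times 8 \times 9)~mod~77 = 548~mod~77 = 9$,
+regenerates $b = 1001$, and obtains $m = c \oplus b = 1010$.
+\end{itemize}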
 
-\subsection{A Secure Asymetric Cryptosystem}
+\subsubsection{Proposal of a new Asymmetric Cryptosystem Adapted from Blum-Goldwasser}
 
 
@@ -1263,8 +1386,7 @@ generate a huge number of pseudorandom numbers per second (about 20Gsamples/s).
 This PRNG succeeds in passing the hardest batteries of TestU01.
 In future work we plan to extend this work to parallel PRNGs for clusters or
-grid computing. We also plan to improve the BBS version in order to succeed all
-the tests of TestU01.
+grid computing.