X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/prng_gpu.git/blobdiff_plain/cdea906ee17a7138b88a76de59946faec4d948ce..8b2ff8fffab74015439e592d520567aec9568d61:/prng_gpu.tex?ds=inline

diff --git a/prng_gpu.tex b/prng_gpu.tex
index 5ebe0ef..0a88df5 100644
--- a/prng_gpu.tex
+++ b/prng_gpu.tex
@@ -7,7 +7,7 @@
 \usepackage{amscd}
 \usepackage{moreverb}
 \usepackage{commath}
-\usepackage{algorithm2e}
+\usepackage[ruled,vlined]{algorithm2e}
 \usepackage{listings}
 \usepackage[standard]{ntheorem}
@@ -50,8 +50,9 @@ implementation for GPU that successfully passes the {\it BigCrush} tests, de
battery of tests in TestU01. Experiments show that this PRNG can generate
about 20 billion random numbers per second on Tesla C1060 and NVidia GTX280 cards.
-It is finally established that, under reasonable assumptions, the proposed PRNG can be cryptographically
+It is then established that, under reasonable assumptions, the proposed PRNG can be cryptographically
secure.
+A chaotic version of the Blum-Goldwasser asymmetric key encryption scheme is finally proposed.
\end{abstract}
@@ -80,7 +81,7 @@ sequence. Finally, a small part of the community working in this domain focus
on a third requirement, that is to define chaotic generators.
The main idea is to take benefits from a chaotic dynamical system to obtain a
-generator that is unpredictable, disordered, sensible to its seed, or in other words chaotic.
+generator that is unpredictable, disordered, sensitive to its seed, or in other words chaotic.
Their desire is to map a given chaotic dynamics into a sequence that seems random
and unassailable due to chaos.
However, the chaotic maps used as a pattern are defined in the real line
@@ -131,19 +132,21 @@ numbers inside a GPU when a scientific application runs in it.
This remark motivates our proposal of a chaotic and statistically perfect PRNG for GPU.
Such a device allows us to generate almost 20 billion pseudorandom numbers per second.
-Last, but not least, we show that the proposed post-treatment preserves the
+Furthermore, we show that the proposed post-treatment preserves the
cryptographical security of the inputted PRNG, when the latter has such a property.
+Last, but not least, we propose a rewriting of the Blum-Goldwasser asymmetric
+key encryption protocol using the proposed method.
The remainder of this paper is organized as follows. In Section~\ref{section:related works}
we review some GPU implementations of PRNGs.
Section~\ref{section:BASIC RECALLS} gives some basic recalls
on the well-known Devaney's formulation of chaos, and on an iteration process called
``chaotic iterations'' on which the post-treatment is based.
-Proofs of chaos are given in Section~\ref{sec:pseudorandom}.
+The proposed PRNG and its proof of chaos are given in Section~\ref{sec:pseudorandom}.
Section~\ref{sec:efficient PRNG} presents an efficient
implementation of this chaotic PRNG on a CPU, whereas Section~\ref{sec:efficient PRNG
- gpu} describes the GPU implementation.
+ gpu} describes and evaluates theoretically the GPU implementation.
Such generators are experimented in
Section~\ref{sec:experiments}.
We show in Section~\ref{sec:security analysis} that, if the inputted
@@ -151,7 +154,8 @@ generator is cryptographically
secure, then it is the case too for the
generator provided by the post-treatment.
Such a proof leads to the proposition of a cryptographically secure and
chaotic generator on GPU based on the famous Blum Blum Shub
-in Section~\ref{sec:CSGPU}.
+in Section~\ref{sec:CSGPU}, and to an improvement of the +Blum-Goldwasser protocol in Sect.~\ref{Blum-Goldwasser}. This research work ends by a conclusion section, in which the contribution is summarized and intended future work is presented. @@ -192,7 +196,7 @@ the performance of the same PRNGs on different architectures are compared. FPGA appears as the fastest and the most efficient architecture, providing the fastest number of generated pseudorandom numbers per joule. -However, we can notice that authors can ``only'' generate between 11 and 16GSamples/s +However, we notice that authors can ``only'' generate between 11 and 16GSamples/s with a GTX 280 GPU, which should be compared with the results presented in this document. We can remark too that the PRNGs proposed in~\cite{conf/fpga/ThomasHL09} are only @@ -212,7 +216,10 @@ We can finally remark that, to the best of our knowledge, no GPU implementation \label{section:BASIC RECALLS} This section is devoted to basic definitions and terminologies in the fields of -topological chaos and chaotic iterations. +topological chaos and chaotic iterations. We assume the reader is familiar +with basic notions on topology (see for instance~\cite{Devaney}). + + \subsection{Devaney's Chaotic Dynamical Systems} In the sequel $S^{n}$ denotes the $n^{th}$ term of a sequence $S$ and $V_{i}$ @@ -225,7 +232,7 @@ Consider a topological space $(\mathcal{X},\tau)$ and a continuous function $f : \mathcal{X} \rightarrow \mathcal{X}$. \begin{definition} -$f$ is said to be \emph{topologically transitive} if, for any pair of open sets +The function $f$ is said to be \emph{topologically transitive} if, for any pair of open sets $U,V \subset \mathcal{X}$, there exists $k>0$ such that $f^k(U) \cap V \neq \varnothing$. \end{definition} @@ -244,7 +251,7 @@ necessarily the same period). \begin{definition}[Devaney's formulation of chaos~\cite{Devaney}] -$f$ is said to be \emph{chaotic} on $(\mathcal{X},\tau)$ if $f$ is regular and +The function $f$ is said to be \emph{chaotic} on $(\mathcal{X},\tau)$ if $f$ is regular and topologically transitive. \end{definition} @@ -252,12 +259,12 @@ The chaos property is strongly linked to the notion of ``sensitivity'', defined on a metric space $(\mathcal{X},d)$ by: \begin{definition} -\label{sensitivity} $f$ has \emph{sensitive dependence on initial conditions} +\label{sensitivity} The function $f$ has \emph{sensitive dependence on initial conditions} if there exists $\delta >0$ such that, for any $x\in \mathcal{X}$ and any neighborhood $V$ of $x$, there exist $y\in V$ and $n > 0$ such that $d\left(f^{n}(x), f^{n}(y)\right) >\delta $. -$\delta$ is called the \emph{constant of sensitivity} of $f$. +The constant $\delta$ is called the \emph{constant of sensitivity} of $f$. \end{definition} Indeed, Banks \emph{et al.} have proven in~\cite{Banks92} that when $f$ is @@ -414,8 +421,7 @@ The relation between $\Gamma(f)$ and $G_f$ is clear: there exists a path from $x$ to $x'$ in $\Gamma(f)$ if and only if there exists a strategy $s$ such that the parallel iteration of $G_f$ from the initial point $(s,x)$ reaches the point $x'$. - -We have finally proven in \cite{bcgr11:ip} that, +We have then proven in \cite{bcgr11:ip} that, \begin{theorem} @@ -424,14 +430,33 @@ Let $f:\mathds{B}^\mathsf{N}\to\mathds{B}^\mathsf{N}$. $G_f$ is chaotic (accord if and only if $\Gamma(f)$ is strongly connected. 
\end{theorem}
-This result of chaos has lead us to study the possibility to build a
+Finally, we have established in \cite{bcgr11:ip} that,
+\begin{theorem}
+  Let $f: \mathds{B}^{n} \rightarrow \mathds{B}^{n}$, $\Gamma(f)$ its
+  iteration graph, $\check{M}$ its adjacency matrix, and $M$
+  an $n\times n$ matrix defined by
+  $M_{ij} = \frac{1}{n}\check{M}_{ij}$ if $i \neq j$ and
+  $M_{ii} = 1 - \frac{1}{n} \sum\limits_{j=1, j\neq i}^n \check{M}_{ij}$ otherwise.
+
+  If $\Gamma(f)$ is strongly connected, then
+  the output of the PRNG detailed in Algorithm~\ref{CI Algorithm} follows
+  a law that tends to the uniform distribution
+  if and only if $M$ is a doubly stochastic matrix.
+\end{theorem}
+
+
+These results of chaos and uniform distribution have led us to study the possibility to build a
pseudorandom number generator (PRNG) based on the chaotic
iterations. As $G_f$, defined on the domain
$\llbracket 1 ; \mathsf{N} \rrbracket^{\mathds{N}} \times \mathds{B}^\mathsf{N}$, is
built from Boolean networks $f : \mathds{B}^\mathsf{N}
\rightarrow \mathds{B}^\mathsf{N}$, we can preserve the theoretical properties on $G_f$
-during implementations (due to the discrete nature of $f$). It is as if
+during implementations (due to the discrete nature of $f$). Indeed, it is as if
$\mathds{B}^\mathsf{N}$ represents the memory of the computer whereas $\llbracket 1 ;
\mathsf{N} \rrbracket^{\mathds{N}}$ is its input stream (the seeds, for instance, in PRNG, or a physical noise in TRNG).
+Let us finally remark that the vectorial negation satisfies the hypotheses of the two theorems above.

\section{Application to Pseudorandomness}
\label{sec:pseudorandom}
@@ -494,19 +519,7 @@ with a bit shifted version of it. This PRNG, which has a period of
$2^{32}-1=4.29\times10^9$, is summed up in Algorithm~\ref{XORshift}. It is used
in our PRNG to compute the strategy length and the strategy elements.
-
-We have proven in \cite{bcgr11:ip} that,
-\begin{theorem}
-  Let $f: \mathds{B}^{n} \rightarrow \mathds{B}^{n}$, $\Gamma(f)$ its
-  iteration graph, $\check{M}$ its adjacency
-  matrix and $M$ a $n\times n$ matrix defined as in the previous lemma.
-  If $\Gamma(f)$ is strongly connected, then
-  the output of the PRNG detailed in Algorithm~\ref{CI Algorithm} follows
-  a law that tends to the uniform distribution
-  if and only if $M$ is a double stochastic matrix.
-\end{theorem}
-
-This former generator as successively passed various batteries of statistical tests, as the NIST~\cite{bcgr11:ip}, DieHARD~\cite{Marsaglia1996}, and TestU01~\cite{LEcuyerS07}.
+This former generator has successively passed various batteries of statistical tests, such as the NIST~\cite{bcgr11:ip}, DieHARD~\cite{Marsaglia1996}, and TestU01~\cite{LEcuyerS07} ones.

\subsection{Improving the Speed of the Former Generator}
@@ -776,7 +789,7 @@ where $(s^0,s^1, \hdots)$ is the strategy of $Y$,
satisfies the properties claimed in the lemma.
\end{proof}

-We can now prove the Theorem~\ref{t:chaos des general}...
+We can now prove Theorem~\ref{t:chaos des general}.

\begin{proof}[Theorem~\ref{t:chaos des general}]
Firstly, strong transitivity implies transitivity.
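To make the XORshift recalled above more concrete, the following minimal C sketch shows one classical 32-bit XORshift step in the spirit of~\cite{Marsaglia2003}; the shift triple $(13,17,5)$ and the seed value are assumptions of this sketch (one standard choice, not necessarily the one of Algorithm~\ref{XORshift}):

\begin{lstlisting}
/* Minimal 32-bit XORshift step (Marsaglia family).                 */
/* The (13,17,5) shift triple and the seed are assumptions of this  */
/* sketch: one standard choice, not necessarily the paper's.        */
static unsigned int xorshift_state = 123456789; /* any nonzero seed */

unsigned int xorshift32(void) {
  unsigned int x = xorshift_state;
  x ^= x << 13;  /* combine the state with a left-shifted copy of itself */
  x ^= x >> 17;  /* ... then with a right-shifted copy                   */
  x ^= x << 5;   /* ... and with a left-shifted copy again               */
  xorshift_state = x;
  return x;      /* nonzero states are visited with period 2^32 - 1      */
}
\end{lstlisting}

Such a generator can then be used, as recalled above, to provide both the strategy length and the strategy elements required by the chaotic iterations.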
@@ -846,7 +859,9 @@ $$ -\lstset{language=C,caption={C code of the sequential PRNG based on chaotic iterations},label=algo:seqCIPRNG} + +\lstset{language=C,caption={C code of the sequential PRNG based on chaotic iteration\ +s},label=algo:seqCIPRNG} \begin{lstlisting} unsigned int CIPRNG() { static unsigned int x = 123123123; @@ -866,19 +881,18 @@ unsigned int CIPRNG() { +In Listing~\ref{algo:seqCIPRNG} a sequential version of the proposed PRNG based +on chaotic iterations is presented. The xor operator is represented by +\textasciicircum. This function uses three classical 64-bits PRNGs, namely the +\texttt{xorshift}, the \texttt{xor128}, and the +\texttt{xorwow}~\cite{Marsaglia2003}. In the following, we call them ``xor-like +PRNGs''. As each xor-like PRNG uses 64-bits whereas our proposed generator +works with 32-bits, we use the command \texttt{(unsigned int)}, that selects the +32 least significant bits of a given integer, and the code \texttt{(unsigned + int)(t$>>$32)} in order to obtain the 32 most significant bits of \texttt{t}. -In Listing~\ref{algo:seqCIPRNG} a sequential version of the proposed PRNG based on chaotic iterations - is presented. The xor operator is represented by \textasciicircum. -This function uses three classical 64-bits PRNGs, namely the \texttt{xorshift}, the -\texttt{xor128}, and the \texttt{xorwow}~\cite{Marsaglia2003}. In the following, we call them -``xor-like PRNGs''. -As -each xor-like PRNG uses 64-bits whereas our proposed generator works with 32-bits, -we use the command \texttt{(unsigned int)}, that selects the 32 least significant bits of a given integer, and the code -\texttt{(unsigned int)(t3$>>$32)} in order to obtain the 32 most significant bits of \texttt{t}. - -So producing a pseudorandom number needs 6 xor operations -with 6 32-bits numbers that are provided by 3 64-bits PRNGs. This version successfully passes the +So producing a pseudorandom number needs 6 xor operations with 6 32-bits numbers +that are provided by 3 64-bits PRNGs. This version successfully passes the stringent BigCrush battery of tests~\cite{LEcuyerS07}. \section{Efficient PRNGs based on Chaotic Iterations on GPU} @@ -890,12 +904,12 @@ simultaneously. In general, the larger the number of threads is, the more local memory is used, and the less branching instructions are used (if, while, ...), the better the performances on GPU is. Obviously, having these requirements in mind, it is possible to build -a program similar to the one presented in Algorithm +a program similar to the one presented in Listing \ref{algo:seqCIPRNG}, which computes pseudorandom numbers on GPU. To do so, we must firstly recall that in the CUDA~\cite{Nvid10} environment, threads have a local identifier called \texttt{ThreadIdx}, which is relative to the block containing -them. With CUDA parts of the code which are executed by the GPU are +them. Furthermore, in CUDA, parts of the code that are executed by the GPU are called {\it kernels}. @@ -972,12 +986,14 @@ thread uses the result of which other one, we can use a combination array that contains the indexes of all threads and for which a combination has been performed. -In Algorithm~\ref{algo:gpu_kernel2}, two combination arrays are used. -The variable \texttt{offset} is computed using the value of +In Algorithm~\ref{algo:gpu_kernel2}, two combination arrays are used. The +variable \texttt{offset} is computed using the value of \texttt{combination\_size}. 
Then we can compute \texttt{o1} and \texttt{o2} -representing the indexes of the other threads whose results are used -by the current one. In this algorithm, we consider that a 64-bits xor-like -PRNG has been chosen, and so its two 32-bits parts are used. +representing the indexes of the other threads whose results are used by the +current one. In this algorithm, we consider that a 32-bits xor-like PRNG has +been chosen. In practice, we use the xor128 proposed in~\cite{Marsaglia2003} in +which unsigned longs (64 bits) have been replaced by unsigned integers (32 +bits). This version also can pass the whole {\it BigCrush} battery of tests. @@ -986,28 +1002,28 @@ This version also can pass the whole {\it BigCrush} battery of tests. \KwIn{InternalVarXorLikeArray: array with internal variables of 1 xor-like PRNGs in global memory\; NumThreads: Number of threads\; -tab1, tab2: Arrays containing combinations of size combination\_size\;} +array\_comb1, array\_comb2: Arrays containing combinations of size combination\_size\;} \KwOut{NewNb: array containing random numbers in global memory} \If{threadId is concerned} { retrieve data from InternalVarXorLikeArray[threadId] in local variables including shared memory and x\; offset = threadIdx\%combination\_size\; - o1 = threadIdx-offset+tab1[offset]\; - o2 = threadIdx-offset+tab2[offset]\; + o1 = threadIdx-offset+array\_comb1[offset]\; + o2 = threadIdx-offset+array\_comb2[offset]\; \For{i=1 to n} { t=xor-like()\; - t=t $\hat{ }$ shmem[o1] $\hat{ }$ shmem[o2]\; + t=t\textasciicircum shmem[o1]\textasciicircum shmem[o2]\; shared\_mem[threadId]=t\; - x = x $\hat{ }$ t\; + x = x\textasciicircum t\; store the new PRNG in NewNb[NumThreads*threadId+i]\; } store internal variables in InternalVarXorLikeArray[threadId]\; } -\caption{main kernel for the chaotic iterations based PRNG GPU efficient -version} -\label{algo:gpu_kernel2} +\caption{Main kernel for the chaotic iterations based PRNG GPU efficient +version\label{IR}} +\label{algo:gpu_kernel2} \end{algorithm} \subsection{Theoretical Evaluation of the Improved Version} @@ -1032,8 +1048,7 @@ last $t$ respects the requirement. Furthermore, it is possible to prove by an immediate mathematical induction that, as the initial $x$ is uniformly distributed (it is provided by a cryptographically secure PRNG), the two other stored values shmem[o1] and shmem[o2] are uniformly distributed too, -(this can be stated by an immediate mathematical -induction), and thus the next $x$ is finally uniformly distributed. +(this is the induction hypothesis), and thus the next $x$ is finally uniformly distributed. Thus Algorithm~\ref{algo:gpu_kernel2} is a concrete realization of the general chaotic iterations presented previously, and for this reason, it satisfies the @@ -1051,7 +1066,7 @@ All the cards have 240 cores. In Figure~\ref{fig:time_xorlike_gpu} we compare the quantity of pseudorandom numbers -generated per second with various xor-like based PRNG. In this figure, the optimized +generated per second with various xor-like based PRNGs. In this figure, the optimized versions use the {\it xor64} described in~\cite{Marsaglia2003}, whereas the naive versions embed the three xor-like PRNGs described in Listing~\ref{algo:seqCIPRNG}. In order to obtain the optimal performances, the storage of pseudorandom numbers @@ -1080,13 +1095,12 @@ As a comparison, Listing~\ref{algo:seqCIPRNG} leads to the generation of -In Figure~\ref{fig:time_bbs_gpu} we highlight the performances of the optimized -BBS-based PRNG on GPU. 
On the Tesla C1060 we
-obtain approximately 700MSample/s and on the GTX 280 about 670MSample/s, which is
-obviously slower than the xorlike-based PRNG on GPU. However, we will show in the
-next sections that
-this new PRNG has a strong level of security, which is necessary paid by a speed
-reduction.
+In Figure~\ref{fig:time_bbs_gpu} we highlight the performances of the optimized
+BBS-based PRNG on GPU. On the Tesla C1060 we obtain approximately 700MSample/s
+and on the GTX 280 about 670MSample/s, which is obviously slower than the
+xorlike-based PRNG on GPU. However, we will show in the next sections that this
+new PRNG has a strong level of security, which is necessarily paid for by a speed
+reduction.

\begin{figure}[htbp]
\begin{center}
@@ -1118,17 +1132,17 @@
In this section the concatenation of two strings $u$ and $v$ is classically
denoted by $uv$.
In a cryptographic context, a pseudorandom generator is a deterministic
algorithm $G$ transforming strings into strings and such that, for any
-seed $w$ of length $N$, $G(w)$ (the output of $G$ on the input $w$) has size
-$\ell_G(N)$ with $\ell_G(N)>N$.
+seed $w$ of length $m$, $G(w)$ (the output of $G$ on the input $w$) has size
+$\ell_G(m)$ with $\ell_G(m)>m$.
The notion of {\it secure} PRNGs can now be defined as follows.

\begin{definition}
A cryptographic PRNG $G$ is secure if for any probabilistic polynomial time
algorithm $D$, for any positive polynomial $p$, and for all sufficiently
-large $k$'s,
-$$| \mathrm{Pr}[D(G(U_k))=1]-Pr[D(U_{\ell_G(k)})=1]|< \frac{1}{p(N)},$$
+large $m$'s,
+$$| \mathrm{Pr}[D(G(U_m))=1]-\mathrm{Pr}[D(U_{\ell_G(m)})=1]|< \frac{1}{p(m)},$$
where $U_r$ is the uniform distribution over $\{0,1\}^r$ and the
-probabilities are taken over $U_N$, $U_{\ell_G(N)}$ as well as over the
+probabilities are taken over $U_m$, $U_{\ell_G(m)}$ as well as over the
internal coin tosses of $D$.
\end{definition}
@@ -1137,7 +1151,7 @@ distinguish a perfect uniform random generator from $G$ with a non negligible
probability. The interested reader is referred
to~\cite[chapter~3]{Goldreich} for more information. Note that it is
quite easily possible to change the function $\ell$ into any polynomial
-function $\ell^\prime$ satisfying $\ell^\prime(N)>N)$~\cite[Chapter 3.3]{Goldreich}.
+function $\ell^\prime$ satisfying $\ell^\prime(m)>m$~\cite[Chapter 3.3]{Goldreich}.

The generation schema developed in (\ref{equation Oplus}) is based on a
pseudorandom generator. Let $H$ be a cryptographic PRNG. We may assume,
@@ -1230,84 +1244,87 @@ algorithm (Algorithm~\ref{algo:gpu_kernel2}). Due to Proposition~\ref{cryptopr
it simply consists in replacing
the {\it xor-like} PRNG by a cryptographically secure one.
We have chosen the Blum Blum Shub generator~\cite{BBS} (usually denoted by BBS) having the form:
-$$x_{n+1}=x_n^2~ mod~ M$$ where $M$ is the product of two prime numbers. These
-prime numbers need to be congruent to 3 modulus 4. BBS is
+$$x_{n+1}=x_n^2~ mod~ M$$ where $M$ is the product of two prime numbers (these
+prime numbers need to be congruent to 3 modulo 4). BBS is
known to be very slow and only usable for cryptographic applications.

The modulus operation is the most time consuming operation for current
GPU cards. So in order to obtain quite reasonable performances, it is
-required to use only modulus on 32 bits integer numbers. Consequently
-$x_n^2$ need to be less than $2^{32}$ and the number $M$ need to be
-less than $2^{16}$. So in practice we can choose prime numbers around
-256 that are congruent to 3 modulus 4.
With 32 bits numbers, only the +required to use only modulus on 32-bits integer numbers. Consequently +$x_n^2$ need to be lesser than $2^{32}$, and thus the number $M$ must be +lesser than $2^{16}$. So in practice we can choose prime numbers around +256 that are congruent to 3 modulus 4. With 32-bits numbers, only the 4 least significant bits of $x_n$ can be chosen (the maximum number of indistinguishable bits is lesser than or equals to -$log_2(log_2(x_n))$). So to generate a 32 bits number, we need to use -8 times the BBS algorithm with different combinations of $M$. This -approach is not sufficient to pass all the tests of TestU01 because -the fact of having chosen small values of $M$ for the BBS leads to -have a small period. So, in order to add randomness we proceed with +$log_2(log_2(M))$). In other words, to generate a 32-bits number, we need to use +8 times the BBS algorithm with possibly different combinations of $M$. This +approach is not sufficient to be able to pass all the TestU01, +as small values of $M$ for the BBS lead to + small periods. So, in order to add randomness we proceed with the followings modifications. \begin{itemize} \item -First we define 16 arrangement arrays instead of 2 (as described in -algorithm \ref{algo:gpu_kernel2}) but only 2 are used at each call of -the PRNG kernels. In practice, the selection of which combinations -arrays will be used is different for all the threads and is determined +Firstly, we define 16 arrangement arrays instead of 2 (as described in +Algorithm \ref{algo:gpu_kernel2}), but only 2 of them are used at each call of +the PRNG kernels. In practice, the selection of combinations +arrays to be used is different for all the threads. It is determined by using the three last bits of two internal variables used by BBS. -This approach adds more randomness. In algorithm~\ref{algo:bbs_gpu}, -character \& performs the AND bitwise. So using \&7 with a number -gives the last 3 bits, so it provides a number between 0 and 7. +%This approach adds more randomness. +In Algorithm~\ref{algo:bbs_gpu}, +character \& is for the bitwise AND. Thus using \&7 with a number +gives the last 3 bits, providing so a number between 0 and 7. \item -Second, after the generation of the 8 BBS numbers for each thread we -have a 32 bits number for which the period is possibly quite small. So -to add randomness, we generate 4 more BBS numbers which allows us to -shift the 32 bits numbers and add upto 6 new bits. This part is -described in algorithm~\ref{algo:bbs_gpu}. In practice, if we call -{\it strategy}, the number representing the strategy, the last 2 bits -of the first new BBS number are used to make a left shift of at least +Secondly, after the generation of the 8 BBS numbers for each thread, we +have a 32-bits number whose period is possibly quite small. So +to add randomness, we generate 4 more BBS numbers to +shift the 32-bits numbers, and add up to 6 new bits. This improvement is +described in Algorithm~\ref{algo:bbs_gpu}. In practice, the last 2 bits +of the first new BBS number are used to make a left shift of at most 3 bits. The last 3 bits of the second new BBS number are add to the strategy whatever the value of the first left shift. The third and the fourth new BBS numbers are used similarly to apply a new left shift and add 3 new bits. \item -Finally, as we use 8 BBS numbers for each thread, the store of these +Finally, as we use 8 BBS numbers for each thread, the storage of these numbers at the end of the kernel is performed using a rotation. 
So, internal variable for BBS number 1 is stored in place 2, internal
-variable for BBS number 2 is store ind place 3, ... and internal
+variable for BBS number 2 is stored in place 3, ..., and finally, internal
variable for BBS number 8 is stored in place 1.
\end{itemize}
-
\begin{algorithm}
\KwIn{InternalVarBBSArray: array with internal variables of the 8 BBS
in global memory\;
NumThreads: Number of threads\;
-tab: 2D Arrays containing 16 combinations (in first dimension) of size combination\_size (in second dimension)\;}
+array\_comb: 2D Arrays containing 16 combinations (in first dimension) of size combination\_size (in second dimension)\;
+array\_shift[4]=\{0,1,3,7\}\;
+}
\KwOut{NewNb: array containing random numbers in global memory}
\If{threadId is concerned} {
  retrieve data from InternalVarBBSArray[threadId] in local variables including shared memory and x\;
  we consider that bbs1 ... bbs8 represent the internal states of the 8 BBS numbers\;
  offset = threadIdx\%combination\_size\;
-  o1 = threadIdx-offset+tab[bbs1\&7][offset]\;
-  o2 = threadIdx-offset+tab[8+bbs2\&7][offset]\;
+  o1 = threadIdx-offset+array\_comb[bbs1\&7][offset]\;
+  o2 = threadIdx-offset+array\_comb[8+bbs2\&7][offset]\;
  \For{i=1 to n} {
-    t<<=4\;
+    t$<<$=4\;
    t|=BBS1(bbs1)\&15\;
    ...\;
-    t<<=4\;
+    t$<<$=4\;
    t|=BBS8(bbs8)\&15\;
-    //two new shifts\;
-    t<<=BBS3(bbs3)\&3\;
-    t|=BBS1(bbs1)\&7\;
-    t<<=BBS7(bbs7)\&3\;
-    t|=BBS2(bbs2)\&7\;
-    t=t $\hat{ }$ shmem[o1] $\hat{ }$ shmem[o2]\;
+    \tcp{two new shifts}
+    shift=BBS3(bbs3)\&3\;
+    t$<<$=shift\;
+    t|=BBS1(bbs1)\&array\_shift[shift]\;
+    shift=BBS7(bbs7)\&3\;
+    t$<<$=shift\;
+    t|=BBS2(bbs2)\&array\_shift[shift]\;
+    t=t\textasciicircum shmem[o1]\textasciicircum shmem[o2]\;
    shared\_mem[threadId]=t\;
-    x = x $\hat{ }$ t\;
+    x = x\textasciicircum t\;
    store the new PRNG in NewNb[NumThreads*threadId+i]\;
  }
@@ -1318,21 +1335,44 @@ tab: 2D Arrays containing 16 combinations (in first dimension) of size combinat
  \label{algo:bbs_gpu}
\end{algorithm}

-In algorithm~\ref{algo:bbs_gpu}, t<<=4 performs a left shift of 4 bits
-on the variable t and stores the result in t. BBS1(bbs1)\&15 selects
-the last four bits of the result of BBS1. It should be noticed that
-for the two new shifts, we use arbitrarily 4 BBSs that have previously
-been used.
+In Algorithm~\ref{algo:bbs_gpu}, $n$ denotes the quantity of random numbers that
+a thread has to generate. The operation \texttt{t<<=4} performs a left shift of 4 bits
+on the variable $t$ and stores the result in $t$, and \texttt{BBS1(bbs1)\&15} selects
+the last four bits of the result of \texttt{BBS1}. Thus an operation of the form
+\texttt{t<<=4; t|=BBS1(bbs1)\&15;} performs on $t$ a left shift of 4 bits, and then
+puts the 4 last bits of \texttt{BBS1(bbs1)} into the 4 last positions of $t$. Let us
+remark that the initialization of $t$ is not necessary, as we fill it 4 bits by 4
+bits until 32 bits have been obtained. The two last new shifts are realized in
+order to enlarge the small periods of the BBSs used here, by introducing a kind of
+variability. In these operations, we make twice a left shift of $t$ of \emph{at
+  most} 3 bits, represented by \texttt{shift} in the algorithm, and we put
+\emph{exactly} the \texttt{shift} last bits from a BBS into the \texttt{shift}
+last bits of $t$. For this, an array named \texttt{array\_shift} is used; it contains the
+correspondence between the shift and the mask (the number written with \texttt{shift}
+ones in binary) used for the \texttt{and} operation.
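To make the role of \texttt{array\_shift} more concrete, the following minimal C sketch shows how one such shift-and-mask step can be written; the function name \texttt{apply\_new\_shift} and its two parameters, which stand for already generated BBS outputs, are illustrative names and not part of the original kernel:

\begin{lstlisting}
/* Hedged sketch of one "new shift" step of the BBS-based kernel.      */
/* Function and parameter names are illustrative; the two inputs stand */
/* for BBS outputs that are assumed to be already generated.           */
static const unsigned int array_shift[4] = {0, 1, 3, 7}; /* masks: 0, 1, 11, 111 in binary */

unsigned int apply_new_shift(unsigned int t,
                             unsigned int bbs_for_shift,
                             unsigned int bbs_for_bits) {
  unsigned int shift = bbs_for_shift & 3;   /* left shift of at most 3 bits    */
  t <<= shift;                              /* make room for the new bits      */
  t |= bbs_for_bits & array_shift[shift];   /* insert exactly `shift` new bits */
  return t;
}
\end{lstlisting}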
+For example, with a left shift of 0, we make an \texttt{and} operation with 0, whereas with
+a left shift of 3, we make an \texttt{and} operation with 7 (represented by 111 in binary).
+
+It should be noticed that this generator has once more the form $x^{n+1} = x^n \oplus S^n$,
+where $S^n$ is referred to as $t$ in this algorithm: each iteration of this
+PRNG ends with $x = x \oplus t$. This $S^n$ is only constituted
+of secure bits produced by the BBS generator, and thus, due to
+Proposition~\ref{cryptopreuve}, the resulting PRNG is cryptographically
+secure.

-\subsection{A Cryptographically Secure and Chaotic Asymetric Cryptosystem}
+\subsection{Toward a Cryptographically Secure and Chaotic Asymmetric Cryptosystem}
+\label{Blum-Goldwasser}
+We finish this research work by giving some thoughts about the use of
+the proposed PRNG in an asymmetric cryptosystem.
+This first approach will be further investigated in a future work.

\subsubsection{Recalls of the Blum-Goldwasser Probabilistic Cryptosystem}

The Blum-Goldwasser cryptosystem is a cryptographically secure asymmetric key encryption algorithm
proposed in 1984~\cite{Blum:1985:EPP:19478.19501}.  The encryption algorithm
-implements an XOR-based stream cipher using the BBS PRNG, in order to generate
+implements an XOR-based stream cipher using the BBS PRNG in order to generate
the keystream. Decryption is done by obtaining the initial seed thanks to
the final state of the BBS generator and the secret key, thus leading to the
reconstruction of the keystream.
@@ -1345,48 +1385,72 @@ The public key is $N$, whereas the secret key is the factorization $(p,q)$.

Suppose Bob wishes to send a string $m=(m_0, \dots, m_{L-1})$ of $L$ bits to Alice:
\begin{enumerate}
-\item Bob picks an integer $r$ randomly in the interval $[1,N$ and computes $x_0 = r^2~mod~N$.
+\item Bob picks an integer $r$ randomly in the interval $\llbracket 1,N\rrbracket$ and computes $x_0 = r^2~mod~N$.
\item He uses the BBS to generate the keystream of $L$ pseudorandom bits $(b_0, \dots, b_{L-1})$, as follows. For $i=0$ to $L-1$,
\begin{itemize}
\item $i=0$.
\item While $i \leqslant L-1$:
\begin{itemize}
-\item Set $b_i$ equal to the least-significant\footnote{BBS can securely output up to O(loglogN) of the least-significant bits of xi during each round.} bit of $x_i$,
+\item Set $b_i$ equal to the least-significant\footnote{As signaled previously, BBS can securely output up to $\mathsf{N} = \lfloor \log(\log(N)) \rfloor$ of the least-significant bits of $x_i$ during each round.} bit of $x_i$,
\item $i=i+1$,
\item $x_i = (x_{i-1})^2~mod~N.$
\end{itemize}
\end{itemize}
-\item The ciphertext is computed by XORing the plaintext bits $m$ with the keystream: $ c = (c_0, \dots, c_{L-1}) = m \oplus b$.
+\item The ciphertext is computed by XORing the plaintext bits $m$ with the keystream: $ c = (c_0, \dots, c_{L-1}) = m \oplus b$. This ciphertext is $[c, y]$, where $y=x_{0}^{2^{L}}~mod~N.$
\end{enumerate}

-The ciphertext is $(c, y)$, where $y=x_{0}^{2^{L}}~mod~N.$.
-When Alice receives $(c_0, \dots, c_{L-1}), y$, she can recover $m$ as follows:
+When Alice receives $\left[(c_0, \dots, c_{L-1}), y\right]$, she can recover $m$ as follows:
\begin{enumerate}
\item Using the secret key $(p,q)$, she computes $r_p = y^{((p+1)/4)^{L}}~mod~p$ and $r_q = y^{((q+1)/4)^{L}}~mod~q$.
-\item The initial seed can be obtained using the following procedure: $x_0=q(q^{-1}~{mod}~p)r_p + p(p^{-1}~{mod}~q)r_q~{mod}~N$
-\item Recompute the bit-vector $b$ by using BBS and $x_0$.
-\item Compute finally the plaintext by XORing the keystream with the ciphertext: $ m = c \oplus b$.
+\item The initial seed can be obtained using the following procedure: $x_0=q(q^{-1}~{mod}~p)r_p + p(p^{-1}~{mod}~q)r_q~{mod}~N$.
+\item She recomputes the bit-vector $b$ by using BBS and $x_0$.
+\item Alice finally computes the plaintext by XORing the keystream with the ciphertext: $ m = c \oplus b$.
\end{enumerate}


\subsubsection{Proposal of a new Asymmetric Cryptosystem Adapted from Blum-Goldwasser}

+We propose to adapt the Blum-Goldwasser protocol as follows.
+Let $\mathsf{N} = \lfloor \log(\log(N)) \rfloor$ be the number of bits that can
+be obtained securely with the BBS generator using the public key $N$ of Alice.
+Alice will also pick $S^0$ randomly in $\llbracket 0, 2^{\mathsf{N}-1}\rrbracket$, and
+her new public key will be $(S^0, N)$.
+To encrypt his message, Bob will compute
+\begin{equation}
+c = \left(m_0 \oplus (b_0 \oplus S^0), m_1 \oplus (b_0 \oplus b_1 \oplus S^0), \hdots, m_{L-1} \oplus (b_0 \oplus b_1 \oplus \hdots \oplus b_{L-1} \oplus S^0) \right)
+\end{equation}
+instead of $\left(m_0 \oplus b_0, m_1 \oplus b_1, \hdots, m_{L-1} \oplus b_{L-1} \right)$.

-\section{Conclusion}
-
+The same decryption stage as in Blum-Goldwasser leads to the sequence
+$\left(m_0 \oplus S^0, m_1 \oplus S^0, \hdots, m_{L-1} \oplus S^0 \right)$.
+Thus, with a simple use of $S^0$, Alice can obtain the plaintext.
+By doing so, the proposed generator is used in place of BBS, leading to
+the inheritance of all the properties presented in this paper.

-In this paper we have presented a new class of PRNGs based on chaotic
-iterations. We have proven that these PRNGs are chaotic in the sense of Devaney.
-We also propose a PRNG cryptographically secure and its implementation on GPU.
+\section{Conclusion}

-An efficient implementation on GPU based on a xor-like PRNG allows us to
-generate a huge number of pseudorandom numbers per second (about
-20Gsamples/s). This PRNG succeeds to pass the hardest batteries of TestU01.
-In future work we plan to extend this work for parallel PRNG for clusters or
-grid computing.
+In this paper, a formerly proposed PRNG based on chaotic iterations
+has been generalized to improve its speed. It has been proven to be
+chaotic according to Devaney.
+Efficient implementations on GPU using xor-like PRNGs as input generators
+have shown that a very large quantity of pseudorandom numbers can be generated per second (about
+20Gsamples/s), and that these proposed PRNGs succeed in passing the hardest battery in TestU01,
+namely the BigCrush.
+Furthermore, we have shown that when the inputted generator is cryptographically
+secure, then it is the case too for the PRNG we propose, thus leading to
+the possibility to develop fast and secure PRNGs using the GPU architecture.
+Thoughts about an improvement of the Blum-Goldwasser cryptosystem, using the
+proposed method, have finally been given.
+
+In future work we plan to extend this research, building a parallel PRNG for clusters or
+grid computing. Topological properties of the various proposed generators will be investigated,
+and the use of other categories of PRNGs as input will be studied too. The improvement
+of Blum-Goldwasser will be deepened. Finally, we
+will try to increase the quantity of pseudorandom numbers generated per second either
+in a simulation context or in a cryptographic one.
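As a complement to the adapted encryption stage of Sect.~\ref{Blum-Goldwasser}, the following minimal C sketch illustrates the equation giving $c$ above. All quantities are handled as single bits (including $S^0$), the keystream $b$ is assumed to be already produced, and all names are illustrative: these simplifications are assumptions of the sketch, not the authors' implementation.

\begin{lstlisting}
/* Hedged sketch of the adapted Blum-Goldwasser encryption stage:       */
/*   c_i = m_i XOR (b_0 XOR ... XOR b_i XOR S0).                        */
/* Names are illustrative; m, b and S0 are handled here as single bits  */
/* (0 or 1) for simplicity, and producing the keystream b with the      */
/* BBS-based generator is outside the scope of this sketch.             */
void encrypt_adapted_bg(const unsigned char *m, /* plaintext bits m_0..m_{L-1} */
                        const unsigned char *b, /* keystream bits b_0..b_{L-1} */
                        unsigned char *c,       /* output: ciphertext bits     */
                        unsigned long L,
                        unsigned char S0) {     /* the extra public value S^0  */
  unsigned char running = S0;      /* running XOR: S0, then b_0^S0, ...    */
  unsigned long i;
  for (i = 0; i < L; i++) {
    running ^= b[i];               /* now equals b_0 ^ ... ^ b_i ^ S0      */
    c[i] = m[i] ^ running;         /* c_i = m_i ^ (b_0 ^ ... ^ b_i ^ S0)   */
  }
}
\end{lstlisting}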