X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/prng_gpu.git/blobdiff_plain/47000eb06d8648eaafeadfdda00e1e4ce76e9ee6..3010272fc200ffae4e9223ba48c5f3caf05a4256:/prng_gpu.tex diff --git a/prng_gpu.tex b/prng_gpu.tex index d2e5e05..55fc756 100644 --- a/prng_gpu.tex +++ b/prng_gpu.tex @@ -1,4 +1,5 @@ -\documentclass{article} +%\documentclass{article} +\documentclass[10pt,journal,letterpaper,compsoc]{IEEEtran} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{fullpage} @@ -7,7 +8,7 @@ \usepackage{amscd} \usepackage{moreverb} \usepackage{commath} -\usepackage{algorithm2e} +\usepackage[ruled,vlined]{algorithm2e} \usepackage{listings} \usepackage[standard]{ntheorem} @@ -34,115 +35,199 @@ \newcommand{\alert}[1]{\begin{color}{blue}\textit{#1}\end{color}} -\title{Efficient Generation of Pseudo-Random Numbers based on Chaotic Iterations -on GPU} +\title{Efficient and Cryptographically Secure Generation of Chaotic Pseudorandom Numbers on GPU} \begin{document} -\author{Jacques M. Bahi, Rapha\"{e}l Couturier, and Christophe -Guyeux, Pierre-Cyrille Heam\thanks{Authors in alphabetic order}} - -\maketitle +\author{Jacques M. Bahi, Rapha\"{e}l Couturier, Christophe +Guyeux, and Pierre-Cyrille Héam\thanks{Authors in alphabetic order}} + +\IEEEcompsoctitleabstractindextext{ \begin{abstract} -In this paper we present a new pseudo-random numbers generator (PRNG) on -graphics processing units (GPU). This PRNG is based on chaotic iterations. it -is proven to be chaotic in the Devanay's formulation. We propose an efficient -implementation for GPU which succeeds to the {\it BigCrush}, the hardest -batteries of test of TestU01. Experimentations show that this PRNG can generate -about 20 billions of random numbers per second on Tesla C1060 and NVidia GTX280 +In this paper we present a new pseudorandom number generator (PRNG) on +graphics processing units (GPU). This PRNG is based on the so-called chaotic iterations. It +is firstly proven to be chaotic according to the Devaney's formulation. We thus propose an efficient +implementation for GPU that successfully passes the {\it BigCrush} tests, deemed to be the hardest +battery of tests in TestU01. Experiments show that this PRNG can generate +about 20 billion of random numbers per second on Tesla C1060 and NVidia GTX280 cards. +It is then established that, under reasonable assumptions, the proposed PRNG can be cryptographically +secure. +A chaotic version of the Blum-Goldwasser asymmetric key encryption scheme is finally proposed. \end{abstract} +} -\section{Introduction} +\maketitle -Random numbers are used in many scientific applications and simulations. On -finite state machines, as computers, it is not possible to generate random -numbers but only pseudo-random numbers. In practice, a good pseudo-random numbers -generator (PRNG) needs to verify some features to be used by scientists. It is -important to be able to generate pseudo-random numbers efficiently, the -generation needs to be reproducible and a PRNG needs to satisfy many usual -statistical properties. Finally, from our point a view, it is essential to prove -that a PRNG is chaotic. Concerning the statistical tests, TestU01 is the -best-known public-domain statistical testing package. So we use it for all our -PRNGs, especially the {\it BigCrush} which provides the largest serie of tests. -Concerning the chaotic properties, Devaney~\cite{Devaney} proposed a common -mathematical formulation of chaotic dynamical systems. 
- -In a previous work~\cite{bgw09:ip} we have proposed a new familly of chaotic -PRNG based on chaotic iterations. We have proven that these PRNGs are -chaotic in the Devaney's sense. In this paper we propose a faster version which -is also proven to be chaotic. - -Although graphics processing units (GPU) was initially designed to accelerate -the manipulation of images, they are nowadays commonly used in many scientific -applications. Therefore, it is important to be able to generate pseudo-random -numbers inside a GPU when a scientific application runs in a GPU. That is why we -also provide an efficient PRNG for GPU respecting based on IC. Such devices -allows us to generated almost 20 billions of random numbers per second. +\IEEEdisplaynotcompsoctitleabstractindextext +\IEEEpeerreviewmaketitle -In order to establish that our PRNGs are chaotic according to the Devaney's -formulation, we extend what we have proposed in~\cite{guyeux10}. -The rest of this paper is organised as follows. In Section~\ref{section:related - works} we review some GPU implementions of PRNG. Section~\ref{section:BASIC - RECALLS} gives some basic recalls on Devanay's formation of chaos and chaotic -iterations. In Section~\ref{sec:pseudo-random} the proof of chaos of our PRNGs -is studied. Section~\ref{sec:efficient prng} presents an efficient -implementation of our chaotic PRNG on a CPU. Section~\ref{sec:efficient prng - gpu} describes the GPU implementation of our chaotic PRNG. In -Section~\ref{sec:experiments} some experimentations are presented. - Finally, we give a conclusion and some perspectives. +\section{Introduction} + +Randomness is of importance in many fields such as scientific simulations or cryptography. +``Random numbers'' can mainly be generated either by a deterministic and reproducible algorithm +called a pseudorandom number generator (PRNG), or by a physical non-deterministic +process having all the characteristics of a random noise, called a truly random number +generator (TRNG). +In this paper, we focus on reproducible generators, useful for instance in +Monte-Carlo based simulators or in several cryptographic schemes. +These domains need PRNGs that are statistically irreproachable. +In some fields such as in numerical simulations, speed is a strong requirement +that is usually attained by using parallel architectures. In that case, +a recurrent problem is that a deflation of the statistical qualities is often +reported, when the parallelization of a good PRNG is realized. +This is why ad-hoc PRNGs for each possible architecture must be found to +achieve both speed and randomness. +On the other side, speed is not the main requirement in cryptography: the great +need is to define \emph{secure} generators able to withstand malicious +attacks. Roughly speaking, an attacker should not be able in practice to make +the distinction between numbers obtained with the secure generator and a true random +sequence. +Finally, a small part of the community working in this domain focuses on a +third requirement, that is to define chaotic generators. +The main idea is to take benefits from a chaotic dynamical system to obtain a +generator that is unpredictable, disordered, sensible to its seed, or in other word chaotic. +Their desire is to map a given chaotic dynamics into a sequence that seems random +and unassailable due to chaos. +However, the chaotic maps used as a pattern are defined in the real line +whereas computers deal with finite precision numbers. 
+This distortion leads to a deflation of both chaotic properties and speed. +Furthermore, authors of such chaotic generators often claim their PRNG +as secure due to their chaos properties, but there is no obvious relation +between chaos and security as it is understood in cryptography. +This is why the use of chaos for PRNG still remains marginal and disputable. + +The authors' opinion is that topological properties of disorder, as they are +properly defined in the mathematical theory of chaos, can reinforce the quality +of a PRNG. But they are not substitutable for security or statistical perfection. +Indeed, to the authors' mind, such properties can be useful in the two following situations. On the +one hand, a post-treatment based on a chaotic dynamical system can be applied +to a PRNG statistically deflective, in order to improve its statistical +properties. Such an improvement can be found, for instance, in~\cite{bgw09:ip,bcgr11:ip}. +On the other hand, chaos can be added to a fast, statistically perfect PRNG and/or a +cryptographically secure one, in case where chaos can be of interest, +\emph{only if these last properties are not lost during +the proposed post-treatment}. Such an assumption is behind this research work. +It leads to the attempts to define a +family of PRNGs that are chaotic while being fast and statistically perfect, +or cryptographically secure. +Let us finish this paragraph by noticing that, in this paper, +statistical perfection refers to the ability to pass the whole +{\it BigCrush} battery of tests, which is widely considered as the most +stringent statistical evaluation of a sequence claimed as random. +This battery can be found in the well-known TestU01 package~\cite{LEcuyerS07}. +Chaos, for its part, refers to the well-established definition of a +chaotic dynamical system proposed by Devaney~\cite{Devaney}. + + +In a previous work~\cite{bgw09:ip,guyeux10} we have proposed a post-treatment on PRNGs making them behave +as a chaotic dynamical system. Such a post-treatment leads to a new category of +PRNGs. We have shown that proofs of Devaney's chaos can be established for this +family, and that the sequence obtained after this post-treatment can pass the +NIST~\cite{Nist10}, DieHARD~\cite{Marsaglia1996}, and TestU01~\cite{LEcuyerS07} batteries of tests, even if the inputted generators +cannot. +The proposition of this paper is to improve widely the speed of the formerly +proposed generator, without any lack of chaos or statistical properties. +In particular, a version of this PRNG on graphics processing units (GPU) +is proposed. +Although GPU was initially designed to accelerate +the manipulation of images, they are nowadays commonly used in many scientific +applications. Therefore, it is important to be able to generate pseudorandom +numbers inside a GPU when a scientific application runs in it. This remark +motivates our proposal of a chaotic and statistically perfect PRNG for GPU. +Such device +allows us to generate almost 20 billion of pseudorandom numbers per second. +Furthermore, we show that the proposed post-treatment preserves the +cryptographical security of the inputted PRNG, when this last has such a +property. +Last, but not least, we propose a rewriting of the Blum-Goldwasser asymmetric +key encryption protocol by using the proposed method. + +The remainder of this paper is organized as follows. In Section~\ref{section:related + works} we review some GPU implementations of PRNGs. 
Section~\ref{section:BASIC
+  RECALLS} gives some basic recalls on Devaney's well-known formulation of chaos,
+  and on an iteration process called ``chaotic
+iterations'' on which the post-treatment is based.
+The proposed PRNG and its proof of chaos are given in Section~\ref{sec:pseudorandom}.
+Section~\ref{sec:efficient PRNG} presents an efficient
+implementation of this chaotic PRNG on a CPU, whereas Section~\ref{sec:efficient PRNG
+  gpu} describes and evaluates theoretically the GPU implementation.
+Such generators are experimentally evaluated in
+Section~\ref{sec:experiments}.
+We show in Section~\ref{sec:security analysis} that, if the inputted
+generator is cryptographically secure, then so is the
+generator provided by the post-treatment.
+Such a proof leads to the proposition of a cryptographically secure and
+chaotic generator on GPU based on the famous Blum Blum Shub generator
+in Section~\ref{sec:CSGPU}, and to an improvement of the
+Blum-Goldwasser protocol in Section~\ref{Blum-Goldwasser}.
+This research work ends with a conclusion section, in which the contribution is
+summarized and intended future work is presented.


\section{Related works on GPU based PRNGs}
\label{section:related works}

-In the litterature many authors have work on defining GPU based PRNGs. We do not
-want to be exhaustive and we just give the most significant works from our point
-of view. When authors mention the number of random numbers generated per second
-we mention it. We consider that a million numbers per second corresponds to
-1MSample/s and than a billion numbers per second corresponds to 1GSample/s.
-
-In \cite{Pang:2008:cec}, the authors define a PRNG based on cellular automata
-which does not require high precision integer arithmetics nor bitwise
-operations. There is no mention of statistical tests nor proof that this PRNG is
-chaotic. Concerning the speed of generation, they can generate about
-3.2MSample/s on a GeForce 7800 GTX GPU (which is quite old now).
+
+Numerous research works on defining GPU based PRNGs have already been proposed in the
+literature, so that an exhaustive survey is impossible.
+This is why the authors of this document only reference the most significant attempts
+in this domain, from their subjective point of view.
+The quantity of pseudorandom numbers generated per second is mentioned here
+only when the information is given in the related work.
+A million numbers per second will be simply written as
+1MSample/s whereas a billion numbers per second is 1GSample/s.
+
+In \cite{Pang:2008:cec} a PRNG based on cellular automata is defined
+with no requirement for high precision integer arithmetic or for any bitwise
+operations. The authors can generate about
+3.2MSamples/s on a GeForce 7800 GTX GPU, which is quite an old card now.
+However, there is neither a mention of statistical tests nor any proof of
+chaos or cryptography in that work.

In \cite{ZRKB10}, the authors propose different versions of efficient GPU PRNGs
-based on Lagged Fibonacci, Hybrid Taus or Hybrid Taus. They have used these
+based on Lagged Fibonacci or Hybrid Taus. They have used these
PRNGs for Langevin simulations of biomolecules fully implemented on
-GPU. 
The performance of the GPU versions is far better than that obtained with a
+CPU, and these PRNGs succeed in passing the {\it BigCrush} battery of TestU01.
+However, the evaluations of the proposed PRNGs are only statistical ones.

Authors of~\cite{conf/fpga/ThomasHL09} have studied the implementation of some
-PRNGs on diferrent computing architectures: CPU, field-programmable gate array
-(FPGA), GPU and massively parallel processor. This study is interesting because
-it shows the performance of the same PRNGs on different architeture. For
-example, the FPGA is globally the fastest architecture and it is also the
-efficient one because it provides the fastest number of generated random numbers
-per joule. Concerning the GPU, authors can generate betweend 11 and 16GSample/s
-with a GTX 280 GPU. The drawback of this work is that those PRNGs only succeed
-the {\it Crush} test which is easier than the {\it Big Crush} test.
-
-Cuda has developped a library for the generation of random numbers called
-Curand~\cite{curand11}. Several PRNGs are implemented:
-Xorwow~\cite{Marsaglia2003} and some variants of Sobol. Some tests report that
-the fastest version provides 15GSample/s on the new Fermi C2050 card. Their
-PRNGs fail to succeed the whole tests of TestU01 on only one test.
+PRNGs on different computing architectures: CPU, field-programmable gate array
+(FPGA), massively parallel processors, and GPU. This study is of interest, because
+the performance of the same PRNGs on different architectures is compared.
+The FPGA appears to be the fastest and the most
+efficient architecture, providing the largest number of generated pseudorandom numbers
+per joule.
+However, we notice that the authors can ``only'' generate between 11 and 16GSamples/s
+with a GTX 280 GPU, which should be compared with
+the results presented in this document.
+We also remark that the PRNGs proposed in~\cite{conf/fpga/ThomasHL09} are only
+able to pass the {\it Crush} battery, which is far easier than the {\it Big Crush} one.
+
+Lastly, NVidia has developed for CUDA a library for the generation of pseudorandom numbers, called
+Curand~\cite{curand11}. Several PRNGs are implemented, among
+others
+Xorwow~\cite{Marsaglia2003} and some variants of Sobol. The tests reported show that
+their fastest version provides 15GSamples/s on the new Fermi C2050 card.
+But their PRNGs cannot pass the whole TestU01 battery (only one test fails).
\newline
\newline
-To the best of our knowledge no GPU implementation have been proven to have chaotic properties.
+We can finally remark that, to the best of our knowledge, no GPU implementation has been proven to be chaotic, and the cryptographically secure property has surprisingly never been considered.

\section{Basic Recalls}
\label{section:BASIC RECALLS}
+
This section is devoted to basic definitions and terminologies in the fields of
-topological chaos and chaotic iterations.
+topological chaos and chaotic iterations. We assume the reader is familiar
+with basic notions of topology (see for instance~\cite{Devaney}).
+
+
\subsection{Devaney's Chaotic Dynamical Systems}

In the sequel $S^{n}$ denotes the $n^{th}$ term of a sequence $S$ and $V_{i}$
@@ -155,7 +240,7 @@ Consider a topological space $(\mathcal{X},\tau)$ and a continuous function $f :
\mathcal{X} \rightarrow \mathcal{X}$.
\begin{definition} -$f$ is said to be \emph{topologically transitive} if, for any pair of open sets +The function $f$ is said to be \emph{topologically transitive} if, for any pair of open sets $U,V \subset \mathcal{X}$, there exists $k>0$ such that $f^k(U) \cap V \neq \varnothing$. \end{definition} @@ -174,7 +259,7 @@ necessarily the same period). \begin{definition}[Devaney's formulation of chaos~\cite{Devaney}] -$f$ is said to be \emph{chaotic} on $(\mathcal{X},\tau)$ if $f$ is regular and +The function $f$ is said to be \emph{chaotic} on $(\mathcal{X},\tau)$ if $f$ is regular and topologically transitive. \end{definition} @@ -182,12 +267,12 @@ The chaos property is strongly linked to the notion of ``sensitivity'', defined on a metric space $(\mathcal{X},d)$ by: \begin{definition} -\label{sensitivity} $f$ has \emph{sensitive dependence on initial conditions} +\label{sensitivity} The function $f$ has \emph{sensitive dependence on initial conditions} if there exists $\delta >0$ such that, for any $x\in \mathcal{X}$ and any neighborhood $V$ of $x$, there exist $y\in V$ and $n > 0$ such that $d\left(f^{n}(x), f^{n}(y)\right) >\delta $. -$\delta$ is called the \emph{constant of sensitivity} of $f$. +The constant $\delta$ is called the \emph{constant of sensitivity} of $f$. \end{definition} Indeed, Banks \emph{et al.} have proven in~\cite{Banks92} that when $f$ is @@ -247,11 +332,13 @@ are continuous. For further explanations, see, e.g., \cite{guyeux10}. Let $\delta $ be the \emph{discrete Boolean metric}, $\delta (x,y)=0\Leftrightarrow x=y.$ Given a function $f$, define the function: +%%RAPH : ici j'ai coupé la dernière ligne en 2, c'est moche mais bon \begin{equation} \begin{array}{lrll} F_{f}: & \llbracket1;\mathsf{N}\rrbracket\times \mathds{B}^{\mathsf{N}} & \longrightarrow & \mathds{B}^{\mathsf{N}} \\ -& (k,E) & \longmapsto & \left( E_{j}.\delta (k,j)+f(E)_{k}.\overline{\delta +& (k,E) & \longmapsto & \left( E_{j}.\delta (k,j)+ \right.\\ +& & & \left. f(E)_{k}.\overline{\delta (k,j)}\right) _{j\in \llbracket1;\mathsf{N}\rrbracket},% \end{array}% \end{equation}% @@ -310,9 +397,9 @@ their distance should increase too. \item In addition, if two systems present the same cells and their respective strategies start with the same terms, then the distance between these two points must be small because the evolution of the two systems will be the same for a -while. Indeed, the two dynamical systems start with the same initial condition, -use the same update function, and as strategies are the same for a while, then -components that are updated are the same too. +while. Indeed, both dynamical systems start with the same initial condition, +use the same update function, and as strategies are the same for a while, furthermore +updated components are the same as well. \end{itemize} The distance presented above follows these recommendations. Indeed, if the floor value $\lfloor d(X,Y)\rfloor $ is equal to $n$, then the systems $E, \check{E}$ @@ -321,7 +408,7 @@ measure of the differences between strategies $S$ and $\check{S}$. More precisely, this floating part is less than $10^{-k}$ if and only if the first $k$ terms of the two strategies are equal. Moreover, if the $k^{th}$ digit is nonzero, then the $k^{th}$ terms of the two strategies are different. -The impact of this choice for a distance will be investigate at the end of the document. +The impact of this choice for a distance will be investigated at the end of the document. 
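+As a purely illustrative example of how this distance behaves (the values below are chosen
+for this explanation only, they are not taken from~\cite{guyeux10}), suppose that
+$\mathsf{N}=5$, that the states $E$ and $\check{E}$ differ in exactly two cells, and that the
+strategies $S$ and $\check{S}$ coincide on their first three terms but not on the fourth one.
+The properties recalled above then give $\lfloor d(X,Y)\rfloor = 2$, while the floating part
+is less than $10^{-3}$ without being less than $10^{-4}$, so that
+$$2 + 10^{-4} \leqslant d(X,Y) < 2 + 10^{-3}.$$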
Finally, it has been established in \cite{guyeux10} that,


\begin{theorem}
@@ -344,8 +431,7 @@ The relation between $\Gamma(f)$ and $G_f$ is clear: there exists a path from
$x$ to $x'$ in $\Gamma(f)$ if and only if there exists a strategy $s$ such that
the parallel iteration of $G_f$ from the initial point $(s,x)$ reaches the
point $x'$.
-
-We have finally proven in \cite{bcgr11:ip} that,
+We have then proven in \cite{bcgr11:ip} that,


\begin{theorem}
@@ -354,18 +440,38 @@ Let $f:\mathds{B}^\mathsf{N}\to\mathds{B}^\mathsf{N}$. $G_f$ is chaotic (accord
 if and only if $\Gamma(f)$ is strongly connected.
\end{theorem}

-This result of chaos has lead us to study the possibility to build a
-pseudo-random number generator (PRNG) based on the chaotic iterations.
+Finally, we have established in \cite{bcgr11:ip} that,

+\begin{theorem}
+  Let $f: \mathds{B}^{n} \rightarrow \mathds{B}^{n}$, $\Gamma(f)$ its
+  iteration graph, $\check{M}$ its adjacency
+  matrix and $M$
+  an $n\times n$ matrix defined by
+  $M_{ij} = \frac{1}{n}\check{M}_{ij}$
+  if $i \neq j$ and
+  $M_{ii} = 1 - \frac{1}{n} \sum\limits_{j=1, j\neq i}^n \check{M}_{ij}$ otherwise.
+
+  If $\Gamma(f)$ is strongly connected, then
+  the output of the PRNG detailed in Algorithm~\ref{CI Algorithm} follows
+  a law that tends to the uniform distribution
+  if and only if $M$ is a doubly stochastic matrix.
+\end{theorem}
+
+
+These results of chaos and uniform distribution have led us to study the possibility of building a
+pseudorandom number generator (PRNG) based on the chaotic iterations.
 As $G_f$, defined on the domain $\llbracket 1 ; \mathsf{N} \rrbracket^{\mathds{N}}
-\times \mathds{B}^\mathsf{N}$, is build from Boolean networks $f : \mathds{B}^\mathsf{N}
+\times \mathds{B}^\mathsf{N}$, is built from Boolean networks $f : \mathds{B}^\mathsf{N}
\rightarrow \mathds{B}^\mathsf{N}$, we can preserve the theoretical properties on $G_f$
-during implementations (due to the discrete nature of $f$). It is as if
+during implementations (due to the discrete nature of $f$). Indeed, it is as if
$\mathds{B}^\mathsf{N}$ represents the memory of the computer whereas $\llbracket 1 ; \mathsf{N}
-\rrbracket^{\mathds{N}}$ is its input stream (the seeds, for instance).
+\rrbracket^{\mathds{N}}$ is its input stream (the seeds, for instance, in PRNG, or a physical noise in TRNG).
+Let us finally remark that the vectorial negation satisfies the hypotheses of both theorems above.

+\section{Application to Pseudorandomness}
+\label{sec:pseudorandom}

-\section{Application to Pseudo-Randomness}
-\label{sec:pseudo-random}
-\subsection{A First Pseudo-Random Number Generator}
+\subsection{A First Pseudorandom Number Generator}

We have proposed in~\cite{bgw09:ip} a new family of generators that receives
two PRNGs as inputs. These two generators are mixed with chaotic iterations,
@@ -374,8 +480,9 @@ generator taken alone. Furthermore, our
generator
possesses various chaos properties that none of the generators used as input
present.


+
\begin{algorithm}[h!]
-%\begin{scriptsize}
+\begin{small}
\KwIn{a function $f$, an iteration number $b$, an initial configuration $x^0$
($n$ bits)}
\KwOut{a configuration $x$ ($n$ bits)}
@@ -387,12 +494,16 @@ $s\leftarrow{\textit{XORshift}(n)}$\;
$x\leftarrow{F_f(s,x)}$\;
}
return $x$\;
-%\end{scriptsize}
+\end{small}
\caption{PRNG with chaotic functions}
\label{CI Algorithm}
\end{algorithm}



+
+
+
\begin{algorithm}[h!]
+\begin{small} \KwIn{the internal configuration $z$ (a 32-bit word)} \KwOut{$y$ (a 32-bit word)} $z\leftarrow{z\oplus{(z\ll13)}}$\; @@ -400,7 +511,7 @@ $z\leftarrow{z\oplus{(z\gg17)}}$\; $z\leftarrow{z\oplus{(z\ll5)}}$\; $y\leftarrow{z}$\; return $y$\; -\medskip +\end{small} \caption{An arbitrary round of \textit{XORshift} algorithm} \label{XORshift} \end{algorithm} @@ -410,11 +521,11 @@ return $y$\; This generator is synthesized in Algorithm~\ref{CI Algorithm}. -It takes as input: a function $f$; +It takes as input: a Boolean function $f$ satisfying Theorem~\ref{Th:Caractérisation des IC chaotiques}; an integer $b$, ensuring that the number of executed iterations is at least $b$ and at most $2b+1$; and an initial configuration $x^0$. It returns the new generated configuration $x$. Internally, it embeds two -\textit{XORshift}$(k)$ PRNGs~\cite{Marsaglia2003} that returns integers +\textit{XORshift}$(k)$ PRNGs~\cite{Marsaglia2003} that return integers uniformly distributed into $\llbracket 1 ; k \rrbracket$. \textit{XORshift} is a category of very fast PRNGs designed by George Marsaglia, @@ -423,19 +534,7 @@ with a bit shifted version of it. This PRNG, which has a period of $2^{32}-1=4.29\times10^9$, is summed up in Algorithm~\ref{XORshift}. It is used in our PRNG to compute the strategy length and the strategy elements. - -We have proven in \cite{bcgr11:ip} that, -\begin{theorem} - Let $f: \mathds{B}^{n} \rightarrow \mathds{B}^{n}$, $\Gamma(f)$ its - iteration graph, $\check{M}$ its adjacency - matrix and $M$ a $n\times n$ matrix defined as in the previous lemma. - If $\Gamma(f)$ is strongly connected, then - the output of the PRNG detailed in Algorithm~\ref{CI Algorithm} follows - a law that tends to the uniform distribution - if and only if $M$ is a double stochastic matrix. -\end{theorem} - -This former generator as successively passed various batteries of statistical tests, as the NIST tests~\cite{bcgr11:ip}. +This former generator has successively passed various batteries of statistical tests, as the NIST~\cite{bcgr11:ip}, DieHARD~\cite{Marsaglia1996}, and TestU01~\cite{LEcuyerS07} ones. \subsection{Improving the Speed of the Former Generator} @@ -455,7 +554,7 @@ x^0 \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket, S \in \llbracket 0, 2^\mathsf{N \label{equation Oplus} \end{equation} where $\oplus$ is for the bitwise exclusive or between two integers. -This rewritten can be understood as follows. The $n-$th term $S^n$ of the +This rewriting can be understood as follows. The $n-$th term $S^n$ of the sequence $S$, which is an integer of $\mathsf{N}$ binary digits, presents the list of cells to update in the state $x^n$ of the system (represented as an integer having $\mathsf{N}$ bits too). More precisely, the $k-$th @@ -479,18 +578,17 @@ where $f$ is the vectorial negation and $\forall n \in \mathds{N}$, $\mathcal{S}^n \subset \llbracket 1, \mathsf{N} \rrbracket$ is such that $k \in \mathcal{S}^n$ if and only if the $k-$th digit in the binary decomposition of $S^n$ is 1. Such chaotic iterations are more general -than the ones presented in Definition \ref{Def:chaotic iterations} for -the fact that, instead of updating only one term at each iteration, +than the ones presented in Definition \ref{Def:chaotic iterations} because, instead of updating only one term at each iteration, we select a subset of components to change. 
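+To illustrate Equation~\ref{equation Oplus}, a minimal C sketch of one step of these general
+chaotic iterations with the vectorial negation is given below, for $\mathsf{N}=32$. The names
+\texttt{xorlike()} and \texttt{ci\_step()} are purely illustrative (they are not part of the
+implementation described in the next sections): any generator of 32-bit integers can provide
+the strategy terms $S^n$.
+
+\begin{lstlisting}[language=C]
+/* Illustrative sketch: one step of x^{n+1} = x^n xor S^n (N = 32).  */
+/* Every bit set to 1 in the strategy term s negates the             */
+/* corresponding cell of the state x.                                */
+extern unsigned int xorlike(void);   /* any 32-bit PRNG (assumed)    */
+
+unsigned int ci_step(unsigned int *x)
+{
+  unsigned int s = xorlike();  /* strategy term S^n                  */
+  *x ^= s;                     /* general chaotic iteration step     */
+  return *x;                   /* new state x^{n+1}                  */
+}
+\end{lstlisting}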
Obviously, replacing Algorithm~\ref{CI Algorithm} by -Equation~\ref{equation Oplus}, possible when the iteration function is +Equation~\ref{equation Oplus}, which is possible when the iteration function is the vectorial negation, leads to a speed improvement. However, proofs of chaos obtained in~\cite{bg10:ij} have been established only for chaotic iterations of the form presented in Definition \ref{Def:chaotic iterations}. The question is now to determine whether the -use of more general chaotic iterations to generate pseudo-random numbers +use of more general chaotic iterations to generate pseudorandom numbers faster, does not deflate their topological chaos properties. \subsection{Proofs of Chaos of the General Formulation of the Chaotic Iterations} @@ -525,12 +623,13 @@ Let us introduce the following function: where $\mathcal{P}\left(X\right)$ is for the powerset of the set $X$, that is, $Y \in \mathcal{P}\left(X\right) \Longleftrightarrow Y \subset X$. Given a function $f:\mathds{B}^\mathsf{N} \longrightarrow \mathds{B}^\mathsf{N} $, define the function: +%%RAPH : j'ai coupé la dernière ligne en 2, c'est moche \begin{equation} \begin{array}{lrll} F_{f}: & \mathcal{P}\left(\llbracket1;\mathsf{N}\rrbracket \right) \times \mathds{B}^{\mathsf{N}} & \longrightarrow & \mathds{B}^{\mathsf{N}} \\ -& (P,E) & \longmapsto & \left( E_{j}.\chi (j,P)+f(E)_{j}.\overline{\chi -(j,P)}\right) _{j\in \llbracket1;\mathsf{N}\rrbracket},% +& (P,E) & \longmapsto & \left( E_{j}.\chi (j,P)+\right.\\ +& & &\left.f(E)_{j}.\overline{\chi(j,P)}\right) _{j\in \llbracket1;\mathsf{N}\rrbracket},% \end{array}% \end{equation}% where + and . are the Boolean addition and product operations, and $\overline{x}$ @@ -542,7 +641,7 @@ Consider the phase space: \end{equation} \noindent and the map defined on $\mathcal{X}$: \begin{equation} -G_f\left(S,E\right) = \left(\sigma(S), F_f(i(S),E)\right), \label{Gf} +G_f\left(S,E\right) = \left(\sigma(S), F_f(i(S),E)\right), %\label{Gf} %%RAPH, j'ai viré ce label qui existe déjà avant... \end{equation} \noindent where $\sigma$ is the \emph{shift} function defined by $\sigma (S^{n})_{n\in \mathds{N}}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}\longrightarrow (S^{n+1})_{n\in @@ -559,7 +658,7 @@ X^{k+1}=G_{f}(X^k).% \right. \end{equation}% -Another time, a shift function appears as a component of these general chaotic +Once more, a shift function appears as a component of these general chaotic iterations. To study the Devaney's chaos property, a distance between two points @@ -569,17 +668,21 @@ Let us introduce: d(X,Y)=d_{e}(E,\check{E})+d_{s}(S,\check{S}), \label{nouveau d} \end{equation} -\noindent where -\begin{equation} -\left\{ -\begin{array}{lll} -\displaystyle{d_{e}(E,\check{E})} & = & \displaystyle{\sum_{k=1}^{\mathsf{N}% -}\delta (E_{k},\check{E}_{k})}\textrm{ is another time the Hamming distance}, \\ -\displaystyle{d_{s}(S,\check{S})} & = & \displaystyle{\dfrac{9}{\mathsf{N}}% -\sum_{k=1}^{\infty }\dfrac{|S^k\Delta {S}^k|}{10^{k}}}.% -\end{array}% -\right. 
-\end{equation}
+\noindent where $ \displaystyle{d_{e}(E,\check{E})} = \displaystyle{\sum_{k=1}^{\mathsf{N}%
+ }\delta (E_{k},\check{E}_{k})}$ is once more the Hamming distance, and
+$ \displaystyle{d_{s}(S,\check{S})} = \displaystyle{\dfrac{9}{\mathsf{N}}%
+ \sum_{k=1}^{\infty }\dfrac{|S^k\Delta \check{S}^k|}{10^{k}}}$,
+%%RAPH: here, I removed all the line breaks
+%% \begin{equation}
+%% \left\{
+%% \begin{array}{lll}
+%% \displaystyle{d_{e}(E,\check{E})} & = & \displaystyle{\sum_{k=1}^{\mathsf{N}%
+%% }\delta (E_{k},\check{E}_{k})} \textrm{ is once more the Hamming distance}, \\
+%% \displaystyle{d_{s}(S,\check{S})} & = & \displaystyle{\dfrac{9}{\mathsf{N}}%
+%% \sum_{k=1}^{\infty }\dfrac{|S^k\Delta {S}^k|}{10^{k}}}.%
+%% \end{array}%
+%% \right.
+%% \end{equation}
where $|X|$ is the cardinality of a set $X$ and $A\Delta B$ is for the symmetric difference,
defined for sets A, B as $A\,\Delta\,B = (A \setminus B) \cup (B \setminus A)$.
@@ -590,7 +693,7 @@ The function $d$ defined in Eq.~\ref{nouveau d} is a metric on $\mathcal{X}$.
\begin{proof}
 $d_e$ is the Hamming distance. We will prove that $d_s$ is a distance
-too, thus $d$ will be a distance as sum of two distances.
+too, thus $d$, being the sum of two distances, will also be a distance.
\begin{itemize}
\item Obviously, $d_s(S,\check{S})\geqslant 0$, and if $S=\check{S}$, then
$d_s(S,\check{S})=0$. Conversely, if $d_s(S,\check{S})=0$, then
@@ -607,7 +710,7 @@ inequality is obtained.
Before being able to study the topological behavior of the general
-chaotic iterations, we must firstly establish that:
+chaotic iterations, we must first establish that:
\begin{proposition}
For all $f:\mathds{B}^\mathsf{N} \longrightarrow \mathds{B}^\mathsf{N} $, the function $G_f$ is continuous on
@@ -643,7 +746,7 @@ so, after the $max(n_0, n_1)^{th}$ term, the distance $d$ between these two poin
G_{f}(S^n,E^n)\right) $ and $\left( G_{f}(S,E)\right) $ is convergent to 0. Let
$\varepsilon >0$. \medskip
\begin{itemize}
-\item If $\varepsilon \geqslant 1$, we see that distance
+\item If $\varepsilon \geqslant 1$, we see that the distance
between $\left( G_{f}(S^n,E^n)\right) $ and $\left(
G_{f}(S,E)\right) $ is strictly less than 1 after the $max(n_{0},n_{1})^{th}$
term (same state). \medskip
@@ -660,12 +763,14 @@ G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ are the same ($G_{f}$ is a shift of strategies)
the distance between $(S^n,E^n)$ and $(S,E)$ is strictly less than $%
10^{-(k+1)}\leqslant \varepsilon $.\bigskip \newline
In conclusion,
-$$
+%%RAPH: here I added a line break
+\begin{flushleft}$$
\forall \varepsilon >0,\exists N_{0}=max(n_{0},n_{1},n_{2})\in \mathds{N}%
-,\forall n\geqslant N_{0},
- d\left( G_{f}(S^n,E^n);G_{f}(S,E)\right)
+,\forall n\geqslant N_{0},$$
+$$ d\left( G_{f}(S^n,E^n);G_{f}(S,E)\right)
\leqslant \varepsilon .
$$
+\end{flushleft}
$G_{f}$ is consequently continuous.
\end{proof}
@@ -705,7 +810,11 @@ where $(s^0,s^1, \hdots)$ is the strategy of $Y$, satisfies the properties
claimed in the lemma.
\end{proof}
-We can now prove the Theorem~\ref{t:chaos des general}...
+We can now prove Theorem~\ref{t:chaos des general}.
\begin{proof}[Theorem~\ref{t:chaos des general}]
Firstly, strong transitivity implies transitivity.
Let $(S,E) \in\mathcal{X}$ and $\varepsilon >0$. To
@@ -723,8 +832,10 @@ and $t_2\in\mathds{N}$ such that $E$ is reached from $(S',E')$ after $t_2$
iterations of $G_f$.
Consider the strategy $\tilde S$ that alternates the first $t_1$ terms -of $S$ and the first $t_2$ terms of $S'$: $$\tilde -S=(S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots).$$ It +of $S$ and the first $t_2$ terms of $S'$: +%%RAPH : j'ai coupé la ligne en 2 +$$\tilde +S=(S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,$$$$\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots).$$ It is clear that $(\tilde S,E)$ is obtained from $(\tilde S,E)$ after $t_1+t_2$ iterations of $G_f$. So $(\tilde S,E)$ is a periodic point. Since $\tilde S_t=S_t$ for $t>$32)} in order to obtain the 32 most significant bits of \texttt{t}. + +Thus producing a pseudorandom number needs 6 xor operations with 6 32-bits numbers +that are provided by 3 64-bits PRNGs. This version successfully passes the +stringent BigCrush battery of tests~\cite{LEcuyerS07}. + +\section{Efficient PRNGs based on Chaotic Iterations on GPU} +\label{sec:efficient PRNG gpu} + +In order to take benefits from the computing power of GPU, a program +needs to have independent blocks of threads that can be computed +simultaneously. In general, the larger the number of threads is, the +more local memory is used, and the less branching instructions are +used (if, while, ...), the better the performances on GPU is. +Obviously, having these requirements in mind, it is possible to build +a program similar to the one presented in Listing +\ref{algo:seqCIPRNG}, which computes pseudorandom numbers on GPU. To +do so, we must firstly recall that in the CUDA~\cite{Nvid10} +environment, threads have a local identifier called +\texttt{ThreadIdx}, which is relative to the block containing +them. Furthermore, in CUDA, parts of the code that are executed by the GPU, are +called {\it kernels}. + + +\subsection{Naive Version for GPU} + + +It is possible to deduce from the CPU version a quite similar version adapted to GPU. +The simple principle consists in making each thread of the GPU computing the CPU version of our PRNG. +Of course, the three xor-like +PRNGs used in these computations must have different parameters. +In a given thread, these parameters are +randomly picked from another PRNGs. +The initialization stage is performed by the CPU. +To do it, the ISAAC PRNG~\cite{Jenkins96} is used to set all the +parameters embedded into each thread. + +The implementation of the three +xor-like PRNGs is straightforward when their parameters have been +allocated in the GPU memory. Each xor-like works with an internal +number $x$ that saves the last generated pseudorandom number. Additionally, the +implementation of the xor128, the xorshift, and the xorwow respectively require +4, 5, and 6 unsigned long as internal variables. - - - -In listing~\ref{algo:seqCIprng} a sequential version of our chaotic iterations -based PRNG is presented. The xor operator is represented by \textasciicircum. -This function uses three classical 64-bits PRNG: the \texttt{xorshift}, the -\texttt{xor128} and the \texttt{xorwow}. In the following, we call them -xor-like PRNGSs. These three PRNGs are presented in~\cite{Marsaglia2003}. As -each xor-like PRNG used works with 64-bits and as our PRNG works with 32-bits, -the use of \texttt{(unsigned int)} selects the 32 least significant bits whereas -\texttt{(unsigned int)(t3$>>$32)} selects the 32 most significants bits of the -variable \texttt{t}. So to produce a random number realizes 6 xor operations -with 6 32-bits numbers produced by 3 64-bits PRNG. 
This version successes the -BigCrush of the TestU01 battery~\cite{LEcuyerS07}. - -\section{Efficient PRNGs based on chaotic iterations on GPU} -\label{sec:efficient prng gpu} - -In order to benefit from computing power of GPU, a program needs to define -independent blocks of threads which can be computed simultaneously. In general, -the larger the number of threads is, the more local memory is used and the less -branching instructions are used (if, while, ...), the better performance is -obtained on GPU. So with algorithm \ref{algo:seqCIprng} presented in the -previous section, it is possible to build a similar program which computes PRNG -on GPU. In the CUDA~\cite{Nvid10} environment, threads have a local -identificator, called \texttt{ThreadIdx} relative to the block containing them. - - -\subsection{Naive version for GPU} - -From the CPU version, it is possible to obtain a quite similar version for GPU. -The principe consists in assigning the computation of a PRNG as in sequential to -each thread of the GPU. Of course, it is essential that the three xor-like -PRNGs used for our computation have different parameters. So we chose them -randomly with another PRNG. As the initialisation is performed by the CPU, we -have chosen to use the ISAAC PRNG~\cite{Jenkins96} to initalize all the -parameters for the GPU version of our PRNG. The implementation of the three -xor-like PRNGs is straightforward as soon as their parameters have been -allocated in the GPU memory. Each xor-like PRNGs used works with an internal -number $x$ which keeps the last generated random numbers. Other internal -variables are also used by the xor-like PRNGs. More precisely, the -implementation of the xor128, the xorshift and the xorwow respectively require -4, 5 and 6 unsigned long as internal variables. - \begin{algorithm} - +\begin{small} \KwIn{InternalVarXorLikeArray: array with internal variables of the 3 xor-like PRNGs in global memory\; -NumThreads: Number of threads\;} +NumThreads: number of threads\;} \KwOut{NewNb: array containing random numbers in global memory} \If{threadIdx is concerned by the computation} { retrieve data from InternalVarXorLikeArray[threadIdx] in local variables\; \For{i=1 to n} { - compute a new PRNG as in Listing\ref{algo:seqCIprng}\; + compute a new PRNG as in Listing\ref{algo:seqCIPRNG}\; store the new PRNG in NewNb[NumThreads*threadIdx+i]\; } store internal variables in InternalVarXorLikeArray[threadIdx]\; } - -\caption{main kernel for the chaotic iterations based PRNG GPU naive version} +\end{small} +\caption{Main kernel of the GPU ``naive'' version of the PRNG based on chaotic iterations} \label{algo:gpu_kernel} \end{algorithm} -Algorithm~\ref{algo:gpu_kernel} presents a naive implementation of PRNG using -GPU. According to the available memory in the GPU and the number of threads -used simultenaously, the number of random numbers that a thread can generate -inside a kernel is limited, i.e. the variable \texttt{n} in -algorithm~\ref{algo:gpu_kernel}. For example, if $100,000$ threads are used and -if $n=100$\footnote{in fact, we need to add the initial seed (a 32-bits number)} -then the memory required to store internals variables of xor-like + + +Algorithm~\ref{algo:gpu_kernel} presents a naive implementation of the proposed PRNG on +GPU. Due to the available memory in the GPU and the number of threads +used simultaneously, the number of random numbers that a thread can generate +inside a kernel is limited (\emph{i.e.}, the variable \texttt{n} in +algorithm~\ref{algo:gpu_kernel}). 
For instance, if $100,000$ threads are used and +if $n=100$\footnote{in fact, we need to add the initial seed (a 32-bits number)}, +then the memory required to store all of the internals variables of both the xor-like PRNGs\footnote{we multiply this number by $2$ in order to count 32-bits numbers} -and random number of our PRNG is equals to $100,000\times ((4+5+6)\times -2+(1+100))=1,310,000$ 32-bits numbers, i.e. about $52$Mb. +and the pseudorandom numbers generated by our PRNG, is equal to $100,000\times ((4+5+6)\times +2+(1+100))=1,310,000$ 32-bits numbers, that is, approximately $52$Mb. -All the tests performed to pass the BigCrush of TestU01 succeeded. Different -number of threads, called \texttt{NumThreads} in our algorithm, have been tested -upto $10$ millions. -\newline -\newline -{\bf QUESTION : on laisse cette remarque, je suis mitigé !!!} +This generator is able to pass the whole BigCrush battery of tests, for all +the versions that have been tested depending on their number of threads +(called \texttt{NumThreads} in our algorithm, tested up to $5$ million). \begin{remark} -Algorithm~\ref{algo:gpu_kernel} has the advantage to manipulate independent -PRNGs, so this version is easily usable on a cluster of computer. The only thing -to ensure is to use a single ISAAC PRNG. For this, a simple solution consists in -using a master node for the initialization which computes the initial parameters -for all the differents nodes involves in the computation. +The proposed algorithm has the advantage of manipulating independent +PRNGs, so this version is easily adaptable on a cluster of computers too. The only thing +to ensure is to use a single ISAAC PRNG. To achieve this requirement, a simple solution consists in +using a master node for the initialization. This master node computes the initial parameters +for all the different nodes involved in the computation. \end{remark} -\subsection{Improved version for GPU} +\subsection{Improved Version for GPU} As GPU cards using CUDA have shared memory between threads of the same block, it is possible to use this feature in order to simplify the previous algorithm, -i.e., using less than 3 xor-like PRNGs. The solution consists in computing only -one xor-like PRNG by thread, saving it into shared memory and using the results +i.e., to use less than 3 xor-like PRNGs. The solution consists in computing only +one xor-like PRNG by thread, saving it into the shared memory, and then to use the results of some other threads in the same block of threads. In order to define which -thread uses the result of which other one, we can use a permutation array which -contains the indexes of all threads and for which a permutation has been -performed. In Algorithm~\ref{algo:gpu_kernel2}, 2 permutations arrays are used. -The variable \texttt{offset} is computed using the value of -\texttt{permutation\_size}. Then we can compute \texttt{o1} and \texttt{o2} -which represent the indexes of the other threads for which the results are used -by the current thread. In the algorithm, we consider that a 64-bits xor-like -PRNG is used, that is why both 32-bits parts are used. +thread uses the result of which other one, we can use a combination array that +contains the indexes of all threads and for which a combination has been +performed. -This version also succeeds to the {\it BigCrush} batteries of tests. +In Algorithm~\ref{algo:gpu_kernel2}, two combination arrays are used. The +variable \texttt{offset} is computed using the value of +\texttt{combination\_size}. 
Then we can compute \texttt{o1} and \texttt{o2} +representing the indexes of the other threads whose results are used by the +current one. In this algorithm, we consider that a 32-bits xor-like PRNG has +been chosen. In practice, we use the xor128 proposed in~\cite{Marsaglia2003} in +which unsigned longs (64 bits) have been replaced by unsigned integers (32 +bits). -\begin{algorithm} +This version can also pass the whole {\it BigCrush} battery of tests. +\begin{algorithm} +\begin{small} \KwIn{InternalVarXorLikeArray: array with internal variables of 1 xor-like PRNGs in global memory\; NumThreads: Number of threads\; -tab1, tab2: Arrays containing permutations of size permutation\_size\;} +array\_comb1, array\_comb2: Arrays containing combinations of size combination\_size\;} \KwOut{NewNb: array containing random numbers in global memory} \If{threadId is concerned} { - retrieve data from InternalVarXorLikeArray[threadId] in local variables including shared memory\; - offset = threadIdx\%permutation\_size\; - o1 = threadIdx-offset+tab1[offset]\; - o2 = threadIdx-offset+tab2[offset]\; + retrieve data from InternalVarXorLikeArray[threadId] in local variables including shared memory and x\; + offset = threadIdx\%combination\_size\; + o1 = threadIdx-offset+array\_comb1[offset]\; + o2 = threadIdx-offset+array\_comb2[offset]\; \For{i=1 to n} { t=xor-like()\; - t=t$\oplus$shmem[o1]$\oplus$shmem[o2]\; + t=t\textasciicircum shmem[o1]\textasciicircum shmem[o2]\; shared\_mem[threadId]=t\; - x = x $\oplus$ t\; + x = x\textasciicircum t\; store the new PRNG in NewNb[NumThreads*threadId+i]\; } store internal variables in InternalVarXorLikeArray[threadId]\; } - -\caption{main kernel for the chaotic iterations based PRNG GPU efficient -version} -\label{algo:gpu_kernel2} +\end{small} +\caption{Main kernel for the chaotic iterations based PRNG GPU efficient +version\label{IR}} +\label{algo:gpu_kernel2} \end{algorithm} \subsection{Theoretical Evaluation of the Improved Version} -A run of Algorithm~\ref{algo:gpu_kernel2} consists in three operations having +A run of Algorithm~\ref{algo:gpu_kernel2} consists in an operation ($x=x\oplus t$) having the form of Equation~\ref{equation Oplus}, which is equivalent to the iterative -system of Eq.~\ref{eq:generalIC}. That is, three iterations of the general chaotic -iterations are realized between two stored values of the PRNG. +system of Eq.~\ref{eq:generalIC}. That is, an iteration of the general chaotic +iterations is realized between the last stored value $x$ of the thread and a strategy $t$ +(obtained by a bitwise exclusive or between a value provided by a xor-like() call +and two values previously obtained by two other threads). To be certain that we are in the framework of Theorem~\ref{t:chaos des general}, we must guarantee that this dynamical system iterates on the space $\mathcal{X} = \mathcal{P}\left(\llbracket 1, \mathsf{N} \rrbracket\right)^\mathds{N}\times\mathds{B}^\mathsf{N}$. -The left term $x$ obviously belongs into $\mathds{B}^ \mathsf{N}$. -To prevent from any flaws of chaotic properties, we must check that each right -term, corresponding to terms of the strategies, can possibly be equal to any +The left term $x$ obviously belongs to $\mathds{B}^ \mathsf{N}$. +To prevent from any flaws of chaotic properties, we must check that the right +term (the last $t$), corresponding to the strategies, can possibly be equal to any integer of $\llbracket 1, \mathsf{N} \rrbracket$. 
-Such a result is obvious for the two first lines, as for the xor-like(), all the -integers belonging into its interval of definition can occur at each iteration. -It can be easily stated for the two last lines by an immediate mathematical -induction. +Such a result is obvious, as for the xor-like(), all the +integers belonging into its interval of definition can occur at each iteration, and thus the +last $t$ respects the requirement. Furthermore, it is possible to +prove by an immediate mathematical induction that, as the initial $x$ +is uniformly distributed (it is provided by a cryptographically secure PRNG), +the two other stored values shmem[o1] and shmem[o2] are uniformly distributed too, +(this is the induction hypothesis), and thus the next $x$ is finally uniformly distributed. Thus Algorithm~\ref{algo:gpu_kernel2} is a concrete realization of the general chaotic iterations presented previously, and for this reason, it satisfies the @@ -947,570 +1091,65 @@ Devaney's formulation of a chaotic behavior. \label{sec:experiments} Different experiments have been performed in order to measure the generation -speed. We have used a computer equiped with Tesla C1060 NVidia GPU card and an -Intel Xeon E5530 cadenced at 2.40 GHz for our experiments and we have used -another one equipped with a less performant CPU and a GeForce GTX 280. Both +speed. We have used a first computer equipped with a Tesla C1060 NVidia GPU card +and an +Intel Xeon E5530 cadenced at 2.40 GHz, and +a second computer equipped with a smaller CPU and a GeForce GTX 280. +All the cards have 240 cores. -In Figure~\ref{fig:time_gpu} we compare the number of random numbers generated -per second. The xor-like prng is a xor64 described in~\cite{Marsaglia2003}. In -order to obtain the optimal performance we remove the storage of random numbers -in the GPU memory. This step is time consumming and slows down the random number -generation. Moreover, if you are interested by applications that consome random -numbers directly when they are generated, their storage is completely -useless. In this figure we can see that when the number of threads is greater -than approximately 30,000 upto 5 millions the number of random numbers generated -per second is almost constant. With the naive version, it is between 2.5 and -3GSample/s. With the optimized version, it is approximately equals to -20GSample/s. Finally we can remark that both GPU cards are quite similar. In -practice, the Tesla C1060 has more memory than the GTX 280 and this memory +In Figure~\ref{fig:time_xorlike_gpu} we compare the quantity of pseudorandom numbers +generated per second with various xor-like based PRNGs. In this figure, the optimized +versions use the {\it xor64} described in~\cite{Marsaglia2003}, whereas the naive versions +embed the three xor-like PRNGs described in Listing~\ref{algo:seqCIPRNG}. In +order to obtain the optimal performances, the storage of pseudorandom numbers +into the GPU memory has been removed. This step is time consuming and slows down the numbers +generation. Moreover this storage is completely +useless, in case of applications that consume the pseudorandom +numbers directly after generation. We can see that when the number of threads is greater +than approximately 30,000 and lower than 5 million, the number of pseudorandom numbers generated +per second is almost constant. With the naive version, this value ranges from 2.5 to +3GSamples/s. With the optimized version, it is approximately equal to +20GSamples/s. 
Finally we can remark that both GPU cards are quite similar, but in +practice, the Tesla C1060 has more memory than the GTX 280, and this memory should be of better quality. +As a comparison, Listing~\ref{algo:seqCIPRNG} leads to the generation of about +138MSample/s when using one core of the Xeon E5530. \begin{figure}[htbp] \begin{center} - \includegraphics[scale=.7]{curve_time_gpu.pdf} + \includegraphics[width=\columnwidth]{curve_time_xorlike_gpu.pdf} \end{center} -\caption{Number of random numbers generated per second} -\label{fig:time_gpu} +\caption{Quantity of pseudorandom numbers generated per second with the xorlike-based PRNG} +\label{fig:time_xorlike_gpu} \end{figure} -In comparison, Listing~\ref{algo:seqCIprng} allows us to generate about -138MSample/s with only one core of the Xeon E5530. - - - - - -%% \section{Cryptanalysis of the Proposed PRNG} - - -%% Mettre ici la preuve de PCH - -%\section{The relativity of disorder} -%\label{sec:de la relativité du désordre} - -%In the next two sections, we investigate the impact of the choices that have -%lead to the definitions of measures in Sections \ref{sec:chaotic iterations} and \ref{deuxième def}. - -%\subsection{Impact of the topology's finenesse} - -%Let us firstly introduce the following notations. - -%\begin{notation} -%$\mathcal{X}_\tau$ will denote the topological space -%$\left(\mathcal{X},\tau\right)$, whereas $\mathcal{V}_\tau (x)$ will be the set -%of all the neighborhoods of $x$ when considering the topology $\tau$ (or simply -%$\mathcal{V} (x)$, if there is no ambiguity). -%\end{notation} -%\begin{theorem} -%\label{Th:chaos et finesse} -%Let $\mathcal{X}$ a set and $\tau, \tau'$ two topologies on $\mathcal{X}$ s.t. -%$\tau'$ is finer than $\tau$. Let $f:\mathcal{X} \to \mathcal{X}$, continuous -%both for $\tau$ and $\tau'$. +In Figure~\ref{fig:time_bbs_gpu} we highlight the performances of the optimized +BBS-based PRNG on GPU. On the Tesla C1060 we obtain approximately 700MSample/s +and on the GTX 280 about 670MSample/s, which is obviously slower than the +xorlike-based PRNG on GPU. However, we will show in the next sections that this +new PRNG has a strong level of security, which is necessarily paid by a speed +reduction. -%If $(\mathcal{X}_{\tau'},f)$ is chaotic according to Devaney, then -%$(\mathcal{X}_\tau,f)$ is chaotic too. -%\end{theorem} - -%\begin{proof} -%Let us firstly establish the transitivity of $(\mathcal{X}_\tau,f)$. - -%Let $\omega_1, \omega_2$ two open sets of $\tau$. Then $\omega_1, \omega_2 \in -%\tau'$, becaus $\tau'$ is finer than $\tau$. As $f$ is $\tau'-$transitive, we -%can deduce that $\exists n \in \mathds{N}, \omega_1 \cap f^{(n)}(\omega_2) = -%\varnothing$. Consequently, $f$ is $\tau-$transitive. - -%Let us now consider the regularity of $(\mathcal{X}_\tau,f)$, \emph{i.e.}, for -%all $x \in \mathcal{X}$, and for all $\tau-$neighborhood $V$ of $x$, there is a -%periodic point for $f$ into $V$. - -%Let $x \in \mathcal{X}$ and $V \in \mathcal{V}_\tau (x)$ a $\tau-$neighborhood -%of $x$. By definition, $\exists \omega \in \tau, x \in \omega \subset V$. - -%But $\tau \subset \tau'$, so $\omega \in \tau'$, and then $V \in -%\mathcal{V}_{\tau'} (x)$. As $(\mathcal{X}_{\tau'},f)$ is regular, there is a -%periodic point for $f$ into $V$, and the regularity of $(\mathcal{X}_\tau,f)$ is -%proven. -%\end{proof} - -%\subsection{A given system can always be claimed as chaotic} - -%Let $f$ an iteration function on $\mathcal{X}$ having at least a fixed point. 
-%Then this function is chaotic (in a certain way): - -%\begin{theorem} -%Let $\mathcal{X}$ a nonempty set and $f: \mathcal{X} \to \X$ a function having -%at least a fixed point. -%Then $f$ is $\tau_0-$chaotic, where $\tau_0$ is the trivial (indiscrete) -%topology on $\X$. -%\end{theorem} - - -%\begin{proof} -%$f$ is transitive when $\forall \omega, \omega' \in \tau_0 \setminus -%\{\varnothing\}, \exists n \in \mathds{N}, f^{(n)}(\omega) \cap \omega' \neq -%\varnothing$. -%As $\tau_0 = \left\{ \varnothing, \X \right\}$, this is equivalent to look for -%an integer $n$ s.t. $f^{(n)}\left( \X \right) \cap \X \neq \varnothing$. For -%instance, $n=0$ is appropriate. - -%Let us now consider $x \in \X$ and $V \in \mathcal{V}_{\tau_0} (x)$. Then $V = -%\mathcal{X}$, so $V$ has at least a fixed point for $f$. Consequently $f$ is -%regular, and the result is established. -%\end{proof} - - - - -%\subsection{A given system can always be claimed as non-chaotic} - -%\begin{theorem} -%Let $\mathcal{X}$ be a set and $f: \mathcal{X} \to \X$. -%If $\X$ is infinite, then $\left( \X_{\tau_\infty}, f\right)$ is not chaotic -%(for the Devaney's formulation), where $\tau_\infty$ is the discrete topology. -%\end{theorem} - -%\begin{proof} -%Let us prove it by contradiction, assuming that $\left(\X_{\tau_\infty}, -%f\right)$ is both transitive and regular. - -%Let $x \in \X$ and $\{x\}$ one of its neighborhood. This neighborhood must -%contain a periodic point for $f$, if we want that $\left(\X_{\tau_\infty}, -%f\right)$ is regular. Then $x$ must be a periodic point of $f$. - -%Let $I_x = \left\{ f^{(n)}(x), n \in \mathds{N}\right\}$. This set is finite -%because $x$ is periodic, and $\mathcal{X}$ is infinite, then $\exists y \in -%\mathcal{X}, y \notin I_x$. - -%As $\left(\X_{\tau_\infty}, f\right)$ must be transitive, for all open nonempty -%sets $A$ and $B$, an integer $n$ must satisfy $f^{(n)}(A) \cap B \neq -%\varnothing$. However $\{x\}$ and $\{y\}$ are open sets and $y \notin I_x -%\Rightarrow \forall n, f^{(n)}\left( \{x\} \right) \cap \{y\} = \varnothing$. -%\end{proof} - - - - - - -%\section{Chaos on the order topology} -%\label{sec: chaos order topology} -%\subsection{The phase space is an interval of the real line} - -%\subsubsection{Toward a topological semiconjugacy} - -%In what follows, our intention is to establish, by using a topological -%semiconjugacy, that chaotic iterations over $\mathcal{X}$ can be described as -%iterations on a real interval. To do so, we must firstly introduce some -%notations and terminologies. - -%Let $\mathcal{S}_\mathsf{N}$ be the set of sequences belonging into $\llbracket -%1; \mathsf{N}\rrbracket$ and $\mathcal{X}_{\mathsf{N}} = \mathcal{S}_\mathsf{N} -%\times \B^\mathsf{N}$. - - -%\begin{definition} -%The function $\varphi: \mathcal{S}_{10} \times\mathds{B}^{10} \rightarrow \big[ -%0, 2^{10} \big[$ is defined by: -%\begin{equation} -% \begin{array}{cccl} -%\varphi: & \mathcal{X}_{10} = \mathcal{S}_{10} \times\mathds{B}^{10}& -%\longrightarrow & \big[ 0, 2^{10} \big[ \\ -% & (S,E) = \left((S^0, S^1, \hdots ); (E_0, \hdots, E_9)\right) & \longmapsto & -%\varphi \left((S,E)\right) -%\end{array} -%\end{equation} -%where $\varphi\left((S,E)\right)$ is the real number: -%\begin{itemize} -%\item whose integral part $e$ is $\displaystyle{\sum_{k=0}^9 2^{9-k} E_k}$, that -%is, the binary digits of $e$ are $E_0 ~ E_1 ~ \hdots ~ E_9$. 
-%\item whose decimal part $s$ is equal to $s = 0,S^0~ S^1~ S^2~ \hdots = -%\sum_{k=1}^{+\infty} 10^{-k} S^{k-1}.$ -%\end{itemize} -%\end{definition} - - - -%$\varphi$ realizes the association between a point of $\mathcal{X}_{10}$ and a -%real number into $\big[ 0, 2^{10} \big[$. We must now translate the chaotic -%iterations $\Go$ on this real interval. To do so, two intermediate functions -%over $\big[ 0, 2^{10} \big[$ must be introduced: - - -%\begin{definition} -%\label{def:e et s} -%Let $x \in \big[ 0, 2^{10} \big[$ and: -%\begin{itemize} -%\item $e_0, \hdots, e_9$ the binary digits of the integral part of $x$: -%$\displaystyle{\lfloor x \rfloor = \sum_{k=0}^{9} 2^{9-k} e_k}$. -%\item $(s^k)_{k\in \mathds{N}}$ the digits of $x$, where the chosen decimal -%decomposition of $x$ is the one that does not have an infinite number of 9: -%$\displaystyle{x = \lfloor x \rfloor + \sum_{k=0}^{+\infty} s^k 10^{-k-1}}$. -%\end{itemize} -%$e$ and $s$ are thus defined as follows: -%\begin{equation} -%\begin{array}{cccl} -%e: & \big[ 0, 2^{10} \big[ & \longrightarrow & \mathds{B}^{10} \\ -% & x & \longmapsto & (e_0, \hdots, e_9) -%\end{array} -%\end{equation} -%and -%\begin{equation} -% \begin{array}{cccc} -%s: & \big[ 0, 2^{10} \big[ & \longrightarrow & \llbracket 0, 9 -%\rrbracket^{\mathds{N}} \\ -% & x & \longmapsto & (s^k)_{k \in \mathds{N}} -%\end{array} -%\end{equation} -%\end{definition} - -%We are now able to define the function $g$, whose goal is to translate the -%chaotic iterations $\Go$ on an interval of $\mathds{R}$. - -%\begin{definition} -%$g:\big[ 0, 2^{10} \big[ \longrightarrow \big[ 0, 2^{10} \big[$ is defined by: -%\begin{equation} -%\begin{array}{cccc} -%g: & \big[ 0, 2^{10} \big[ & \longrightarrow & \big[ 0, 2^{10} \big[ \\ -% & x & \longmapsto & g(x) -%\end{array} -%\end{equation} -%where g(x) is the real number of $\big[ 0, 2^{10} \big[$ defined bellow: -%\begin{itemize} -%\item its integral part has a binary decomposition equal to $e_0', \hdots, -%e_9'$, with: -% \begin{equation} -%e_i' = \left\{ -%\begin{array}{ll} -%e(x)_i & \textrm{ if } i \neq s^0\\ -%e(x)_i + 1 \textrm{ (mod 2)} & \textrm{ if } i = s^0\\ -%\end{array} -%\right. -%\end{equation} -%\item whose decimal part is $s(x)^1, s(x)^2, \hdots$ -%\end{itemize} -%\end{definition} - -%\bigskip - - -%In other words, if $x = \displaystyle{\sum_{k=0}^{9} 2^{9-k} e_k + -%\sum_{k=0}^{+\infty} s^{k} ~10^{-k-1}}$, then: -%\begin{equation} -%g(x) = -%\displaystyle{\sum_{k=0}^{9} 2^{9-k} (e_k + \delta(k,s^0) \textrm{ (mod 2)}) + -%\sum_{k=0}^{+\infty} s^{k+1} 10^{-k-1}}. -%\end{equation} - - -%\subsubsection{Defining a metric on $\big[ 0, 2^{10} \big[$} - -%Numerous metrics can be defined on the set $\big[ 0, 2^{10} \big[$, the most -%usual one being the Euclidian distance recalled bellow: - -%\begin{notation} -%\index{distance!euclidienne} -%$\Delta$ is the Euclidian distance on $\big[ 0, 2^{10} \big[$, that is, -%$\Delta(x,y) = |y-x|^2$. -%\end{notation} - -%\medskip - -%This Euclidian distance does not reproduce exactly the notion of proximity -%induced by our first distance $d$ on $\X$. Indeed $d$ is finer than $\Delta$. -%This is the reason why we have to introduce the following metric: - - - -%\begin{definition} -%Let $x,y \in \big[ 0, 2^{10} \big[$. 
-%$D$ denotes the function from $\big[ 0, 2^{10} \big[^2$ to $\mathds{R}^+$ -%defined by: $D(x,y) = D_e\left(e(x),e(y)\right) + D_s\left(s(x),s(y)\right)$, -%where: -%\begin{center} -%$\displaystyle{D_e(E,\check{E}) = \sum_{k=0}^\mathsf{9} \delta (E_k, -%\check{E}_k)}$, ~~and~ $\displaystyle{D_s(S,\check{S}) = \sum_{k = 1}^\infty -%\dfrac{|S^k-\check{S}^k|}{10^k}}$. -%\end{center} -%\end{definition} - -%\begin{proposition} -%$D$ is a distance on $\big[ 0, 2^{10} \big[$. -%\end{proposition} - -%\begin{proof} -%The three axioms defining a distance must be checked. -%\begin{itemize} -%\item $D \geqslant 0$, because everything is positive in its definition. If -%$D(x,y)=0$, then $D_e(x,y)=0$, so the integral parts of $x$ and $y$ are equal -%(they have the same binary decomposition). Additionally, $D_s(x,y) = 0$, then -%$\forall k \in \mathds{N}^*, s(x)^k = s(y)^k$. In other words, $x$ and $y$ have -%the same $k-$th decimal digit, $\forall k \in \mathds{N}^*$. And so $x=y$. -%\item $D(x,y)=D(y,x)$. -%\item Finally, the triangular inequality is obtained due to the fact that both -%$\delta$ and $\Delta(x,y)=|x-y|$ satisfy it. -%\end{itemize} -%\end{proof} - - -%The convergence of sequences according to $D$ is not the same than the usual -%convergence related to the Euclidian metric. For instance, if $x^n \to x$ -%according to $D$, then necessarily the integral part of each $x^n$ is equal to -%the integral part of $x$ (at least after a given threshold), and the decimal -%part of $x^n$ corresponds to the one of $x$ ``as far as required''. -%To illustrate this fact, a comparison between $D$ and the Euclidian distance is -%given Figure \ref{fig:comparaison de distances}. These illustrations show that -%$D$ is richer and more refined than the Euclidian distance, and thus is more -%precise. - - -%\begin{figure}[t] -%\begin{center} -% \subfigure[Function $x \to dist(x;1,234) $ on the interval -%$(0;5)$.]{\includegraphics[scale=.35]{DvsEuclidien.pdf}}\quad -% \subfigure[Function $x \to dist(x;3) $ on the interval -%$(0;5)$.]{\includegraphics[scale=.35]{DvsEuclidien2.pdf}} -%\end{center} -%\caption{Comparison between $D$ (in blue) and the Euclidian distane (in green).} -%\label{fig:comparaison de distances} -%\end{figure} - - - - -%\subsubsection{The semiconjugacy} - -%It is now possible to define a topological semiconjugacy between $\mathcal{X}$ -%and an interval of $\mathds{R}$: - -%\begin{theorem} -%Chaotic iterations on the phase space $\mathcal{X}$ are simple iterations on -%$\mathds{R}$, which is illustrated by the semiconjugacy of the diagram bellow: -%\begin{equation*} -%\begin{CD} -%\left(~\mathcal{S}_{10} \times\mathds{B}^{10}, d~\right) @>G_{f_0}>> -%\left(~\mathcal{S}_{10} \times\mathds{B}^{10}, d~\right)\\ -% @V{\varphi}VV @VV{\varphi}V\\ -%\left( ~\big[ 0, 2^{10} \big[, D~\right) @>>g> \left(~\big[ 0, 2^{10} \big[, -%D~\right) -%\end{CD} -%\end{equation*} -%\end{theorem} - -%\begin{proof} -%$\varphi$ has been constructed in order to be continuous and onto. -%\end{proof} - -%In other words, $\mathcal{X}$ is approximately equal to $\big[ 0, 2^\mathsf{N} -%\big[$. 
- - - - - - -%\subsection{Study of the chaotic iterations described as a real function} - - -%\begin{figure}[t] -%\begin{center} -% \subfigure[ICs on the interval -%$(0,9;1)$.]{\includegraphics[scale=.35]{ICs09a1.pdf}}\quad -% \subfigure[ICs on the interval -%$(0,7;1)$.]{\includegraphics[scale=.35]{ICs07a95.pdf}}\\ -% \subfigure[ICs on the interval -%$(0,5;1)$.]{\includegraphics[scale=.35]{ICs05a1.pdf}}\quad -% \subfigure[ICs on the interval -%$(0;1)$]{\includegraphics[scale=.35]{ICs0a1.pdf}} -%\end{center} -%\caption{Representation of the chaotic iterations.} -%\label{fig:ICs} -%\end{figure} - - - - -%\begin{figure}[t] -%\begin{center} -% \subfigure[ICs on the interval -%$(510;514)$.]{\includegraphics[scale=.35]{ICs510a514.pdf}}\quad -% \subfigure[ICs on the interval -%$(1000;1008)$]{\includegraphics[scale=.35]{ICs1000a1008.pdf}} -%\end{center} -%\caption{ICs on small intervals.} -%\label{fig:ICs2} -%\end{figure} - -%\begin{figure}[t] -%\begin{center} -% \subfigure[ICs on the interval -%$(0;16)$.]{\includegraphics[scale=.3]{ICs0a16.pdf}}\quad -% \subfigure[ICs on the interval -%$(40;70)$.]{\includegraphics[scale=.45]{ICs40a70.pdf}}\quad -%\end{center} -%\caption{General aspect of the chaotic iterations.} -%\label{fig:ICs3} -%\end{figure} - - -%We have written a Python program to represent the chaotic iterations with the -%vectorial negation on the real line $\mathds{R}$. Various representations of -%these CIs are given in Figures \ref{fig:ICs}, \ref{fig:ICs2} and \ref{fig:ICs3}. -%It can be remarked that the function $g$ is a piecewise linear function: it is -%linear on each interval having the form $\left[ \dfrac{n}{10}, -%\dfrac{n+1}{10}\right[$, $n \in \llbracket 0;2^{10}\times 10 \rrbracket$ and its -%slope is equal to 10. Let us justify these claims: - -%\begin{proposition} -%\label{Prop:derivabilite des ICs} -%Chaotic iterations $g$ defined on $\mathds{R}$ have derivatives of all orders on -%$\big[ 0, 2^{10} \big[$, except on the 10241 points in $I$ defined by $\left\{ -%\dfrac{n}{10} ~\big/~ n \in \llbracket 0;2^{10}\times 10\rrbracket \right\}$. - -%Furthermore, on each interval of the form $\left[ \dfrac{n}{10}, -%\dfrac{n+1}{10}\right[$, with $n \in \llbracket 0;2^{10}\times 10 \rrbracket$, -%$g$ is a linear function, having a slope equal to 10: $\forall x \notin I, -%g'(x)=10$. -%\end{proposition} - - -%\begin{proof} -%Let $I_n = \left[ \dfrac{n}{10}, \dfrac{n+1}{10}\right[$, with $n \in \llbracket -%0;2^{10}\times 10 \rrbracket$. All the points of $I_n$ have the same integral -%prat $e$ and the same decimal part $s^0$: on the set $I_n$, functions $e(x)$ -%and $x \mapsto s(x)^0$ of Definition \ref{def:e et s} only depend on $n$. So all -%the images $g(x)$ of these points $x$: -%\begin{itemize} -%\item Have the same integral part, which is $e$, except probably the bit number -%$s^0$. In other words, this integer has approximately the same binary -%decomposition than $e$, the sole exception being the digit $s^0$ (this number is -%then either $e+2^{10-s^0}$ or $e-2^{10-s^0}$, depending on the parity of $s^0$, -%\emph{i.e.}, it is equal to $e+(-1)^{s^0}\times 2^{10-s^0}$). -%\item A shift to the left has been applied to the decimal part $y$, losing by -%doing so the common first digit $s^0$. In other words, $y$ has been mapped into -%$10\times y - s^0$. 
-%\end{itemize} -%To sum up, the action of $g$ on the points of $I$ is as follows: first, make a -%multiplication by 10, and second, add the same constant to each term, which is -%$\dfrac{1}{10}\left(e+(-1)^{s^0}\times 2^{10-s^0}\right)-s^0$. -%\end{proof} - -%\begin{remark} -%Finally, chaotic iterations are elements of the large family of functions that -%are both chaotic and piecewise linear (like the tent map). -%\end{remark} - - - -%\subsection{Comparison of the two metrics on $\big[ 0, 2^\mathsf{N} \big[$} - -%The two propositions bellow allow to compare our two distances on $\big[ 0, -%2^\mathsf{N} \big[$: - -%\begin{proposition} -%Id: $\left(~\big[ 0, 2^\mathsf{N} \big[,\Delta~\right) \to \left(~\big[ 0, -%2^\mathsf{N} \big[, D~\right)$ is not continuous. -%\end{proposition} - -%\begin{proof} -%The sequence $x^n = 1,999\hdots 999$ constituted by $n$ 9 as decimal part, is -%such that: -%\begin{itemize} -%\item $\Delta (x^n,2) \to 0.$ -%\item But $D(x^n,2) \geqslant 1$, then $D(x^n,2)$ does not converge to 0. -%\end{itemize} - -%The sequential characterization of the continuity concludes the demonstration. -%\end{proof} - - - -%A contrario: - -%\begin{proposition} -%Id: $\left(~\big[ 0, 2^\mathsf{N} \big[,D~\right) \to \left(~\big[ 0, -%2^\mathsf{N} \big[, \Delta ~\right)$ is a continuous fonction. -%\end{proposition} - -%\begin{proof} -%If $D(x^n,x) \to 0$, then $D_e(x^n,x) = 0$ at least for $n$ larger than a given -%threshold, because $D_e$ only returns integers. So, after this threshold, the -%integral parts of all the $x^n$ are equal to the integral part of $x$. - -%Additionally, $D_s(x^n, x) \to 0$, then $\forall k \in \mathds{N}^*, \exists N_k -%\in \mathds{N}, n \geqslant N_k \Rightarrow D_s(x^n,x) \leqslant 10^{-k}$. This -%means that for all $k$, an index $N_k$ can be found such that, $\forall n -%\geqslant N_k$, all the $x^n$ have the same $k$ firsts digits, which are the -%digits of $x$. We can deduce the convergence $\Delta(x^n,x) \to 0$, and thus the -%result. -%\end{proof} - -%The conclusion of these propositions is that the proposed metric is more precise -%than the Euclidian distance, that is: - -%\begin{corollary} -%$D$ is finer than the Euclidian distance $\Delta$. -%\end{corollary} - -%This corollary can be reformulated as follows: - -%\begin{itemize} -%\item The topology produced by $\Delta$ is a subset of the topology produced by -%$D$. -%\item $D$ has more open sets than $\Delta$. -%\item It is harder to converge for the topology $\tau_D$ inherited by $D$, than -%to converge with the one inherited by $\Delta$, which is denoted here by -%$\tau_\Delta$. -%\end{itemize} - - -%\subsection{Chaos of the chaotic iterations on $\mathds{R}$} -%\label{chpt:Chaos des itérations chaotiques sur R} - - - -%\subsubsection{Chaos according to Devaney} - -%We have recalled previously that the chaotic iterations $\left(\Go, -%\mathcal{X}_d\right)$ are chaotic according to the formulation of Devaney. We -%can deduce that they are chaotic on $\mathds{R}$ too, when considering the order -%topology, because: -%\begin{itemize} -%\item $\left(\Go, \mathcal{X}_d\right)$ and $\left(g, \big[ 0, 2^{10} -%\big[_D\right)$ are semiconjugate by $\varphi$, -%\item Then $\left(g, \big[ 0, 2^{10} \big[_D\right)$ is a system chaotic -%according to Devaney, because the semiconjugacy preserve this character. -%\item But the topology generated by $D$ is finer than the topology generated by -%the Euclidian distance $\Delta$ -- which is the order topology. 
-%\item According to Theorem \ref{Th:chaos et finesse}, we can deduce that the
-%chaotic iterations $g$ are indeed chaotic, as defined by Devaney, for the order
-%topology on $\mathds{R}$.
-%\end{itemize}
-
-%This result can be formulated as follows.
+\begin{figure}[htbp]
+\begin{center}
+  \includegraphics[width=\columnwidth]{curve_time_bbs_gpu.pdf}
+\end{center}
+\caption{Quantity of pseudorandom numbers generated per second using the BBS-based PRNG}
+\label{fig:time_bbs_gpu}
+\end{figure}
 
-%\begin{theorem}
-%\label{th:IC et topologie de l'ordre}
-%The chaotic iterations $g$ on $\mathds{R}$ are chaotic according to the
-%Devaney's formulation, when $\mathds{R}$ has his usual topology, which is the
-%order topology.
-%\end{theorem}
+All these experiments allow us to conclude that it is possible to
+generate a very large quantity of statistically perfect pseudorandom numbers with the xor-like version.
+To a certain extent, this is also the case with the secure BBS-based version, the speed deflation being
+explained by the fact that the former version has ``only''
+chaotic properties and statistical perfection, whereas the latter is also cryptographically secure,
+as shown in the next sections.
 
-%Indeed this result is weaker than the theorem establishing the chaos for the
-%finer topology $d$. However the Theorem \ref{th:IC et topologie de l'ordre}
-%still remains important. Indeed, we have studied in our previous works a set
-%different from the usual set of study ($\mathcal{X}$ instead of $\mathds{R}$),
-%in order to be as close as possible from the computer: the properties of
-%disorder proved theoretically will then be preserved when computing. However, we
-%could wonder whether this change does not lead to a disorder of a lower quality.
-%In other words, have we replaced a situation of a good disorder lost when
-%computing, to another situation of a disorder preserved but of bad quality.
-%Theorem \ref{th:IC et topologie de l'ordre} prove exactly the contrary.
-%
@@ -1518,25 +1157,25 @@ In comparison, Listing~\ref{algo:seqCIprng} allows us to generate about
 
 \section{Security Analysis}
-
+\label{sec:security analysis}
 
 In this section the concatenation of two strings $u$ and $v$ is classically
 denoted by $uv$.
-In a cryptographic context, a pseudo-random generator is a deterministic
+In a cryptographic context, a pseudorandom generator is a deterministic
 algorithm $G$ transforming strings into strings and such that, for any
-seed $w$ of length $N$, $G(w)$ (the output of $G$ on the input $w$) has size
-$\ell_G(N)$ with $\ell_G(N)>N$.
+seed $s$ of length $m$, $G(s)$ (the output of $G$ on the input $s$) has size
+$\ell_G(m)$ with $\ell_G(m)>m$.
 The notion of {\it secure} PRNGs can now be defined as follows.
 
 \begin{definition}
 A cryptographic PRNG $G$ is secure if for any probabilistic polynomial time
 algorithm $D$, for any positive polynomial $p$, and for all sufficiently
-large $k$'s,
-$$| \mathrm{Pr}[D(G(U_k))=1]-Pr[D(U_{\ell_G(k)}=1]|< \frac{1}{p(N)},$$
+large $m$'s,
+$$| \mathrm{Pr}[D(G(U_m))=1]-\mathrm{Pr}[D(U_{\ell_G(m)})=1]|< \frac{1}{p(m)},$$
 where $U_r$ is the uniform distribution over $\{0,1\}^r$ and the
-probabilities are taken over $U_N$, $U_{\ell_G(N)}$ as well as over the
+probabilities are taken over $U_m$, $U_{\ell_G(m)}$ as well as over the
 internal coin tosses of $D$.
 \end{definition}
 
@@ -1545,10 +1184,10 @@ distinguish a perfect uniform random generator from $G$ with a non negligible
 probability.
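+For illustration purposes, consider for instance a deficient generator $G$ whose
+outputs always end with the bit $0$, and let $D$ be the algorithm that simply
+returns the last bit of its input. Then
+$$\left| \mathrm{Pr}[D(G(U_m))=1]-\mathrm{Pr}[D(U_{\ell_G(m)})=1]\right| = \left|0-\frac{1}{2}\right| = \frac{1}{2},$$
+which is at least $\frac{1}{p(m)}$ for, e.g., $p(m)=m$ and every $m \geqslant 2$:
+such a $G$ cannot be secure.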
 The interested reader is referred
 to~\cite[chapter~3]{Goldreich} for more information. Note that it is
 quite easily possible to change the function $\ell$ into any polynomial
-function $\ell^\prime$ satisfying $\ell^\prime(N)>N)$~\cite[Chapter 3.3]{Goldreich}.
+function $\ell^\prime$ satisfying $\ell^\prime(m)>m$~\cite[Chapter 3.3]{Goldreich}.
 
 The generation schema developed in (\ref{equation Oplus}) is based on a
-pseudo-random generator. Let $H$ be a cryptographic PRNG. We may assume,
+pseudorandom generator. Let $H$ be a cryptographic PRNG. We may assume,
 without loss of generality, that for any string $S_0$ of size $N$, the size
 of $H(S_0)$ is $kN$, with $k>2$. It means that $\ell_H(N)=kN$.
 Let $S_1,\ldots,S_k$ be the
@@ -1556,11 +1195,12 @@ strings of length $N$ such that $H(S_0)=S_1 \ldots S_k$ ($H(S_0)$ is the concate
 the $S_i$'s).
 The cryptographic PRNG $X$ defined in (\ref{equation Oplus}) is the algorithm
 mapping any string of length $2N$ $x_0S_0$ into the string
 $(x_0\oplus S_0 \oplus S_1)(x_0\oplus S_0 \oplus S_1\oplus S_2)\ldots
-(x_o\bigoplus_{i=0}^{i=k}S_i)$. Particularly one has $\ell_{X}(2N)=kN=\ell_H(N)$.
+(x_0\oplus\bigoplus_{i=0}^{k}S_i)$. One in particular has $\ell_{X}(2N)=kN=\ell_H(N)$.
 We claim now that if this PRNG is secure, then the new one is secure too.
 
 \begin{proposition}
+\label{cryptopreuve}
 If $H$ is a secure cryptographic PRNG, then $X$ is a secure cryptographic
 PRNG too.
 \end{proposition}
 
@@ -1599,8 +1239,10 @@ $y\bigoplus_{i=1}^{i=j} w_i^\prime=y\bigoplus_{i=1}^{i=j} w_i$.
 It follows, by a direct induction, that $w_i=w_i^\prime$. Furthermore,
 since $\mathbb{B}^{kN}$ is finite, each $\varphi_y$ is bijective. Therefore,
 and using (\ref{PCH-1}), one has
+$\mathrm{Pr}[D^\prime(U_{kN})=1]=\mathrm{Pr}[D(\varphi_y(U_{kN}))=1]$ and,
+therefore,
 \begin{equation}\label{PCH-2}
-\mathrm{Pr}[D^\prime(U_{kN})=1]=\mathrm{Pr}[D(\varphi_y(U_{kN}))=1]=\mathrm{Pr}[D(U_{kN})=1].
+\mathrm{Pr}[D^\prime(U_{kN})=1]=\mathrm{Pr}[D(U_{kN})=1].
 \end{equation}
 
 Now, using (\ref{PCH-1}) again, one has for every $x$,
@@ -1609,7 +1251,7 @@ D^\prime(H(x))=D(\varphi_y(H(x))),
 \end{equation}
 where $y$ is randomly generated. By construction, $\varphi_y(H(x))=X(yx)$, thus
-\begin{equation}\label{PCH-3}
+\begin{equation}%\label{PCH-3}  %%RAPH: removed this label, it already exists three lines above
 D^\prime(H(x))=D(yx),
 \end{equation}
 where $y$ is randomly generated.
@@ -1619,30 +1261,236 @@ It follows that
 \mathrm{Pr}[D^\prime(H(U_{N}))=1]=\mathrm{Pr}[D(U_{2N})=1].
 \end{equation}
 From (\ref{PCH-2}) and (\ref{PCH-4}), one can deduce that
-there exist a polynomial time probabilistic
+there exists a polynomial time probabilistic
 algorithm $D^\prime$, a positive polynomial $p$, such that for all $k_0$ there exists
 $N\geq \frac{k_0}{2}$ satisfying
 $$| \mathrm{Pr}[D(H(U_{N}))=1]-\mathrm{Pr}[D(U_{kN})=1]|\geq \frac{1}{p(2N)},$$
-proving that $H$ is not secure, a contradiction.
+proving that $H$ is not secure, which is a contradiction.
 \end{proof}
 
+\section{Cryptographical Applications}
+
+\subsection{A Cryptographically Secure PRNG for GPU}
+\label{sec:CSGPU}
+
+It is possible to build a cryptographically secure PRNG based on the previous
+algorithm (Algorithm~\ref{algo:gpu_kernel2}). Due to Proposition~\ref{cryptopreuve},
+it simply consists in replacing
+the {\it xor-like} PRNG by a cryptographically secure one.
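+To give a concrete illustration of this construction, the following minimal
+sketch (written in C for readability, and not corresponding to the GPU kernel
+detailed below) shows the post-treatment of Proposition~\ref{cryptopreuve}
+applied to an arbitrary inner generator. The function \texttt{secure\_prng()}
+is a hypothetical placeholder standing for any cryptographically secure PRNG
+returning 32-bit blocks $S^n$; the output sequence is then simply
+$x^{n+1} = x^n \oplus S^n$.
+
+\begin{lstlisting}[language=C]
+#include <stdint.h>
+
+/* Hypothetical placeholder: any cryptographically secure generator
+   returning one 32-bit block S^n per call (e.g., bits packed from BBS). */
+extern uint32_t secure_prng(void);
+
+static uint32_t x;  /* internal state, seeded once with x^0 */
+
+/* Post-treatment sketched above: each output is x^{n+1} = x^n XOR S^n,
+   so the output stream remains secure whenever secure_prng() is secure. */
+uint32_t next_ci_secure(void)
+{
+    uint32_t s = secure_prng(); /* S^n */
+    x ^= s;                     /* x^{n+1} = x^n xor S^n */
+    return x;
+}
+\end{lstlisting}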
+We have chosen the Blum Blum Shub generator~\cite{BBS} (usually denoted by BBS) having the form:
+$$x_{n+1}=x_n^2~ mod~ M$$ where $M$ is the product of two prime numbers (these
+prime numbers need to be congruent to 3 modulo 4). BBS is known to be
+very slow and is usually only used in cryptographic applications.
+
+
+The modulo operation is the most time consuming operation for current
+GPU cards. So, in order to obtain reasonable performance, it is
+required to use only modulo operations on 32-bit integers. Consequently
+$x_n^2$ needs to be less than $2^{32}$, and thus the number $M$ must be
+less than $2^{16}$. So in practice we can choose prime numbers around
+256 that are congruent to 3 modulo 4. With 32-bit numbers, only the
+4 least significant bits of $x_n$ can be chosen (the maximum number of
+indistinguishable bits is less than or equal to
+$\log_2(\log_2(M))$). In other words, to generate a 32-bit number, we need to use
+8 times the BBS algorithm with possibly different combinations of $M$. This
+approach is not sufficient to be able to pass all the tests of TestU01,
+as small values of $M$ for the BBS lead to
+small periods. So, in order to add randomness we have proceeded with
+the following modifications.
+\begin{itemize}
+\item
+Firstly, we define 16 arrangement arrays instead of 2 (as described in
+Algorithm \ref{algo:gpu_kernel2}), but only 2 of them are used at each call of
+the PRNG kernels. In practice, the selection of the combination
+arrays to be used is different for all the threads. It is determined
+by using the last three bits of two internal variables used by BBS.
+%This approach adds more randomness.
+In Algorithm~\ref{algo:bbs_gpu}, the
+character \& stands for the bitwise AND. Using \&7 with a number thus
+gives its last 3 bits, providing a number between 0 and 7.
+\item
+Secondly, after the generation of the 8 BBS numbers for each thread, we
+have a 32-bit number whose period is possibly quite small. So,
+to add randomness, we generate 4 more BBS numbers to
+shift the 32-bit numbers, and add up to 6 new bits. This improvement is
+described in Algorithm~\ref{algo:bbs_gpu}. In practice, the last 2 bits
+of the first new BBS number are used to make a left shift of at most
+3 bits. The last 3 bits of the second new BBS number are added to the
+strategy whatever the value of the first left shift. The third and the
+fourth new BBS numbers are used similarly to apply a new left shift
+and add 3 new bits.
+\item
+Finally, as we use 8 BBS numbers for each thread, the storage of these
+numbers at the end of the kernel is performed using a rotation. So,
+the internal variable for BBS number 1 is stored in place 2, the internal
+variable for BBS number 2 is stored in place 3, ..., and finally, the internal
+variable for BBS number 8 is stored in place 1.
+\end{itemize}
+
+\begin{algorithm}
+\begin{small}
+\KwIn{InternalVarBBSArray: array with internal variables of the 8 BBS
+in global memory\;
+NumThreads: Number of threads\;
+array\_comb: 2D Arrays containing 16 combinations (in first dimension) of size combination\_size (in second dimension)\;
+array\_shift[4]=\{0,1,3,7\}\;
+}
+
+\KwOut{NewNb: array containing random numbers in global memory}
+\If{threadId is concerned} {
+  retrieve data from InternalVarBBSArray[threadId] in local variables including shared memory and x\;
+  we consider that bbs1 ... bbs8 represent the internal states of the 8 BBS numbers\;
+  offset = threadIdx\%combination\_size\;
+  o1 = threadIdx-offset+array\_comb[bbs1\&7][offset]\;
+  o2 = threadIdx-offset+array\_comb[8+bbs2\&7][offset]\;
+  \For{i=1 to n} {
+    t$<<$=4\;
+    t|=BBS1(bbs1)\&15\;
+    ...\;
+    t$<<$=4\;
+    t|=BBS8(bbs8)\&15\;
+    \tcp{two new shifts}
+    shift=BBS3(bbs3)\&3\;
+    t$<<$=shift\;
+    t|=BBS1(bbs1)\&array\_shift[shift]\;
+    shift=BBS7(bbs7)\&3\;
+    t$<<$=shift\;
+    t|=BBS2(bbs2)\&array\_shift[shift]\;
+    t=t\textasciicircum shmem[o1]\textasciicircum shmem[o2]\;
+    shared\_mem[threadId]=t\;
+    x = x\textasciicircum t\;
+
+    store the new pseudorandom number in NewNb[NumThreads*threadId+i]\;
+  }
+  store internal variables in InternalVarBBSArray[threadId] using a rotation\;
+}
+\end{small}
+\caption{Main kernel of the BBS-based GPU PRNG}
+\label{algo:bbs_gpu}
+\end{algorithm}
+
+In Algorithm~\ref{algo:bbs_gpu}, $n$ is for the quantity of random numbers that
+a thread has to generate. The operation t<<=4 performs a left shift of 4 bits
+on the variable $t$ and stores the result in $t$, and $BBS1(bbs1)\&15$ selects
+the last four bits of the result of $BBS1$. Thus an operation of the form
+$t<<=4; t|=BBS1(bbs1)\&15\;$ performs on $t$ a left shift of 4 bits, and then
+puts the last 4 bits of $BBS1(bbs1)$ into the last four positions of $t$. Let us
+remark that the initialization of $t$ is not necessary, as we fill it 4 bits by 4
+bits until 32 bits have been obtained. The last two new shifts are performed in
+order to enlarge the small periods of the BBS generators used here, by introducing
+a kind of variability. In these operations, we twice make a left shift of $t$ by \emph{at
+  most} 3 bits, represented by \texttt{shift} in the algorithm, and we put
+\emph{exactly} the \texttt{shift} last bits of a BBS number into the \texttt{shift}
+last bits of $t$. For this, the array named \texttt{array\_shift}, which gives for
+each shift value the mask (that is, $2^{\texttt{shift}}-1$) to be used in the
+\texttt{and} operation, is used. For example, with a left shift of 0,
+we make an \texttt{and} operation with 0, whereas with a left shift of 3, we make
+an \texttt{and} operation with 7 (represented by 111 in binary).
+
+It should be noticed that this generator has once more the form $x^{n+1} = x^n \oplus S^n$,
+where $S^n$ is referred to in this algorithm as $t$: each iteration of this
+PRNG ends with $x = x \oplus t$. This $S^n$ is only constituted
+of secure bits produced by the BBS generator, and thus, due to
+Proposition~\ref{cryptopreuve}, the resulting PRNG is cryptographically
+secure.
+
+
+
+\subsection{Toward a Cryptographically Secure and Chaotic Asymmetric Cryptosystem}
+\label{Blum-Goldwasser}
+We finish this research work by giving some thoughts about the use of
+the proposed PRNG in an asymmetric cryptosystem.
+This first approach will be further investigated in future work.
+
+\subsubsection{Recalls of the Blum-Goldwasser Probabilistic Cryptosystem}
+
+The Blum-Goldwasser cryptosystem is a cryptographically secure asymmetric key encryption algorithm
+proposed in 1984~\cite{Blum:1985:EPP:19478.19501}. The encryption algorithm
+implements an XOR-based stream cipher using the BBS PRNG, in order to generate
+the keystream. Decryption is done by obtaining the initial seed thanks to
+the final state of the BBS generator and the secret key, thus leading to the
+reconstruction of the keystream.
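+Before recalling the protocol in detail, the following minimal C sketch
+illustrates this keystream mechanism. It is given for illustration purposes
+only: the function and variable names are ours, a real implementation relies
+on multi-precision arithmetic, and we assume here $N < 2^{32}$ so that the
+squaring fits into 64-bit integers.
+
+\begin{lstlisting}[language=C]
+#include <stdint.h>
+#include <stddef.h>
+
+/* One BBS step as used in Blum-Goldwasser: output the least-significant
+   bit of the current state x_i, then square modulo N.  N < 2^32 is
+   assumed so that x*x does not overflow a 64-bit integer. */
+static unsigned bbs_bit(uint64_t *x, uint64_t N)
+{
+    unsigned b = (unsigned)(*x & 1u); /* b_i = lsb(x_i)        */
+    *x = (*x * *x) % N;               /* x_{i+1} = x_i^2 mod N */
+    return b;
+}
+
+/* XOR-based stream encryption of L plaintext bits, starting from the
+   seed x0 = r^2 mod N picked by the sender. */
+void bg_encrypt(const unsigned char *m, unsigned char *c,
+                size_t L, uint64_t x0, uint64_t N)
+{
+    uint64_t x = x0;
+    for (size_t i = 0; i < L; i++)
+        c[i] = m[i] ^ (unsigned char)bbs_bit(&x, N); /* c_i = m_i xor b_i */
+}
+\end{lstlisting}
+
+Decryption proceeds in the same way: once the seed $x_0$ has been recovered,
+the same keystream is regenerated and XORed with the ciphertext.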
+The key generation consists in generating two prime numbers $(p,q)$,
+randomly and independently of each other, that are
+congruent to 3 mod 4, and in computing the modulus $N=pq$.
+The public key is $N$, whereas the secret key is the factorization $(p,q)$.
+
+
+Suppose Bob wishes to send a string $m=(m_0, \dots, m_{L-1})$ of $L$ bits to Alice:
+\begin{enumerate}
+\item Bob picks an integer $r$ randomly in the interval $\llbracket 1,N\rrbracket$ and computes $x_0 = r^2~mod~N$.
+\item He uses the BBS to generate the keystream of $L$ pseudorandom bits $(b_0, \dots, b_{L-1})$ as follows:
+\begin{itemize}
+\item $i=0$.
+\item While $i \leqslant L-1$:
+\begin{itemize}
+\item Set $b_i$ equal to the least-significant\footnote{As stated previously, BBS can securely output up to $\mathsf{N} = \lfloor \log(\log(N)) \rfloor$ of the least-significant bits of $x_i$ during each round.} bit of $x_i$,
+\item $i=i+1$,
+\item $x_i = (x_{i-1})^2~mod~N.$
+\end{itemize}
+\end{itemize}
+\item The ciphertext is computed by XORing the plaintext bits $m$ with the keystream: $ c = (c_0, \dots, c_{L-1}) = m \oplus b$. This ciphertext is $[c, y]$, where $y=x_{0}^{2^{L}}~mod~N.$
+\end{enumerate}
+
+
+When Alice receives $\left[(c_0, \dots, c_{L-1}), y\right]$, she can recover $m$ as follows:
+\begin{enumerate}
+\item Using the secret key $(p,q)$, she computes $r_p = y^{((p+1)/4)^{L}}~mod~p$ and $r_q = y^{((q+1)/4)^{L}}~mod~q$.
+\item The initial seed can be obtained using the following procedure: $x_0=q(q^{-1}~{mod}~p)r_p + p(p^{-1}~{mod}~q)r_q~{mod}~N$.
+\item She recomputes the bit-vector $b$ by using BBS and $x_0$.
+\item Alice finally computes the plaintext by XORing the keystream with the ciphertext: $ m = c \oplus b$.
+\end{enumerate}
+
+\subsubsection{Proposal of a New Asymmetric Cryptosystem Adapted from Blum-Goldwasser}
+
+We propose to adapt the Blum-Goldwasser protocol as follows.
+Let $\mathsf{N} = \lfloor \log(\log(N)) \rfloor$ be the number of bits that can
+be obtained securely with the BBS generator using the public key $N$ of Alice.
+Alice will also pick a random $S^0$ in $\llbracket 0, 2^{\mathsf{N}}-1\rrbracket$, and
+her new public key will be $(S^0, N)$.
+
+To encrypt his message, Bob will compute
+%%RAPH: here, a simple $ is used instead of an equation environment
+%\begin{equation}
+$c = \left(m_0 \oplus (b_0 \oplus S^0), m_1 \oplus (b_0 \oplus b_1 \oplus S^0), \hdots, \right.$
+$ \left. m_{L-1} \oplus (b_0 \oplus b_1 \hdots \oplus b_{L-1} \oplus S^0) \right)$
+%%\end{equation}
+instead of $\left(m_0 \oplus b_0, m_1 \oplus b_1, \hdots, m_{L-1} \oplus b_{L-1} \right)$.
+
+The same decryption stage as in Blum-Goldwasser leads to the sequence
+$\left(m_0 \oplus S^0, m_1 \oplus S^0, \hdots, m_{L-1} \oplus S^0 \right)$.
+Thus, with a simple use of $S^0$, Alice can obtain the plaintext.
+By doing so, the proposed generator is used in place of BBS, leading to
+the inheritance of all the properties presented in this paper.
 
 \section{Conclusion}
 
-In this paper we have presented a new class of PRNGs based on chaotic
-iterations. We have proven that these PRNGs are chaotic in the sense of Devenay.
+In this paper, a formerly proposed PRNG based on chaotic iterations
+has been generalized to improve its speed. It has been proven to be
+chaotic according to Devaney.
+Efficient implementations on GPU using xor-like PRNGs as input generators
+have shown that a very large quantity of pseudorandom numbers can be generated per second (about
+20Gsamples/s), and that these proposed PRNGs succeed in passing the hardest battery of tests in TestU01,
+namely the {\it BigCrush}.
+Furthermore, we have shown that when the input generator is cryptographically
+secure, then so is the PRNG we propose, thus making it possible
+to develop fast and secure PRNGs using the GPU architecture.
+Finally, thoughts about an improvement of the Blum-Goldwasser cryptosystem, based on the
+proposed method, have been given.
+
+In future work we plan to extend this research by building a parallel PRNG for clusters or
+grid computing. Topological properties of the various proposed generators will be investigated,
+and the use of other categories of PRNGs as input will be studied too. The improvement
+of the Blum-Goldwasser scheme will be investigated in more depth. Finally, we
+will try to increase the quantity of pseudorandom numbers generated per second, either
+in a simulation context or in a cryptographic one.
 
-An efficient implementation on GPU allows us to generate a huge number of pseudo
-random numbers per second (about 20Gsample/s). Our PRNGs succeed to pass the
-hardest batteries of test (TestU01).
-In future work we plan to extend our work in order to have cryptographically
-secure PRNGs because in some situations this property may be important.
 
-\bibliographystyle{plain}
+\bibliographystyle{plain}
 \bibliography{mabase}
 \end{document}