1 %\documentclass{article}
2 \documentclass[10pt,journal,letterpaper,compsoc]{IEEEtran}
3 \usepackage[utf8]{inputenc}
4 \usepackage[T1]{fontenc}
11 \usepackage[ruled,vlined]{algorithm2e}
13 \usepackage[standard]{ntheorem}
% For mathds: the sets IR, IN, etc.
% For integer intervals
% For subfigures within figures
23 \usepackage{subfigure}
27 \newtheorem{notation}{Notation}
29 \newcommand{\X}{\mathcal{X}}
30 \newcommand{\Go}{G_{f_0}}
31 \newcommand{\B}{\mathds{B}}
32 \newcommand{\N}{\mathds{N}}
33 \newcommand{\BN}{\mathds{B}^\mathsf{N}}
36 \newcommand{\alert}[1]{\begin{color}{blue}\textit{#1}\end{color}}
38 \title{Efficient and Cryptographically Secure Generation of Chaotic Pseudorandom Numbers on GPU}
41 \author{Jacques M. Bahi, Rapha\"{e}l Couturier, Christophe
Guyeux, and Pierre-Cyrille Héam\thanks{Authors in alphabetical order}}
45 \IEEEcompsoctitleabstractindextext{
47 In this paper we present a new pseudorandom number generator (PRNG) on
48 graphics processing units (GPU). This PRNG is based on the so-called chaotic iterations. It
is firstly proven to be chaotic according to Devaney's formulation. We thus propose an efficient
50 implementation for GPU that successfully passes the {\it BigCrush} tests, deemed to be the hardest
51 battery of tests in TestU01. Experiments show that this PRNG can generate
about 20 billion random numbers per second on Tesla C1060 and NVidia GTX 280
54 It is then established that, under reasonable assumptions, the proposed PRNG can be cryptographically
56 A chaotic version of the Blum-Goldwasser asymmetric key encryption scheme is finally proposed.
64 \IEEEdisplaynotcompsoctitleabstractindextext
65 \IEEEpeerreviewmaketitle
68 \section{Introduction}
70 Randomness is of importance in many fields such as scientific simulations or cryptography.
71 ``Random numbers'' can mainly be generated either by a deterministic and reproducible algorithm
72 called a pseudorandom number generator (PRNG), or by a physical non-deterministic
73 process having all the characteristics of a random noise, called a truly random number
75 In this paper, we focus on reproducible generators, useful for instance in
76 Monte-Carlo based simulators or in several cryptographic schemes.
77 These domains need PRNGs that are statistically irreproachable.
In some fields such as numerical simulations, speed is a strong requirement
that is usually attained by using parallel architectures. In that case,
a recurrent problem is that a degradation of the statistical qualities is often
reported when the parallelization of a good PRNG is realized.
82 This is why ad-hoc PRNGs for each possible architecture must be found to
83 achieve both speed and randomness.
On the other hand, speed is not the main requirement in cryptography: the primary
need is to define \emph{secure} generators able to withstand malicious
attacks. Roughly speaking, an attacker should not be able in practice to
distinguish between numbers obtained with the secure generator and a true random
89 Finally, a small part of the community working in this domain focuses on a
third requirement, that is, to define chaotic generators.
The main idea is to take benefit from a chaotic dynamical system to obtain a
generator that is unpredictable, disordered, sensitive to its seed, in other words chaotic.
The desire is to map a given chaotic dynamics into a sequence that seems random
and unassailable due to chaos.
However, the chaotic maps used as a pattern are defined on the real line
whereas computers deal with finite precision numbers.
This distortion leads to a degradation of both the chaotic properties and the speed.
Furthermore, authors of such chaotic generators often claim that their PRNG
is secure due to its chaos properties, but there is no obvious relation
between chaos and security as it is understood in cryptography.
This is why the use of chaos for PRNGs still remains marginal and disputable.
103 The authors' opinion is that topological properties of disorder, as they are
104 properly defined in the mathematical theory of chaos, can reinforce the quality
of a PRNG. But they are not a substitute for security or statistical perfection.
106 Indeed, to the authors' mind, such properties can be useful in the two following situations. On the
107 one hand, a post-treatment based on a chaotic dynamical system can be applied
to a statistically deficient PRNG, in order to improve its statistical
109 properties. Such an improvement can be found, for instance, in~\cite{bgw09:ip,bcgr11:ip}.
110 On the other hand, chaos can be added to a fast, statistically perfect PRNG and/or a
cryptographically secure one, in cases where chaos can be of interest,
112 \emph{only if these last properties are not lost during
113 the proposed post-treatment}. Such an assumption is behind this research work.
It leads to attempts to define a
115 family of PRNGs that are chaotic while being fast and statistically perfect,
116 or cryptographically secure.
117 Let us finish this paragraph by noticing that, in this paper,
118 statistical perfection refers to the ability to pass the whole
119 {\it BigCrush} battery of tests, which is widely considered as the most
120 stringent statistical evaluation of a sequence claimed as random.
121 This battery can be found in the well-known TestU01 package~\cite{LEcuyerS07}.
122 Chaos, for its part, refers to the well-established definition of a
123 chaotic dynamical system proposed by Devaney~\cite{Devaney}.
126 In a previous work~\cite{bgw09:ip,guyeux10} we have proposed a post-treatment on PRNGs making them behave
127 as a chaotic dynamical system. Such a post-treatment leads to a new category of
128 PRNGs. We have shown that proofs of Devaney's chaos can be established for this
129 family, and that the sequence obtained after this post-treatment can pass the
130 NIST~\cite{Nist10}, DieHARD~\cite{Marsaglia1996}, and TestU01~\cite{LEcuyerS07} batteries of tests, even if the inputted generators
The proposal of this paper is to greatly improve the speed of the formerly
proposed generator, without any loss of chaos or statistical properties.
134 In particular, a version of this PRNG on graphics processing units (GPU)
Although GPUs were initially designed to accelerate
the manipulation of images, they are nowadays commonly used in many scientific
applications. Therefore, it is important to be able to generate pseudorandom
numbers inside a GPU when a scientific application runs on it. This remark
140 motivates our proposal of a chaotic and statistically perfect PRNG for GPU.
allows us to generate almost 20 billion pseudorandom numbers per second.
143 Furthermore, we show that the proposed post-treatment preserves the
cryptographic security of the inputted PRNG, when the latter has such a
146 Last, but not least, we propose a rewriting of the Blum-Goldwasser asymmetric
147 key encryption protocol by using the proposed method.
149 The remainder of this paper is organized as follows. In Section~\ref{section:related
150 works} we review some GPU implementations of PRNGs. Section~\ref{section:BASIC
RECALLS} recalls some basic notions on Devaney's well-known formulation of chaos,
152 and on an iteration process called ``chaotic
153 iterations'' on which the post-treatment is based.
154 The proposed PRNG and its proof of chaos are given in Section~\ref{sec:pseudorandom}.
155 Section~\ref{sec:efficient PRNG} presents an efficient
156 implementation of this chaotic PRNG on a CPU, whereas Section~\ref{sec:efficient PRNG
157 gpu} describes and evaluates theoretically the GPU implementation.
158 Such generators are experimented in
159 Section~\ref{sec:experiments}.
160 We show in Section~\ref{sec:security analysis} that, if the inputted
161 generator is cryptographically secure, then it is the case too for the
162 generator provided by the post-treatment.
163 Such a proof leads to the proposition of a cryptographically secure and
chaotic generator on GPU based on the famous Blum Blum Shub generator
165 in Section~\ref{sec:CSGPU}, and to an improvement of the
166 Blum-Goldwasser protocol in Sect.~\ref{Blum-Goldwasser}.
This research work ends with a conclusion section, in which the contribution is
168 summarized and intended future work is presented.
173 \section{Related works on GPU based PRNGs}
174 \label{section:related works}
Numerous research works on defining GPU-based PRNGs have already been proposed in the
literature, so that an exhaustive survey is impossible.
This is why the authors of this document only reference the most significant attempts
in this domain, from their subjective point of view.
180 The quantity of pseudorandom numbers generated per second is mentioned here
181 only when the information is given in the related work.
182 A million numbers per second will be simply written as
183 1MSample/s whereas a billion numbers per second is 1GSample/s.
185 In \cite{Pang:2008:cec} a PRNG based on cellular automata is defined
with no requirement for high-precision integer arithmetic or for any bitwise
operations. The authors can generate about
188 3.2MSamples/s on a GeForce 7800 GTX GPU, which is quite an old card now.
189 However, there is neither a mention of statistical tests nor any proof of
190 chaos or cryptography in this document.
192 In \cite{ZRKB10}, the authors propose different versions of efficient GPU PRNGs
193 based on Lagged Fibonacci or Hybrid Taus. They have used these
194 PRNGs for Langevin simulations of biomolecules fully implemented on
195 GPU. Performances of the GPU versions are far better than those obtained with a
CPU, and these PRNGs succeed in passing the {\it BigCrush} battery of TestU01.
However, the evaluations of the proposed PRNGs are only statistical ones.
200 Authors of~\cite{conf/fpga/ThomasHL09} have studied the implementation of some
201 PRNGs on different computing architectures: CPU, field-programmable gate array
202 (FPGA), massively parallel processors, and GPU. This study is of interest, because
the performances of the same PRNGs on different architectures are compared.
FPGA appears to be the fastest and the most
efficient architecture, providing the largest number of generated pseudorandom numbers
207 However, we notice that authors can ``only'' generate between 11 and 16GSamples/s
208 with a GTX 280 GPU, which should be compared with
209 the results presented in this document.
210 We can remark too that the PRNGs proposed in~\cite{conf/fpga/ThomasHL09} are only
able to pass the {\it Crush} battery, which is far easier than the {\it BigCrush} one.
Lastly, NVidia has developed the Curand library~\cite{curand11} for the generation of pseudorandom
numbers in CUDA. Several PRNGs are implemented, among
216 Xorwow~\cite{Marsaglia2003} and some variants of Sobol. The tests reported show that
217 their fastest version provides 15GSamples/s on the new Fermi C2050 card.
But their PRNGs cannot pass the whole TestU01 battery (only one test fails).
221 We can finally remark that, to the best of our knowledge, no GPU implementation has been proven to be chaotic, and the cryptographically secure property has surprisingly never been considered.
223 \section{Basic Recalls}
224 \label{section:BASIC RECALLS}
226 This section is devoted to basic definitions and terminologies in the fields of
227 topological chaos and chaotic iterations. We assume the reader is familiar
228 with basic notions on topology (see for instance~\cite{Devaney}).
231 \subsection{Devaney's Chaotic Dynamical Systems}
233 In the sequel $S^{n}$ denotes the $n^{th}$ term of a sequence $S$ and $V_{i}$
234 denotes the $i^{th}$ component of a vector $V$. $f^{k}=f\circ ...\circ f$
denotes the $k^{th}$ composition of a function $f$. Finally, the following
236 notation is used: $\llbracket1;N\rrbracket=\{1,2,\hdots,N\}$.
239 Consider a topological space $(\mathcal{X},\tau)$ and a continuous function $f :
240 \mathcal{X} \rightarrow \mathcal{X}$.
243 The function $f$ is said to be \emph{topologically transitive} if, for any pair of open sets
244 $U,V \subset \mathcal{X}$, there exists $k>0$ such that $f^k(U) \cap V \neq
249 An element $x$ is a \emph{periodic point} for $f$ of period $n\in \mathds{N}^*$
250 if $f^{n}(x)=x$.% The set of periodic points of $f$ is denoted $Per(f).$
254 $f$ is said to be \emph{regular} on $(\mathcal{X}, \tau)$ if the set of periodic
255 points for $f$ is dense in $\mathcal{X}$: for any point $x$ in $\mathcal{X}$,
any neighborhood of $x$ contains at least one periodic point (not
necessarily of the same period).
261 \begin{definition}[Devaney's formulation of chaos~\cite{Devaney}]
262 The function $f$ is said to be \emph{chaotic} on $(\mathcal{X},\tau)$ if $f$ is regular and
263 topologically transitive.
266 The chaos property is strongly linked to the notion of ``sensitivity'', defined
267 on a metric space $(\mathcal{X},d)$ by:
270 \label{sensitivity} The function $f$ has \emph{sensitive dependence on initial conditions}
271 if there exists $\delta >0$ such that, for any $x\in \mathcal{X}$ and any
272 neighborhood $V$ of $x$, there exist $y\in V$ and $n > 0$ such that
273 $d\left(f^{n}(x), f^{n}(y)\right) >\delta $.
275 The constant $\delta$ is called the \emph{constant of sensitivity} of $f$.
278 Indeed, Banks \emph{et al.} have proven in~\cite{Banks92} that when $f$ is
279 chaotic and $(\mathcal{X}, d)$ is a metric space, then $f$ has the property of
280 sensitive dependence on initial conditions (this property was formerly an
281 element of the definition of chaos). To sum up, quoting Devaney
282 in~\cite{Devaney}, a chaotic dynamical system ``is unpredictable because of the
283 sensitive dependence on initial conditions. It cannot be broken down or
284 simplified into two subsystems which do not interact because of topological
285 transitivity. And in the midst of this random behavior, we nevertheless have an
286 element of regularity''. Fundamentally different behaviors are consequently
287 possible and occur in an unpredictable way.
291 \subsection{Chaotic Iterations}
292 \label{sec:chaotic iterations}
295 Let us consider a \emph{system} with a finite number $\mathsf{N} \in
296 \mathds{N}^*$ of elements (or \emph{cells}), so that each cell has a
297 Boolean \emph{state}. Having $\mathsf{N}$ Boolean values for these
298 cells leads to the definition of a particular \emph{state of the
system}. A sequence whose elements belong to $\llbracket 1;\mathsf{N}
300 \rrbracket $ is called a \emph{strategy}. The set of all strategies is
301 denoted by $\llbracket 1, \mathsf{N} \rrbracket^\mathds{N}.$
304 \label{Def:chaotic iterations}
305 The set $\mathds{B}$ denoting $\{0,1\}$, let
306 $f:\mathds{B}^{\mathsf{N}}\longrightarrow \mathds{B}^{\mathsf{N}}$ be
307 a function and $S\in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}$ be a ``strategy''. The so-called
308 \emph{chaotic iterations} are defined by $x^0\in
309 \mathds{B}^{\mathsf{N}}$ and
311 \forall n\in \mathds{N}^{\ast }, \forall i\in
312 \llbracket1;\mathsf{N}\rrbracket ,x_i^n=\left\{
314 x_i^{n-1} & \text{ if }S^n\neq i \\
315 \left(f(x^{n-1})\right)_{S^n} & \text{ if }S^n=i.
320 In other words, at the $n^{th}$ iteration, only the $S^{n}-$th cell is
321 \textquotedblleft iterated\textquotedblright . Note that in a more
322 general formulation, $S^n$ can be a subset of components and
323 $\left(f(x^{n-1})\right)_{S^{n}}$ can be replaced by
$\left(f(x^{k})\right)_{S^{n}}$, where $k<n$, describing, for example,
transmission delays~\cite{Robert1986,guyeux10}. Finally, let us remark that
326 the term ``chaotic'', in the name of these iterations, has \emph{a
327 priori} no link with the mathematical theory of chaos, presented above.
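
As an illustration of Definition~\ref{Def:chaotic iterations}, consider $\mathsf{N}=3$, the function $f:\mathds{B}^3\to\mathds{B}^3$ defined by $f(x_1,x_2,x_3)=(\overline{x_1},\overline{x_2},\overline{x_3})$, the initial state $x^0=(0,1,1)$, and a strategy starting with $S^1=2$, $S^2=1$. Then $x^1=(0,0,1)$, since only the second cell is updated to $\left(f(x^0)\right)_2=\overline{x^0_2}=0$, and similarly $x^2=(1,0,1)$.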
330 Let us now recall how to define a suitable metric space where chaotic iterations
331 are continuous. For further explanations, see, e.g., \cite{guyeux10}.
333 Let $\delta $ be the \emph{discrete Boolean metric}, $\delta
334 (x,y)=0\Leftrightarrow x=y.$ Given a function $f$, define the function:
338 F_{f}: & \llbracket1;\mathsf{N}\rrbracket\times \mathds{B}^{\mathsf{N}} &
339 \longrightarrow & \mathds{B}^{\mathsf{N}} \\
340 & (k,E) & \longmapsto & \left( E_{j}.\delta (k,j)+ \right.\\
341 & & & \left. f(E)_{k}.\overline{\delta
342 (k,j)}\right) _{j\in \llbracket1;\mathsf{N}\rrbracket},%
345 \noindent where + and . are the Boolean addition and product operations.
346 Consider the phase space:
348 \mathcal{X} = \llbracket 1 ; \mathsf{N} \rrbracket^\mathds{N} \times
349 \mathds{B}^\mathsf{N},
351 \noindent and the map defined on $\mathcal{X}$:
353 G_f\left(S,E\right) = \left(\sigma(S), F_f(i(S),E)\right), \label{Gf}
355 \noindent where $\sigma$ is the \emph{shift} function defined by $\sigma
356 (S^{n})_{n\in \mathds{N}}\in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}\longrightarrow (S^{n+1})_{n\in
357 \mathds{N}}\in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}$ and $i$ is the \emph{initial function}
358 $i:(S^{n})_{n\in \mathds{N}} \in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}\longrightarrow S^{0}\in \llbracket
359 1;\mathsf{N}\rrbracket$. Then the chaotic iterations proposed in
360 Definition \ref{Def:chaotic iterations} can be described by the following iterations:
364 X^0 \in \mathcal{X} \\
370 With this formulation, a shift function appears as a component of chaotic
371 iterations. The shift function is a famous example of a chaotic
map~\cite{Devaney}, but its presence is not sufficient to claim $G_f$ as
374 To study this claim, a new distance between two points $X = (S,E), Y =
375 (\check{S},\check{E})\in
376 \mathcal{X}$ has been introduced in \cite{guyeux10} as follows:
378 d(X,Y)=d_{e}(E,\check{E})+d_{s}(S,\check{S}),
384 \displaystyle{d_{e}(E,\check{E})} & = & \displaystyle{\sum_{k=1}^{\mathsf{N}%
385 }\delta (E_{k},\check{E}_{k})}, \\
386 \displaystyle{d_{s}(S,\check{S})} & = & \displaystyle{\dfrac{9}{\mathsf{N}}%
387 \sum_{k=1}^{\infty }\dfrac{|S^k-\check{S}^k|}{10^{k}}}.%
393 This new distance has been introduced to satisfy the following requirements.
\item When the number of different cells between two systems increases,
their distance should increase too.
397 \item In addition, if two systems present the same cells and their respective
398 strategies start with the same terms, then the distance between these two points
399 must be small because the evolution of the two systems will be the same for a
while. Indeed, both dynamical systems start with the same initial condition
and use the same update function, and since the strategies coincide for a while,
the updated components are the same as well.
The distance presented above follows these recommendations. Indeed, if the floor
value $\lfloor d(X,Y)\rfloor $ is equal to $n$, then the systems $E, \check{E}$
differ in $n$ cells ($d_e$ is indeed the Hamming distance). In addition, $d(X,Y) - \lfloor d(X,Y) \rfloor $ is a
measure of the differences between strategies $S$ and $\check{S}$: as $|S^k-\check{S}^k| \leqslant \mathsf{N}-1$,
we have $d_{s}(S,\check{S}) \leqslant \frac{\mathsf{N}-1}{\mathsf{N}} < 1$, so $d_s$ only contributes to this fractional part. More
precisely, this fractional part is less than $10^{-k}$ if and only if the first
$k$ terms of the two strategies are equal. Moreover, if the $k^{th}$ digit is
nonzero, then the $k^{th}$ terms of the two strategies are different.
411 The impact of this choice for a distance will be investigated at the end of the document.
413 Finally, it has been established in \cite{guyeux10} that,
416 Let $f$ be a map from $\mathds{B}^\mathsf{N}$ to itself. Then $G_{f}$ is continuous in
417 the metric space $(\mathcal{X},d)$.
420 The chaotic property of $G_f$ has been firstly established for the vectorial
421 Boolean negation $f(x_1,\hdots, x_\mathsf{N}) = (\overline{x_1},\hdots, \overline{x_\mathsf{N}})$ \cite{guyeux10}. To obtain a characterization, we have secondly
introduced the notion of asynchronous iteration graph recalled below.
424 Let $f$ be a map from $\mathds{B}^\mathsf{N}$ to itself. The
425 {\emph{asynchronous iteration graph}} associated with $f$ is the
426 directed graph $\Gamma(f)$ defined by: the set of vertices is
427 $\mathds{B}^\mathsf{N}$; for all $x\in\mathds{B}^\mathsf{N}$ and
428 $i\in \llbracket1;\mathsf{N}\rrbracket$,
429 the graph $\Gamma(f)$ contains an arc from $x$ to $F_f(i,x)$.
430 The relation between $\Gamma(f)$ and $G_f$ is clear: there exists a
431 path from $x$ to $x'$ in $\Gamma(f)$ if and only if there exists a
432 strategy $s$ such that the parallel iteration of $G_f$ from the
433 initial point $(s,x)$ reaches the point $x'$.
434 We have then proven in \cite{bcgr11:ip} that,
438 \label{Th:Caractérisation des IC chaotiques}
439 Let $f:\mathds{B}^\mathsf{N}\to\mathds{B}^\mathsf{N}$. $G_f$ is chaotic (according to Devaney)
440 if and only if $\Gamma(f)$ is strongly connected.
443 Finally, we have established in \cite{bcgr11:ip} that,
445 Let $f: \mathds{B}^{n} \rightarrow \mathds{B}^{n}$, $\Gamma(f)$ its
446 iteration graph, $\check{M}$ its adjacency
a $2^n\times 2^n$ matrix defined by
$M_{ij} = \frac{1}{n}\check{M}_{ij}$ if $i \neq j$, and
$M_{ii} = 1 - \frac{1}{n} \sum\limits_{j=1, j\neq i}^{2^n} \check{M}_{ij}$ otherwise.
454 If $\Gamma(f)$ is strongly connected, then
455 the output of the PRNG detailed in Algorithm~\ref{CI Algorithm} follows
456 a law that tends to the uniform distribution
if and only if $M$ is a doubly stochastic matrix.
461 These results of chaos and uniform distribution have led us to study the possibility of building a
462 pseudorandom number generator (PRNG) based on the chaotic iterations.
463 As $G_f$, defined on the domain $\llbracket 1 ; \mathsf{N} \rrbracket^{\mathds{N}}
464 \times \mathds{B}^\mathsf{N}$, is built from Boolean networks $f : \mathds{B}^\mathsf{N}
465 \rightarrow \mathds{B}^\mathsf{N}$, we can preserve the theoretical properties on $G_f$
466 during implementations (due to the discrete nature of $f$). Indeed, it is as if
467 $\mathds{B}^\mathsf{N}$ represents the memory of the computer whereas $\llbracket 1 ; \mathsf{N}
468 \rrbracket^{\mathds{N}}$ is its input stream (the seeds, for instance, in PRNG, or a physical noise in TRNG).
469 Let us finally remark that the vectorial negation satisfies the hypotheses of both theorems above.
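For instance, with $\mathsf{N}=2$ the asynchronous iteration graph of the negation contains, between any two states that differ in exactly one bit, an arc in both directions, so it is strongly connected; ordering the states as $00, 01, 10, 11$, the associated matrix is
$$M=\frac{1}{2}\left(\begin{array}{cccc}
0 & 1 & 1 & 0\\
1 & 0 & 0 & 1\\
1 & 0 & 0 & 1\\
0 & 1 & 1 & 0
\end{array}\right),$$
whose rows and columns all sum to 1, that is, $M$ is doubly stochastic.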
471 \section{Application to Pseudorandomness}
472 \label{sec:pseudorandom}
474 \subsection{A First Pseudorandom Number Generator}
476 We have proposed in~\cite{bgw09:ip} a new family of generators that receives
477 two PRNGs as inputs. These two generators are mixed with chaotic iterations,
478 leading thus to a new PRNG that improves the statistical properties of each
479 generator taken alone. Furthermore, our generator
480 possesses various chaos properties that none of the generators used as input
484 \begin{algorithm}[h!]
486 \KwIn{a function $f$, an iteration number $b$, an initial configuration $x^0$
488 \KwOut{a configuration $x$ ($n$ bits)}
490 $k\leftarrow b + \textit{XORshift}(b)$\;
493 $s\leftarrow{\textit{XORshift}(n)}$\;
494 $x\leftarrow{F_f(s,x)}$\;
498 \caption{PRNG with chaotic functions}
505 \begin{algorithm}[h!]
507 \KwIn{the internal configuration $z$ (a 32-bit word)}
508 \KwOut{$y$ (a 32-bit word)}
509 $z\leftarrow{z\oplus{(z\ll13)}}$\;
510 $z\leftarrow{z\oplus{(z\gg17)}}$\;
511 $z\leftarrow{z\oplus{(z\ll5)}}$\;
515 \caption{An arbitrary round of \textit{XORshift} algorithm}
523 This generator is synthesized in Algorithm~\ref{CI Algorithm}.
524 It takes as input: a Boolean function $f$ satisfying Theorem~\ref{Th:Caractérisation des IC chaotiques};
525 an integer $b$, ensuring that the number of executed iterations is at least $b$
526 and at most $2b+1$; and an initial configuration $x^0$.
527 It returns the new generated configuration $x$. Internally, it embeds two
528 \textit{XORshift}$(k)$ PRNGs~\cite{Marsaglia2003} that return integers
529 uniformly distributed
530 into $\llbracket 1 ; k \rrbracket$.
531 \textit{XORshift} is a category of very fast PRNGs designed by George Marsaglia,
which repeatedly uses the exclusive or (XOR, $\oplus$) of a number
with a bit-shifted version of it. This PRNG, which has a period of
$2^{32}-1\approx 4.29\times10^9$, is summed up in Algorithm~\ref{XORshift}. It is used
535 in our PRNG to compute the strategy length and the strategy elements.
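
For illustration purposes, one possible C implementation of a XORshift round as given in Algorithm~\ref{XORshift}, completed with a simple modulo-based reduction to $\llbracket 1 ; k \rrbracket$ (this reduction is only an illustrative choice), is the following.

\begin{lstlisting}[language=C,caption={Illustrative C sketch of a XORshift round},label=lst:xorshift-sketch]
static unsigned int z = 123456789;   /* internal 32-bit state (any nonzero seed) */

/* One XORshift round, as in the algorithm above: three shift-and-xor steps. */
unsigned int xorshift32() {
  z ^= z << 13;
  z ^= z >> 17;
  z ^= z << 5;
  return z;
}

/* XORshift(k): an integer in [1;k], obtained here by a simple modulo. */
unsigned int XORshift(unsigned int k) {
  return 1u + xorshift32() % k;
}
\end{lstlisting}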
This former generator has successfully passed various batteries of statistical tests, such as the NIST~\cite{bcgr11:ip}, DieHARD~\cite{Marsaglia1996}, and TestU01~\cite{LEcuyerS07} ones.
539 \subsection{Improving the Speed of the Former Generator}
541 Instead of updating only one cell at each iteration, we can try to choose a
542 subset of components and to update them together. Such an attempt leads
543 to a kind of merger of the two sequences used in Algorithm
544 \ref{CI Algorithm}. When the updating function is the vectorial negation,
545 this algorithm can be rewritten as follows:
550 x^0 \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket, S \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket^\mathds{N} \\
551 \forall n \in \mathds{N}^*, x^n = x^{n-1} \oplus S^n,
554 \label{equation Oplus}
556 where $\oplus$ is for the bitwise exclusive or between two integers.
557 This rewriting can be understood as follows. The $n-$th term $S^n$ of the
sequence $S$, which is an integer of $\mathsf{N}$ binary digits, encodes
559 the list of cells to update in the state $x^n$ of the system (represented
560 as an integer having $\mathsf{N}$ bits too). More precisely, the $k-$th
561 component of this state (a binary digit) changes if and only if the $k-$th
562 digit in the binary decomposition of $S^n$ is 1.
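For instance, with $\mathsf{N}=4$, $x^{n-1}=1011$, and $S^n=0110$, one obtains $x^n = 1011 \oplus 0110 = 1101$: the two cells corresponding to the 1s of $S^n$ have been switched, whereas the two other cells are left unchanged.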
564 The single basic component presented in Eq.~\ref{equation Oplus} is of
565 ordinary use as a good elementary brick in various PRNGs. It corresponds
566 to the following discrete dynamical system in chaotic iterations:
569 \forall n\in \mathds{N}^{\ast }, \forall i\in
570 \llbracket1;\mathsf{N}\rrbracket ,x_i^n=\left\{
572 x_i^{n-1} & \text{ if } i \notin \mathcal{S}^n \\
\left(f(x^{n-1})\right)_{i} & \text{ if }i \in \mathcal{S}^n.
577 where $f$ is the vectorial negation and $\forall n \in \mathds{N}$,
578 $\mathcal{S}^n \subset \llbracket 1, \mathsf{N} \rrbracket$ is such that
579 $k \in \mathcal{S}^n$ if and only if the $k-$th digit in the binary
580 decomposition of $S^n$ is 1. Such chaotic iterations are more general
581 than the ones presented in Definition \ref{Def:chaotic iterations} because, instead of updating only one term at each iteration,
582 we select a subset of components to change.
585 Obviously, replacing Algorithm~\ref{CI Algorithm} by
586 Equation~\ref{equation Oplus}, which is possible when the iteration function is
587 the vectorial negation, leads to a speed improvement. However, proofs
588 of chaos obtained in~\cite{bg10:ij} have been established
589 only for chaotic iterations of the form presented in Definition
590 \ref{Def:chaotic iterations}. The question is now to determine whether the
use of more general chaotic iterations to generate pseudorandom numbers
faster does not weaken their topological chaos properties.
594 \subsection{Proofs of Chaos of the General Formulation of the Chaotic Iterations}
596 Let us consider the discrete dynamical systems in chaotic iterations having
600 \forall n\in \mathds{N}^{\ast }, \forall i\in
601 \llbracket1;\mathsf{N}\rrbracket ,x_i^n=\left\{
603 x_i^{n-1} & \text{ if } i \notin \mathcal{S}^n \\
\left(f(x^{n-1})\right)_{i} & \text{ if }i \in \mathcal{S}^n.
In other words, at the $n^{th}$ iteration, only the cells whose indices are
contained in the set $\mathcal{S}^{n}$ are iterated.
Let us now rewrite these general chaotic iterations as a usual discrete dynamical
613 system of the form $X^{n+1}=f(X^n)$ on an ad hoc metric space. Such a formulation
614 is required in order to study the topological behavior of the system.
616 Let us introduce the following function:
619 \chi: & \llbracket 1; \mathsf{N} \rrbracket \times \mathcal{P}\left(\llbracket 1; \mathsf{N} \rrbracket\right) & \longrightarrow & \mathds{B}\\
620 & (i,X) & \longmapsto & \left\{ \begin{array}{ll} 0 & \textrm{if }i \notin X, \\ 1 & \textrm{if }i \in X, \end{array}\right.
623 where $\mathcal{P}\left(X\right)$ is for the powerset of the set $X$, that is, $Y \in \mathcal{P}\left(X\right) \Longleftrightarrow Y \subset X$.
625 Given a function $f:\mathds{B}^\mathsf{N} \longrightarrow \mathds{B}^\mathsf{N} $, define the function:
629 F_{f}: & \mathcal{P}\left(\llbracket1;\mathsf{N}\rrbracket \right) \times \mathds{B}^{\mathsf{N}} &
630 \longrightarrow & \mathds{B}^{\mathsf{N}} \\
& (P,E) & \longmapsto & \left( E_{j}.\overline{\chi (j,P)}+\right.\\
& & &\left.f(E)_{j}.\chi(j,P)\right) _{j\in \llbracket1;\mathsf{N}\rrbracket},%
635 where + and . are the Boolean addition and product operations, and $\overline{x}$
636 is the negation of the Boolean $x$.
637 Consider the phase space:
639 \mathcal{X} = \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N} \times
640 \mathds{B}^\mathsf{N},
642 \noindent and the map defined on $\mathcal{X}$:
G_f\left(S,E\right) = \left(\sigma(S), F_f(i(S),E)\right),
646 \noindent where $\sigma$ is the \emph{shift} function defined by $\sigma
647 (S^{n})_{n\in \mathds{N}}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}\longrightarrow (S^{n+1})_{n\in
648 \mathds{N}}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}$ and $i$ is the \emph{initial function}
649 $i:(S^{n})_{n\in \mathds{N}} \in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}\longrightarrow S^{0}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)$.
650 Then the general chaotic iterations defined in Equation \ref{general CIs} can
651 be described by the following discrete dynamical system:
655 X^0 \in \mathcal{X} \\
661 Once more, a shift function appears as a component of these general chaotic
To study Devaney's chaos property, a distance between two points
665 $X = (S,E), Y = (\check{S},\check{E})$ of $\mathcal{X}$ must be defined.
668 d(X,Y)=d_{e}(E,\check{E})+d_{s}(S,\check{S}),
671 \noindent where $ \displaystyle{d_{e}(E,\check{E})} = \displaystyle{\sum_{k=1}^{\mathsf{N}%
672 }\delta (E_{k},\check{E}_{k})}$ is once more the Hamming distance, and
673 $ \displaystyle{d_{s}(S,\check{S})} = \displaystyle{\dfrac{9}{\mathsf{N}}%
\sum_{k=1}^{\infty }\dfrac{|S^k\Delta \check{S}^k|}{10^{k}}}$,
where $|X|$ denotes the cardinality of a set $X$ and $A\Delta B$ is the symmetric difference, defined for sets $A$, $B$ as
687 $A\,\Delta\,B = (A \setminus B) \cup (B \setminus A)$.
691 The function $d$ defined in Eq.~\ref{nouveau d} is a metric on $\mathcal{X}$.
695 $d_e$ is the Hamming distance. We will prove that $d_s$ is a distance
too; thus $d$, being the sum of two distances, will also be a distance.
698 \item Obviously, $d_s(S,\check{S})\geqslant 0$, and if $S=\check{S}$, then
699 $d_s(S,\check{S})=0$. Conversely, if $d_s(S,\check{S})=0$, then
$\forall k \in \mathds{N}, |S^k\Delta \check{S}^k|=0$, and so $\forall k, S^k=\check{S}^k$.
701 \item $d_s$ is symmetric
702 ($d_s(S,\check{S})=d_s(\check{S},S)$) due to the commutative property
703 of the symmetric difference.
704 \item Finally, $|S \Delta S''| = |(S \Delta \varnothing) \Delta S''|= |S \Delta (S'\Delta S') \Delta S''|= |(S \Delta S') \Delta (S' \Delta S'')|\leqslant |S \Delta S'| + |S' \Delta S''|$,
705 and so for all subsets $S,S',$ and $S''$ of $\llbracket 1, \mathsf{N} \rrbracket$,
we have $d_s(S,S'') \leqslant d_s(S,S')+d_s(S',S'')$, and the triangle
707 inequality is obtained.
712 Before being able to study the topological behavior of the general
713 chaotic iterations, we must first establish that:
716 For all $f:\mathds{B}^\mathsf{N} \longrightarrow \mathds{B}^\mathsf{N} $, the function $G_f$ is continuous on
717 $\left( \mathcal{X},d\right)$.
722 We use the sequential continuity.
723 Let $(S^n,E^n)_{n\in \mathds{N}}$ be a sequence of the phase space $%
724 \mathcal{X}$, which converges to $(S,E)$. We will prove that $\left(
725 G_{f}(S^n,E^n)\right) _{n\in \mathds{N}}$ converges to $\left(
726 G_{f}(S,E)\right) $. Let us remark that for all $n$, $S^n$ is a strategy,
727 thus, we consider a sequence of strategies (\emph{i.e.}, a sequence of
729 As $d((S^n,E^n);(S,E))$ converges to 0, each distance $d_{e}(E^n,E)$ and $d_{s}(S^n,S)$ converges
730 to 0. But $d_{e}(E^n,E)$ is an integer, so $\exists n_{0}\in \mathds{N},$ $%
731 d_{e}(E^n,E)=0$ for any $n\geqslant n_{0}$.\newline
732 In other words, there exists a threshold $n_{0}\in \mathds{N}$ after which no
733 cell will change its state:
734 $\exists n_{0}\in \mathds{N},n\geqslant n_{0}\Rightarrow E^n = E.$
736 In addition, $d_{s}(S^n,S)\longrightarrow 0,$ so $\exists n_{1}\in %
737 \mathds{N},d_{s}(S^n,S)<10^{-1}$ for all indexes greater than or equal to $%
738 n_{1}$. This means that for $n\geqslant n_{1}$, all the $S^n$ have the same
739 first term, which is $S^0$: $\forall n\geqslant n_{1},S_0^n=S_0.$
741 Thus, after the $max(n_{0},n_{1})^{th}$ term, states of $E^n$ and $E$ are
742 identical and strategies $S^n$ and $S$ start with the same first term.\newline
743 Consequently, states of $G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ are equal,
744 so, after the $max(n_0, n_1)^{th}$ term, the distance $d$ between these two points is strictly less than 1.\newline
745 \noindent We now prove that the distance between $\left(
746 G_{f}(S^n,E^n)\right) $ and $\left( G_{f}(S,E)\right) $ is convergent to
747 0. Let $\varepsilon >0$. \medskip
749 \item If $\varepsilon \geqslant 1$, we see that the distance
750 between $\left( G_{f}(S^n,E^n)\right) $ and $\left( G_{f}(S,E)\right) $ is
751 strictly less than 1 after the $max(n_{0},n_{1})^{th}$ term (same state).
753 \item If $\varepsilon <1$, then $\exists k\in \mathds{N},10^{-k}\geqslant
754 \varepsilon > 10^{-(k+1)}$. But $d_{s}(S^n,S)$ converges to 0, so
756 \exists n_{2}\in \mathds{N},\forall n\geqslant
757 n_{2},d_{s}(S^n,S)<10^{-(k+2)},
759 thus after $n_{2}$, the $k+2$ first terms of $S^n$ and $S$ are equal.
761 \noindent As a consequence, the $k+1$ first entries of the strategies of $%
762 G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ are the same ($G_{f}$ is a shift of strategies) and due to the definition of $d_{s}$, the floating part of
763 the distance between $(S^n,E^n)$ and $(S,E)$ is strictly less than $%
764 10^{-(k+1)}\leqslant \varepsilon $.\bigskip \newline
768 \forall \varepsilon >0,\exists N_{0}=max(n_{0},n_{1},n_{2})\in \mathds{N}%
769 ,\forall n\geqslant N_{0},$$
770 $$ d\left( G_{f}(S^n,E^n);G_{f}(S,E)\right)
771 \leqslant \varepsilon .
774 $G_{f}$ is consequently continuous.
778 It is now possible to study the topological behavior of the general chaotic
779 iterations. We will prove that,
782 \label{t:chaos des general}
The general chaotic iterations defined in Equation~\ref{general CIs} satisfy
Devaney's property of chaos.
787 Let us firstly prove the following lemma.
789 \begin{lemma}[Strong transitivity]
791 For all couples $X,Y \in \mathcal{X}$ and any neighborhood $V$ of $X$, we can
792 find $n \in \mathds{N}^*$ and $X' \in V$ such that $G^n(X')=Y$.
Let $X=(S,E)$, $\varepsilon>0$, and $k_0 = \lfloor -\log_{10}(\varepsilon) \rfloor + 1$.
797 Any point $X'=(S',E')$ such that $E'=E$ and $\forall k \leqslant k_0, S'^k=S^k$,
is in the open ball $\mathcal{B}\left(X,\varepsilon\right)$. Let us define
799 $\check{X} = \left(\check{S},\check{E}\right)$, where $\check{X}= G^{k_0}(X)$.
800 We denote by $s\subset \llbracket 1; \mathsf{N} \rrbracket$ the set of coordinates
801 that are different between $\check{E}$ and the state of $Y$. Thus each point $X'$ of
802 the form $(S',E')$ where $E'=E$ and $S'$ starts with
803 $(S^0, S^1, \hdots, S^{k_0},s,\hdots)$, verifies the following properties:
805 \item $X'$ is in $\mathcal{B}\left(X,\varepsilon\right)$,
806 \item the state of $G_f^{k_0+1}(X')$ is the state of $Y$.
808 Finally the point $\left(\left(S^0, S^1, \hdots, S^{k_0},s,s^0, s^1, \hdots\right); E\right)$,
809 where $(s^0,s^1, \hdots)$ is the strategy of $Y$, satisfies the properties
810 claimed in the lemma.
We can now prove Theorem~\ref{t:chaos des general}.
819 \begin{proof}[Theorem~\ref{t:chaos des general}]
820 Firstly, strong transitivity implies transitivity.
822 Let $(S,E) \in\mathcal{X}$ and $\varepsilon >0$. To
823 prove that $G_f$ is regular, it is sufficient to prove that
824 there exists a strategy $\tilde S$ such that the distance between
825 $(\tilde S,E)$ and $(S,E)$ is less than $\varepsilon$, and such that
826 $(\tilde S,E)$ is a periodic point.
828 Let $t_1=\lfloor-\log_{10}(\varepsilon)\rfloor$, and let $E'$ be the
829 configuration that we obtain from $(S,E)$ after $t_1$ iterations of
830 $G_f$. As $G_f$ is strongly transitive, there exists a strategy $S'$
831 and $t_2\in\mathds{N}$ such
832 that $E$ is reached from $(S',E')$ after $t_2$ iterations of $G_f$.
Consider the strategy $\tilde S$ that alternates the first $t_1$ terms
of $S$ and the first $t_2$ terms of $S'$:
$$\tilde S=(S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots).$$
It is clear that $(\tilde S,E)$ is obtained from $(\tilde S,E)$ after
$t_1+t_2$ iterations of $G_f$. So $(\tilde S,E)$ is a periodic
point. Since $\tilde S_t=S_t$ for $t<t_1$, by the choice of $t_1$, we
have $d((S,E),(\tilde S,E))<\varepsilon$.
847 \section{Efficient PRNG based on Chaotic Iterations}
848 \label{sec:efficient PRNG}
850 Based on the proof presented in the previous section, it is now possible to
851 improve the speed of the generator formerly presented in~\cite{bgw09:ip,guyeux10}.
852 The first idea is to consider
853 that the provided strategy is a pseudorandom Boolean vector obtained by a
855 An iteration of the system is simply the bitwise exclusive or between
856 the last computed state and the current strategy.
Topological properties of disorder exhibited by chaotic
iterations can be inherited by the inputted generator; by doing so, we hope to
obtain some statistical improvements while preserving speed.
892 \lstset{language=C,caption={C code of the sequential PRNG based on chaotic iterations},label=algo:seqCIPRNG}
unsigned int CIPRNG() {
  static unsigned int x = 123123123;  /* internal state: the last chaotic configuration */
  /* three 64-bit draws from the xor-like PRNGs */
  unsigned long t1 = xorshift();
  unsigned long t2 = xor128();
  unsigned long t3 = xorwow();
  /* xor the state with the low and high 32-bit halves of each draw */
  x = x^(unsigned int)t1;
  x = x^(unsigned int)(t2>>32);
  x = x^(unsigned int)(t3>>32);
  x = x^(unsigned int)t2;
  x = x^(unsigned int)(t1>>32);
  x = x^(unsigned int)t3;
914 In Listing~\ref{algo:seqCIPRNG} a sequential version of the proposed PRNG based
915 on chaotic iterations is presented. The xor operator is represented by
916 \textasciicircum. This function uses three classical 64-bits PRNGs, namely the
917 \texttt{xorshift}, the \texttt{xor128}, and the
918 \texttt{xorwow}~\cite{Marsaglia2003}. In the following, we call them ``xor-like
919 PRNGs''. As each xor-like PRNG uses 64-bits whereas our proposed generator
works with 32-bits, we use the cast \texttt{(unsigned int)}, which selects the
32 least significant bits of a given integer, and the expression \texttt{(unsigned
int)(t$>>$32)} in order to obtain the 32 most significant bits of \texttt{t}.
924 Thus producing a pseudorandom number needs 6 xor operations with 6 32-bits numbers
925 that are provided by 3 64-bits PRNGs. This version successfully passes the
926 stringent BigCrush battery of tests~\cite{LEcuyerS07}.
928 \section{Efficient PRNGs based on Chaotic Iterations on GPU}
929 \label{sec:efficient PRNG gpu}
In order to benefit from the computing power of GPU, a program
needs to have independent blocks of threads that can be computed
simultaneously. In general, the larger the number of threads, the smaller the
amount of local memory used, and the fewer branching instructions used
(if, while, ...), the better the performance on the GPU.
936 Obviously, having these requirements in mind, it is possible to build
937 a program similar to the one presented in Listing
938 \ref{algo:seqCIPRNG}, which computes pseudorandom numbers on GPU. To
939 do so, we must firstly recall that in the CUDA~\cite{Nvid10}
940 environment, threads have a local identifier called
941 \texttt{ThreadIdx}, which is relative to the block containing
them. Furthermore, in CUDA, parts of the code that are executed by the GPU are
called {\it kernels}.
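
As a minimal illustration of these two notions (this toy kernel is not part of the proposed PRNG), the following kernel simply makes each thread compute and store its own global index.

\begin{lstlisting}[language=C,caption={A toy CUDA kernel},label=lst:toy-kernel]
/* Each thread computes its global index from blockIdx, blockDim, and
   threadIdx, and writes it into global memory. */
__global__ void identityKernel(unsigned int *out, unsigned int numThreads) {
  unsigned int tid = blockIdx.x * blockDim.x + threadIdx.x;
  if (tid < numThreads)
    out[tid] = tid;
}
\end{lstlisting}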
946 \subsection{Naive Version for GPU}
949 It is possible to deduce from the CPU version a quite similar version adapted to GPU.
The basic principle consists in making each thread of the GPU compute the CPU version of our PRNG.
Of course, the three xor-like
PRNGs used in these computations must have different parameters.
In a given thread, these parameters are
randomly picked from another PRNG.
The initialization stage is performed by the CPU.
To do so, the ISAAC PRNG~\cite{Jenkins96} is used to set all the
957 parameters embedded into each thread.
959 The implementation of the three
960 xor-like PRNGs is straightforward when their parameters have been
allocated in the GPU memory. Each xor-like PRNG works with an internal
number $x$ that saves the last generated pseudorandom number. Additionally, the
implementation of the xor128, the xorshift, and the xorwow respectively require
4, 5, and 6 unsigned longs as internal variables.
969 \KwIn{InternalVarXorLikeArray: array with internal variables of the 3 xor-like
970 PRNGs in global memory\;
971 NumThreads: number of threads\;}
972 \KwOut{NewNb: array containing random numbers in global memory}
973 \If{threadIdx is concerned by the computation} {
974 retrieve data from InternalVarXorLikeArray[threadIdx] in local variables\;
976 compute a new PRNG as in Listing\ref{algo:seqCIPRNG}\;
977 store the new PRNG in NewNb[NumThreads*threadIdx+i]\;
979 store internal variables in InternalVarXorLikeArray[threadIdx]\;
982 \caption{Main kernel of the GPU ``naive'' version of the PRNG based on chaotic iterations}
983 \label{algo:gpu_kernel}
988 Algorithm~\ref{algo:gpu_kernel} presents a naive implementation of the proposed PRNG on
989 GPU. Due to the available memory in the GPU and the number of threads
990 used simultaneously, the number of random numbers that a thread can generate
991 inside a kernel is limited (\emph{i.e.}, the variable \texttt{n} in
992 algorithm~\ref{algo:gpu_kernel}). For instance, if $100,000$ threads are used and
993 if $n=100$\footnote{in fact, we need to add the initial seed (a 32-bits number)},
then the memory required to store all of the internal variables of the three xor-like
995 PRNGs\footnote{we multiply this number by $2$ in order to count 32-bits numbers}
and the pseudorandom numbers generated by our PRNG, is equal to $100,000\times ((4+5+6)\times
2+(1+100))=13,100,000$ 32-bits numbers, that is, approximately $52$MB.
999 This generator is able to pass the whole BigCrush battery of tests, for all
1000 the versions that have been tested depending on their number of threads
1001 (called \texttt{NumThreads} in our algorithm, tested up to $5$ million).
1004 The proposed algorithm has the advantage of manipulating independent
PRNGs, so this version is easily adaptable to a cluster of computers too. The only thing
1006 to ensure is to use a single ISAAC PRNG. To achieve this requirement, a simple solution consists in
1007 using a master node for the initialization. This master node computes the initial parameters
1008 for all the different nodes involved in the computation.
1011 \subsection{Improved Version for GPU}
1013 As GPU cards using CUDA have shared memory between threads of the same block, it
1014 is possible to use this feature in order to simplify the previous algorithm,
i.e., to use fewer than 3 xor-like PRNGs. The solution consists in computing only
one xor-like PRNG per thread, saving its output into the shared memory, and then using the results
of some other threads in the same block of threads. In order to define which
1018 thread uses the result of which other one, we can use a combination array that
1019 contains the indexes of all threads and for which a combination has been
1022 In Algorithm~\ref{algo:gpu_kernel2}, two combination arrays are used. The
1023 variable \texttt{offset} is computed using the value of
1024 \texttt{combination\_size}. Then we can compute \texttt{o1} and \texttt{o2}
1025 representing the indexes of the other threads whose results are used by the
1026 current one. In this algorithm, we consider that a 32-bits xor-like PRNG has
1027 been chosen. In practice, we use the xor128 proposed in~\cite{Marsaglia2003} in
1028 which unsigned longs (64 bits) have been replaced by unsigned integers (32
1031 This version can also pass the whole {\it BigCrush} battery of tests.
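
For readability, a CUDA sketch corresponding to Algorithm~\ref{algo:gpu_kernel2} is given below. The kernel signature, the block size, the storage indexing, and the simplified one-word \texttt{xorlike32()} step (standing in for the 32-bit xor128) are illustrative assumptions, and shared-memory initialization and synchronization details are omitted.

\begin{lstlisting}[language=C,caption={CUDA sketch of the improved version},label=lst:gpu-improved-sketch]
#define BLOCK_SIZE 512                       /* assumed threads per block */

/* Simplified 32-bit xor-like step (a one-word xorshift stands in here for
   the four-word xor128 actually used; illustrative simplification). */
__device__ unsigned int xorlike32(unsigned int *state) {
  unsigned int z = *state;
  z ^= z << 13;
  z ^= z >> 17;
  z ^= z << 5;
  *state = z;
  return z;
}

__global__ void ciPRNGimproved(unsigned int *xorState, unsigned int *chaoticState,
                               unsigned int *comb1, unsigned int *comb2,
                               unsigned int combSize, unsigned int n,
                               unsigned int *newNb, unsigned int numThreads) {
  __shared__ unsigned int shmem[BLOCK_SIZE];
  unsigned int tid = blockIdx.x * blockDim.x + threadIdx.x;
  if (tid < numThreads) {
    unsigned int *st = &xorState[tid];       /* per-thread xor-like state      */
    unsigned int x = chaoticState[tid];      /* last chaotic state x           */
    unsigned int offset = threadIdx.x % combSize;
    unsigned int o1 = threadIdx.x - offset + comb1[offset];
    unsigned int o2 = threadIdx.x - offset + comb2[offset];
    for (unsigned int i = 0; i < n; i++) {
      unsigned int t = xorlike32(st);        /* draw from the xor-like PRNG    */
      t = t ^ shmem[o1] ^ shmem[o2];         /* mix with two other threads     */
      shmem[threadIdx.x] = t;
      x = x ^ t;                             /* chaotic iteration: x = x xor t */
      newNb[i * numThreads + tid] = x;       /* store the produced number      */
    }
    chaoticState[tid] = x;                   /* save the internal state        */
  }
}
\end{lstlisting}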
1035 \KwIn{InternalVarXorLikeArray: array with internal variables of 1 xor-like PRNGs
1037 NumThreads: Number of threads\;
1038 array\_comb1, array\_comb2: Arrays containing combinations of size combination\_size\;}
1040 \KwOut{NewNb: array containing random numbers in global memory}
1041 \If{threadId is concerned} {
1042 retrieve data from InternalVarXorLikeArray[threadId] in local variables including shared memory and x\;
1043 offset = threadIdx\%combination\_size\;
1044 o1 = threadIdx-offset+array\_comb1[offset]\;
1045 o2 = threadIdx-offset+array\_comb2[offset]\;
1048 t=t\textasciicircum shmem[o1]\textasciicircum shmem[o2]\;
1049 shared\_mem[threadId]=t\;
1050 x = x\textasciicircum t\;
1052 store the new PRNG in NewNb[NumThreads*threadId+i]\;
1054 store internal variables in InternalVarXorLikeArray[threadId]\;
1057 \caption{Main kernel for the chaotic iterations based PRNG GPU efficient
1059 \label{algo:gpu_kernel2}
1062 \subsection{Theoretical Evaluation of the Improved Version}
1064 A run of Algorithm~\ref{algo:gpu_kernel2} consists in an operation ($x=x\oplus t$) having
1065 the form of Equation~\ref{equation Oplus}, which is equivalent to the iterative
1066 system of Eq.~\ref{eq:generalIC}. That is, an iteration of the general chaotic
1067 iterations is realized between the last stored value $x$ of the thread and a strategy $t$
1068 (obtained by a bitwise exclusive or between a value provided by a xor-like() call
1069 and two values previously obtained by two other threads).
1070 To be certain that we are in the framework of Theorem~\ref{t:chaos des general},
1071 we must guarantee that this dynamical system iterates on the space
1072 $\mathcal{X} = \mathcal{P}\left(\llbracket 1, \mathsf{N} \rrbracket\right)^\mathds{N}\times\mathds{B}^\mathsf{N}$.
The left term $x$ obviously belongs to $\mathds{B}^\mathsf{N}$.
To prevent any flaw in the chaotic properties, we must check that the right
1075 term (the last $t$), corresponding to the strategies, can possibly be equal to any
1076 integer of $\llbracket 1, \mathsf{N} \rrbracket$.
1078 Such a result is obvious, as for the xor-like(), all the
integers belonging to its interval of definition can occur at each iteration, and thus the
1080 last $t$ respects the requirement. Furthermore, it is possible to
1081 prove by an immediate mathematical induction that, as the initial $x$
1082 is uniformly distributed (it is provided by a cryptographically secure PRNG),
1083 the two other stored values shmem[o1] and shmem[o2] are uniformly distributed too,
1084 (this is the induction hypothesis), and thus the next $x$ is finally uniformly distributed.
1086 Thus Algorithm~\ref{algo:gpu_kernel2} is a concrete realization of the general
1087 chaotic iterations presented previously, and for this reason, it satisfies the
1088 Devaney's formulation of a chaotic behavior.
1090 \section{Experiments}
1091 \label{sec:experiments}
1093 Different experiments have been performed in order to measure the generation
1094 speed. We have used a first computer equipped with a Tesla C1060 NVidia GPU card
Intel Xeon E5530 clocked at 2.40 GHz, and
1097 a second computer equipped with a smaller CPU and a GeForce GTX 280.
1099 cards have 240 cores.
1101 In Figure~\ref{fig:time_xorlike_gpu} we compare the quantity of pseudorandom numbers
1102 generated per second with various xor-like based PRNGs. In this figure, the optimized
1103 versions use the {\it xor64} described in~\cite{Marsaglia2003}, whereas the naive versions
1104 embed the three xor-like PRNGs described in Listing~\ref{algo:seqCIPRNG}. In
order to obtain optimal performance, the storage of pseudorandom numbers
into the GPU memory has been removed. This step is time consuming and slows down the
generation of numbers. Moreover, this storage is completely
useless for applications that consume the pseudorandom
numbers directly after generation. We can see that when the number of threads is greater
1110 than approximately 30,000 and lower than 5 million, the number of pseudorandom numbers generated
1111 per second is almost constant. With the naive version, this value ranges from 2.5 to
1112 3GSamples/s. With the optimized version, it is approximately equal to
1113 20GSamples/s. Finally we can remark that both GPU cards are quite similar, but in
1114 practice, the Tesla C1060 has more memory than the GTX 280, and this memory
1115 should be of better quality.
1116 As a comparison, Listing~\ref{algo:seqCIPRNG} leads to the generation of about
1117 138MSample/s when using one core of the Xeon E5530.
1119 \begin{figure}[htbp]
1121 \includegraphics[width=\columnwidth]{curve_time_xorlike_gpu.pdf}
1123 \caption{Quantity of pseudorandom numbers generated per second with the xorlike-based PRNG}
1124 \label{fig:time_xorlike_gpu}
1131 In Figure~\ref{fig:time_bbs_gpu} we highlight the performances of the optimized
1132 BBS-based PRNG on GPU. On the Tesla C1060 we obtain approximately 700MSample/s
1133 and on the GTX 280 about 670MSample/s, which is obviously slower than the
1134 xorlike-based PRNG on GPU. However, we will show in the next sections that this
1135 new PRNG has a strong level of security, which is necessarily paid by a speed
1138 \begin{figure}[htbp]
1140 \includegraphics[width=\columnwidth]{curve_time_bbs_gpu.pdf}
1142 \caption{Quantity of pseudorandom numbers generated per second using the BBS-based PRNG}
1143 \label{fig:time_bbs_gpu}
All these experiments allow us to conclude that it is possible to
generate a very large quantity of statistically perfect pseudorandom numbers with the xor-like version.
To a certain extent, it is also the case with the secure BBS-based version, the speed reduction being
explained by the fact that the former version has ``only''
chaotic properties and statistical perfection, whereas the latter is also cryptographically secure,
as it is shown in the next sections.
1159 \section{Security Analysis}
1160 \label{sec:security analysis}
1164 In this section the concatenation of two strings $u$ and $v$ is classically
1166 In a cryptographic context, a pseudorandom generator is a deterministic
1167 algorithm $G$ transforming strings into strings and such that, for any
1168 seed $s$ of length $m$, $G(s)$ (the output of $G$ on the input $s$) has size
1169 $\ell_G(m)$ with $\ell_G(m)>m$.
1170 The notion of {\it secure} PRNGs can now be defined as follows.
1173 A cryptographic PRNG $G$ is secure if for any probabilistic polynomial time
1174 algorithm $D$, for any positive polynomial $p$, and for all sufficiently
$$| \mathrm{Pr}[D(G(U_m))=1]-\mathrm{Pr}[D(U_{\ell_G(m)})=1]|< \frac{1}{p(m)},$$
1177 where $U_r$ is the uniform distribution over $\{0,1\}^r$ and the
1178 probabilities are taken over $U_m$, $U_{\ell_G(m)}$ as well as over the
1179 internal coin tosses of $D$.
1182 Intuitively, it means that there is no polynomial time algorithm that can
1183 distinguish a perfect uniform random generator from $G$ with a non
1184 negligible probability. The interested reader is referred
1185 to~\cite[chapter~3]{Goldreich} for more information. Note that it is
1186 quite easily possible to change the function $\ell$ into any polynomial
function $\ell^\prime$ satisfying $\ell^\prime(m)>m$~\cite[Chapter 3.3]{Goldreich}.
The generation scheme developed in (\ref{equation Oplus}) is based on a
1190 pseudorandom generator. Let $H$ be a cryptographic PRNG. We may assume,
1191 without loss of generality, that for any string $S_0$ of size $N$, the size
1192 of $H(S_0)$ is $kN$, with $k>2$. It means that $\ell_H(N)=kN$.
1193 Let $S_1,\ldots,S_k$ be the
1194 strings of length $N$ such that $H(S_0)=S_1 \ldots S_k$ ($H(S_0)$ is the concatenation of
1195 the $S_i$'s). The cryptographic PRNG $X$ defined in (\ref{equation Oplus})
1196 is the algorithm mapping any string of length $2N$ $x_0S_0$ into the string
1197 $(x_0\oplus S_0 \oplus S_1)(x_0\oplus S_0 \oplus S_1\oplus S_2)\ldots
(x_0\oplus\bigoplus_{i=0}^{i=k}S_i)$. One in particular has $\ell_{X}(2N)=kN=\ell_H(N)$.
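
As an illustration of this construction, here is a minimal C sketch in the toy case $N=32$, so that each block $S_i$ fits into one \texttt{unsigned int}; the function name and the representation of $H(S_0)$ as an array are assumptions made for readability.

\begin{lstlisting}[language=C,caption={Sketch of the generator X built from H (toy case N=32)},label=lst:x-from-h]
/* S[0..k-1] holds the k blocks S_1,...,S_k of H(S_0); out[i] receives the
   (i+1)-th output block, i.e. the running xor x_0 ^ S_0 ^ S_1 ^ ... ^ S_{i+1}. */
void X_from_H(unsigned int x0, unsigned int S0,
              const unsigned int *S, int k, unsigned int *out) {
  unsigned int x = x0 ^ S0;
  for (int i = 0; i < k; i++) {
    x ^= S[i];        /* general chaotic iteration: x^n = x^{n-1} xor S^n */
    out[i] = x;
  }
}
\end{lstlisting}
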
1199 We claim now that if this PRNG is secure,
1200 then the new one is secure too.
1203 \label{cryptopreuve}
1204 If $H$ is a secure cryptographic PRNG, then $X$ is a secure cryptographic
1209 The proposition is proved by contraposition. Assume that $X$ is not
1210 secure. By Definition, there exists a polynomial time probabilistic
1211 algorithm $D$, a positive polynomial $p$, such that for all $k_0$ there exists
1212 $N\geq \frac{k_0}{2}$ satisfying
$$| \mathrm{Pr}[D(X(U_{2N}))=1]-\mathrm{Pr}[D(U_{kN})=1]|\geq \frac{1}{p(2N)}.$$
1214 We describe a new probabilistic algorithm $D^\prime$ on an input $w$ of size
1217 \item Decompose $w$ into $w=w_1\ldots w_{k}$, where each $w_i$ has size $N$.
1218 \item Pick a string $y$ of size $N$ uniformly at random.
1219 \item Compute $z=(y\oplus w_1)(y\oplus w_1\oplus w_2)\ldots (y
1220 \bigoplus_{i=1}^{i=k} w_i).$
1221 \item Return $D(z)$.
Consider for each $y\in \mathbb{B}^{N}$ the function $\varphi_{y}$
1226 from $\mathbb{B}^{kN}$ into $\mathbb{B}^{kN}$ mapping $w=w_1\ldots w_k$
1227 (each $w_i$ has length $N$) to
1228 $(y\oplus w_1)(y\oplus w_1\oplus w_2)\ldots (y
\bigoplus_{i=1}^{i=k} w_i).$ By construction, one has for every $w$,
1230 \begin{equation}\label{PCH-1}
1231 D^\prime(w)=D(\varphi_y(w)),
1233 where $y$ is randomly generated.
1234 Moreover, for each $y$, $\varphi_{y}$ is injective: if
$(y\oplus w_1)(y\oplus w_1\oplus w_2)\ldots (y\bigoplus_{i=1}^{i=k}
1236 w_i)=(y\oplus w_1^\prime)(y\oplus w_1^\prime\oplus w_2^\prime)\ldots
1237 (y\bigoplus_{i=1}^{i=k} w_i^\prime)$, then for every $1\leq j\leq k$,
1238 $y\bigoplus_{i=1}^{i=j} w_i^\prime=y\bigoplus_{i=1}^{i=j} w_i$. It follows,
1239 by a direct induction, that $w_i=w_i^\prime$. Furthermore, since $\mathbb{B}^{kN}$
1240 is finite, each $\varphi_y$ is bijective. Therefore, and using (\ref{PCH-1}),
1242 $\mathrm{Pr}[D^\prime(U_{kN})=1]=\mathrm{Pr}[D(\varphi_y(U_{kN}))=1]$ and,
1244 \begin{equation}\label{PCH-2}
1245 \mathrm{Pr}[D^\prime(U_{kN})=1]=\mathrm{Pr}[D(U_{kN})=1].
1248 Now, using (\ref{PCH-1}) again, one has for every $x$,
1249 \begin{equation}\label{PCH-3}
1250 D^\prime(H(x))=D(\varphi_y(H(x))),
1252 where $y$ is randomly generated. By construction, $\varphi_y(H(x))=X(yx)$,
\begin{equation}
D^\prime(H(x))=D(X(yx)),
1257 where $y$ is randomly generated.
1260 \begin{equation}\label{PCH-4}
\mathrm{Pr}[D^\prime(H(U_{N}))=1]=\mathrm{Pr}[D(X(U_{2N}))=1].
From (\ref{PCH-2}) and (\ref{PCH-4}), one can deduce that
there exists a polynomial-time probabilistic
algorithm $D^\prime$ and a positive polynomial $p$ such that for all $k_0$ there exists
$N\geq \frac{k_0}{2}$ satisfying
$$| \mathrm{Pr}[D^\prime(H(U_{N}))=1]-\mathrm{Pr}[D^\prime(U_{kN})=1]|\geq \frac{1}{p(2N)},$$
proving that $H$ is not secure, which concludes the proof by contraposition.
\section{Cryptographic Applications}
1274 \subsection{A Cryptographically Secure PRNG for GPU}
1277 It is possible to build a cryptographically secure PRNG based on the previous
1278 algorithm (Algorithm~\ref{algo:gpu_kernel2}). Due to Proposition~\ref{cryptopreuve},
1279 it simply consists in replacing
1280 the {\it xor-like} PRNG by a cryptographically secure one.
We have chosen the Blum Blum Shub generator~\cite{BBS} (usually denoted by BBS), which has the form
$$x_{n+1}=x_n^2~mod~M,$$ where $M$ is the product of two prime numbers (these
prime numbers need to be congruent to 3 modulo 4). BBS is known to be
1284 very slow and only usable for cryptographic applications.
The modulo operation is the most time-consuming operation on current
GPU cards. So, in order to obtain reasonable performances, it is
required to compute the modulo only on 32-bit integers. Consequently,
$x_n^2$ needs to be less than $2^{32}$, and thus the modulus $M$ must be
less than $2^{16}$. In practice we can thus choose prime numbers around
256 that are congruent to 3 modulo 4. With such moduli, only the
4 least significant bits of $x_n$ can be used (the maximum number of
indistinguishable bits is less than or equal to
$\log_2(\log_2(M))$). In other words, to generate a 32-bit number, we need to call
the BBS algorithm 8 times, possibly with different values of $M$. This
approach is not sufficient to pass all the tests of TestU01,
as small values of $M$ for the BBS lead to
small periods. So, in order to add randomness, we have made
the following modifications.
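To fix ideas, here is a minimal C sketch of one such BBS step with a 16-bit modulus. The two primes below (251 and 239, both congruent to 3 modulo 4 and close to 256) are only an illustrative choice, not necessarily the values used in the implementation.

\begin{verbatim}
/* One BBS step: x_{n+1} = x_n^2 mod M.  Since M < 2^16, the square
   x_n^2 is below 2^32 and fits in a 32-bit unsigned integer. */
#define BBS_M 59989u                   /* 251 * 239, illustrative modulus */

static unsigned int bbs_step(unsigned int *x)
{
    *x = ((*x) * (*x)) % BBS_M;
    return *x;
}

/* Only about log2(log2(M)), i.e. roughly 4, bits of each state are
   indistinguishable from random, so a single call contributes 4 bits. */
static unsigned int bbs_nibble(unsigned int *x)
{
    return bbs_step(x) & 15u;
}
\end{verbatim}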
Firstly, we define 16 arrangement arrays instead of 2 (as described in
Algorithm \ref{algo:gpu_kernel2}), but only 2 of them are used at each call of
the PRNG kernels. In practice, the selection of the combination
arrays to be used is different for all the threads. It is determined
by using the last three bits of two internal variables used by BBS.
In Algorithm~\ref{algo:bbs_gpu}, the
character \& denotes the bitwise AND. Thus, applying \&7 to a number
keeps its last 3 bits, providing a number between 0 and 7.
Secondly, after the generation of the 8 BBS numbers for each thread, we
obtain a 32-bit number whose period is possibly quite small. So,
to add randomness, we generate 4 more BBS numbers in order to
shift the 32-bit number and add up to 6 new bits. This improvement is
described in Algorithm~\ref{algo:bbs_gpu}. In practice, the last 2 bits
of the first new BBS number are used to make a left shift of at most
3 bits. The last 3 bits of the second new BBS number are added to the
strategy, whatever the value of the first left shift. The third and the
fourth new BBS numbers are used similarly to apply a second left shift
and insert new bits.
Finally, as we use 8 BBS numbers for each thread, the storage of these
numbers at the end of the kernel is performed using a rotation: the
internal variable for BBS number 1 is stored in place 2, the internal
variable for BBS number 2 is stored in place 3, \ldots, and finally the internal
variable for BBS number 8 is stored in place 1.
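As a small illustration of this storage step, the rotation of the eight internal states could look as follows; this is only a sketch, and the flat indexing of the global array is an assumption, not the exact layout used by the kernel.

\begin{verbatim}
/* Rotate the 8 BBS states: state i is stored in place i+1,
   and state 8 wraps around to place 1. */
void store_rotated(unsigned int InternalVarBBSArray[], int threadId,
                   const unsigned int bbs[8])
{
    for (int i = 0; i < 8; i++)
        InternalVarBBSArray[8 * threadId + (i + 1) % 8] = bbs[i];
}
\end{verbatim}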
1333 \KwIn{InternalVarBBSArray: array with internal variables of the 8 BBS
1335 NumThreads: Number of threads\;
1336 array\_comb: 2D Arrays containing 16 combinations (in first dimension) of size combination\_size (in second dimension)\;
1337 array\_shift[4]=\{0,1,3,7\}\;
1340 \KwOut{NewNb: array containing random numbers in global memory}
1341 \If{threadId is concerned} {
1342 retrieve data from InternalVarBBSArray[threadId] in local variables including shared memory and x\;
1343 we consider that bbs1 ... bbs8 represent the internal states of the 8 BBS numbers\;
1344 offset = threadIdx\%combination\_size\;
1345 o1 = threadIdx-offset+array\_comb[bbs1\&7][offset]\;
o2 = threadIdx-offset+array\_comb[8+(bbs2\&7)][offset]\;
1353 \tcp{two new shifts}
1354 shift=BBS3(bbs3)\&3\;
1356 t|=BBS1(bbs1)\&array\_shift[shift]\;
1357 shift=BBS7(bbs7)\&3\;
1359 t|=BBS2(bbs2)\&array\_shift[shift]\;
1360 t=t\textasciicircum shmem[o1]\textasciicircum shmem[o2]\;
shmem[threadId]=t\;
1362 x = x\textasciicircum t\;
1364 store the new PRNG in NewNb[NumThreads*threadId+i]\;
store internal variables in InternalVarBBSArray[threadId] using a rotation\;
1369 \caption{main kernel for the BBS based PRNG GPU}
1370 \label{algo:bbs_gpu}
In Algorithm~\ref{algo:bbs_gpu}, $n$ is the quantity of random numbers that
a thread has to generate. The operation t<<=4 performs a left shift of 4 bits
on the variable $t$ and stores the result in $t$, while $BBS1(bbs1)\&15$ selects
the last four bits of the result of $BBS1$. Thus an operation of the form
$t<<=4; t|=BBS1(bbs1)\&15\;$ performs on $t$ a left shift of 4 bits, and then
puts the 4 last bits of $BBS1(bbs1)$ in the 4 last positions of $t$. Let us
remark that the initialization of $t$ is not necessary, as we fill it 4 bits at a time
until 32 bits have been obtained. The two last new shifts are realized in
order to enlarge the small periods of the BBS used here and to introduce some
variability. In these operations, we twice apply a left shift of $t$ of \emph{at
most} 3 bits, represented by \texttt{shift} in the algorithm, and we put
\emph{exactly} the \texttt{shift} last bits of a BBS number into the \texttt{shift}
last bits of $t$. For this, an array named \texttt{array\_shift} is used: it maps each
value of \texttt{shift} to the mask made of \texttt{shift} ones that is used in the
\texttt{and} operation. For example, with a left shift of 0,
we apply an \texttt{and} with 0, while with a left shift of 3, we apply an \texttt{and}
with 7 (111 in binary).
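The following C sketch summarizes how the 32-bit value $t$ could be assembled from eight BBS nibbles and then enlarged by the two extra variable shifts. It reuses the illustrative \texttt{bbs\_step} and \texttt{bbs\_nibble} helpers sketched above, uses a single modulus for all states, and maps the internal-state indices of Algorithm~\ref{algo:bbs_gpu} only loosely, so it is a simplified model rather than the actual device code.

\begin{verbatim}
/* uses bbs_step() and bbs_nibble() from the previous sketch */
unsigned int assemble_t(unsigned int bbs[8])
{
    const unsigned int array_shift[4] = {0, 1, 3, 7};  /* masks of 0..3 ones */
    unsigned int t = 0, shift;

    /* eight times: shift t left by 4 bits and append 4 fresh BBS bits */
    for (int i = 0; i < 8; i++) {
        t <<= 4;
        t |= bbs_nibble(&bbs[i]);
    }

    /* first extra shift of at most 3 bits, then put exactly `shift'
       new BBS bits into the freed positions */
    shift = bbs_step(&bbs[2]) & 3u;            /* role of BBS3(bbs3) & 3 */
    t <<= shift;
    t |= bbs_step(&bbs[0]) & array_shift[shift];

    /* second extra shift, done the same way with two other states */
    shift = bbs_step(&bbs[6]) & 3u;            /* role of BBS7(bbs7) & 3 */
    t <<= shift;
    t |= bbs_step(&bbs[1]) & array_shift[shift];

    return t;                                  /* this is S^n, xored with x */
}
\end{verbatim}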
It should be noticed that this generator has once more the form $x^{n+1} = x^n \oplus S^n$,
where $S^n$ is denoted by $t$ in this algorithm: each iteration of this
PRNG ends with $x = x \oplus t$. This $S^n$ is only constituted
of secure bits produced by the BBS generator, and thus, due to
Proposition~\ref{cryptopreuve}, the resulting PRNG is cryptographically
secure too.
1400 \subsection{Toward a Cryptographically Secure and Chaotic Asymmetric Cryptosystem}
1401 \label{Blum-Goldwasser}
We conclude this work by giving some thoughts about the use of
the proposed PRNG in an asymmetric cryptosystem.
This first approach will be further investigated in future work.
\subsubsection{Recall of the Blum-Goldwasser Probabilistic Cryptosystem}
1408 The Blum-Goldwasser cryptosystem is a cryptographically secure asymmetric key encryption algorithm
1409 proposed in 1984~\cite{Blum:1985:EPP:19478.19501}. The encryption algorithm
implements an XOR-based stream cipher that uses the BBS PRNG to generate
the keystream. Decryption is done by recovering the initial seed thanks to
1412 the final state of the BBS generator and the secret key, thus leading to the
1413 reconstruction of the keystream.
1415 The key generation consists in generating two prime numbers $(p,q)$,
1416 randomly and independently of each other, that are
congruent to 3 mod 4, and in computing the modulus $N=pq$.
1418 The public key is $N$, whereas the secret key is the factorization $(p,q)$.
1421 Suppose Bob wishes to send a string $m=(m_0, \dots, m_{L-1})$ of $L$ bits to Alice:
1423 \item Bob picks an integer $r$ randomly in the interval $\llbracket 1,N\rrbracket$ and computes $x_0 = r^2~mod~N$.
1424 \item He uses the BBS to generate the keystream of $L$ pseudorandom bits $(b_0, \dots, b_{L-1})$, as follows. For $i=0$ to $L-1$,
1427 \item While $i \leqslant L-1$:
\item Set $b_i$ equal to the least-significant\footnote{As signaled previously, BBS can securely output up to $\mathsf{N} = \lfloor \log(\log(N)) \rfloor$ of the least-significant bits of $x_i$ during each round.} bit of $x_i$,
1431 \item $x_i = (x_{i-1})^2~mod~N.$
1434 \item The ciphertext is computed by XORing the plaintext bits $m$ with the keystream: $ c = (c_0, \dots, c_{L-1}) = m \oplus b$. This ciphertext is $[c, y]$, where $y=x_{0}^{2^{L}}~mod~N.$
1438 When Alice receives $\left[(c_0, \dots, c_{L-1}), y\right]$, she can recover $m$ as follows:
1440 \item Using the secret key $(p,q)$, she computes $r_p = y^{((p+1)/4)^{L}}~mod~p$ and $r_q = y^{((q+1)/4)^{L}}~mod~q$.
\item The initial seed can be obtained using the following procedure: $x_0=\left(q(q^{-1}~{mod}~p)r_p + p(p^{-1}~{mod}~q)r_q\right)~{mod}~N$.
1442 \item She recomputes the bit-vector $b$ by using BBS and $x_0$.
1443 \item Alice finally computes the plaintext by XORing the keystream with the ciphertext: $ m = c \oplus b$.
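As a purely illustrative companion to this description, the following C sketch runs the encryption side with deliberately tiny parameters. The primes, the value of $r$, and the message are toy values chosen for readability only, and a single keystream bit is extracted per squaring, as in the recall above.

\begin{verbatim}
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t p = 499, q = 547;   /* toy primes, both congruent to 3 mod 4 */
    const uint64_t N = p * q;          /* Alice's public key */
    const uint64_t r = 159201;         /* fixed toy value standing for Bob's random r */
    uint64_t x = (r * r) % N;          /* x_0 = r^2 mod N */

    const int L = 8;
    int m[8] = {1, 0, 1, 1, 0, 0, 1, 0};  /* plaintext bits */
    int c[8];

    /* keystream: b_i = least-significant bit of x_i, then x <- x^2 mod N */
    for (int i = 0; i < L; i++) {
        c[i] = m[i] ^ (int)(x & 1u);
        x = (x * x) % N;
    }

    /* the ciphertext sent to Alice is (c, y), y being the final BBS state */
    printf("y = %llu, c = ", (unsigned long long)x);
    for (int i = 0; i < L; i++) printf("%d", c[i]);
    printf("\n");
    return 0;
}
\end{verbatim}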
1447 \subsubsection{Proposal of a new Asymmetric Cryptosystem Adapted from Blum-Goldwasser}
1449 We propose to adapt the Blum-Goldwasser protocol as follows.
Let $\mathsf{N} = \lfloor \log(\log(N)) \rfloor$ be the number of bits that can
be obtained securely with the BBS generator using the public key $N$ of Alice.
Alice will also pick a random $S^0$ in $\llbracket 0, 2^{\mathsf{N}}-1\rrbracket$, and
her new public key will be $(S^0, N)$.
1455 To encrypt his message, Bob will compute
1458 $c = \left(m_0 \oplus (b_0 \oplus S^0), m_1 \oplus (b_0 \oplus b_1 \oplus S^0), \hdots, \right.$
1459 $ \left. m_{L-1} \oplus (b_0 \oplus b_1 \hdots \oplus b_{L-1} \oplus S^0) \right)$
1461 instead of $\left(m_0 \oplus b_0, m_1 \oplus b_1, \hdots, m_{L-1} \oplus b_{L-1} \right)$.
1463 The same decryption stage as in Blum-Goldwasser leads to the sequence
1464 $\left(m_0 \oplus S^0, m_1 \oplus S^0, \hdots, m_{L-1} \oplus S^0 \right)$.
1465 Thus, with a simple use of $S^0$, Alice can obtain the plaintext.
1466 By doing so, the proposed generator is used in place of BBS, leading to
1467 the inheritance of all the properties presented in this paper.
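The claim above can be checked directly: since Alice recomputes the bit vector $b$ exactly as in Blum-Goldwasser, she can XOR the $i$-th ciphertext component with the cumulated keystream $b_0\oplus\cdots\oplus b_i$, which gives
$$c_i \oplus (b_0\oplus\cdots\oplus b_i) = m_i \oplus (b_0\oplus\cdots\oplus b_i\oplus S^0) \oplus (b_0\oplus\cdots\oplus b_i) = m_i\oplus S^0,$$
and a final XOR with $S^0$ then yields $m_i$.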
1469 \section{Conclusion}
In this paper, a formerly proposed PRNG based on chaotic iterations
has been generalized to improve its speed. It has been proven to be
chaotic according to Devaney.
Efficient implementations on GPU using xor-like PRNGs as input generators
have shown that a very large quantity of pseudorandom numbers can be generated per second (about
20~Gsamples/s), and that these proposed PRNGs succeed in passing the hardest battery of tests in TestU01,
namely BigCrush.
Furthermore, we have shown that when the input generator is cryptographically
secure, the proposed PRNG is secure too, thus making it possible
to develop fast and secure PRNGs using the GPU architecture.
Finally, thoughts about an improvement of the Blum-Goldwasser cryptosystem, using the
proposed method, have been presented.
In future work we plan to extend this research by building a parallel PRNG for clusters or
grid computing. Topological properties of the various proposed generators will be investigated,
and the use of other categories of PRNGs as input will be studied too. The improvement
of Blum-Goldwasser will be deepened. Finally, we
will try to enlarge the quantity of pseudorandom numbers generated per second, both
in a simulation context and in a cryptographic one.
1494 \bibliographystyle{plain}
1495 \bibliography{mabase}