1 %\documentclass{article}
2 \documentclass[10pt,journal,letterpaper,compsoc]{IEEEtran}
3 \usepackage[utf8]{inputenc}
4 \usepackage[T1]{fontenc}
11 \usepackage[ruled,vlined]{algorithm2e}
13 \usepackage[standard]{ntheorem}
14 \usepackage{algorithmic}
% For mathds: the sets IR, IN, etc.
% For integer intervals
% For subfigures within figures
25 \usepackage{subfigure}
29 \newtheorem{notation}{Notation}
31 \newcommand{\X}{\mathcal{X}}
32 \newcommand{\Go}{G_{f_0}}
33 \newcommand{\B}{\mathds{B}}
34 \newcommand{\N}{\mathds{N}}
35 \newcommand{\BN}{\mathds{B}^\mathsf{N}}
38 \newcommand{\alert}[1]{\begin{color}{blue}\textit{#1}\end{color}}
40 \title{Efficient and Cryptographically Secure Generation of Chaotic Pseudorandom Numbers on GPU}
43 \author{Jacques M. Bahi, Rapha\"{e}l Couturier, Christophe
44 Guyeux, and Pierre-Cyrille Héam\thanks{Authors in alphabetic order}}
47 \IEEEcompsoctitleabstractindextext{
49 In this paper we present a new pseudorandom number generator (PRNG) on
50 graphics processing units (GPU). This PRNG is based on the so-called chaotic iterations. It
is firstly proven to be chaotic according to Devaney's formulation. We thus propose an efficient
52 implementation for GPU that successfully passes the {\it BigCrush} tests, deemed to be the hardest
53 battery of tests in TestU01. Experiments show that this PRNG can generate
about 20 billion random numbers per second on Tesla C1060 and NVidia GTX280
56 It is then established that, under reasonable assumptions, the proposed PRNG can be cryptographically
58 A chaotic version of the Blum-Goldwasser asymmetric key encryption scheme is finally proposed.
66 \IEEEdisplaynotcompsoctitleabstractindextext
67 \IEEEpeerreviewmaketitle
70 \section{Introduction}
72 Randomness is of importance in many fields such as scientific simulations or cryptography.
73 ``Random numbers'' can mainly be generated either by a deterministic and reproducible algorithm
74 called a pseudorandom number generator (PRNG), or by a physical non-deterministic
75 process having all the characteristics of a random noise, called a truly random number
77 In this paper, we focus on reproducible generators, useful for instance in
78 Monte-Carlo based simulators or in several cryptographic schemes.
79 These domains need PRNGs that are statistically irreproachable.
80 In some fields such as in numerical simulations, speed is a strong requirement
81 that is usually attained by using parallel architectures. In that case,
a recurrent problem is that a degradation of the statistical quality is often
reported when a good PRNG is parallelized.
84 This is why ad-hoc PRNGs for each possible architecture must be found to
85 achieve both speed and randomness.
On the other hand, speed is not the main requirement in cryptography: the crucial
87 need is to define \emph{secure} generators able to withstand malicious
88 attacks. Roughly speaking, an attacker should not be able in practice to make
89 the distinction between numbers obtained with the secure generator and a true random
91 Finally, a small part of the community working in this domain focuses on a
92 third requirement, that is to define chaotic generators.
93 The main idea is to take benefits from a chaotic dynamical system to obtain a
generator that is unpredictable, disordered, sensitive to its seed, or, in other words, chaotic.
The objective is to map a given chaotic dynamics into a sequence that seems random
and unassailable due to chaos.
However, the chaotic maps used as a pattern are defined on the real line
98 whereas computers deal with finite precision numbers.
This distortion leads to a deterioration of both the chaotic properties and the speed.
Furthermore, authors of such chaotic generators often claim that their PRNG
is secure due to its chaos properties, but there is no obvious relation
between chaos and security as it is understood in cryptography.
103 This is why the use of chaos for PRNG still remains marginal and disputable.
105 The authors' opinion is that topological properties of disorder, as they are
106 properly defined in the mathematical theory of chaos, can reinforce the quality
of a PRNG, but they cannot be substituted for security or statistical perfection.
108 Indeed, to the authors' mind, such properties can be useful in the two following situations. On the
109 one hand, a post-treatment based on a chaotic dynamical system can be applied
to a statistically deficient PRNG, in order to improve its statistical
111 properties. Such an improvement can be found, for instance, in~\cite{bgw09:ip,bcgr11:ip}.
112 On the other hand, chaos can be added to a fast, statistically perfect PRNG and/or a
cryptographically secure one, in cases where chaos can be of interest,
114 \emph{only if these last properties are not lost during
115 the proposed post-treatment}. Such an assumption is behind this research work.
It leads to attempts to define a
family of PRNGs that are chaotic while being fast and statistically perfect,
or cryptographically secure.
119 Let us finish this paragraph by noticing that, in this paper,
120 statistical perfection refers to the ability to pass the whole
121 {\it BigCrush} battery of tests, which is widely considered as the most
122 stringent statistical evaluation of a sequence claimed as random.
123 This battery can be found in the well-known TestU01 package~\cite{LEcuyerS07}.
124 Chaos, for its part, refers to the well-established definition of a
125 chaotic dynamical system proposed by Devaney~\cite{Devaney}.
128 In a previous work~\cite{bgw09:ip,guyeux10} we have proposed a post-treatment on PRNGs making them behave
129 as a chaotic dynamical system. Such a post-treatment leads to a new category of
130 PRNGs. We have shown that proofs of Devaney's chaos can be established for this
131 family, and that the sequence obtained after this post-treatment can pass the
132 NIST~\cite{Nist10}, DieHARD~\cite{Marsaglia1996}, and TestU01~\cite{LEcuyerS07} batteries of tests, even if the inputted generators
The purpose of this paper is to greatly improve the speed of the formerly
proposed generator, without any loss of chaos or statistical properties.
136 In particular, a version of this PRNG on graphics processing units (GPU)
Although GPUs were initially designed to accelerate
the manipulation of images, they are nowadays commonly used in many scientific
applications. Therefore, it is important to be able to generate pseudorandom
numbers inside a GPU when a scientific application runs on it. This remark
motivates our proposal of a chaotic and statistically perfect PRNG for GPU.
allows us to generate almost 20 billion pseudorandom numbers per second.
Furthermore, we show that the proposed post-treatment preserves the
cryptographic security of the inputted PRNG, when the latter has such a
148 Last, but not least, we propose a rewriting of the Blum-Goldwasser asymmetric
149 key encryption protocol by using the proposed method.
151 The remainder of this paper is organized as follows. In Section~\ref{section:related
152 works} we review some GPU implementations of PRNGs. Section~\ref{section:BASIC
RECALLS} gives some basic recalls on Devaney's well-known formulation of chaos,
154 and on an iteration process called ``chaotic
155 iterations'' on which the post-treatment is based.
156 The proposed PRNG and its proof of chaos are given in Section~\ref{sec:pseudorandom}.
157 Section~\ref{sec:efficient PRNG} presents an efficient
158 implementation of this chaotic PRNG on a CPU, whereas Section~\ref{sec:efficient PRNG
159 gpu} describes and evaluates theoretically the GPU implementation.
These generators are experimentally evaluated in
Section~\ref{sec:experiments}.
162 We show in Section~\ref{sec:security analysis} that, if the inputted
generator is cryptographically secure, then so is the
generator provided by the post-treatment.
165 Such a proof leads to the proposition of a cryptographically secure and
166 chaotic generator on GPU based on the famous Blum Blum Shub
167 in Section~\ref{sec:CSGPU}, and to an improvement of the
168 Blum-Goldwasser protocol in Sect.~\ref{Blum-Goldwasser}.
This research work ends with a conclusion section, in which the contribution is
170 summarized and intended future work is presented.
175 \section{Related works on GPU based PRNGs}
176 \label{section:related works}
178 Numerous research works on defining GPU based PRNGs have already been proposed in the
literature, so that an exhaustive survey is impossible.
This is why the authors of this document only reference the most significant attempts
in this domain, from their subjective point of view.
182 The quantity of pseudorandom numbers generated per second is mentioned here
183 only when the information is given in the related work.
184 A million numbers per second will be simply written as
185 1MSample/s whereas a billion numbers per second is 1GSample/s.
In \cite{Pang:2008:cec} a PRNG based on cellular automata is defined,
requiring neither high-precision integer arithmetic nor any bitwise
operations. The authors can generate about
3.2MSamples/s on a GeForce 7800 GTX GPU, which is quite an old card now.
However, this document mentions neither statistical tests nor any proof of
chaos or cryptographic security.
194 In \cite{ZRKB10}, the authors propose different versions of efficient GPU PRNGs
195 based on Lagged Fibonacci or Hybrid Taus. They have used these
196 PRNGs for Langevin simulations of biomolecules fully implemented on
GPU. Performances of the GPU versions are far better than those obtained with a
CPU, and these PRNGs succeed in passing the {\it BigCrush} battery of TestU01.
However, the evaluations of the proposed PRNGs are only statistical ones.
202 Authors of~\cite{conf/fpga/ThomasHL09} have studied the implementation of some
203 PRNGs on different computing architectures: CPU, field-programmable gate array
204 (FPGA), massively parallel processors, and GPU. This study is of interest, because
the performances of the same PRNGs on different architectures are compared.
FPGA appears as the fastest and the most
efficient architecture, providing the highest rate of generated pseudorandom numbers
However, we notice that the authors can ``only'' generate between 11 and 16GSamples/s
210 with a GTX 280 GPU, which should be compared with
211 the results presented in this document.
We can also remark that the PRNGs proposed in~\cite{conf/fpga/ThomasHL09} are only
able to pass the {\it Crush} battery, which is far less stringent than the {\it BigCrush} one.
Lastly, the CUDA toolkit provides a library for the generation of pseudorandom numbers called
Curand~\cite{curand11}. Several PRNGs are implemented, among
218 Xorwow~\cite{Marsaglia2003} and some variants of Sobol. The tests reported show that
219 their fastest version provides 15GSamples/s on the new Fermi C2050 card.
However, their PRNGs cannot pass the whole TestU01 battery (only one test fails).
We can finally remark that, to the best of our knowledge, no GPU implementation has been proven to be chaotic, and the cryptographic security property has surprisingly never been considered.
225 \section{Basic Recalls}
226 \label{section:BASIC RECALLS}
228 This section is devoted to basic definitions and terminologies in the fields of
229 topological chaos and chaotic iterations. We assume the reader is familiar
230 with basic notions on topology (see for instance~\cite{Devaney}).
233 \subsection{Devaney's Chaotic Dynamical Systems}
235 In the sequel $S^{n}$ denotes the $n^{th}$ term of a sequence $S$ and $V_{i}$
236 denotes the $i^{th}$ component of a vector $V$. $f^{k}=f\circ ...\circ f$
237 is for the $k^{th}$ composition of a function $f$. Finally, the following
238 notation is used: $\llbracket1;N\rrbracket=\{1,2,\hdots,N\}$.
241 Consider a topological space $(\mathcal{X},\tau)$ and a continuous function $f :
242 \mathcal{X} \rightarrow \mathcal{X}$.
245 The function $f$ is said to be \emph{topologically transitive} if, for any pair of open sets
246 $U,V \subset \mathcal{X}$, there exists $k>0$ such that $f^k(U) \cap V \neq
251 An element $x$ is a \emph{periodic point} for $f$ of period $n\in \mathds{N}^*$
252 if $f^{n}(x)=x$.% The set of periodic points of $f$ is denoted $Per(f).$
256 $f$ is said to be \emph{regular} on $(\mathcal{X}, \tau)$ if the set of periodic
257 points for $f$ is dense in $\mathcal{X}$: for any point $x$ in $\mathcal{X}$,
258 any neighborhood of $x$ contains at least one periodic point (without
necessarily having the same period).
263 \begin{definition}[Devaney's formulation of chaos~\cite{Devaney}]
264 The function $f$ is said to be \emph{chaotic} on $(\mathcal{X},\tau)$ if $f$ is regular and
265 topologically transitive.
268 The chaos property is strongly linked to the notion of ``sensitivity'', defined
269 on a metric space $(\mathcal{X},d)$ by:
272 \label{sensitivity} The function $f$ has \emph{sensitive dependence on initial conditions}
273 if there exists $\delta >0$ such that, for any $x\in \mathcal{X}$ and any
274 neighborhood $V$ of $x$, there exist $y\in V$ and $n > 0$ such that
275 $d\left(f^{n}(x), f^{n}(y)\right) >\delta $.
277 The constant $\delta$ is called the \emph{constant of sensitivity} of $f$.
280 Indeed, Banks \emph{et al.} have proven in~\cite{Banks92} that when $f$ is
281 chaotic and $(\mathcal{X}, d)$ is a metric space, then $f$ has the property of
282 sensitive dependence on initial conditions (this property was formerly an
283 element of the definition of chaos). To sum up, quoting Devaney
284 in~\cite{Devaney}, a chaotic dynamical system ``is unpredictable because of the
285 sensitive dependence on initial conditions. It cannot be broken down or
286 simplified into two subsystems which do not interact because of topological
287 transitivity. And in the midst of this random behavior, we nevertheless have an
288 element of regularity''. Fundamentally different behaviors are consequently
289 possible and occur in an unpredictable way.
293 \subsection{Chaotic Iterations}
294 \label{sec:chaotic iterations}
297 Let us consider a \emph{system} with a finite number $\mathsf{N} \in
298 \mathds{N}^*$ of elements (or \emph{cells}), so that each cell has a
299 Boolean \emph{state}. Having $\mathsf{N}$ Boolean values for these
300 cells leads to the definition of a particular \emph{state of the
system}. A sequence whose elements belong to $\llbracket 1;\mathsf{N}
302 \rrbracket $ is called a \emph{strategy}. The set of all strategies is
303 denoted by $\llbracket 1, \mathsf{N} \rrbracket^\mathds{N}.$
306 \label{Def:chaotic iterations}
307 The set $\mathds{B}$ denoting $\{0,1\}$, let
308 $f:\mathds{B}^{\mathsf{N}}\longrightarrow \mathds{B}^{\mathsf{N}}$ be
309 a function and $S\in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}$ be a ``strategy''. The so-called
310 \emph{chaotic iterations} are defined by $x^0\in
311 \mathds{B}^{\mathsf{N}}$ and
313 \forall n\in \mathds{N}^{\ast }, \forall i\in
314 \llbracket1;\mathsf{N}\rrbracket ,x_i^n=\left\{
316 x_i^{n-1} & \text{ if }S^n\neq i \\
317 \left(f(x^{n-1})\right)_{S^n} & \text{ if }S^n=i.
322 In other words, at the $n^{th}$ iteration, only the $S^{n}-$th cell is
323 \textquotedblleft iterated\textquotedblright . Note that in a more
324 general formulation, $S^n$ can be a subset of components and
325 $\left(f(x^{n-1})\right)_{S^{n}}$ can be replaced by
$\left(f(x^{k})\right)_{S^{n}}$, where $k<n$, describing, for example,
transmission delays~\cite{Robert1986,guyeux10}. Finally, let us remark that
328 the term ``chaotic'', in the name of these iterations, has \emph{a
329 priori} no link with the mathematical theory of chaos, presented above.
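To fix ideas, let us unfold a few of these iterations on a small illustrative instance (the values below are chosen for this example only, they do not come from a previous work). Take $\mathsf{N}=3$, $f=f_0$ the vectorial negation, $x^0=(0,1,0)$, and a strategy starting with $S^1=2$, $S^2=3$. Then
$$x^1 = \left(x^0_1, \overline{x^0_2}, x^0_3\right) = (0,0,0), \qquad x^2 = \left(x^1_1, x^1_2, \overline{x^1_3}\right) = (0,0,1),$$
that is, at each iteration only the cell designated by the current term of the strategy is updated, all the other cells being left unchanged.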
332 Let us now recall how to define a suitable metric space where chaotic iterations
333 are continuous. For further explanations, see, e.g., \cite{guyeux10}.
335 Let $\delta $ be the \emph{discrete Boolean metric}, $\delta
336 (x,y)=0\Leftrightarrow x=y.$ Given a function $f$, define the function
337 $F_{f}: \llbracket1;\mathsf{N}\rrbracket\times \mathds{B}^{\mathsf{N}}
338 \longrightarrow \mathds{B}^{\mathsf{N}}$
341 & (k,E) & \longmapsto & \left( E_{j}.\delta (k,j)+ f(E)_{k}.\overline{\delta
342 (k,j)}\right) _{j\in \llbracket1;\mathsf{N}\rrbracket}%
345 \noindent where + and . are the Boolean addition and product operations.
346 Consider the phase space:
348 \mathcal{X} = \llbracket 1 ; \mathsf{N} \rrbracket^\mathds{N} \times
349 \mathds{B}^\mathsf{N},
351 \noindent and the map defined on $\mathcal{X}$:
353 G_f\left(S,E\right) = \left(\sigma(S), F_f(i(S),E)\right), \label{Gf}
355 \noindent where $\sigma$ is the \emph{shift} function defined by $\sigma
356 (S^{n})_{n\in \mathds{N}}\in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}\longrightarrow (S^{n+1})_{n\in
357 \mathds{N}}\in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}$ and $i$ is the \emph{initial function}
358 $i:(S^{n})_{n\in \mathds{N}} \in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}\longrightarrow S^{0}\in \llbracket
359 1;\mathsf{N}\rrbracket$. Then the chaotic iterations proposed in
360 Definition \ref{Def:chaotic iterations} can be described by the following iterations:
364 X^0 \in \mathcal{X} \\
370 With this formulation, a shift function appears as a component of chaotic
371 iterations. The shift function is a famous example of a chaotic
map~\cite{Devaney}, but its presence alone is not sufficient to claim that $G_f$ is
374 To study this claim, a new distance between two points $X = (S,E), Y =
375 (\check{S},\check{E})\in
376 \mathcal{X}$ has been introduced in \cite{guyeux10} as follows:
378 d(X,Y)=d_{e}(E,\check{E})+d_{s}(S,\check{S}),
384 \displaystyle{d_{e}(E,\check{E})} & = & \displaystyle{\sum_{k=1}^{\mathsf{N}%
385 }\delta (E_{k},\check{E}_{k})}, \\
386 \displaystyle{d_{s}(S,\check{S})} & = & \displaystyle{\dfrac{9}{\mathsf{N}}%
387 \sum_{k=1}^{\infty }\dfrac{|S^k-\check{S}^k|}{10^{k}}}.%
393 This new distance has been introduced to satisfy the following requirements.
\item When the number of cells that differ between two systems increases, then
their distance should increase too.
397 \item In addition, if two systems present the same cells and their respective
398 strategies start with the same terms, then the distance between these two points
399 must be small because the evolution of the two systems will be the same for a
while. Indeed, both dynamical systems start with the same initial condition
and use the same update function, and since their strategies coincide for a while,
the updated components are the same as well.
404 The distance presented above follows these recommendations. Indeed, if the floor
405 value $\lfloor d(X,Y)\rfloor $ is equal to $n$, then the systems $E, \check{E}$
406 differ in $n$ cells ($d_e$ is indeed the Hamming distance). In addition, $d(X,Y) - \lfloor d(X,Y) \rfloor $ is a
407 measure of the differences between strategies $S$ and $\check{S}$. More
precisely, this fractional part is less than $10^{-k}$ if and only if the first
409 $k$ terms of the two strategies are equal. Moreover, if the $k^{th}$ digit is
410 nonzero, then the $k^{th}$ terms of the two strategies are different.
411 The impact of this choice for a distance will be investigated at the end of the document.
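As a small illustration of how this distance behaves (the values below are chosen for this example only), take $\mathsf{N}=4$, two states $E=(0,1,1,0)$ and $\check{E}=(0,1,0,0)$, and two strategies $S$ and $\check{S}$ whose first two terms coincide, whose third terms differ by $2$, and which are equal afterwards. Then $d_e(E,\check{E})=1$ and $d_s(S,\check{S})=\frac{9}{4}\cdot\frac{2}{10^{3}}=0.0045$, so that $d(X,Y)=1.0045$: the integral part counts the single differing cell, whereas the fractional part is less than $10^{-2}$ but not less than $10^{-3}$, reflecting the fact that exactly the first two terms of the two strategies are equal.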
413 Finally, it has been established in \cite{guyeux10} that,
416 Let $f$ be a map from $\mathds{B}^\mathsf{N}$ to itself. Then $G_{f}$ is continuous in
417 the metric space $(\mathcal{X},d)$.
The chaotic property of $G_f$ was first established for the vectorial
Boolean negation $f_0(x_1,\hdots, x_\mathsf{N}) = (\overline{x_1},\hdots, \overline{x_\mathsf{N}})$ \cite{guyeux10}. To obtain a characterization, we have then
introduced the notion of asynchronous iteration graph, recalled below.
424 Let $f$ be a map from $\mathds{B}^\mathsf{N}$ to itself. The
425 {\emph{asynchronous iteration graph}} associated with $f$ is the
426 directed graph $\Gamma(f)$ defined by: the set of vertices is
427 $\mathds{B}^\mathsf{N}$; for all $x\in\mathds{B}^\mathsf{N}$ and
428 $i\in \llbracket1;\mathsf{N}\rrbracket$,
429 the graph $\Gamma(f)$ contains an arc from $x$ to $F_f(i,x)$.
430 The relation between $\Gamma(f)$ and $G_f$ is clear: there exists a
431 path from $x$ to $x'$ in $\Gamma(f)$ if and only if there exists a
432 strategy $s$ such that the parallel iteration of $G_f$ from the
433 initial point $(s,x)$ reaches the point $x'$.
434 We have then proven in \cite{bcgr11:ip} that,
438 \label{Th:Caractérisation des IC chaotiques}
439 Let $f:\mathds{B}^\mathsf{N}\to\mathds{B}^\mathsf{N}$. $G_f$ is chaotic (according to Devaney)
440 if and only if $\Gamma(f)$ is strongly connected.
443 Finally, we have established in \cite{bcgr11:ip} that,
445 Let $f: \mathds{B}^{n} \rightarrow \mathds{B}^{n}$, $\Gamma(f)$ its
446 iteration graph, $\check{M}$ its adjacency
448 a $n\times n$ matrix defined by
450 M_{ij} = \frac{1}{n}\check{M}_{ij}$ %\textrm{
452 $M_{ii} = 1 - \frac{1}{n} \sum\limits_{j=1, j\neq i}^n \check{M}_{ij}$ otherwise.
454 If $\Gamma(f)$ is strongly connected, then
455 the output of the PRNG detailed in Algorithm~\ref{CI Algorithm} follows
456 a law that tends to the uniform distribution
if and only if $M$ is a doubly stochastic matrix.
461 These results of chaos and uniform distribution have led us to study the possibility of building a
462 pseudorandom number generator (PRNG) based on the chaotic iterations.
463 As $G_f$, defined on the domain $\llbracket 1 ; \mathsf{N} \rrbracket^{\mathds{N}}
464 \times \mathds{B}^\mathsf{N}$, is built from Boolean networks $f : \mathds{B}^\mathsf{N}
465 \rightarrow \mathds{B}^\mathsf{N}$, we can preserve the theoretical properties on $G_f$
466 during implementations (due to the discrete nature of $f$). Indeed, it is as if
467 $\mathds{B}^\mathsf{N}$ represents the memory of the computer whereas $\llbracket 1 ; \mathsf{N}
\rrbracket^{\mathds{N}}$ is its input stream (for instance, the seeds in a PRNG, or a physical noise in a TRNG).
469 Let us finally remark that the vectorial negation satisfies the hypotheses of both theorems above.
471 \section{Application to Pseudorandomness}
472 \label{sec:pseudorandom}
474 \subsection{A First Pseudorandom Number Generator}
476 We have proposed in~\cite{bgw09:ip} a new family of generators that receives
477 two PRNGs as inputs. These two generators are mixed with chaotic iterations,
thus leading to a new PRNG that
should improve the statistical properties of each
481 generator taken alone.
Furthermore, the generator obtained in this way possesses various chaos properties that none of the generators used as input
487 \begin{algorithm}[h!]
489 \KwIn{a function $f$, an iteration number $b$, an initial configuration $x^0$
491 \KwOut{a configuration $x$ ($n$ bits)}
493 $k\leftarrow b + PRNG_1(b)$\;
496 $s\leftarrow{PRNG_2(n)}$\;
497 $x\leftarrow{F_f(s,x)}$\;
501 \caption{An arbitrary round of $Old~ CI~ PRNG_f(PRNG_1,PRNG_2)$}
508 This generator is synthesized in Algorithm~\ref{CI Algorithm}.
509 It takes as input: a Boolean function $f$ satisfying Theorem~\ref{Th:Caractérisation des IC chaotiques};
510 an integer $b$, ensuring that the number of executed iterations
511 between two outputs is at least $b$
512 and at most $2b+1$; and an initial configuration $x^0$.
513 It returns the new generated configuration $x$. Internally, it embeds two
514 inputted generators $PRNG_i(k), i=1,2$,
515 which must return integers
516 uniformly distributed
517 into $\llbracket 1 ; k \rrbracket$.
For instance, these PRNGs can be the \textit{XORshift} generators~\cite{Marsaglia2003},
a family of very fast PRNGs designed by George Marsaglia
that repeatedly apply the exclusive or (XOR, $\oplus$) of a number
with a bit-shifted version of itself. Such a PRNG, which has a period of
$2^{32}-1\approx 4.29\times10^9$, is summed up in Algorithm~\ref{XORshift}.
523 This XORshift, or any other reasonable PRNG, is used
524 in our own generator to compute both the number of iterations between two
525 outputs (provided by $PRNG_1$) and the strategy elements ($PRNG_2$).
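For readers who prefer C over pseudocode, this XORshift can be transcribed as the following minimal sketch, in which the 32-bit internal state is kept in a static variable and the seed is an arbitrary nonzero constant (the function name is ours):

\begin{lstlisting}[language=C]
#include <stdint.h>

/* One round of a 32-bit XORshift (shift amounts 13, 17, and 5). */
static uint32_t xorshift32(void)
{
  static uint32_t z = 123456789u;  /* internal state, must be nonzero */
  z ^= z << 13;
  z ^= z >> 17;
  z ^= z << 5;
  return z;
}
\end{lstlisting}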
527 %This former generator has successively passed various batteries of statistical tests, as the NIST~\cite{bcgr11:ip}, DieHARD~\cite{Marsaglia1996}, and TestU01~\cite{LEcuyerS07} ones.
530 \begin{algorithm}[h!]
532 \KwIn{the internal configuration $z$ (a 32-bit word)}
533 \KwOut{$y$ (a 32-bit word)}
534 $z\leftarrow{z\oplus{(z\ll13)}}$\;
535 $z\leftarrow{z\oplus{(z\gg17)}}$\;
536 $z\leftarrow{z\oplus{(z\ll5)}}$\;
540 \caption{An arbitrary round of \textit{XORshift} algorithm}
545 \subsection{A ``New CI PRNG''}
547 In order to make the Old CI PRNG usable in practice, we have proposed
548 an adapted version of the chaotic iteration based generator in~\cite{bg10:ip}.
In this ``New CI PRNG'', we prevent a given bit from being
changed twice between two outputs.
551 This new generator is designed by the following process.
553 First of all, some chaotic iterations have to be done to generate a sequence
554 $\left(x^n\right)_{n\in\mathds{N}} \in \left(\mathds{B}^{32}\right)^\mathds{N}$
555 of Boolean vectors, which are the successive states of the iterated system.
Some of these vectors will be randomly extracted, and our pseudorandom bit
flow will be made of their components. Such chaotic iterations are
558 realized as follows. Initial state $x^0 \in \mathds{B}^{32}$ is a Boolean
559 vector taken as a seed and chaotic strategy $\left(S^n\right)_{n\in\mathds{N}}\in
560 \llbracket 1, 32 \rrbracket^\mathds{N}$ is
561 an \emph{irregular decimation} of $PRNG_2$ sequence, as described in
562 Algorithm~\ref{Chaotic iteration1}.
564 Then, at each iteration, only the $S^n$-th component of state $x^n$ is
565 updated, as follows: $x_i^n = x_i^{n-1}$ if $i \neq S^n$, else $x_i^n = \overline{x_i^{n-1}}$.
566 Such a procedure is equivalent to achieve chaotic iterations with
567 the Boolean vectorial negation $f_0$ and some well-chosen strategies.
568 Finally, some $x^n$ are selected
569 by a sequence $m^n$ as the pseudo-random bit sequence of our generator.
570 $(m^n)_{n \in \mathds{N}} \in \mathcal{M}^\mathds{N}$ is computed from $PRNG_1$, where $\mathcal{M}\subset \mathds{N}^*$ is a finite nonempty set of integers.
572 The basic design procedure of the New CI generator is summarized in Algorithm~\ref{Chaotic iteration1}.
The internal state is $x$ and the output state is $r$. The values $a$ and $b$ are those computed by the two input
PRNGs. Lastly, the value $g(a)$ is an integer defined as in Eq.~\ref{Formula}.
575 This function is required to make the outputs uniform in $\llbracket 0, 2^\mathsf{N}-1 \rrbracket$
576 (the reader is referred to~\cite{bg10:ip} for more information).
583 0 \text{ if }0 \leqslant{y^n}<{C^0_{32}},\\
584 1 \text{ if }{C^0_{32}} \leqslant{y^n}<\sum_{i=0}^1{C^i_{32}},\\
585 2 \text{ if }\sum_{i=0}^1{C^i_{32}} \leqslant{y^n}<\sum_{i=0}^2{C^i_{32}},\\
586 \vdots~~~~~ ~~\vdots~~~ ~~~~\\
587 N \text{ if }\sum_{i=0}^{N-1}{C^i_{32}}\leqslant{y^n}<1.\\
593 \textbf{Input:} the internal state $x$ (32 bits)\\
594 \textbf{Output:} a state $r$ of 32 bits
595 \begin{algorithmic}[1]
598 \STATE$d_i\leftarrow{0}$\;
601 \STATE$a\leftarrow{PRNG_1()}$\;
602 \STATE$m\leftarrow{g(a)}$\;
603 \STATE$k\leftarrow{m}$\;
604 \WHILE{$i=0,\dots,k$}
606 \STATE$b\leftarrow{PRNG_2()~mod~\mathsf{N}}$\;
607 \STATE$S\leftarrow{b}$\;
610 \STATE $x_S\leftarrow{ \overline{x_S}}$\;
611 \STATE $d_S\leftarrow{1}$\;
616 \STATE $k\leftarrow{ k+1}$\;
619 \STATE $r\leftarrow{x}$\;
622 \caption{An arbitrary round of the new CI generator}
623 \label{Chaotic iteration1}
628 \subsection{Improving the Speed of the Former Generator}
Instead of updating only one cell at each iteration, we now propose to choose a
subset of components and to update them together, for speed improvements. Such a proposition leads
to a kind of merger of the two sequences used in Algorithms
\ref{CI Algorithm} and \ref{Chaotic iteration1}. When the updating function is the vectorial negation,
this algorithm can be rewritten as follows:
639 x^0 \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket, S \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket^\mathds{N} \\
640 \forall n \in \mathds{N}^*, x^n = x^{n-1} \oplus S^n,
643 \label{equation Oplus0}
645 where $\oplus$ is for the bitwise exclusive or between two integers.
This rewriting can be understood as follows. The $n$-th term $S^n$ of the
sequence $S$, which is an integer of $\mathsf{N}$ binary digits, encodes
the list of cells to update in the state $x^n$ of the system (represented
as an integer having $\mathsf{N}$ bits too). More precisely, the $k$-th
component of this state (a binary digit) changes if and only if the $k$-th
digit in the binary decomposition of $S^n$ is 1.
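For instance (an illustrative example with $\mathsf{N}=8$), if $x^{n-1}$ is written $10110100$ in binary and $S^n = 00100110$, then $x^n = x^{n-1} \oplus S^n = 10010010$: exactly the three cells corresponding to the 1s of $S^n$ have been switched, while all the other cells are left unchanged.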
The single basic component presented in Eq.~\ref{equation Oplus0} is commonly
used as an elementary building block in various PRNGs. It corresponds
655 to the following discrete dynamical system in chaotic iterations:
658 \forall n\in \mathds{N}^{\ast }, \forall i\in
659 \llbracket1;\mathsf{N}\rrbracket ,x_i^n=\left\{
x_i^{n-1} & \text{ if } i \notin \mathcal{S}^n \\
\left(f(x^{n-1})\right)_{i} & \text{ if }i \in \mathcal{S}^n.
666 where $f$ is the vectorial negation and $\forall n \in \mathds{N}$,
667 $\mathcal{S}^n \subset \llbracket 1, \mathsf{N} \rrbracket$ is such that
668 $k \in \mathcal{S}^n$ if and only if the $k-$th digit in the binary
669 decomposition of $S^n$ is 1. Such chaotic iterations are more general
670 than the ones presented in Definition \ref{Def:chaotic iterations} because, instead of updating only one term at each iteration,
671 we select a subset of components to change.
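As an illustration, one iteration of this kind can be sketched in C as follows, assuming $\mathsf{N} \leqslant 32$ so that the state fits into an unsigned 32-bit integer, the subset $\mathcal{S}^n$ being encoded by the bits of a mask (the function pointer type and the names used below are introduced for this example only):

\begin{lstlisting}[language=C]
#include <stdint.h>

typedef uint32_t (*boolean_map)(uint32_t);  /* f : B^N -> B^N, N <= 32 */

/* One general chaotic iteration: the cells selected by 'mask' receive
   the corresponding components of f(x); all the other cells are kept. */
static uint32_t general_ci_step(uint32_t x, uint32_t mask, boolean_map f)
{
  return (x & ~mask) | (f(x) & mask);
}
\end{lstlisting}

When $f$ is the vectorial negation, $f(x)$ is the bitwise complement of $x$ and the returned value reduces to the bitwise exclusive or of $x$ and the mask, which is exactly the rewriting of Eq.~\ref{equation Oplus0}.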
674 Obviously, replacing the previous CI PRNG Algorithms by
675 Equation~\ref{equation Oplus0}, which is possible when the iteration function is
676 the vectorial negation, leads to a speed improvement. However, proofs
677 of chaos obtained in~\cite{bg10:ij} have been established
678 only for chaotic iterations of the form presented in Definition
679 \ref{Def:chaotic iterations}. The question is now to determine whether the
use of these more general chaotic iterations to generate pseudorandom numbers
faster does not weaken their topological chaos properties.
683 \subsection{Proofs of Chaos of the General Formulation of the Chaotic Iterations}
685 Let us consider the discrete dynamical systems in chaotic iterations having
686 the general form: $\forall n\in \mathds{N}^{\ast }$, $ \forall i\in
687 \llbracket1;\mathsf{N}\rrbracket $,
692 x_i^{n-1} & \text{ if } i \notin \mathcal{S}^n \\
\left(f(x^{n-1})\right)_{i} & \text{ if }i \in \mathcal{S}^n.
In other words, at the $n^{th}$ iteration, only the cells whose indices are
contained in the set $\mathcal{S}^{n}$ are iterated.
Let us now rewrite these general chaotic iterations as a usual discrete dynamical
system of the form $X^{n+1}=f(X^n)$ on an ad hoc metric space. Such a formulation
is required in order to study the topological behavior of the system.
705 Let us introduce the following function:
708 \chi: & \llbracket 1; \mathsf{N} \rrbracket \times \mathcal{P}\left(\llbracket 1; \mathsf{N} \rrbracket\right) & \longrightarrow & \mathds{B}\\
709 & (i,X) & \longmapsto & \left\{ \begin{array}{ll} 0 & \textrm{if }i \notin X, \\ 1 & \textrm{if }i \in X, \end{array}\right.
712 where $\mathcal{P}\left(X\right)$ is for the powerset of the set $X$, that is, $Y \in \mathcal{P}\left(X\right) \Longleftrightarrow Y \subset X$.
714 Given a function $f:\mathds{B}^\mathsf{N} \longrightarrow \mathds{B}^\mathsf{N} $, define the function:
715 $F_{f}: \mathcal{P}\left(\llbracket1;\mathsf{N}\rrbracket \right) \times \mathds{B}^{\mathsf{N}}
716 \longrightarrow \mathds{B}^{\mathsf{N}}$
(P,E) & \longmapsto & \left( E_{j}.\overline{\chi (j,P)}+f(E)_{j}.\chi(j,P)\right) _{j\in \llbracket1;\mathsf{N}\rrbracket}%
722 where + and . are the Boolean addition and product operations, and $\overline{x}$
723 is the negation of the Boolean $x$.
724 Consider the phase space:
726 \mathcal{X} = \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N} \times
727 \mathds{B}^\mathsf{N},
729 \noindent and the map defined on $\mathcal{X}$:
G_f\left(S,E\right) = \left(\sigma(S), F_f(i(S),E)\right),
733 \noindent where $\sigma$ is the \emph{shift} function defined by $\sigma
734 (S^{n})_{n\in \mathds{N}}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}\longrightarrow (S^{n+1})_{n\in
735 \mathds{N}}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}$ and $i$ is the \emph{initial function}
736 $i:(S^{n})_{n\in \mathds{N}} \in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}\longrightarrow S^{0}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)$.
737 Then the general chaotic iterations defined in Equation \ref{general CIs} can
738 be described by the following discrete dynamical system:
742 X^0 \in \mathcal{X} \\
748 Once more, a shift function appears as a component of these general chaotic
To study Devaney's chaos property, a distance between two points
752 $X = (S,E), Y = (\check{S},\check{E})$ of $\mathcal{X}$ must be defined.
755 d(X,Y)=d_{e}(E,\check{E})+d_{s}(S,\check{S}),
758 \noindent where $ \displaystyle{d_{e}(E,\check{E})} = \displaystyle{\sum_{k=1}^{\mathsf{N}%
759 }\delta (E_{k},\check{E}_{k})}$ is once more the Hamming distance, and
760 $ \displaystyle{d_{s}(S,\check{S})} = \displaystyle{\dfrac{9}{\mathsf{N}}%
\sum_{k=1}^{\infty }\dfrac{|S^k\Delta \check{S}^k|}{10^{k}}}$,
where $|X|$ is the cardinality of a set $X$ and $A\Delta B$ denotes the symmetric difference, defined for sets $A$, $B$ as
$A\,\Delta\,B = (A \setminus B) \cup (B \setminus A)$.
778 The function $d$ defined in Eq.~\ref{nouveau d} is a metric on $\mathcal{X}$.
$d_e$ is the Hamming distance. We will prove that $d_s$ is a distance
too; thus $d$, being the sum of two distances, will also be a distance.
785 \item Obviously, $d_s(S,\check{S})\geqslant 0$, and if $S=\check{S}$, then
786 $d_s(S,\check{S})=0$. Conversely, if $d_s(S,\check{S})=0$, then
$\forall k \in \mathds{N}, |S^k\Delta \check{S}^k|=0$, and so $\forall k, S^k=\check{S}^k$.
788 \item $d_s$ is symmetric
789 ($d_s(S,\check{S})=d_s(\check{S},S)$) due to the commutative property
790 of the symmetric difference.
791 \item Finally, $|S \Delta S''| = |(S \Delta \varnothing) \Delta S''|= |S \Delta (S'\Delta S') \Delta S''|= |(S \Delta S') \Delta (S' \Delta S'')|\leqslant |S \Delta S'| + |S' \Delta S''|$,
792 and so for all subsets $S,S',$ and $S''$ of $\llbracket 1, \mathsf{N} \rrbracket$,
we have $d_s(S,S'') \leqslant d_s(S,S')+d_s(S',S'')$, and the triangle
794 inequality is obtained.
799 Before being able to study the topological behavior of the general
800 chaotic iterations, we must first establish that:
803 For all $f:\mathds{B}^\mathsf{N} \longrightarrow \mathds{B}^\mathsf{N} $, the function $G_f$ is continuous on
804 $\left( \mathcal{X},d\right)$.
809 We use the sequential continuity.
810 Let $(S^n,E^n)_{n\in \mathds{N}}$ be a sequence of the phase space $%
811 \mathcal{X}$, which converges to $(S,E)$. We will prove that $\left(
812 G_{f}(S^n,E^n)\right) _{n\in \mathds{N}}$ converges to $\left(
813 G_{f}(S,E)\right) $. Let us remark that for all $n$, $S^n$ is a strategy,
814 thus, we consider a sequence of strategies (\emph{i.e.}, a sequence of
816 As $d((S^n,E^n);(S,E))$ converges to 0, each distance $d_{e}(E^n,E)$ and $d_{s}(S^n,S)$ converges
817 to 0. But $d_{e}(E^n,E)$ is an integer, so $\exists n_{0}\in \mathds{N},$ $%
818 d_{e}(E^n,E)=0$ for any $n\geqslant n_{0}$.\newline
819 In other words, there exists a threshold $n_{0}\in \mathds{N}$ after which no
820 cell will change its state:
821 $\exists n_{0}\in \mathds{N},n\geqslant n_{0}\Rightarrow E^n = E.$
823 In addition, $d_{s}(S^n,S)\longrightarrow 0,$ so $\exists n_{1}\in %
824 \mathds{N},d_{s}(S^n,S)<10^{-1}$ for all indexes greater than or equal to $%
825 n_{1}$. This means that for $n\geqslant n_{1}$, all the $S^n$ have the same
826 first term, which is $S^0$: $\forall n\geqslant n_{1},S_0^n=S_0.$
828 Thus, after the $max(n_{0},n_{1})^{th}$ term, states of $E^n$ and $E$ are
829 identical and strategies $S^n$ and $S$ start with the same first term.\newline
830 Consequently, states of $G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ are equal,
831 so, after the $max(n_0, n_1)^{th}$ term, the distance $d$ between these two points is strictly less than 1.\newline
832 \noindent We now prove that the distance between $\left(
833 G_{f}(S^n,E^n)\right) $ and $\left( G_{f}(S,E)\right) $ is convergent to
834 0. Let $\varepsilon >0$. \medskip
836 \item If $\varepsilon \geqslant 1$, we see that the distance
837 between $\left( G_{f}(S^n,E^n)\right) $ and $\left( G_{f}(S,E)\right) $ is
838 strictly less than 1 after the $max(n_{0},n_{1})^{th}$ term (same state).
840 \item If $\varepsilon <1$, then $\exists k\in \mathds{N},10^{-k}\geqslant
841 \varepsilon > 10^{-(k+1)}$. But $d_{s}(S^n,S)$ converges to 0, so
843 \exists n_{2}\in \mathds{N},\forall n\geqslant
844 n_{2},d_{s}(S^n,S)<10^{-(k+2)},
846 thus after $n_{2}$, the $k+2$ first terms of $S^n$ and $S$ are equal.
\noindent As a consequence, the $k+1$ first entries of the strategies of
$G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ are the same ($G_{f}$ is a shift of strategies) and, due to the definition of $d_{s}$, the fractional part of
the distance between $G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ is strictly less than
$10^{-(k+1)}\leqslant \varepsilon $.
856 \forall \varepsilon >0,$ $\exists N_{0}=max(n_{0},n_{1},n_{2})\in \mathds{N}
857 ,$ $\forall n\geqslant N_{0},$
858 $ d\left( G_{f}(S^n,E^n);G_{f}(S,E)\right)
859 \leqslant \varepsilon .
861 $G_{f}$ is consequently continuous.
865 It is now possible to study the topological behavior of the general chaotic
866 iterations. We will prove that,
869 \label{t:chaos des general}
The general chaotic iterations defined in Equation~\ref{general CIs} satisfy
Devaney's property of chaos.
874 Let us firstly prove the following lemma.
876 \begin{lemma}[Strong transitivity]
For all pairs of points $X,Y \in \mathcal{X}$ and any neighborhood $V$ of $X$, we can
find $n \in \mathds{N}^*$ and $X' \in V$ such that $G_f^n(X')=Y$.
Let $X=(S,E)$, $\varepsilon>0$, and $k_0 = \lfloor -\log_{10}(\varepsilon)+1 \rfloor$.
Any point $X'=(S',E')$ such that $E'=E$ and $\forall k \leqslant k_0, S'^k=S^k$,
is in the open ball $\mathcal{B}\left(X,\varepsilon\right)$. Let us define
$\check{X} = \left(\check{S},\check{E}\right)$, where $\check{X}= G_f^{k_0}(X)$.
887 We denote by $s\subset \llbracket 1; \mathsf{N} \rrbracket$ the set of coordinates
888 that are different between $\check{E}$ and the state of $Y$. Thus each point $X'$ of
889 the form $(S',E')$ where $E'=E$ and $S'$ starts with
890 $(S^0, S^1, \hdots, S^{k_0},s,\hdots)$, verifies the following properties:
892 \item $X'$ is in $\mathcal{B}\left(X,\varepsilon\right)$,
893 \item the state of $G_f^{k_0+1}(X')$ is the state of $Y$.
895 Finally the point $\left(\left(S^0, S^1, \hdots, S^{k_0},s,s^0, s^1, \hdots\right); E\right)$,
896 where $(s^0,s^1, \hdots)$ is the strategy of $Y$, satisfies the properties
897 claimed in the lemma.
We can now prove Theorem~\ref{t:chaos des general}.
\begin{proof}[Proof of Theorem~\ref{t:chaos des general}]
903 Firstly, strong transitivity implies transitivity.
905 Let $(S,E) \in\mathcal{X}$ and $\varepsilon >0$. To
906 prove that $G_f$ is regular, it is sufficient to prove that
907 there exists a strategy $\tilde S$ such that the distance between
908 $(\tilde S,E)$ and $(S,E)$ is less than $\varepsilon$, and such that
909 $(\tilde S,E)$ is a periodic point.
911 Let $t_1=\lfloor-\log_{10}(\varepsilon)\rfloor$, and let $E'$ be the
912 configuration that we obtain from $(S,E)$ after $t_1$ iterations of
913 $G_f$. As $G_f$ is strongly transitive, there exists a strategy $S'$
914 and $t_2\in\mathds{N}$ such
915 that $E$ is reached from $(S',E')$ after $t_2$ iterations of $G_f$.
917 Consider the strategy $\tilde S$ that alternates the first $t_1$ terms
918 of $S$ and the first $t_2$ terms of $S'$:
S=(S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots).$$ It
922 is clear that $(\tilde S,E)$ is obtained from $(\tilde S,E)$ after
923 $t_1+t_2$ iterations of $G_f$. So $(\tilde S,E)$ is a periodic
924 point. Since $\tilde S_t=S_t$ for $t<t_1$, by the choice of $t_1$, we
have $d((S,E),(\tilde S,E))<\varepsilon$.
930 \section{Statistical Improvements Using Chaotic Iterations}
932 \label{The generation of pseudo-random sequence}
Let us now explain why we have reasonable grounds to believe that chaos
can improve statistical properties.
937 We will show in this section that, when mixing defective PRNGs with
938 chaotic iterations, the result presents better statistical properties
939 (this section summarizes the work of~\cite{bfg12a:ip}).
941 \subsection{Details of some Existing Generators}
943 The list of defective PRNGs we will use
944 as inputs for the statistical tests to come is introduced here.
Firstly, simple linear congruential generators (LCGs) are defined by the following recurrence:
948 x^n = (ax^{n-1} + c)~mod~m
where $a$, $c$, and $x^0$ must be, among other things, non-negative and less than
$m$~\cite{LEcuyerS07}. In what follows, 2LCGs and 3LCGs refer to combinations of two (resp. three)
such LCGs. For further details, see~\cite{bfg12a:ip,combined_lcg}.
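For illustration purposes, one step of such an LCG can be written in a few lines of C; the parameters $a$, $c$, $m$ and the seed below are arbitrary illustrative choices, not the ones used in our experiments:

\begin{lstlisting}[language=C]
#include <stdint.h>

/* One step of an LCG:  x^n = (a*x^{n-1} + c) mod m. */
static uint32_t lcg(void)
{
  static uint32_t x = 12345u;    /* seed x^0 */
  const uint64_t a = 1103515245u, c = 12345u, m = 2147483648u; /* m = 2^31 */
  x = (uint32_t)((a * x + c) % m);
  return x;
}
\end{lstlisting}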
Secondly, multiple recursive generators (MRGs) are based on a linear recurrence of order
$k$, modulo $m$~\cite{LEcuyerS07}:
958 x^n = (a^1x^{n-1}+~...~+a^kx^{n-k})~mod~m
A combination of two MRGs (referred to as 2MRG) is also used in this paper.
Thirdly, generators based on linear recurrences with carry will also be considered in the experiments.
These include the add-with-carry (AWC) generator, based on the recurrence:
968 x^n = (x^{n-r} + x^{n-s} + c^{n-1})~mod~m, \\
969 c^n= (x^{n-r} + x^{n-s} + c^{n-1}) / m, \end{array}\end{equation}
the subtract-with-borrow (SWB) generator, whose recurrence is:
974 x^n = (x^{n-r} - x^{n-s} - c^{n-1})~mod~m, \\
1 ~~~~~\text{if}~ (x^{n-r} - x^{n-s} - c^{n-1})<0\\
0 ~~~~~\text{else},\end{array} \right. \end{array}\end{equation}
979 and the SWC generator designed by R. Couture, which is based on the following recurrence:
983 x^n = (a^1x^{n-1} \oplus ~...~ \oplus a^rx^{n-r} \oplus c^{n-1}) ~ mod ~ 2^w, \\
984 c^n = (a^1x^{n-1} \oplus ~...~ \oplus a^rx^{n-r} \oplus c^{n-1}) ~ / ~ 2^w. \end{array}\end{equation}
986 Then the generalized feedback shift register (GFSR) generator has been implemented, that is:
988 x^n = x^{n-r} \oplus x^{n-k}
Finally, the nonlinear inversive generator (INV)~\cite{LEcuyerS07} has also been considered; it is defined by:
1000 (a^1 + a^2 / z^{n-1})~mod~m & \text{if}~ z^{n-1} \neq 0 \\
1001 a^1 & \text{if}~ z^{n-1} = 0 .\end{array} \right. \end{array}\end{equation}
1007 \subsection{Statistical tests}
1008 \label{Security analysis}
1010 Considering the properties of binary random sequences, various statistical tests can be designed
1011 to evaluate the assertion that the sequence is generated by a perfectly random source. We have
performed some statistical tests for the CIPRNGs proposed here. These tests include the NIST
suite~\cite{ANDREW2008} and the DieHARD battery of tests~\cite{DieHARD}. For completeness and
for reference, we give in the following subsections a brief description of each of the
aforementioned batteries.
1019 \subsubsection{NIST statistical tests suite}
Among the numerous standard tests for pseudorandomness, a convincing way to show the randomness of the produced sequences is to confront them with the NIST (National Institute of Standards and Technology) statistical tests, an up-to-date test suite proposed by the Information Technology Laboratory (ITL). A new version of this statistical test suite was released on August 11, 2010.
The NIST test suite SP 800-22 is a statistical package consisting of 15 tests. They were developed to test the randomness of binary sequences produced by hardware or software based cryptographic pseudorandom number generators. These tests focus on a variety of different types of non-randomness that could exist in a sequence.
For each statistical test, a set of P-values (corresponding to the set of sequences) is produced.
The interpretation of empirical results can be conducted in various ways.
In this paper, the examination of the distribution of the P-values to check for uniformity (denoted by P-value$_{T}$) is used.
If P-value$_{T} \geqslant 0.0001$, then the sequences can be considered to be uniformly distributed.
In our experiments, 100 sequences (s = 100), each 1,000,000 bits long, are generated and tested. If the P-value$_{T}$ of any test is smaller than 0.0001, the sequences are considered to be not good enough and the generating algorithm is not suitable for usage.
1037 \subsubsection{DieHARD battery of tests}
The DieHARD battery of tests has been the most sophisticated standard for over a decade. Because of the stringent requirements in the DieHARD test suite, a generator passing this battery of
1039 tests can be considered good as a rule of thumb.
1041 The DieHARD battery of tests consists of 18 different independent statistical tests. This collection
1042 of tests is based on assessing the randomness of bits comprising 32-bit integers obtained from
1043 a random number generator. Each test requires $2^{23}$ 32-bit integers in order to run the full set
of tests. Most of the tests in DieHARD return a P-value, which should be uniform on $[0,1)$ if the input file
contains truly independent random bits. These P-values are obtained by
$P=F(X)$, where $F$ is the assumed distribution of the sample random variable $X$ (often normal).
But that assumed $F$ is just an asymptotic approximation, for which the fit will be worst
in the tails. Thus occasional P-values near 0 or 1, such as 0.0012 or 0.9983, can occur.
An individual test is considered to be failed if the P-value approaches 1 closely, for example $P>0.9999$.
1052 \subsection{Results and discussion}
1053 \label{Results and discussion}
1055 \renewcommand{\arraystretch}{1.3}
1056 \caption{NIST and DieHARD tests suite passing rates for PRNGs without CI}
1057 \label{NIST and DieHARD tests suite passing rate the for PRNGs without CI}
1059 \begin{tabular}{|l||c|c|c|c|c|c|c|c|c|c|}
1061 Types of PRNGs & \multicolumn{2}{c|}{Linear PRNGs} & \multicolumn{4}{c|}{Lagged PRNGs} & \multicolumn{1}{c|}{ICG PRNGs} & \multicolumn{3}{c|}{Mixed PRNGs}\\ \hline
1062 \backslashbox{\textbf{$Tests$}} {\textbf{$PRNG$}} & LCG& MRG& AWC & SWB & SWC & GFSR & INV & LCG2& LCG3& MRG2 \\ \hline
1063 NIST & 11/15 & 14/15 &\textbf{15/15} & \textbf{15/15} & 14/15 & 14/15 & 14/15 & 14/15& 14/15& 14/15 \\ \hline
1064 DieHARD & 16/18 & 16/18 & 15/18 & 16/18 & \textbf{18/18} & 16/18 & 16/18 & 16/18& 16/18& 16/18\\ \hline
Table~\ref{NIST and DieHARD tests suite passing rate the for PRNGs without CI} shows the results on the batteries recalled above, indicating that none of these PRNGs succeeds in passing all the tests of both batteries. In other words, the statistical quality of these PRNGs cannot fulfill the up-to-date standards presented previously. We will show that the CIPRNG can solve this issue.
To illustrate the effects of this CIPRNG in detail, experiments will be divided into three parts:
1072 \item \textbf{Single CIPRNG}: The PRNGs involved in CI computing are of the same category.
1073 \item \textbf{Mixed CIPRNG}: Two different types of PRNGs are mixed during the chaotic iterations process.
1074 \item \textbf{Multiple CIPRNG}: The generator is obtained by repeating the composition of the iteration function as follows: $x^0\in \mathds{B}^{\mathsf{N}}$, and $\forall n\in \mathds{N}^{\ast },\forall i\in \llbracket1;\mathsf{N}\rrbracket,$
1079 x_i^{n-1}~~~~~\text{if}~S^n\neq i \\
1080 \forall j\in \llbracket1;\mathsf{m}\rrbracket,f^m(x^{n-1})_{S^{nm+j}}~\text{if}~S^{nm+j}=i.\end{array} \right. \end{array}
1082 $m$ is called the \emph{functional power}.
1086 We have performed statistical analysis of each of the aforementioned CIPRNGs.
1087 The results are reproduced in Tables~\ref{NIST and DieHARD tests suite passing rate the for PRNGs without CI} and \ref{NIST and DieHARD tests suite passing rate the for single CIPRNGs}.
1088 The scores written in boldface indicate that all the tests have been passed successfully, whereas an asterisk ``*'' means that the considered passing rate has been improved.
1090 \subsubsection{Tests based on the Single CIPRNG}
1093 \renewcommand{\arraystretch}{1.3}
1094 \caption{NIST and DieHARD tests suite passing rates for PRNGs with CI}
1095 \label{NIST and DieHARD tests suite passing rate the for single CIPRNGs}
1097 \begin{tabular}{|l||c|c|c|c|c|c|c|c|c|c|c|c|}
1099 Types of PRNGs & \multicolumn{2}{c|}{Linear PRNGs} & \multicolumn{4}{c|}{Lagged PRNGs} & \multicolumn{1}{c|}{ICG PRNGs} & \multicolumn{3}{c|}{Mixed PRNGs}\\ \hline
1100 \backslashbox{\textbf{$Tests$}} {\textbf{$Single~CIPRNG$}} & LCG & MRG & AWC & SWB & SWC & GFSR & INV& LCG2 & LCG3& MRG2 \\ \hline\hline
1101 Old CIPRNG\\ \hline \hline
1102 NIST & \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} & \textbf{15/15} & \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} *& \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} \\ \hline
1103 DieHARD & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} & \textbf{18/18} * & \textbf{18/18} *& \textbf{18/18} * & \textbf{18/18} *& \textbf{18/18} * \\ \hline
1104 New CIPRNG\\ \hline \hline
1105 NIST & \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} & \textbf{15/15} & \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} *& \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} \\ \hline
1106 DieHARD & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} *& \textbf{18/18} *\\ \hline
1107 Xor CIPRNG\\ \hline\hline
1108 NIST & 14/15*& \textbf{15/15} * & \textbf{15/15} & \textbf{15/15} & 14/15 & \textbf{15/15} * & 14/15& \textbf{15/15} * & \textbf{15/15} *& \textbf{15/15} \\ \hline
1109 DieHARD & 16/18 & 16/18 & 17/18* & \textbf{18/18} * & \textbf{18/18} & \textbf{18/18} * & 16/18 & 16/18 & 16/18& 16/18\\ \hline
1113 The statistical tests results of the PRNGs using the single CIPRNG method are given in Table~\ref{NIST and DieHARD tests suite passing rate the for single CIPRNGs}.
1114 We can observe that, except for the Xor CIPRNG, all of the CIPRNGs have passed the 15 tests of the NIST battery and the 18 tests of the DieHARD one.
Moreover, considering these scores, we can deduce that both the single Old CIPRNG and the single New CIPRNG are steadier than the single Xor CIPRNG approach when applying them to different PRNGs.
1116 However, the Xor CIPRNG is obviously the fastest approach to generate a CI random sequence, and it still improves the statistical properties relative to each generator taken alone, although the test values are not as good as desired.
Therefore, each of these three approaches is interesting, for different reasons, in the production of pseudorandom numbers and,
on the whole, the single CIPRNG method can be considered able to adapt to or improve all kinds of PRNGs.
To obtain a realization of the Xor CIPRNG that can pass all the tests embedded in the NIST battery, Xor CIPRNGs with multiple functional powers are investigated in Section~\ref{Tests based on Multiple CIPRNG}.
1124 \subsubsection{Tests based on the Mixed CIPRNG}
To compare the previous approach with the CIPRNG design that uses a Mixed CIPRNG, we have taken into account the same inputted generators as in the previous section.
These inputted couples $(PRNG_1,PRNG_2)$ of PRNGs are used in the Mixed approach as follows:
1131 x^0 \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket, S \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket^\mathds{N} \\
1132 \forall n \in \mathds{N}^*, x^n = x^{n-1} \oplus PRNG_1\oplus PRNG_2,
1135 \label{equation Oplus}
1138 With this Mixed CIPRNG approach, both the Old CIPRNG and New CIPRNG continue to pass all the NIST and DieHARD suites.
1139 In addition, we can see that the PRNGs using a Xor CIPRNG approach can pass more tests than previously.
The main reason for this success is that the Mixed Xor CIPRNG has a longer period.
1141 Indeed, let $n_{P}$ be the period of a PRNG $P$, then the period deduced from the single Xor CIPRNG approach is obviously equal to:
1146 n_{P}&\text{if~}x^0=x^{n_{P}}\\
1147 2n_{P}&\text{if~}x^0\neq x^{n_{P}}.\\
Let us now denote by $n_{P1}$ and $n_{P2}$ the periods of the $PRNG_1$ and $PRNG_2$ generators, respectively; then the period of the Mixed Xor CIPRNG will be:
1158 LCM(n_{P1},n_{P2})&\text{if~}x^0=x^{LCM(n_{P1},n_{P2})}\\
1159 2LCM(n_{P1},n_{P2})&\text{if~}x^0\neq x^{LCM(n_{P1},n_{P2})}.\\
In Table~\ref{DieHARD fail mixex CIPRNG}, we only show the results for the Mixed CIPRNGs that cannot pass the whole DieHARD suite (the NIST tests are all passed). It demonstrates that the Mixed Xor CIPRNGs involving LCG, MRG, LCG2, LCG3, MRG2, or INV cannot pass the two following tests, namely the ``Matrix Rank 32x32'' and the ``COUNT-THE-1's'' tests contained in the DieHARD battery. Let us recall their definitions:
\item \textbf{Matrix Rank 32x32.} A random 32x32 binary matrix is formed, each row being a 32-bit random vector. Its rank is an integer that ranges from 0 to 32. Ranks less than 29 must be rare, and their occurrences must be pooled with those of rank 29. To achieve the test, the ranks of 40,000 such random matrices are obtained, and a chi-square test is performed on the counts for ranks 32, 31, 30 and for ranks $\leqslant 29$.
\item \textbf{COUNT-THE-1's TEST.} Consider the file under test as a stream of bytes (four per 32-bit integer). Each byte can contain from 0 to 8 1's, with probabilities 1,8,28,56,70,56,28,8,1 over 256. Now let the stream of bytes provide a string of overlapping 5-letter words, each ``letter'' taking values A,B,C,D,E. The letters are determined by the number of 1's in a byte: 0, 1, or 2 yield A, 3 yields B, 4 yields C, 5 yields D, and 6, 7, or 8 yield E. Thus we have a monkey at a typewriter hitting five keys with various probabilities (37,56,70,56,37 over 256). There are $5^5$ possible 5-letter words, and from a string of 256,000 (overlapping) 5-letter words, counts are made on the frequencies for each word. The quadratic form in the weak inverse of the covariance matrix of the cell counts provides a chi-square test: Q5-Q4, the difference of the naive Pearson sums of $(OBS-EXP)^2/EXP$ on counts for 5- and 4-letter cell counts.
The reason for these failures is that the outputs of LCG, LCG2, LCG3, MRG, and MRG2 in these experiments are on 31 bits only. Compared with the Single CIPRNG approach, using different PRNGs to build a CIPRNG seems more efficient in improving the quality of the generated numbers (the Mixed Xor CI can pass 100\% of the NIST battery, whereas the single one cannot).
1176 \renewcommand{\arraystretch}{1.3}
1177 \caption{Scores of mixed Xor CIPRNGs when considering the DieHARD battery}
1178 \label{DieHARD fail mixex CIPRNG}
1180 \begin{tabular}{|l||c|c|c|c|c|c|}
\backslashbox{\textbf{$PRNG_1$}} {\textbf{$PRNG_2$}} & LCG & MRG & INV & LCG2 & LCG3 & MRG2 \\ \hline\hline
1183 LCG &\backslashbox{} {} &16/18&16/18 &16/18 &16/18 &16/18\\ \hline
1184 MRG &16/18 &\backslashbox{} {} &16/18&16/18 &16/18 &16/18\\ \hline
1185 INV &16/18 &16/18&\backslashbox{} {} &16/18 &16/18&16/18 \\ \hline
1186 LCG2 &16/18 &16/18 &16/18 &\backslashbox{} {} &16/18&16/18\\ \hline
1187 LCG3 &16/18 &16/18 &16/18&16/18&\backslashbox{} {} &16/18\\ \hline
1188 MRG2 &16/18 &16/18 &16/18&16/18 &16/18 &\backslashbox{} {} \\ \hline
1192 \subsubsection{Tests based on the Multiple CIPRNG}
1193 \label{Tests based on Multiple CIPRNG}
1195 Until now, the combination of at most two input PRNGs has been investigated.
We now investigate the possibility of using a larger number of generators to improve the statistics of the generated pseudorandom numbers, leading to the multiple functional power approach.
For the CIPRNGs that already pass both the NIST and DieHARD suites with two inputted PRNGs (all the Old and New CIPRNGs, and some of the Xor CIPRNGs), it is not meaningful to consider their adaptation to this multiple CIPRNG method; hence only the Multiple Xor CIPRNGs, having the following form, will be investigated.
\begin{equation}
\left\{
\begin{array}{l}
x^0 \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket, S \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket^\mathds{N} \\
\forall n \in \mathds{N}^*, x^n = x^{n-1} \oplus S^{nm}\oplus S^{nm+1}\oplus \ldots \oplus S^{nm+m-1} ,
\end{array}
\right.
\label{equation Oplus}
\end{equation}
The question is now to determine the value of the threshold $m$ (the functional power) that makes the Multiple CIPRNG able to pass the whole NIST battery.
1209 Such a question is answered in Table~\ref{threshold}.
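As an illustration, one iteration of the scheme of Eq.~(\ref{equation Oplus}) can be sketched in C as follows. The function \texttt{next\_S()} stands for any 32-bit inputted PRNG producing the successive terms of the strategy $S$; the LCG used here is only an illustrative stand-in, not a recommended choice.

\begin{lstlisting}[language=C,caption={Sketch of one iteration of the Multiple Xor CIPRNG}]
#include <stdint.h>

/* Stand-in for the inputted PRNG: a simple 32-bit LCG (illustrative only). */
static uint32_t next_S(void) {
  static uint32_t s = 123456789u;
  s = 69069u * s + 362437u;
  return s;
}

/* One iteration with functional power m: the previous state is
   xored with the next m terms of the strategy S. */
uint32_t multiple_xor_ciprng_step(uint32_t x, unsigned int m) {
  for (unsigned int i = 0; i < m; i++)
    x ^= next_S();
  return x;
}
\end{lstlisting}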
1213 \renewcommand{\arraystretch}{1.3}
1214 \caption{Functional power $m$ making it possible to pass the whole NIST battery}
1217 \begin{tabular}{|l||c|c|c|c|c|c|c|c|}
1219 Inputted $PRNG$ & LCG & MRG & SWC & GFSR & INV& LCG2 & LCG3 & MRG2 \\ \hline\hline
1220 Threshold value $m$& 19 & 7 & 2& 1 & 11& 9& 3& 4\\ \hline\hline
1224 \subsubsection{Results Summary}
1226 We can summarize the obtained results as follows.
1228 \item The CIPRNG method is able to improve the statistical properties of a large variety of PRNGs.
1229 \item Using different PRNGs in the CIPRNG approach is better than considering several instances of one unique PRNG.
1230 \item The statistical quality of the outputs increases with the functional power $m$.
1235 \section{Efficient PRNG based on Chaotic Iterations}
1236 \label{sec:efficient PRNG}
1238 Based on the proof presented in the previous section, it is now possible to
1239 improve the speed of the generator formerly presented in~\cite{bgw09:ip,guyeux10}.
1240 The first idea is to consider
that the provided strategy is a pseudorandom Boolean vector obtained by a given PRNG.
1243 An iteration of the system is simply the bitwise exclusive or between
1244 the last computed state and the current strategy.
Since the topological disorder exhibited by chaotic
iterations can be inherited by the inputted generator, we hope by doing so to
obtain some statistical improvements while preserving speed.
\lstset{language=C}
\begin{lstlisting}[caption={C code of the sequential PRNG based on chaotic iterations},label=algo:seqCIPRNG]
unsigned int CIPRNG() {
  static unsigned int x = 123123123;
  unsigned long t1 = xorshift();
  unsigned long t2 = xor128();
  unsigned long t3 = xorwow();
  x = x ^ (unsigned int) t1;
  x = x ^ (unsigned int) (t2 >> 32);
  x = x ^ (unsigned int) (t3 >> 32);
  x = x ^ (unsigned int) t2;
  x = x ^ (unsigned int) (t1 >> 32);
  x = x ^ (unsigned int) t3;
  return x;
}
\end{lstlisting}
1302 In Listing~\ref{algo:seqCIPRNG} a sequential version of the proposed PRNG based
1303 on chaotic iterations is presented. The xor operator is represented by
\textasciicircum. This function uses three classical 64-bit PRNGs, namely
\texttt{xorshift}, \texttt{xor128}, and
\texttt{xorwow}~\cite{Marsaglia2003}. In the following, we call them ``xor-like
PRNGs''. As each xor-like PRNG produces 64-bit outputs whereas our proposed generator
works with 32 bits, we use the cast \texttt{(unsigned int)}, which selects the
32 least significant bits of a given integer, and the expression \texttt{(unsigned
int)(t$>>$32)} to obtain the 32 most significant bits of \texttt{t}.
Producing a pseudorandom number thus requires 6 xor operations on 6 32-bit numbers
that are provided by 3 64-bit PRNGs. This version successfully passes the
1314 stringent BigCrush battery of tests~\cite{LEcuyerS07}.
1316 \section{Efficient PRNGs based on Chaotic Iterations on GPU}
1317 \label{sec:efficient PRNG gpu}
1319 In order to take benefits from the computing power of GPU, a program
1320 needs to have independent blocks of threads that can be computed
simultaneously. In general, the larger the number of threads, the smaller the amount
of local memory used, and the fewer branching instructions (if, while, ...)
executed, the better the performance on GPU is.
1324 Obviously, having these requirements in mind, it is possible to build
1325 a program similar to the one presented in Listing
1326 \ref{algo:seqCIPRNG}, which computes pseudorandom numbers on GPU. To
1327 do so, we must firstly recall that in the CUDA~\cite{Nvid10}
1328 environment, threads have a local identifier called
1329 \texttt{ThreadIdx}, which is relative to the block containing
them. Furthermore, in CUDA, the parts of the code that are executed by the GPU are
called {\it kernels}.
1334 \subsection{Naive Version for GPU}
1337 It is possible to deduce from the CPU version a quite similar version adapted to GPU.
The simple principle consists in making each thread of the GPU compute the CPU version of our PRNG.
1339 Of course, the three xor-like
1340 PRNGs used in these computations must have different parameters.
In a given thread, these parameters are
randomly picked from another PRNG.
1343 The initialization stage is performed by the CPU.
1344 To do it, the ISAAC PRNG~\cite{Jenkins96} is used to set all the
1345 parameters embedded into each thread.
1347 The implementation of the three
1348 xor-like PRNGs is straightforward when their parameters have been
allocated in the GPU memory. Each xor-like PRNG works with an internal
number $x$ that stores the last generated pseudorandom number. Additionally, the
implementations of xor128, xorshift, and xorwow respectively require
4, 5, and 6 unsigned longs as internal variables.
1357 \KwIn{InternalVarXorLikeArray: array with internal variables of the 3 xor-like
1358 PRNGs in global memory\;
1359 NumThreads: number of threads\;}
1360 \KwOut{NewNb: array containing random numbers in global memory}
1361 \If{threadIdx is concerned by the computation} {
1362 retrieve data from InternalVarXorLikeArray[threadIdx] in local variables\;
compute a new pseudorandom number as in Listing~\ref{algo:seqCIPRNG}\;
store this new number in NewNb[NumThreads*threadIdx+i]\;
1367 store internal variables in InternalVarXorLikeArray[threadIdx]\;
1370 \caption{Main kernel of the GPU ``naive'' version of the PRNG based on chaotic iterations}
1371 \label{algo:gpu_kernel}
1376 Algorithm~\ref{algo:gpu_kernel} presents a naive implementation of the proposed PRNG on
1377 GPU. Due to the available memory in the GPU and the number of threads
1378 used simultaneously, the number of random numbers that a thread can generate
inside a kernel is limited (\emph{i.e.}, the variable \texttt{n} in
Algorithm~\ref{algo:gpu_kernel}). For instance, if $100,000$ threads are used and
if $n=100$\footnote{in fact, we need to add the initial seed (a 32-bit number)},
then the memory required to store all the internal variables of both the xor-like
PRNGs\footnote{we multiply this number by $2$ in order to count 32-bit numbers}
and the pseudorandom numbers generated by our PRNG is equal to $100,000\times ((4+5+6)\times
2+(1+100))=13,100,000$ 32-bit numbers, that is, approximately $52$MB.
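To fix ideas, a condensed CUDA sketch of such a naive kernel is given below. For brevity, three independently seeded 64-bit xorshift steps stand in for the xorshift, xor128, and xorwow generators of Listing~\ref{algo:seqCIPRNG}; the kernel signature and the per-thread state layout are illustrative assumptions, not the exact code used for the experiments reported below.

\begin{lstlisting}[language=C,caption={CUDA sketch of the naive kernel (illustrative)}]
/* Marsaglia's 64-bit xorshift step; states must be seeded with nonzero
   values (e.g., by ISAAC on the CPU during the initialization stage). */
__device__ static unsigned long long xs64(unsigned long long *s) {
  unsigned long long v = *s;
  v ^= v << 13;  v ^= v >> 7;  v ^= v << 17;
  return *s = v;
}

__global__ void naive_ciprng(unsigned long long *state, /* 3 words per thread */
                             unsigned int *x,           /* CI state per thread */
                             unsigned int *out, int n) {
  int tid = blockIdx.x * blockDim.x + threadIdx.x;
  unsigned long long s0 = state[3*tid], s1 = state[3*tid+1], s2 = state[3*tid+2];
  unsigned int xi = x[tid];
  for (int i = 0; i < n; i++) {
    unsigned long long t1 = xs64(&s0), t2 = xs64(&s1), t3 = xs64(&s2);
    xi ^= (unsigned int)t1;         xi ^= (unsigned int)(t2 >> 32);
    xi ^= (unsigned int)(t3 >> 32); xi ^= (unsigned int)t2;
    xi ^= (unsigned int)(t1 >> 32); xi ^= (unsigned int)t3;
    out[(size_t)tid * n + i] = xi;  /* same xor cascade as the CPU version */
  }
  state[3*tid] = s0; state[3*tid+1] = s1; state[3*tid+2] = s2;
  x[tid] = xi;
}
\end{lstlisting}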
1387 This generator is able to pass the whole BigCrush battery of tests, for all
1388 the versions that have been tested depending on their number of threads
1389 (called \texttt{NumThreads} in our algorithm, tested up to $5$ million).
1392 The proposed algorithm has the advantage of manipulating independent
PRNGs, so this version is also easily adaptable to a cluster of computers. The only thing
1394 to ensure is to use a single ISAAC PRNG. To achieve this requirement, a simple solution consists in
1395 using a master node for the initialization. This master node computes the initial parameters
1396 for all the different nodes involved in the computation.
1399 \subsection{Improved Version for GPU}
As GPU cards using CUDA have shared memory between threads of the same block, it
is possible to use this feature in order to simplify the previous algorithm,
i.e., to use fewer than 3 xor-like PRNGs. The solution consists in computing only
one xor-like PRNG per thread, saving it into the shared memory, and then using the results
of some other threads in the same block. In order to define which
thread uses the result of which other one, we use combination arrays that
contain, for each thread position within a block, the indexes of the other threads whose values it combines.
1410 In Algorithm~\ref{algo:gpu_kernel2}, two combination arrays are used. The
1411 variable \texttt{offset} is computed using the value of
1412 \texttt{combination\_size}. Then we can compute \texttt{o1} and \texttt{o2}
1413 representing the indexes of the other threads whose results are used by the
current one. In this algorithm, we consider that a 32-bit xor-like PRNG has
been chosen. In practice, we use the xor128 proposed in~\cite{Marsaglia2003} in
which the unsigned longs (64 bits) have been replaced by unsigned integers (32 bits).
1419 This version can also pass the whole {\it BigCrush} battery of tests.
\KwIn{InternalVarXorLikeArray: array with internal variables of 1 xor-like PRNG
1425 NumThreads: Number of threads\;
1426 array\_comb1, array\_comb2: Arrays containing combinations of size combination\_size\;}
1428 \KwOut{NewNb: array containing random numbers in global memory}
1429 \If{threadId is concerned} {
1430 retrieve data from InternalVarXorLikeArray[threadId] in local variables including shared memory and x\;
1431 offset = threadIdx\%combination\_size\;
1432 o1 = threadIdx-offset+array\_comb1[offset]\;
1433 o2 = threadIdx-offset+array\_comb2[offset]\;
1436 t=t\textasciicircum shmem[o1]\textasciicircum shmem[o2]\;
1437 shared\_mem[threadId]=t\;
1438 x = x\textasciicircum t\;
store the new number in NewNb[NumThreads*threadId+i]\;
1442 store internal variables in InternalVarXorLikeArray[threadId]\;
\caption{Main kernel for the chaotic iterations based PRNG, efficient GPU version}
1447 \label{algo:gpu_kernel2}
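A possible CUDA rendering of this kernel is sketched below. It assumes at most 256 threads per block, a 32-bit xorshift step as a stand-in for the 32-bit xor128, and combination arrays whose entries stay within the block; the explicit synchronizations are added for safety and do not appear in the pseudocode above.

\begin{lstlisting}[language=C,caption={CUDA sketch of the improved kernel (illustrative)}]
/* 32-bit xorshift step, used here as a stand-in for the 32-bit xor128;
   states must be nonzero. */
__device__ static unsigned int xs32(unsigned int *s) {
  unsigned int v = *s;
  v ^= v << 13;  v ^= v >> 17;  v ^= v << 5;
  return *s = v;
}

__global__ void improved_ciprng(unsigned int *state, unsigned int *x,
                                const int *array_comb1, const int *array_comb2,
                                int combination_size, unsigned int *out, int n) {
  __shared__ unsigned int shmem[256];      /* one slot per thread of the block */
  int tid = blockIdx.x * blockDim.x + threadIdx.x;
  unsigned int s = state[tid], xi = x[tid];
  shmem[threadIdx.x] = s;                  /* seed the shared buffer */
  __syncthreads();
  int offset = threadIdx.x % combination_size;
  int o1 = threadIdx.x - offset + array_comb1[offset];
  int o2 = threadIdx.x - offset + array_comb2[offset];
  for (int i = 0; i < n; i++) {
    unsigned int t = xs32(&s);
    t ^= shmem[o1] ^ shmem[o2];            /* mix two other threads' values */
    __syncthreads();
    shmem[threadIdx.x] = t;                /* publish t to the block */
    __syncthreads();
    xi ^= t;                               /* x = x xor t */
    out[(size_t)tid * n + i] = xi;
  }
  state[tid] = s;  x[tid] = xi;
}
\end{lstlisting}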
1450 \subsection{Theoretical Evaluation of the Improved Version}
1452 A run of Algorithm~\ref{algo:gpu_kernel2} consists in an operation ($x=x\oplus t$) having
1453 the form of Equation~\ref{equation Oplus}, which is equivalent to the iterative
1454 system of Eq.~\ref{eq:generalIC}. That is, an iteration of the general chaotic
1455 iterations is realized between the last stored value $x$ of the thread and a strategy $t$
1456 (obtained by a bitwise exclusive or between a value provided by a xor-like() call
1457 and two values previously obtained by two other threads).
1458 To be certain that we are in the framework of Theorem~\ref{t:chaos des general},
1459 we must guarantee that this dynamical system iterates on the space
1460 $\mathcal{X} = \mathcal{P}\left(\llbracket 1, \mathsf{N} \rrbracket\right)^\mathds{N}\times\mathds{B}^\mathsf{N}$.
1461 The left term $x$ obviously belongs to $\mathds{B}^ \mathsf{N}$.
To prevent any flaw in the chaotic properties, we must check that the right
term (the last $t$), corresponding to the strategies, can possibly be equal to any
integer of $\llbracket 1, \mathsf{N} \rrbracket$.

Such a result is obvious: for the xor-like(), all the
integers belonging to its interval of definition can occur at each iteration, and thus the
last $t$ fulfills the requirement. Furthermore, it is possible to
prove by an immediate mathematical induction that, as the initial $x$
is uniformly distributed (it is provided by a cryptographically secure PRNG),
the two other stored values shmem[o1] and shmem[o2] are uniformly distributed too
(this is the induction hypothesis), and thus the next $x$ is uniformly distributed as well.
1474 Thus Algorithm~\ref{algo:gpu_kernel2} is a concrete realization of the general
chaotic iterations presented previously, and for this reason it satisfies
Devaney's formulation of a chaotic behavior.
1478 \section{Experiments}
1479 \label{sec:experiments}
1481 Different experiments have been performed in order to measure the generation
speed. We have used a first computer equipped with a Tesla C1060 NVidia GPU card and an
Intel Xeon E5530 clocked at 2.40 GHz, and
a second computer equipped with a smaller CPU and a GeForce GTX 280. Both GPU
cards have 240 cores.
1489 In Figure~\ref{fig:time_xorlike_gpu} we compare the quantity of pseudorandom numbers
1490 generated per second with various xor-like based PRNGs. In this figure, the optimized
1491 versions use the {\it xor64} described in~\cite{Marsaglia2003}, whereas the naive versions
1492 embed the three xor-like PRNGs described in Listing~\ref{algo:seqCIPRNG}. In
1493 order to obtain the optimal performances, the storage of pseudorandom numbers
into the GPU memory has been removed. This step is time consuming and slows down the
generation of numbers. Moreover, this storage is completely
useless for applications that consume the pseudorandom
1497 numbers directly after generation. We can see that when the number of threads is greater
1498 than approximately 30,000 and lower than 5 million, the number of pseudorandom numbers generated
1499 per second is almost constant. With the naive version, this value ranges from 2.5 to
3 GSamples/s. With the optimized version, it is approximately equal to
20 GSamples/s. Finally, we can remark that both GPU cards are quite similar, but in
1502 practice, the Tesla C1060 has more memory than the GTX 280, and this memory
1503 should be of better quality.
1504 As a comparison, Listing~\ref{algo:seqCIPRNG} leads to the generation of about
138 MSamples/s when using one core of the Xeon E5530.
1507 \begin{figure}[htbp]
1509 \includegraphics[width=\columnwidth]{curve_time_xorlike_gpu.pdf}
1511 \caption{Quantity of pseudorandom numbers generated per second with the xorlike-based PRNG}
1512 \label{fig:time_xorlike_gpu}
1519 In Figure~\ref{fig:time_bbs_gpu} we highlight the performances of the optimized
BBS-based PRNG on GPU. On the Tesla C1060 we obtain approximately 700 MSamples/s
and on the GTX 280 about 670 MSamples/s, which is obviously slower than the
xorlike-based PRNG on GPU. However, we will show in the next sections that this
new PRNG has a strong level of security, which is necessarily paid by a speed penalty.
1526 \begin{figure}[htbp]
1528 \includegraphics[width=\columnwidth]{curve_time_bbs_gpu.pdf}
1530 \caption{Quantity of pseudorandom numbers generated per second using the BBS-based PRNG}
1531 \label{fig:time_bbs_gpu}
All these experiments allow us to conclude that it is possible to
generate a very large quantity of statistically flawless pseudorandom numbers with the xor-like version.
To a certain extent, it is also the case with the secure BBS-based version, the speed reduction being
explained by the fact that the former version has ``only''
chaotic properties and statistical perfection, whereas the latter is also cryptographically secure,
as shown in the next sections.
1547 \section{Security Analysis}
1548 \label{sec:security analysis}
In this section, the concatenation of two strings $u$ and $v$ is classically denoted by $uv$.
1554 In a cryptographic context, a pseudorandom generator is a deterministic
1555 algorithm $G$ transforming strings into strings and such that, for any
1556 seed $s$ of length $m$, $G(s)$ (the output of $G$ on the input $s$) has size
1557 $\ell_G(m)$ with $\ell_G(m)>m$.
1558 The notion of {\it secure} PRNGs can now be defined as follows.
1561 A cryptographic PRNG $G$ is secure if for any probabilistic polynomial time
algorithm $D$, for any positive polynomial $p$, and for all sufficiently large $m$'s,
$$| \mathrm{Pr}[D(G(U_m))=1]-\mathrm{Pr}[D(U_{\ell_G(m)})=1]|< \frac{1}{p(m)},$$
1565 where $U_r$ is the uniform distribution over $\{0,1\}^r$ and the
1566 probabilities are taken over $U_m$, $U_{\ell_G(m)}$ as well as over the
1567 internal coin tosses of $D$.
1570 Intuitively, it means that there is no polynomial time algorithm that can
distinguish a perfect uniform random generator from $G$ with a non-negligible
probability. The interested reader is referred
to~\cite[chapter~3]{Goldreich} for more information. Note that it is
quite easily possible to change the function $\ell$ into any polynomial
function $\ell^\prime$ satisfying $\ell^\prime(m)>m$~\cite[Chapter 3.3]{Goldreich}.
1577 The generation schema developed in (\ref{equation Oplus}) is based on a
1578 pseudorandom generator. Let $H$ be a cryptographic PRNG. We may assume,
1579 without loss of generality, that for any string $S_0$ of size $N$, the size
1580 of $H(S_0)$ is $kN$, with $k>2$. It means that $\ell_H(N)=kN$.
1581 Let $S_1,\ldots,S_k$ be the
1582 strings of length $N$ such that $H(S_0)=S_1 \ldots S_k$ ($H(S_0)$ is the concatenation of
1583 the $S_i$'s). The cryptographic PRNG $X$ defined in (\ref{equation Oplus})
is the algorithm mapping any string $x_0S_0$ of length $2N$ into the string
$(x_0\oplus S_0 \oplus S_1)(x_0\oplus S_0 \oplus S_1\oplus S_2)\ldots
(x_0\oplus\bigoplus_{i=0}^{i=k}S_i)$. In particular, one has $\ell_{X}(2N)=kN=\ell_H(N)$.
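To fix ideas, the mapping $X$ can be sketched in C as follows; the $k$ blocks $S_1 \ldots S_k$ produced by $H$ are assumed to be stored contiguously in the hypothetical buffer \texttt{H\_blocks}, and the $N$-bit blocks are handled as byte arrays.

\begin{lstlisting}[language=C,caption={Sketch of the generator $X$ built on top of $H$}]
#include <stdint.h>
#include <stddef.h>

/* Output of X on the 2N-bit input x0 S0: the i-th N-bit output block is the
   cumulative xor x0 ^ S0 ^ S1 ^ ... ^ S_{i+1}.  nbytes = N/8 (<= 128 here). */
void X_generator(const uint8_t *x0, const uint8_t *S0,
                 const uint8_t *H_blocks,  /* k blocks S1..Sk of nbytes each */
                 size_t nbytes, size_t k, uint8_t *out) {
  uint8_t acc[128];                        /* running cumulative xor */
  for (size_t j = 0; j < nbytes; j++)
    acc[j] = x0[j] ^ S0[j];
  for (size_t i = 0; i < k; i++)
    for (size_t j = 0; j < nbytes; j++) {
      acc[j] ^= H_blocks[i * nbytes + j];  /* acc = x0 ^ S0 ^ ... ^ S_{i+1} */
      out[i * nbytes + j] = acc[j];        /* i-th output block */
    }
}
\end{lstlisting}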
We now claim that if this inputted PRNG $H$ is secure,
then the new generator $X$ is secure too.
1591 \label{cryptopreuve}
1592 If $H$ is a secure cryptographic PRNG, then $X$ is a secure cryptographic
1597 The proposition is proved by contraposition. Assume that $X$ is not
secure. By definition, there exist a polynomial time probabilistic
algorithm $D$ and a positive polynomial $p$ such that for all $k_0$ there exists
$N\geq \frac{k_0}{2}$ satisfying
$$| \mathrm{Pr}[D(X(U_{2N}))=1]-\mathrm{Pr}[D(U_{kN})=1]|\geq \frac{1}{p(2N)}.$$
We describe a new probabilistic algorithm $D^\prime$ on an input $w$ of size $kN$:
1605 \item Decompose $w$ into $w=w_1\ldots w_{k}$, where each $w_i$ has size $N$.
1606 \item Pick a string $y$ of size $N$ uniformly at random.
1607 \item Compute $z=(y\oplus w_1)(y\oplus w_1\oplus w_2)\ldots (y
1608 \bigoplus_{i=1}^{i=k} w_i).$
1609 \item Return $D(z)$.
Consider for each $y\in \mathbb{B}^{N}$ the function $\varphi_{y}$
from $\mathbb{B}^{kN}$ into $\mathbb{B}^{kN}$ mapping $w=w_1\ldots w_k$
(each $w_i$ has length $N$) to
$(y\oplus w_1)(y\oplus w_1\oplus w_2)\ldots (y
\bigoplus_{i=1}^{i=k} w_i).$ By construction, one has for every $w$,
1618 \begin{equation}\label{PCH-1}
1619 D^\prime(w)=D(\varphi_y(w)),
1621 where $y$ is randomly generated.
1622 Moreover, for each $y$, $\varphi_{y}$ is injective: if
$(y\oplus w_1)(y\oplus w_1\oplus w_2)\ldots (y\bigoplus_{i=1}^{i=k}
1624 w_i)=(y\oplus w_1^\prime)(y\oplus w_1^\prime\oplus w_2^\prime)\ldots
1625 (y\bigoplus_{i=1}^{i=k} w_i^\prime)$, then for every $1\leq j\leq k$,
1626 $y\bigoplus_{i=1}^{i=j} w_i^\prime=y\bigoplus_{i=1}^{i=j} w_i$. It follows,
1627 by a direct induction, that $w_i=w_i^\prime$. Furthermore, since $\mathbb{B}^{kN}$
1628 is finite, each $\varphi_y$ is bijective. Therefore, and using (\ref{PCH-1}),
1630 $\mathrm{Pr}[D^\prime(U_{kN})=1]=\mathrm{Pr}[D(\varphi_y(U_{kN}))=1]$ and,
1632 \begin{equation}\label{PCH-2}
1633 \mathrm{Pr}[D^\prime(U_{kN})=1]=\mathrm{Pr}[D(U_{kN})=1].
1636 Now, using (\ref{PCH-1}) again, one has for every $x$,
1637 \begin{equation}\label{PCH-3}
1638 D^\prime(H(x))=D(\varphi_y(H(x))),
1640 where $y$ is randomly generated. By construction, $\varphi_y(H(x))=X(yx)$,
\begin{equation}
D^\prime(H(x))=D(X(yx)),
1645 where $y$ is randomly generated.
1648 \begin{equation}\label{PCH-4}
\mathrm{Pr}[D^\prime(H(U_{N}))=1]=\mathrm{Pr}[D(X(U_{2N}))=1].
1651 From (\ref{PCH-2}) and (\ref{PCH-4}), one can deduce that
there exist a polynomial time probabilistic
algorithm $D^\prime$ and a positive polynomial $p$ such that for all $k_0$ there exists
$N\geq \frac{k_0}{2}$ satisfying
$$| \mathrm{Pr}[D^\prime(H(U_{N}))=1]-\mathrm{Pr}[D^\prime(U_{kN})=1]|\geq \frac{1}{p(2N)},$$
1656 proving that $H$ is not secure, which is a contradiction.
\section{Cryptographic Applications}
1662 \subsection{A Cryptographically Secure PRNG for GPU}
1665 It is possible to build a cryptographically secure PRNG based on the previous
1666 algorithm (Algorithm~\ref{algo:gpu_kernel2}). Due to Proposition~\ref{cryptopreuve},
1667 it simply consists in replacing
1668 the {\it xor-like} PRNG by a cryptographically secure one.
1669 We have chosen the Blum Blum Shub generator~\cite{BBS} (usually denoted by BBS) having the form:
$$x_{n+1}=x_n^2 \bmod M,$$ where $M$ is the product of two prime numbers (these
prime numbers need to be congruent to 3 modulo 4). BBS is known to be
very slow, making it usable in practice only for cryptographic applications.
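One step of this recurrence can be written as follows (a minimal C sketch; the modulus given in the comment is only an illustrative choice of two small primes congruent to 3 modulo 4, in line with the size constraints discussed next).

\begin{lstlisting}[language=C,caption={One BBS step with a small modulus (illustrative)}]
#include <stdint.h>

/* x_{n+1} = x_n^2 mod M.  With M < 2^16 (e.g., M = 239*251 = 59989),
   the square x*x always fits into a 32-bit unsigned integer. */
static inline uint32_t bbs_step(uint32_t *x, uint32_t M) {
  *x = (*x * *x) % M;
  return *x;
}
\end{lstlisting}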
The modulo operation is the most time consuming operation for current
GPU cards. So in order to obtain quite reasonable performances, it is
required to use only modulo operations on 32-bit integers. Consequently
$x_n^2$ needs to be less than $2^{32}$, and thus the number $M$ must be
less than $2^{16}$. So in practice we can choose prime numbers around
256 that are congruent to 3 modulo 4. With 32-bit numbers, only the
4 least significant bits of $x_n$ can be used (the maximum number of
indistinguishable bits is less than or equal to
$\log_2(\log_2(M))$). In other words, to generate a 32-bit number, we need to apply
the BBS algorithm 8 times, with possibly different values of $M$. This
approach is not sufficient to pass all the tests of TestU01,
as small values of $M$ for the BBS lead to
small periods. So, in order to add randomness, we have proceeded with
the following modifications.
1691 Firstly, we define 16 arrangement arrays instead of 2 (as described in
1692 Algorithm \ref{algo:gpu_kernel2}), but only 2 of them are used at each call of
1693 the PRNG kernels. In practice, the selection of combination
1694 arrays to be used is different for all the threads. It is determined
1695 by using the three last bits of two internal variables used by BBS.
1696 %This approach adds more randomness.
1697 In Algorithm~\ref{algo:bbs_gpu},
the character \& denotes the bitwise AND. Thus applying \&7 to a number
gives its last 3 bits, providing a number between 0 and 7.
1701 Secondly, after the generation of the 8 BBS numbers for each thread, we
have a 32-bit number whose period is possibly quite small. So,
to add randomness, we generate 4 more BBS numbers to
shift the 32-bit number and add up to 6 new bits. This improvement is
described in Algorithm~\ref{algo:bbs_gpu}. In practice, the last 2 bits
of the first new BBS number are used to make a left shift of at most
3 bits. The last 3 bits of the second new BBS number are added to the
strategy whatever the value of the first left shift. The third and the
fourth new BBS numbers are used similarly to apply a new left shift and insert new bits.
1712 Finally, as we use 8 BBS numbers for each thread, the storage of these
1713 numbers at the end of the kernel is performed using a rotation. So,
the internal variable for BBS number 1 is stored in place 2, the internal
variable for BBS number 2 is stored in place 3, ..., and finally, the internal
variable for BBS number 8 is stored in place 1.
1721 \KwIn{InternalVarBBSArray: array with internal variables of the 8 BBS
1723 NumThreads: Number of threads\;
1724 array\_comb: 2D Arrays containing 16 combinations (in first dimension) of size combination\_size (in second dimension)\;
1725 array\_shift[4]=\{0,1,3,7\}\;
1728 \KwOut{NewNb: array containing random numbers in global memory}
1729 \If{threadId is concerned} {
1730 retrieve data from InternalVarBBSArray[threadId] in local variables including shared memory and x\;
1731 we consider that bbs1 ... bbs8 represent the internal states of the 8 BBS numbers\;
1732 offset = threadIdx\%combination\_size\;
1733 o1 = threadIdx-offset+array\_comb[bbs1\&7][offset]\;
1734 o2 = threadIdx-offset+array\_comb[8+bbs2\&7][offset]\;
1741 \tcp{two new shifts}
1742 shift=BBS3(bbs3)\&3\;
1744 t|=BBS1(bbs1)\&array\_shift[shift]\;
1745 shift=BBS7(bbs7)\&3\;
1747 t|=BBS2(bbs2)\&array\_shift[shift]\;
1748 t=t\textasciicircum shmem[o1]\textasciicircum shmem[o2]\;
1749 shared\_mem[threadId]=t\;
1750 x = x\textasciicircum t\;
store the new number in NewNb[NumThreads*threadId+i]\;
store internal variables in InternalVarBBSArray[threadId] using a rotation\;
\caption{Main kernel for the BBS-based PRNG on GPU}
1758 \label{algo:bbs_gpu}
In Algorithm~\ref{algo:bbs_gpu}, $n$ is the quantity of random numbers that
a thread has to generate. The operation \texttt{t<<=4} performs a left shift of 4 bits
on the variable $t$ and stores the result in $t$, and \texttt{BBS1(bbs1)\&15} selects
the last four bits of the result of \texttt{BBS1}. Thus an operation of the form
\texttt{t<<=4; t|=BBS1(bbs1)\&15;} realizes in $t$ a left shift of 4 bits, and then
puts the 4 last bits of \texttt{BBS1(bbs1)} in the four last positions of $t$. Let us
remark that the initialization of $t$ is not a necessity, as we fill it 4 bits by 4
bits until 32 bits are obtained. The two last new shifts are realized in
order to enlarge the small periods of the BBS used here and to introduce a kind of
variability. In these operations, we make twice a left shift of $t$ of \emph{at
most} 3 bits, represented by \texttt{shift} in the algorithm, and we put
\emph{exactly} the \texttt{shift} last bits from a BBS into the \texttt{shift}
last bits of $t$. For this, an array named \texttt{array\_shift} is used; it contains,
for each possible shift, the mask made of \texttt{shift} ones needed for the \texttt{and}
operation. For example, with a left shift of 0 we make an \texttt{and} operation with 0,
and with a left shift of 3 we make an \texttt{and} operation with 7 (111 in binary).
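Putting these pieces together, the construction of the 32-bit strategy $t$ can be sketched as follows, reusing the \texttt{bbs\_step} routine sketched previously; the indexing of the eight internal states follows the algorithm above and is illustrative rather than a verbatim transcription of the kernel.

\begin{lstlisting}[language=C,caption={Building the 32-bit strategy from eight BBS states (illustrative)}]
static const uint32_t array_shift[4] = {0, 1, 3, 7}; /* masks with 0..3 ones */

uint32_t build_strategy(uint32_t bbs[8], const uint32_t M[8]) {
  uint32_t t = 0, shift;
  for (int i = 0; i < 8; i++) {
    t <<= 4;                               /* make room for 4 new bits      */
    t |= bbs_step(&bbs[i], M[i]) & 15;     /* 4 least-significant BBS bits  */
  }
  shift = bbs_step(&bbs[2], M[2]) & 3;     /* first variable shift (0..3)   */
  t <<= shift;
  t |= bbs_step(&bbs[0], M[0]) & array_shift[shift];
  shift = bbs_step(&bbs[6], M[6]) & 3;     /* second variable shift (0..3)  */
  t <<= shift;
  t |= bbs_step(&bbs[1], M[1]) & array_shift[shift];
  return t;                                /* the kernel then does x = x ^ t */
}
\end{lstlisting}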
It should be noticed that this generator has once more the form $x^{n+1} = x^n \oplus S^n$,
where $S^n$ is referred to in this algorithm as $t$: each iteration of this
PRNG ends with $x = x \oplus t$. This $S^n$ is only constituted
of secure bits produced by the BBS generator, and thus, due to
Proposition~\ref{cryptopreuve}, the resulting PRNG is cryptographically secure.
1789 \subsection{Practical Security Evaluation}
1791 Suppose now that the PRNG will work during
1792 $M=100$ time units, and that during this period,
1793 an attacker can realize $10^{12}$ clock cycles.
1794 We thus wonder whether, during the PRNG's
1795 lifetime, the attacker can distinguish this
sequence from a truly random one, with a probability
1797 greater than $\varepsilon = 0.2$.
1798 We consider that $N$ has 900 bits.
1800 The random process is the BBS generator, which
1801 is cryptographically secure. More precisely, it
is $(T,\varepsilon)$-secure: no
$(T,\varepsilon)$-distinguishing attack can be
successfully realized on this PRNG if~\cite{Fischlin}
\begin{equation}
T \leqslant \dfrac{L(N)}{6 N (\log_2(N))\varepsilon^{-2}M^2}-2^7 N \varepsilon^{-2} M^2 \log_2 (8 N \varepsilon^{-1}M),
\end{equation}
where $M$ is the length of the output ($M=100$ in
our example), and
\begin{equation}
L(N) = 2.8\times 10^{-3} \exp \left(1.9229 \times (N \ln 2)^{\frac{1}{3}} \times \left(\ln(N \ln 2)\right)^{\frac{2}{3}}\right)
\end{equation}
is the number of clock cycles needed to factor an $N$-bit integer.
1816 A direct numerical application shows that this attacker
1817 cannot achieve its $(10^{12},0.2)$ distinguishing
1818 attack in that context.
1822 \subsection{Toward a Cryptographically Secure and Chaotic Asymmetric Cryptosystem}
1823 \label{Blum-Goldwasser}
1824 We finish this research work by giving some thoughts about the use of
1825 the proposed PRNG in an asymmetric cryptosystem.
This first approach will be further investigated in future work.
1828 \subsubsection{Recalls of the Blum-Goldwasser Probabilistic Cryptosystem}
1830 The Blum-Goldwasser cryptosystem is a cryptographically secure asymmetric key encryption algorithm
1831 proposed in 1984~\cite{Blum:1985:EPP:19478.19501}. The encryption algorithm
implements an XOR-based stream cipher using the BBS PRNG to generate
1833 the keystream. Decryption is done by obtaining the initial seed thanks to
1834 the final state of the BBS generator and the secret key, thus leading to the
1835 reconstruction of the keystream.
1837 The key generation consists in generating two prime numbers $(p,q)$,
1838 randomly and independently of each other, that are
congruent to 3 mod 4, and in computing the modulus $N=pq$.
1840 The public key is $N$, whereas the secret key is the factorization $(p,q)$.
1843 Suppose Bob wishes to send a string $m=(m_0, \dots, m_{L-1})$ of $L$ bits to Alice:
1845 \item Bob picks an integer $r$ randomly in the interval $\llbracket 1,N\rrbracket$ and computes $x_0 = r^2~mod~N$.
1846 \item He uses the BBS to generate the keystream of $L$ pseudorandom bits $(b_0, \dots, b_{L-1})$, as follows. For $i=0$ to $L-1$,
1849 \item While $i \leqslant L-1$:
1851 \item Set $b_i$ equal to the least-significant\footnote{As signaled previously, BBS can securely output up to $\mathsf{N} = \lfloor log(log(N)) \rfloor$ of the least-significant bits of $x_i$ during each round.} bit of $x_i$,
\item $x_{i+1} = (x_{i})^2~mod~N.$
1856 \item The ciphertext is computed by XORing the plaintext bits $m$ with the keystream: $ c = (c_0, \dots, c_{L-1}) = m \oplus b$. This ciphertext is $[c, y]$, where $y=x_{0}^{2^{L}}~mod~N.$
1860 When Alice receives $\left[(c_0, \dots, c_{L-1}), y\right]$, she can recover $m$ as follows:
1862 \item Using the secret key $(p,q)$, she computes $r_p = y^{((p+1)/4)^{L}}~mod~p$ and $r_q = y^{((q+1)/4)^{L}}~mod~q$.
1863 \item The initial seed can be obtained using the following procedure: $x_0=q(q^{-1}~{mod}~p)r_p + p(p^{-1}~{mod}~q)r_q~{mod}~N$.
1864 \item She recomputes the bit-vector $b$ by using BBS and $x_0$.
1865 \item Alice finally computes the plaintext by XORing the keystream with the ciphertext: $ m = c \oplus b$.
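The following toy C program walks through one encryption and decryption with deliberately tiny primes ($p=499$, $q=547$, both congruent to 3 mod 4), so that every quantity fits into 64-bit integers; such parameters are of course insecure and are used for illustration only.

\begin{lstlisting}[language=C,caption={Toy walk-through of the Blum-Goldwasser scheme (insecure parameters)}]
#include <stdint.h>
#include <stdio.h>

/* Operands stay far below 2^32 here, so a*b never overflows 64 bits. */
static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) { return (a * b) % m; }
static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
  uint64_t r = 1 % m;
  for (b %= m; e; e >>= 1, b = mulmod(b, b, m))
    if (e & 1) r = mulmod(r, b, m);
  return r;
}

int main(void) {
  const uint64_t p = 499, q = 547, N = p * q;  /* public N, secret (p,q)    */
  const int L = 8;                             /* message length in bits    */
  uint8_t m[8] = {1,0,1,1,0,0,1,0}, c[8];

  /* Encryption (Bob): x_0 = r^2 mod N, keystream bit = lsb of x_i.         */
  uint64_t r = 159201;                         /* random, coprime with N    */
  uint64_t x = mulmod(r, r, N);
  for (int i = 0; i < L; i++) { c[i] = m[i] ^ (uint8_t)(x & 1); x = mulmod(x, x, N); }
  uint64_t y = x;                              /* y = x_0^{2^L} mod N       */

  /* Decryption (Alice): recover x_0 from y with the secret factorization.  */
  uint64_t ep = powmod((p + 1) / 4, L, p - 1); /* ((p+1)/4)^L mod (p-1)     */
  uint64_t eq = powmod((q + 1) / 4, L, q - 1);
  uint64_t rp = powmod(y, ep, p), rq = powmod(y, eq, q);
  uint64_t qp = powmod(q, p - 2, p);           /* q^{-1} mod p (Fermat)     */
  uint64_t pq = powmod(p, q - 2, q);           /* p^{-1} mod q              */
  uint64_t x0 = (mulmod(mulmod(q, qp, N), rp, N)
               + mulmod(mulmod(p, pq, N), rq, N)) % N;
  for (int i = 0; i < L; i++) {                /* rebuild keystream, unmask */
    printf("%d", c[i] ^ (int)(x0 & 1));        /* prints the plaintext bits */
    x0 = mulmod(x0, x0, N);
  }
  printf("\n");
  return 0;
}
\end{lstlisting}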
1869 \subsubsection{Proposal of a new Asymmetric Cryptosystem Adapted from Blum-Goldwasser}
1871 We propose to adapt the Blum-Goldwasser protocol as follows.
Let $\mathsf{N} = \lfloor \log(\log(N)) \rfloor$ be the number of bits that can
be obtained securely with the BBS generator using the public key $N$ of Alice.
Alice will also pick $S^0$ randomly in $\llbracket 0, 2^{\mathsf{N}}-1\rrbracket$, and
1875 her new public key will be $(S^0, N)$.
1877 To encrypt his message, Bob will compute
$c = \left(m_0 \oplus (b_0 \oplus S^0), m_1 \oplus (b_0 \oplus b_1 \oplus S^0), \hdots, \right.$
$ \left. m_{L-1} \oplus (b_0 \oplus b_1 \oplus \hdots \oplus b_{L-1} \oplus S^0) \right)$
1883 instead of $\left(m_0 \oplus b_0, m_1 \oplus b_1, \hdots, m_{L-1} \oplus b_{L-1} \right)$.
The same decryption stage as in Blum-Goldwasser, in which the recomputed keystream bits $b_i$ are accumulated by exclusive or before being applied, leads to the sequence
1886 $\left(m_0 \oplus S^0, m_1 \oplus S^0, \hdots, m_{L-1} \oplus S^0 \right)$.
1887 Thus, with a simple use of $S^0$, Alice can obtain the plaintext.
1888 By doing so, the proposed generator is used in place of BBS, leading to
1889 the inheritance of all the properties presented in this paper.
1891 \section{Conclusion}
1894 In this paper, a formerly proposed PRNG based on chaotic iterations
1895 has been generalized to improve its speed. It has been proven to be
1896 chaotic according to Devaney.
1897 Efficient implementations on GPU using xor-like PRNGs as input generators
1898 have shown that a very large quantity of pseudorandom numbers can be generated per second (about
20 GSamples/s), and that the proposed PRNGs succeed in passing the hardest battery of tests in TestU01,
namely BigCrush.
Furthermore, we have shown that when the inputted generator is cryptographically
secure, then so is the PRNG we propose, thus leading to
the possibility of developing fast and secure PRNGs using the GPU architecture.
An improvement of the Blum-Goldwasser cryptosystem, making it
behave chaotically, has finally been proposed.
1907 In future work we plan to extend this research, building a parallel PRNG for clusters or
1908 grid computing. Topological properties of the various proposed generators will be investigated,
1909 and the use of other categories of PRNGs as input will be studied too. The improvement
of the Blum-Goldwasser cryptosystem will be deepened. Finally, we
will try to increase the quantity of pseudorandom numbers generated per second, both
in a simulation context and in a cryptographic one.
1916 \bibliographystyle{plain}
1917 \bibliography{mabase}