1 %\documentclass{article}
2 \documentclass[10pt,journal,letterpaper,compsoc]{IEEEtran}
3 \usepackage[utf8]{inputenc}
4 \usepackage[T1]{fontenc}
11 \usepackage[ruled,vlined]{algorithm2e}
13 \usepackage[standard]{ntheorem}
14 \usepackage{algorithmic}
% For mathds: the sets IR, IN, etc.
% For integer intervals
% For subfigures inside figures
25 \usepackage{subfigure}
29 \newtheorem{notation}{Notation}
31 \newcommand{\X}{\mathcal{X}}
32 \newcommand{\Go}{G_{f_0}}
33 \newcommand{\B}{\mathds{B}}
34 \newcommand{\N}{\mathds{N}}
35 \newcommand{\BN}{\mathds{B}^\mathsf{N}}
38 \newcommand{\alert}[1]{\begin{color}{blue}\textit{#1}\end{color}}
40 \title{Efficient and Cryptographically Secure Generation of Chaotic Pseudorandom Numbers on GPU}
43 \author{Jacques M. Bahi, Rapha\"{e}l Couturier, Christophe
Guyeux, and Pierre-Cyrille Héam\thanks{Authors in alphabetical order}}
47 \IEEEcompsoctitleabstractindextext{
In this paper we present a new pseudorandom number generator (PRNG) on
graphics processing units (GPU). This PRNG is based on the so-called chaotic iterations. It
is firstly proven to be chaotic according to Devaney's formulation. We thus propose an efficient
implementation for GPU that successfully passes the {\it BigCrush} tests, deemed to be the hardest
battery of tests in TestU01. Experiments show that this PRNG can generate
about 20 billion random numbers per second on Tesla C1060 and NVidia GTX 280 cards.
It is then established that, under reasonable assumptions, the proposed PRNG can be cryptographically secure.
A chaotic version of the Blum-Goldwasser asymmetric key encryption scheme is finally proposed.
66 \IEEEdisplaynotcompsoctitleabstractindextext
67 \IEEEpeerreviewmaketitle
70 \section{Introduction}
72 Randomness is of importance in many fields such as scientific simulations or cryptography.
``Random numbers'' can mainly be generated either by a deterministic and reproducible algorithm
called a pseudorandom number generator (PRNG), or by a physical non-deterministic
process having all the characteristics of random noise, called a truly random number generator (TRNG).
77 In this paper, we focus on reproducible generators, useful for instance in
78 Monte-Carlo based simulators or in several cryptographic schemes.
79 These domains need PRNGs that are statistically irreproachable.
In some fields, such as numerical simulations, speed is a strong requirement
that is usually attained by using parallel architectures. In that case,
a recurrent problem is that a degradation of the statistical quality is often
reported when a good PRNG is parallelized.
This is why ad hoc PRNGs must be designed for each possible architecture, in order to
achieve both speed and randomness.
On the other hand, speed is not the main requirement in cryptography: the great
need is to define \emph{secure} generators able to withstand malicious
attacks. Roughly speaking, an attacker should not be able, in practice, to
distinguish between numbers obtained with the secure generator and a truly random sequence.
Finally, a small part of the community working in this domain focuses on a
third requirement, namely the definition of chaotic generators.
The main idea is to take advantage of a chaotic dynamical system to obtain a
generator that is unpredictable, disordered, and sensitive to its seed: in other words, chaotic.
The aim is to map a given chaotic dynamics into a sequence that seems random
and unassailable thanks to chaos.
However, the chaotic maps used as patterns are defined on the real line,
whereas computers deal with finite-precision numbers.
This distortion leads to a degradation of both the chaotic properties and the speed.
Furthermore, the authors of such chaotic generators often claim that their PRNG
is secure because of its chaos properties, but there is no obvious relation
between chaos and security as it is understood in cryptography.
This is why the use of chaos for PRNGs still remains marginal and disputable.
The authors' opinion is that topological properties of disorder, as they are
properly defined in the mathematical theory of chaos, can reinforce the quality
of a PRNG, but that they are no substitute for security or statistical perfection.
Indeed, to the authors' mind, such properties can be useful in the two following situations. On the
one hand, a post-treatment based on a chaotic dynamical system can be applied
to a statistically deficient PRNG, in order to improve its statistical
properties. Such an improvement can be found, for instance, in~\cite{bgw09:ip,bcgr11:ip}.
On the other hand, chaos can be added to a fast, statistically perfect, and/or
cryptographically secure PRNG, in cases where chaos can be of interest,
\emph{but only if these properties are not lost by
the proposed post-treatment}. Such an assumption is behind this research work.
It leads to attempts to define a
family of PRNGs that are chaotic while being fast and statistically perfect,
or cryptographically secure.
119 Let us finish this paragraph by noticing that, in this paper,
120 statistical perfection refers to the ability to pass the whole
121 {\it BigCrush} battery of tests, which is widely considered as the most
122 stringent statistical evaluation of a sequence claimed as random.
123 This battery can be found in the well-known TestU01 package~\cite{LEcuyerS07}.
124 Chaos, for its part, refers to the well-established definition of a
125 chaotic dynamical system proposed by Devaney~\cite{Devaney}.
In previous works~\cite{bgw09:ip,guyeux10}, we have proposed a post-treatment on PRNGs making them behave
as a chaotic dynamical system. Such a post-treatment leads to a new category of
PRNGs. We have shown that proofs of Devaney's chaos can be established for this
family, and that the sequences obtained after this post-treatment can pass the
NIST~\cite{Nist10}, DieHARD~\cite{Marsaglia1996}, and TestU01~\cite{LEcuyerS07} batteries of tests, even when the inputted generators cannot.
The proposal of this paper is to greatly improve the speed of the formerly
proposed generator, without any loss of chaos or statistical properties.
In particular, a version of this PRNG on graphics processing units (GPU) is presented.
Although GPUs were initially designed to accelerate
the manipulation of images, they are nowadays commonly used in many scientific
applications. Therefore, it is important to be able to generate pseudorandom
numbers inside a GPU when a scientific application runs on it. This remark
motivates our proposal of a chaotic and statistically perfect PRNG for GPU.
Such a generator allows us to generate almost 20 billion pseudorandom numbers per second.
Furthermore, we show that the proposed post-treatment preserves the
cryptographic security of the inputted PRNG, when the latter has such a property.
148 Last, but not least, we propose a rewriting of the Blum-Goldwasser asymmetric
149 key encryption protocol by using the proposed method.
151 The remainder of this paper is organized as follows. In Section~\ref{section:related
152 works} we review some GPU implementations of PRNGs. Section~\ref{section:BASIC
RECALLS} gives some basic recalls about the well-known Devaney formulation of chaos,
and about an iteration process called ``chaotic
iterations'', on which the post-treatment is based.
156 The proposed PRNG and its proof of chaos are given in Section~\ref{sec:pseudorandom}.
157 Section~\ref{sec:efficient PRNG} presents an efficient
158 implementation of this chaotic PRNG on a CPU, whereas Section~\ref{sec:efficient PRNG
159 gpu} describes and evaluates theoretically the GPU implementation.
These generators are experimentally evaluated in
161 Section~\ref{sec:experiments}.
162 We show in Section~\ref{sec:security analysis} that, if the inputted
generator is cryptographically secure, then so is the
generator provided by the post-treatment.
165 Such a proof leads to the proposition of a cryptographically secure and
chaotic generator on GPU based on the famous Blum Blum Shub generator
in Section~\ref{sec:CSGPU}, and to an improvement of the
Blum-Goldwasser protocol in Section~\ref{Blum-Goldwasser}.
This research work ends with a conclusion section, in which the contribution is
170 summarized and intended future work is presented.
175 \section{Related works on GPU based PRNGs}
176 \label{section:related works}
Numerous research works on defining GPU-based PRNGs have already been proposed in the
literature, so that an exhaustive survey is impossible.
This is why the authors of this document only refer to the most significant attempts
in this domain, from their subjective point of view.
182 The quantity of pseudorandom numbers generated per second is mentioned here
183 only when the information is given in the related work.
184 A million numbers per second will be simply written as
185 1MSample/s whereas a billion numbers per second is 1GSample/s.
In \cite{Pang:2008:cec}, a PRNG based on cellular automata is defined
with no requirement for high-precision integer arithmetic or bitwise
operations. The authors can generate about
3.2MSamples/s on a GeForce 7800 GTX GPU, which is quite an old card now.
However, there is neither a mention of statistical tests nor any proof of
chaos or cryptographic security in this document.
194 In \cite{ZRKB10}, the authors propose different versions of efficient GPU PRNGs
195 based on Lagged Fibonacci or Hybrid Taus. They have used these
196 PRNGs for Langevin simulations of biomolecules fully implemented on
197 GPU. Performances of the GPU versions are far better than those obtained with a
CPU, and these PRNGs succeed in passing the {\it BigCrush} battery of TestU01.
However, the evaluations of the proposed PRNGs are only statistical.
202 Authors of~\cite{conf/fpga/ThomasHL09} have studied the implementation of some
203 PRNGs on different computing architectures: CPU, field-programmable gate array
(FPGA), massively parallel processors, and GPU. This study is of interest because
the performance of the same PRNGs on different architectures is compared.
FPGA appears as the fastest and the most
efficient architecture, providing the largest number of generated pseudorandom numbers per second.
However, we notice that the authors can ``only'' generate between 11 and 16GSamples/s
with a GTX 280 GPU, which should be compared with
the results presented in this document.
We can also remark that the PRNGs proposed in~\cite{conf/fpga/ThomasHL09} are only
able to pass the {\it Crush} battery, which is far easier than the {\it BigCrush} one.
Lastly, NVIDIA has developed for CUDA a library for the generation of pseudorandom numbers called
Curand~\cite{curand11}. Several PRNGs are implemented, among them
Xorwow~\cite{Marsaglia2003} and some variants of Sobol. The reported tests show that
their fastest version provides 15GSamples/s on the new Fermi C2050 card.
However, their PRNGs cannot pass the whole TestU01 battery (one test fails).
223 We can finally remark that, to the best of our knowledge, no GPU implementation has been proven to be chaotic, and the cryptographically secure property has surprisingly never been considered.
225 \section{Basic Recalls}
226 \label{section:BASIC RECALLS}
228 This section is devoted to basic definitions and terminologies in the fields of
229 topological chaos and chaotic iterations. We assume the reader is familiar
230 with basic notions on topology (see for instance~\cite{Devaney}).
233 \subsection{Devaney's Chaotic Dynamical Systems}
235 In the sequel $S^{n}$ denotes the $n^{th}$ term of a sequence $S$ and $V_{i}$
236 denotes the $i^{th}$ component of a vector $V$. $f^{k}=f\circ ...\circ f$
237 is for the $k^{th}$ composition of a function $f$. Finally, the following
238 notation is used: $\llbracket1;N\rrbracket=\{1,2,\hdots,N\}$.
241 Consider a topological space $(\mathcal{X},\tau)$ and a continuous function $f :
242 \mathcal{X} \rightarrow \mathcal{X}$.
245 The function $f$ is said to be \emph{topologically transitive} if, for any pair of open sets
$U,V \subset \mathcal{X}$, there exists $k>0$ such that $f^k(U) \cap V \neq \emptyset$.
251 An element $x$ is a \emph{periodic point} for $f$ of period $n\in \mathds{N}^*$
252 if $f^{n}(x)=x$.% The set of periodic points of $f$ is denoted $Per(f).$
256 $f$ is said to be \emph{regular} on $(\mathcal{X}, \tau)$ if the set of periodic
257 points for $f$ is dense in $\mathcal{X}$: for any point $x$ in $\mathcal{X}$,
any neighborhood of $x$ contains at least one periodic point (not
necessarily with the same period).
263 \begin{definition}[Devaney's formulation of chaos~\cite{Devaney}]
264 The function $f$ is said to be \emph{chaotic} on $(\mathcal{X},\tau)$ if $f$ is regular and
265 topologically transitive.
268 The chaos property is strongly linked to the notion of ``sensitivity'', defined
269 on a metric space $(\mathcal{X},d)$ by:
272 \label{sensitivity} The function $f$ has \emph{sensitive dependence on initial conditions}
273 if there exists $\delta >0$ such that, for any $x\in \mathcal{X}$ and any
274 neighborhood $V$ of $x$, there exist $y\in V$ and $n > 0$ such that
275 $d\left(f^{n}(x), f^{n}(y)\right) >\delta $.
277 The constant $\delta$ is called the \emph{constant of sensitivity} of $f$.
280 Indeed, Banks \emph{et al.} have proven in~\cite{Banks92} that when $f$ is
281 chaotic and $(\mathcal{X}, d)$ is a metric space, then $f$ has the property of
282 sensitive dependence on initial conditions (this property was formerly an
283 element of the definition of chaos). To sum up, quoting Devaney
284 in~\cite{Devaney}, a chaotic dynamical system ``is unpredictable because of the
285 sensitive dependence on initial conditions. It cannot be broken down or
286 simplified into two subsystems which do not interact because of topological
287 transitivity. And in the midst of this random behavior, we nevertheless have an
288 element of regularity''. Fundamentally different behaviors are consequently
289 possible and occur in an unpredictable way.
293 \subsection{Chaotic Iterations}
294 \label{sec:chaotic iterations}
297 Let us consider a \emph{system} with a finite number $\mathsf{N} \in
298 \mathds{N}^*$ of elements (or \emph{cells}), so that each cell has a
299 Boolean \emph{state}. Having $\mathsf{N}$ Boolean values for these
300 cells leads to the definition of a particular \emph{state of the
system}. A sequence whose elements belong to $\llbracket 1;\mathsf{N}
302 \rrbracket $ is called a \emph{strategy}. The set of all strategies is
303 denoted by $\llbracket 1, \mathsf{N} \rrbracket^\mathds{N}.$
306 \label{Def:chaotic iterations}
307 The set $\mathds{B}$ denoting $\{0,1\}$, let
308 $f:\mathds{B}^{\mathsf{N}}\longrightarrow \mathds{B}^{\mathsf{N}}$ be
309 a function and $S\in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}$ be a ``strategy''. The so-called
310 \emph{chaotic iterations} are defined by $x^0\in
311 \mathds{B}^{\mathsf{N}}$ and
313 \forall n\in \mathds{N}^{\ast }, \forall i\in
314 \llbracket1;\mathsf{N}\rrbracket ,x_i^n=\left\{
316 x_i^{n-1} & \text{ if }S^n\neq i \\
317 \left(f(x^{n-1})\right)_{S^n} & \text{ if }S^n=i.
322 In other words, at the $n^{th}$ iteration, only the $S^{n}-$th cell is
323 \textquotedblleft iterated\textquotedblright . Note that in a more
324 general formulation, $S^n$ can be a subset of components and
325 $\left(f(x^{n-1})\right)_{S^{n}}$ can be replaced by
$\left(f(x^{k})\right)_{S^{n}}$, where $k<n$, describing, for example,
transmission delays~\cite{Robert1986,guyeux10}. Finally, let us remark that
328 the term ``chaotic'', in the name of these iterations, has \emph{a
329 priori} no link with the mathematical theory of chaos, presented above.
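
For instance (a toy illustration of ours), consider $\mathsf{N}=3$, the vectorial negation $f_0$, the initial state $x^0=(1,0,1)$, and a strategy starting with $S^1=2$, $S^2=1$. Then $x^1=(1,1,1)$ (only the second cell is negated) and $x^2=(0,1,1)$.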
332 Let us now recall how to define a suitable metric space where chaotic iterations
333 are continuous. For further explanations, see, e.g., \cite{guyeux10}.
335 Let $\delta $ be the \emph{discrete Boolean metric}, $\delta
336 (x,y)=0\Leftrightarrow x=y.$ Given a function $f$, define the function
337 $F_{f}: \llbracket1;\mathsf{N}\rrbracket\times \mathds{B}^{\mathsf{N}}
338 \longrightarrow \mathds{B}^{\mathsf{N}}$
341 & (k,E) & \longmapsto & \left( E_{j}.\delta (k,j)+ f(E)_{k}.\overline{\delta
342 (k,j)}\right) _{j\in \llbracket1;\mathsf{N}\rrbracket}%
345 \noindent where + and . are the Boolean addition and product operations.
346 Consider the phase space:
348 \mathcal{X} = \llbracket 1 ; \mathsf{N} \rrbracket^\mathds{N} \times
349 \mathds{B}^\mathsf{N},
351 \noindent and the map defined on $\mathcal{X}$:
353 G_f\left(S,E\right) = \left(\sigma(S), F_f(i(S),E)\right), \label{Gf}
355 \noindent where $\sigma$ is the \emph{shift} function defined by $\sigma
356 (S^{n})_{n\in \mathds{N}}\in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}\longrightarrow (S^{n+1})_{n\in
357 \mathds{N}}\in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}$ and $i$ is the \emph{initial function}
358 $i:(S^{n})_{n\in \mathds{N}} \in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}\longrightarrow S^{0}\in \llbracket
359 1;\mathsf{N}\rrbracket$. Then the chaotic iterations proposed in
360 Definition \ref{Def:chaotic iterations} can be described by the following iterations:
X^0 \in \mathcal{X} \\
X^{k+1}=G_{f}(X^k).
370 With this formulation, a shift function appears as a component of chaotic
371 iterations. The shift function is a famous example of a chaotic
map~\cite{Devaney}, but its presence alone is not sufficient to claim that $G_f$ is chaotic.
374 To study this claim, a new distance between two points $X = (S,E), Y =
375 (\check{S},\check{E})\in
376 \mathcal{X}$ has been introduced in \cite{guyeux10} as follows:
378 d(X,Y)=d_{e}(E,\check{E})+d_{s}(S,\check{S}),
384 \displaystyle{d_{e}(E,\check{E})} & = & \displaystyle{\sum_{k=1}^{\mathsf{N}%
385 }\delta (E_{k},\check{E}_{k})}, \\
386 \displaystyle{d_{s}(S,\check{S})} & = & \displaystyle{\dfrac{9}{\mathsf{N}}%
387 \sum_{k=1}^{\infty }\dfrac{|S^k-\check{S}^k|}{10^{k}}}.%
393 This new distance has been introduced to satisfy the following requirements.
395 \item When the number of different cells between two systems is increasing, then
396 their distance should increase too.
397 \item In addition, if two systems present the same cells and their respective
398 strategies start with the same terms, then the distance between these two points
399 must be small because the evolution of the two systems will be the same for a
while. Indeed, both dynamical systems start with the same initial condition and
use the same update function, and as the strategies coincide for a while, the
updated components are the same as well.
404 The distance presented above follows these recommendations. Indeed, if the floor
405 value $\lfloor d(X,Y)\rfloor $ is equal to $n$, then the systems $E, \check{E}$
406 differ in $n$ cells ($d_e$ is indeed the Hamming distance). In addition, $d(X,Y) - \lfloor d(X,Y) \rfloor $ is a
407 measure of the differences between strategies $S$ and $\check{S}$. More
precisely, this fractional part is less than $10^{-k}$ if and only if the first
409 $k$ terms of the two strategies are equal. Moreover, if the $k^{th}$ digit is
410 nonzero, then the $k^{th}$ terms of the two strategies are different.
411 The impact of this choice for a distance will be investigated at the end of the document.
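
For instance (a toy illustration of ours), take $\mathsf{N}=4$ and suppose that $E$ and $\check{E}$ differ in exactly two cells while the strategies $S$ and $\check{S}$ agree on their first three terms only. Then $d_e(E,\check{E})=2$ and $0 < d_s(S,\check{S}) < 10^{-3}$, so that $\lfloor d(X,Y)\rfloor = 2$ and the fractional part of $d(X,Y)$ indicates that the two strategies coincide during at least three iterations.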
413 Finally, it has been established in \cite{guyeux10} that,
416 Let $f$ be a map from $\mathds{B}^\mathsf{N}$ to itself. Then $G_{f}$ is continuous in
417 the metric space $(\mathcal{X},d)$.
The chaotic property of $G_f$ was first established for the vectorial
Boolean negation $f_0(x_1,\hdots, x_\mathsf{N}) = (\overline{x_1},\hdots, \overline{x_\mathsf{N}})$~\cite{guyeux10}. To obtain a characterization, we then
introduced the notion of asynchronous iteration graph, recalled below.
424 Let $f$ be a map from $\mathds{B}^\mathsf{N}$ to itself. The
425 {\emph{asynchronous iteration graph}} associated with $f$ is the
426 directed graph $\Gamma(f)$ defined by: the set of vertices is
427 $\mathds{B}^\mathsf{N}$; for all $x\in\mathds{B}^\mathsf{N}$ and
428 $i\in \llbracket1;\mathsf{N}\rrbracket$,
429 the graph $\Gamma(f)$ contains an arc from $x$ to $F_f(i,x)$.
430 The relation between $\Gamma(f)$ and $G_f$ is clear: there exists a
431 path from $x$ to $x'$ in $\Gamma(f)$ if and only if there exists a
432 strategy $s$ such that the parallel iteration of $G_f$ from the
433 initial point $(s,x)$ reaches the point $x'$.
434 We have then proven in \cite{bcgr11:ip} that,
438 \label{Th:Caractérisation des IC chaotiques}
439 Let $f:\mathds{B}^\mathsf{N}\to\mathds{B}^\mathsf{N}$. $G_f$ is chaotic (according to Devaney)
440 if and only if $\Gamma(f)$ is strongly connected.
443 Finally, we have established in \cite{bcgr11:ip} that,
Let $f: \mathds{B}^{n} \rightarrow \mathds{B}^{n}$, $\Gamma(f)$ its
iteration graph, $\check{M}$ its adjacency matrix, and $M$
an $n\times n$ matrix defined by
$M_{ij} = \frac{1}{n}\check{M}_{ij}$ if $i \neq j$, and
$M_{ii} = 1 - \frac{1}{n} \sum\limits_{j=1, j\neq i}^n \check{M}_{ij}$ otherwise.
454 If $\Gamma(f)$ is strongly connected, then
455 the output of the PRNG detailed in Algorithm~\ref{CI Algorithm} follows
456 a law that tends to the uniform distribution
if and only if $M$ is a doubly stochastic matrix.
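
For instance (this small illustration is ours), consider the vectorial negation $f_0$ with $n=2$ and order the states as $00,01,10,11$. Every state of $\Gamma(f_0)$ has exactly two outgoing arcs, obtained by negating its first or its second component, so $\Gamma(f_0)$ is strongly connected, and
\[
M=\left(\begin{array}{cccc}
0 & \frac{1}{2} & \frac{1}{2} & 0\\
\frac{1}{2} & 0 & 0 & \frac{1}{2}\\
\frac{1}{2} & 0 & 0 & \frac{1}{2}\\
0 & \frac{1}{2} & \frac{1}{2} & 0
\end{array}\right)
\]
is doubly stochastic: both theorems above apply to $f_0$.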
461 These results of chaos and uniform distribution have led us to study the possibility of building a
462 pseudorandom number generator (PRNG) based on the chaotic iterations.
463 As $G_f$, defined on the domain $\llbracket 1 ; \mathsf{N} \rrbracket^{\mathds{N}}
464 \times \mathds{B}^\mathsf{N}$, is built from Boolean networks $f : \mathds{B}^\mathsf{N}
465 \rightarrow \mathds{B}^\mathsf{N}$, we can preserve the theoretical properties on $G_f$
466 during implementations (due to the discrete nature of $f$). Indeed, it is as if
467 $\mathds{B}^\mathsf{N}$ represents the memory of the computer whereas $\llbracket 1 ; \mathsf{N}
\rrbracket^{\mathds{N}}$ is its input stream (for instance, the seeds of a PRNG, or a physical noise in the case of a TRNG).
469 Let us finally remark that the vectorial negation satisfies the hypotheses of both theorems above.
471 \section{Application to Pseudorandomness}
472 \label{sec:pseudorandom}
474 \subsection{A First Pseudorandom Number Generator}
476 We have proposed in~\cite{bgw09:ip} a new family of generators that receives
two PRNGs as inputs. These two generators are mixed with chaotic iterations,
thus leading to a new PRNG that
should improve the statistical properties of each
generator taken alone.
Furthermore, the generator obtained in this way possesses various chaos properties that none of the generators used as input possesses.
487 \begin{algorithm}[h!]
\KwIn{a function $f$, an iteration number $b$, an initial configuration $x^0$ ($n$ bits)}
491 \KwOut{a configuration $x$ ($n$ bits)}
$k\leftarrow b + PRNG_1(b)$\;
\For{$i=0,\dots,k$}{
$s\leftarrow{PRNG_2(n)}$\;
$x\leftarrow{F_f(s,x)}$\;
}
return $x$\;
\caption{An arbitrary round of $Old~ CI~ PRNG_f(PRNG_1,PRNG_2)$}
\label{CI Algorithm}
508 This generator is synthesized in Algorithm~\ref{CI Algorithm}.
509 It takes as input: a Boolean function $f$ satisfying Theorem~\ref{Th:Caractérisation des IC chaotiques};
510 an integer $b$, ensuring that the number of executed iterations
511 between two outputs is at least $b$
512 and at most $2b+1$; and an initial configuration $x^0$.
513 It returns the new generated configuration $x$. Internally, it embeds two
514 inputted generators $PRNG_i(k), i=1,2$,
515 which must return integers
516 uniformly distributed
in $\llbracket 1 ; k \rrbracket$.
For instance, these PRNGs can be the \textit{XORshift} generators~\cite{Marsaglia2003},
a category of very fast PRNGs designed by George Marsaglia
that repeatedly use the exclusive or (XOR, $\oplus$) transform on a number
with a bit-shifted version of it. Such a PRNG, which has a period of
$2^{32}-1 \approx 4.29\times10^9$, is summed up in Algorithm~\ref{XORshift}.
523 This XORshift, or any other reasonable PRNG, is used
524 in our own generator to compute both the number of iterations between two
525 outputs (provided by $PRNG_1$) and the strategy elements ($PRNG_2$).
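
For readers who prefer C over pseudocode, a minimal sketch of such a round is given below (this snippet is ours; it mirrors Algorithm~\ref{XORshift} and assumes a 32-bit unsigned integer state).

\begin{lstlisting}[language=C]
#include <stdint.h>

/* One XORshift round: the 32-bit state z is XORed with
   shifted copies of itself (shifts 13, 17, 5).         */
static uint32_t z = 123456789u;   /* any nonzero seed   */

uint32_t xorshift32(void) {
  z ^= z << 13;
  z ^= z >> 17;
  z ^= z << 5;
  return z;
}
\end{lstlisting}
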
527 %This former generator has successively passed various batteries of statistical tests, as the NIST~\cite{bcgr11:ip}, DieHARD~\cite{Marsaglia1996}, and TestU01~\cite{LEcuyerS07} ones.
530 \begin{algorithm}[h!]
532 \KwIn{the internal configuration $z$ (a 32-bit word)}
533 \KwOut{$y$ (a 32-bit word)}
534 $z\leftarrow{z\oplus{(z\ll13)}}$\;
535 $z\leftarrow{z\oplus{(z\gg17)}}$\;
$z\leftarrow{z\oplus{(z\ll5)}}$\;
$y\leftarrow{z}$\;
return $y$\;
\caption{An arbitrary round of the \textit{XORshift} algorithm}
\label{XORshift}
545 \subsection{A ``New CI PRNG''}
547 In order to make the Old CI PRNG usable in practice, we have proposed
548 an adapted version of the chaotic iteration based generator in~\cite{bg10:ip}.
In this ``New CI PRNG'', we prevent a given bit from being changed twice
between two outputs.
551 This new generator is designed by the following process.
553 First of all, some chaotic iterations have to be done to generate a sequence
554 $\left(x^n\right)_{n\in\mathds{N}} \in \left(\mathds{B}^{32}\right)^\mathds{N}$
555 of Boolean vectors, which are the successive states of the iterated system.
556 Some of these vectors will be randomly extracted and our pseudo-random bit
557 flow will be constituted by their components. Such chaotic iterations are
558 realized as follows. Initial state $x^0 \in \mathds{B}^{32}$ is a Boolean
559 vector taken as a seed and chaotic strategy $\left(S^n\right)_{n\in\mathds{N}}\in
560 \llbracket 1, 32 \rrbracket^\mathds{N}$ is
561 an \emph{irregular decimation} of $PRNG_2$ sequence, as described in
562 Algorithm~\ref{Chaotic iteration1}.
564 Then, at each iteration, only the $S^n$-th component of state $x^n$ is
565 updated, as follows: $x_i^n = x_i^{n-1}$ if $i \neq S^n$, else $x_i^n = \overline{x_i^{n-1}}$.
566 Such a procedure is equivalent to achieve chaotic iterations with
567 the Boolean vectorial negation $f_0$ and some well-chosen strategies.
568 Finally, some $x^n$ are selected
569 by a sequence $m^n$ as the pseudo-random bit sequence of our generator.
570 $(m^n)_{n \in \mathds{N}} \in \mathcal{M}^\mathds{N}$ is computed from $PRNG_1$, where $\mathcal{M}\subset \mathds{N}^*$ is a finite nonempty set of integers.
572 The basic design procedure of the New CI generator is summarized in Algorithm~\ref{Chaotic iteration1}.
The internal state is $x$, the output state is $r$. $a$ and $b$ are the values computed by the two input
PRNGs. Lastly, the value $g(a)$ is an integer defined as in Eq.~\ref{Formula}.
575 This function is required to make the outputs uniform in $\llbracket 0, 2^\mathsf{N}-1 \rrbracket$
576 (the reader is referred to~\cite{bg10:ip} for more information).
583 0 \text{ if }0 \leqslant{y^n}<{C^0_{32}},\\
584 1 \text{ if }{C^0_{32}} \leqslant{y^n}<\sum_{i=0}^1{C^i_{32}},\\
585 2 \text{ if }\sum_{i=0}^1{C^i_{32}} \leqslant{y^n}<\sum_{i=0}^2{C^i_{32}},\\
586 \vdots~~~~~ ~~\vdots~~~ ~~~~\\
N \text{ if }\sum_{i=0}^{N-1}{C^i_{32}}\leqslant{y^n}<2^{32}.\\
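
As an illustration of this decimation function (a sketch of ours, not the authors' reference implementation), $g$ can be computed in C as follows, assuming that $PRNG_1$ outputs a 32-bit value:

\begin{lstlisting}[language=C]
#include <stdint.h>

/* Map a 32-bit value a (from PRNG_1) to the integer m of Eq. (Formula):
   m follows the distribution of the Hamming weight of a uniform
   32-bit word, by comparing a with cumulative binomial coefficients.  */
unsigned int g(uint32_t a) {
  uint64_t threshold = 0;  /* cumulative sum C(32,0)+...+C(32,m)       */
  uint64_t binom = 1;      /* current coefficient C(32,m)              */
  unsigned int m;
  for (m = 0; m <= 32; m++) {
    threshold += binom;
    if ((uint64_t)a < threshold) return m;
    binom = binom * (32 - m) / (m + 1);   /* C(32,m+1) from C(32,m)    */
  }
  return 32;               /* unreachable: the full sum equals 2^32    */
}
\end{lstlisting}
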
593 \textbf{Input:} the internal state $x$ (32 bits)\\
594 \textbf{Output:} a state $r$ of 32 bits
\begin{algorithmic}[1]
\FOR{$i=0,\dots,\mathsf{N}$}
\STATE $d_i\leftarrow{0}$
\ENDFOR
\STATE $a\leftarrow{PRNG_1()}$
\STATE $m\leftarrow{g(a)}$
\STATE $k\leftarrow{m}$
\WHILE{$i=0,\dots,k$}
\STATE $b\leftarrow{PRNG_2()~mod~\mathsf{N}}$
\STATE $S\leftarrow{b}$
\IF{$d_S=0$}
\STATE $x_S\leftarrow{ \overline{x_S}}$
\STATE $d_S\leftarrow{1}$
\ELSIF{$d_S=1$}
\STATE $k\leftarrow{ k+1}$
\ENDIF
\ENDWHILE
\STATE $r\leftarrow{x}$
\RETURN $r$
\end{algorithmic}
622 \caption{An arbitrary round of the new CI generator}
623 \label{Chaotic iteration1}
628 \subsection{Improving the Speed of the Former Generator}
Instead of updating only one cell at each iteration, we now propose to choose a
subset of components and to update them together, for speed improvements. Such a proposition leads
to a kind of merger of the two sequences used in Algorithms
633 \ref{CI Algorithm} and \ref{Chaotic iteration1}. When the updating function is the vectorial negation,
634 this algorithm can be rewritten as follows:
639 x^0 \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket, S \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket^\mathds{N} \\
640 \forall n \in \mathds{N}^*, x^n = x^{n-1} \oplus S^n,
643 \label{equation Oplus0}
where $\oplus$ denotes the bitwise exclusive or between two integers.
This rewriting can be understood as follows. The $n-$th term $S^n$ of the
sequence $S$, which is an integer of $\mathsf{N}$ binary digits, gives
the list of cells to update in the state $x^n$ of the system (represented
649 as an integer having $\mathsf{N}$ bits too). More precisely, the $k-$th
650 component of this state (a binary digit) changes if and only if the $k-$th
651 digit in the binary decomposition of $S^n$ is 1.
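
For instance (a toy example of ours), with $\mathsf{N}=8$, $x^{n-1}=01101100$, and $S^n=00100001$, we obtain $x^n = 01101100 \oplus 00100001 = 01001101$: exactly the two cells indicated by the 1s of $S^n$ have been switched.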
653 The single basic component presented in Eq.~\ref{equation Oplus0} is of
654 ordinary use as a good elementary brick in various PRNGs. It corresponds
655 to the following discrete dynamical system in chaotic iterations:
658 \forall n\in \mathds{N}^{\ast }, \forall i\in
659 \llbracket1;\mathsf{N}\rrbracket ,x_i^n=\left\{
661 x_i^{n-1} & \text{ if } i \notin \mathcal{S}^n \\
\left(f(x^{n-1})\right)_{i} & \text{ if }i \in \mathcal{S}^n.
666 where $f$ is the vectorial negation and $\forall n \in \mathds{N}$,
667 $\mathcal{S}^n \subset \llbracket 1, \mathsf{N} \rrbracket$ is such that
668 $k \in \mathcal{S}^n$ if and only if the $k-$th digit in the binary
669 decomposition of $S^n$ is 1. Such chaotic iterations are more general
670 than the ones presented in Definition \ref{Def:chaotic iterations} because, instead of updating only one term at each iteration,
671 we select a subset of components to change.
674 Obviously, replacing the previous CI PRNG Algorithms by
675 Equation~\ref{equation Oplus0}, which is possible when the iteration function is
676 the vectorial negation, leads to a speed improvement. However, proofs
677 of chaos obtained in~\cite{bg10:ij} have been established
678 only for chaotic iterations of the form presented in Definition
\ref{Def:chaotic iterations}. The question is now to determine whether the
use of more general chaotic iterations to generate pseudorandom numbers
faster does not weaken their topological chaos properties.
683 \subsection{Proofs of Chaos of the General Formulation of the Chaotic Iterations}
685 Let us consider the discrete dynamical systems in chaotic iterations having
686 the general form: $\forall n\in \mathds{N}^{\ast }$, $ \forall i\in
687 \llbracket1;\mathsf{N}\rrbracket $,
692 x_i^{n-1} & \text{ if } i \notin \mathcal{S}^n \\
\left(f(x^{n-1})\right)_{i} & \text{ if }i \in \mathcal{S}^n.
In other words, at the $n^{th}$ iteration, only the cells whose index is
contained in the set $\mathcal{S}^{n}$ are iterated.
Let us now rewrite these general chaotic iterations as a usual discrete dynamical
system of the form $X^{n+1}=f(X^n)$ on an ad hoc metric space. Such a formulation
703 is required in order to study the topological behavior of the system.
705 Let us introduce the following function:
708 \chi: & \llbracket 1; \mathsf{N} \rrbracket \times \mathcal{P}\left(\llbracket 1; \mathsf{N} \rrbracket\right) & \longrightarrow & \mathds{B}\\
709 & (i,X) & \longmapsto & \left\{ \begin{array}{ll} 0 & \textrm{if }i \notin X, \\ 1 & \textrm{if }i \in X, \end{array}\right.
where $\mathcal{P}\left(X\right)$ denotes the powerset of the set $X$, that is, $Y \in \mathcal{P}\left(X\right) \Longleftrightarrow Y \subset X$.
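
In an implementation (a remark and sketch of ours), a subset $P$ of $\llbracket 1; \mathsf{N} \rrbracket$ with $\mathsf{N} \leqslant 32$ can simply be encoded as a 32-bit mask, and $\chi$ then reduces to a bit test:

\begin{lstlisting}[language=C]
#include <stdint.h>

/* chi(i,P): 1 if i belongs to P, 0 otherwise.
   P is encoded as a bitmask: bit i-1 of P is set iff i is in P. */
int chi(unsigned int i, uint32_t P) {
  return (int)((P >> (i - 1)) & 1u);
}
\end{lstlisting}
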
714 Given a function $f:\mathds{B}^\mathsf{N} \longrightarrow \mathds{B}^\mathsf{N} $, define the function:
715 $F_{f}: \mathcal{P}\left(\llbracket1;\mathsf{N}\rrbracket \right) \times \mathds{B}^{\mathsf{N}}
716 \longrightarrow \mathds{B}^{\mathsf{N}}$
(P,E) & \longmapsto & \left( E_{j}.\overline{\chi(j,P)}+f(E)_{j}.\chi (j,P)\right) _{j\in \llbracket1;\mathsf{N}\rrbracket}%
722 where + and . are the Boolean addition and product operations, and $\overline{x}$
723 is the negation of the Boolean $x$.
724 Consider the phase space:
726 \mathcal{X} = \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N} \times
727 \mathds{B}^\mathsf{N},
729 \noindent and the map defined on $\mathcal{X}$:
731 G_f\left(S,E\right) = \left(\sigma(S), F_f(i(S),E)\right), %\label{Gf} %%RAPH, j'ai viré ce label qui existe déjà avant...
733 \noindent where $\sigma$ is the \emph{shift} function defined by $\sigma
734 (S^{n})_{n\in \mathds{N}}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}\longrightarrow (S^{n+1})_{n\in
735 \mathds{N}}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}$ and $i$ is the \emph{initial function}
736 $i:(S^{n})_{n\in \mathds{N}} \in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}\longrightarrow S^{0}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)$.
737 Then the general chaotic iterations defined in Equation \ref{general CIs} can
738 be described by the following discrete dynamical system:
X^0 \in \mathcal{X} \\
X^{k+1}=G_{f}(X^k).
Once more, a shift function appears as a component of these general chaotic iterations.
To study Devaney's chaos property, a distance between two points
752 $X = (S,E), Y = (\check{S},\check{E})$ of $\mathcal{X}$ must be defined.
755 d(X,Y)=d_{e}(E,\check{E})+d_{s}(S,\check{S}),
758 \noindent where $ \displaystyle{d_{e}(E,\check{E})} = \displaystyle{\sum_{k=1}^{\mathsf{N}%
759 }\delta (E_{k},\check{E}_{k})}$ is once more the Hamming distance, and
760 $ \displaystyle{d_{s}(S,\check{S})} = \displaystyle{\dfrac{9}{\mathsf{N}}%
\sum_{k=1}^{\infty }\dfrac{|S^k\Delta \check{S}^k|}{10^{k}}}$,
where $|X|$ denotes the cardinality of a set $X$ and $A\Delta B$ is the symmetric difference, defined for sets $A$, $B$ as
774 $A\,\Delta\,B = (A \setminus B) \cup (B \setminus A)$.
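
In practice (a remark and sketch of ours), when the subsets $S^k$ are encoded as 32-bit masks as above, the cardinality $|S^k\Delta \check{S}^k|$ is simply the Hamming weight of the bitwise XOR of the two masks:

\begin{lstlisting}[language=C]
#include <stdint.h>

/* |A Delta B| for bitmask-encoded subsets: Hamming weight of the XOR.
   __builtin_popcount is a GCC/Clang builtin (a compiler assumption). */
unsigned int sym_diff_card(uint32_t a, uint32_t b) {
  return (unsigned int)__builtin_popcount(a ^ b);
}
\end{lstlisting}
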
778 The function $d$ defined in Eq.~\ref{nouveau d} is a metric on $\mathcal{X}$.
782 $d_e$ is the Hamming distance. We will prove that $d_s$ is a distance
783 too, thus $d$, as being the sum of two distances, will also be a distance.
785 \item Obviously, $d_s(S,\check{S})\geqslant 0$, and if $S=\check{S}$, then
786 $d_s(S,\check{S})=0$. Conversely, if $d_s(S,\check{S})=0$, then
$\forall k \in \mathds{N}, |S^k\Delta \check{S}^k|=0$, and so $\forall k, S^k=\check{S}^k$.
788 \item $d_s$ is symmetric
789 ($d_s(S,\check{S})=d_s(\check{S},S)$) due to the commutative property
790 of the symmetric difference.
791 \item Finally, $|S \Delta S''| = |(S \Delta \varnothing) \Delta S''|= |S \Delta (S'\Delta S') \Delta S''|= |(S \Delta S') \Delta (S' \Delta S'')|\leqslant |S \Delta S'| + |S' \Delta S''|$,
792 and so for all subsets $S,S',$ and $S''$ of $\llbracket 1, \mathsf{N} \rrbracket$,
we have $d_s(S,S'') \leqslant d_s(S,S')+d_s(S',S'')$, and the triangle
794 inequality is obtained.
799 Before being able to study the topological behavior of the general
800 chaotic iterations, we must first establish that:
803 For all $f:\mathds{B}^\mathsf{N} \longrightarrow \mathds{B}^\mathsf{N} $, the function $G_f$ is continuous on
804 $\left( \mathcal{X},d\right)$.
809 We use the sequential continuity.
810 Let $(S^n,E^n)_{n\in \mathds{N}}$ be a sequence of the phase space $%
811 \mathcal{X}$, which converges to $(S,E)$. We will prove that $\left(
812 G_{f}(S^n,E^n)\right) _{n\in \mathds{N}}$ converges to $\left(
813 G_{f}(S,E)\right) $. Let us remark that for all $n$, $S^n$ is a strategy,
thus we consider a sequence of strategies (\emph{i.e.}, a sequence of sequences of subsets of $\llbracket 1;\mathsf{N}\rrbracket$).
816 As $d((S^n,E^n);(S,E))$ converges to 0, each distance $d_{e}(E^n,E)$ and $d_{s}(S^n,S)$ converges
817 to 0. But $d_{e}(E^n,E)$ is an integer, so $\exists n_{0}\in \mathds{N},$ $%
818 d_{e}(E^n,E)=0$ for any $n\geqslant n_{0}$.\newline
819 In other words, there exists a threshold $n_{0}\in \mathds{N}$ after which no
820 cell will change its state:
821 $\exists n_{0}\in \mathds{N},n\geqslant n_{0}\Rightarrow E^n = E.$
823 In addition, $d_{s}(S^n,S)\longrightarrow 0,$ so $\exists n_{1}\in %
824 \mathds{N},d_{s}(S^n,S)<10^{-1}$ for all indexes greater than or equal to $%
825 n_{1}$. This means that for $n\geqslant n_{1}$, all the $S^n$ have the same
826 first term, which is $S^0$: $\forall n\geqslant n_{1},S_0^n=S_0.$
828 Thus, after the $max(n_{0},n_{1})^{th}$ term, states of $E^n$ and $E$ are
829 identical and strategies $S^n$ and $S$ start with the same first term.\newline
830 Consequently, states of $G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ are equal,
831 so, after the $max(n_0, n_1)^{th}$ term, the distance $d$ between these two points is strictly less than 1.\newline
832 \noindent We now prove that the distance between $\left(
833 G_{f}(S^n,E^n)\right) $ and $\left( G_{f}(S,E)\right) $ is convergent to
834 0. Let $\varepsilon >0$. \medskip
836 \item If $\varepsilon \geqslant 1$, we see that the distance
837 between $\left( G_{f}(S^n,E^n)\right) $ and $\left( G_{f}(S,E)\right) $ is
838 strictly less than 1 after the $max(n_{0},n_{1})^{th}$ term (same state).
840 \item If $\varepsilon <1$, then $\exists k\in \mathds{N},10^{-k}\geqslant
841 \varepsilon > 10^{-(k+1)}$. But $d_{s}(S^n,S)$ converges to 0, so
843 \exists n_{2}\in \mathds{N},\forall n\geqslant
844 n_{2},d_{s}(S^n,S)<10^{-(k+2)},
846 thus after $n_{2}$, the $k+2$ first terms of $S^n$ and $S$ are equal.
848 \noindent As a consequence, the $k+1$ first entries of the strategies of $%
849 G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ are the same ($G_{f}$ is a shift of strategies) and due to the definition of $d_{s}$, the floating part of
850 the distance between $(S^n,E^n)$ and $(S,E)$ is strictly less than $%
851 10^{-(k+1)}\leqslant \varepsilon $.
856 \forall \varepsilon >0,$ $\exists N_{0}=max(n_{0},n_{1},n_{2})\in \mathds{N}
857 ,$ $\forall n\geqslant N_{0},$
858 $ d\left( G_{f}(S^n,E^n);G_{f}(S,E)\right)
859 \leqslant \varepsilon .
861 $G_{f}$ is consequently continuous.
865 It is now possible to study the topological behavior of the general chaotic
866 iterations. We will prove that,
869 \label{t:chaos des general}
The general chaotic iterations defined in Equation~\ref{general CIs} satisfy
Devaney's property of chaos.
874 Let us firstly prove the following lemma.
876 \begin{lemma}[Strong transitivity]
For all pairs of points $X,Y \in \mathcal{X}$ and any neighborhood $V$ of $X$, we can
find $n \in \mathds{N}^*$ and $X' \in V$ such that $G_f^n(X')=Y$.
Let $X=(S,E)$, $\varepsilon>0$, and $k_0 = \lfloor -\log_{10}(\varepsilon) \rfloor + 1$.
Any point $X'=(S',E')$ such that $E'=E$ and $\forall k \leqslant k_0, S'^k=S^k$,
is in the open ball $\mathcal{B}\left(X,\varepsilon\right)$. Let us define
886 $\check{X} = \left(\check{S},\check{E}\right)$, where $\check{X}= G^{k_0}(X)$.
887 We denote by $s\subset \llbracket 1; \mathsf{N} \rrbracket$ the set of coordinates
888 that are different between $\check{E}$ and the state of $Y$. Thus each point $X'$ of
889 the form $(S',E')$ where $E'=E$ and $S'$ starts with
890 $(S^0, S^1, \hdots, S^{k_0},s,\hdots)$, verifies the following properties:
892 \item $X'$ is in $\mathcal{B}\left(X,\varepsilon\right)$,
893 \item the state of $G_f^{k_0+1}(X')$ is the state of $Y$.
895 Finally the point $\left(\left(S^0, S^1, \hdots, S^{k_0},s,s^0, s^1, \hdots\right); E\right)$,
896 where $(s^0,s^1, \hdots)$ is the strategy of $Y$, satisfies the properties
897 claimed in the lemma.
900 We can now prove the Theorem~\ref{t:chaos des general}.
902 \begin{proof}[Theorem~\ref{t:chaos des general}]
903 Firstly, strong transitivity implies transitivity.
905 Let $(S,E) \in\mathcal{X}$ and $\varepsilon >0$. To
906 prove that $G_f$ is regular, it is sufficient to prove that
907 there exists a strategy $\tilde S$ such that the distance between
908 $(\tilde S,E)$ and $(S,E)$ is less than $\varepsilon$, and such that
909 $(\tilde S,E)$ is a periodic point.
911 Let $t_1=\lfloor-\log_{10}(\varepsilon)\rfloor$, and let $E'$ be the
912 configuration that we obtain from $(S,E)$ after $t_1$ iterations of
913 $G_f$. As $G_f$ is strongly transitive, there exists a strategy $S'$
914 and $t_2\in\mathds{N}$ such
915 that $E$ is reached from $(S',E')$ after $t_2$ iterations of $G_f$.
917 Consider the strategy $\tilde S$ that alternates the first $t_1$ terms
918 of $S$ and the first $t_2$ terms of $S'$:
$$\tilde S=(S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,$$$$\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots).$$ It
922 is clear that $(\tilde S,E)$ is obtained from $(\tilde S,E)$ after
923 $t_1+t_2$ iterations of $G_f$. So $(\tilde S,E)$ is a periodic
924 point. Since $\tilde S_t=S_t$ for $t<t_1$, by the choice of $t_1$, we
925 have $d((S,E),(\tilde S,E))<\epsilon$.
930 \section{Statistical Improvements Using Chaotic Iterations}
932 \subsection{About some Well-known PRNGs}
933 \label{The generation of pseudo-random sequence}
Let us now give an illustration of the fact that chaos appears to improve statistical properties.
940 \subsection{Details of some Existing Generators}
Here are the modules of PRNGs we have chosen to experiment with.
This PRNG implements either the simple or the combined linear congruential generators (LCGs). The simple LCG is defined by the recurrence:
947 x^n = (ax^{n-1} + c)~mod~m
where $a$, $c$, and $x^0$ must be, among other things, non-negative and less than $m$~\cite{testU01}. In what follows, 2LCGs and 3LCGs refer to combinations of two (resp. three) such LCGs.
951 For further details, see~\cite{combined_lcg}.
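
As an illustration (a sketch of ours; the constants below are classical placeholders, not the parameters used in TestU01), one step of such a simple LCG can be written in C as:

\begin{lstlisting}[language=C]
#include <stdint.h>

/* One step of a simple LCG: x^n = (a*x^{n-1} + c) mod m.
   The parameters a, c, m below are illustrative placeholders. */
static uint64_t lcg_state = 12345;            /* x^0 */

uint32_t lcg_next(void) {
  const uint64_t a = 1103515245u, c = 12345u, m = (uint64_t)1 << 31;
  lcg_state = (a * lcg_state + c) % m;
  return (uint32_t)lcg_state;
}
\end{lstlisting}
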
954 This module implements multiple recursive generators (MRGs), based on a linear recurrence of order $k$, modulo $m$~\cite{testU01}:
956 x^n = (a^1x^{n-1}+~...~+a^kx^{n-k})~mod~m
A combination of two MRGs (referred to as 2MRGs) is also used in this paper.
961 \subsubsection{UCARRY}
962 Generators based on linear recurrences with carry are implemented in this module. This includes the add-with-carry (AWC) generator, based on the recurrence:
966 x^n = (x^{n-r} + x^{n-s} + c^{n-1})~mod~m, \\
967 c^n= (x^{n-r} + x^{n-s} + c^{n-1}) / m, \end{array}\end{equation}
the subtract-with-borrow (SWB) generator, defined by the recurrence:
972 x^n = (x^{n-r} - x^{n-s} - c^{n-1})~mod~m, \\
1 ~~~~~\text{if}~ (x^{n-r} - x^{n-s} - c^{n-1})<0\\
976 0 ~~~~~\text{else},\end{array} \right. \end{array}\end{equation}
977 and the SWC generator designed by R. Couture, which is based on the following recurrence:
981 x^n = (a^1x^{n-1} \oplus ~...~ \oplus a^rx^{n-r} \oplus c^{n-1}) ~ mod ~ 2^w, \\
982 c^n = (a^1x^{n-1} \oplus ~...~ \oplus a^rx^{n-r} \oplus c^{n-1}) ~ / ~ 2^w. \end{array}\end{equation}
985 This module implements the generalized feedback shift register (GFSR) generator, that is:
987 x^n = x^{n-r} \oplus x^{n-k}
993 Finally, this module implements the nonlinear inversive generator, as defined in~\cite{testU01}, which is:
1000 (a^1 + a^2 / z^{n-1})~mod~m & \text{if}~ z^{n-1} \neq 0 \\
1001 a^1 & \text{if}~ z^{n-1} = 0 .\end{array} \right. \end{array}\end{equation}
1007 \subsection{Statistical tests}
1008 \label{Security analysis}
1010 %A theoretical proof for the randomness of a generator is impossible to give, therefore statistical inference based on observed sample sequences produced by the generator seems to be the best option.
Considering the properties of binary random sequences, various statistical tests can be designed to evaluate the assertion that the sequence is generated by a perfectly random source. We have performed some statistical tests on the CIPRNGs proposed here. These tests include the NIST suite~\cite{ANDREW2008} and the DieHARD battery of tests~\cite{DieHARD}. For completeness and for reference, we give in the following subsections a brief description of each of the aforementioned tests.
1015 \subsubsection{NIST statistical tests suite}
Among the numerous standard tests for pseudorandomness, a convincing way to show the randomness of the produced sequences is to confront them to the NIST (National Institute of Standards and Technology) statistical tests, an up-to-date test suite proposed by the Information Technology Laboratory (ITL). A new version of the statistical test suite was released on August 11, 2010.
1019 The NIST tests suite SP 800-22 is a statistical package consisting of 15 tests. They were developed to test the randomness of binary sequences produced by hardware or software based cryptographic pseudorandom number generators. These tests focus on a variety of different types of non-randomness that could exist in a sequence.
1021 For each statistical test, a set of $P-values$ (corresponding to the set of sequences) is produced.
1022 The interpretation of empirical results can be conducted in various ways.
In this paper, the examination of the distribution of P-values to check for uniformity ($P-value_{T}$) is used.
1025 If $P-value_{T} \geqslant 0.0001$, then the sequences can be considered to be uniformly distributed.
In our experiments, 100 sequences (s = 100), each 1,000,000 bits long, are generated and tested. If the $P-value_{T}$ of any test is smaller than 0.0001, the sequences are considered to be not good enough and the generating algorithm is not suitable for usage.
1033 \subsubsection{DieHARD battery of tests}
1034 The DieHARD battery of tests has been the most sophisticated standard for over a decade. Because of the stringent requirements in the DieHARD tests suite, a generator passing this battery of
1035 tests can be considered good as a rule of thumb.
1037 The DieHARD battery of tests consists of 18 different independent statistical tests. This collection
1038 of tests is based on assessing the randomness of bits comprising 32-bit integers obtained from
1039 a random number generator. Each test requires $2^{23}$ 32-bit integers in order to run the full set
1040 of tests. Most of the tests in DieHARD return a $P-value$, which should be uniform on $[0,1)$ if the input file
1041 contains truly independent random bits. These $P-values$ are obtained by
1042 $P=F(X)$, where $F$ is the assumed distribution of the sample random variable $X$ (often normal).
1043 But that assumed $F$ is just an asymptotic approximation, for which the fit will be worst
1044 in the tails. Thus occasional $P-values$ near 0 or 1, such as 0.0012 or 0.9983, can occur.
1045 An individual test is considered to be failed if the $P-value$ approaches 1 closely, for example $P>0.9999$.
1048 \subsection{Results and discussion}
1049 \label{Results and discussion}
1051 \renewcommand{\arraystretch}{1.3}
1052 \caption{NIST and DieHARD tests suite passing rates for PRNGs without CI}
1053 \label{NIST and DieHARD tests suite passing rate the for PRNGs without CI}
1055 \begin{tabular}{|l||c|c|c|c|c|c|c|c|c|c|}
1057 Types of PRNGs & \multicolumn{2}{c|}{Linear PRNGs} & \multicolumn{4}{c|}{Lagged PRNGs} & \multicolumn{1}{c|}{ICG PRNGs} & \multicolumn{3}{c|}{Mixed PRNGs}\\ \hline
1058 \backslashbox{\textbf{$Tests$}} {\textbf{$PRNG$}} & LCG& MRG& AWC & SWB & SWC & GFSR & INV & LCG2& LCG3& MRG2 \\ \hline
1059 NIST & 11/15 & 14/15 &\textbf{15/15} & \textbf{15/15} & 14/15 & 14/15 & 14/15 & 14/15& 14/15& 14/15 \\ \hline
1060 DieHARD & 16/18 & 16/18 & 15/18 & 16/18 & \textbf{18/18} & 16/18 & 16/18 & 16/18& 16/18& 16/18\\ \hline
1064 Table~\ref{NIST and DieHARD tests suite passing rate the for PRNGs without CI} shows the results on the batteries recalled above, indicating that almost all the PRNGs cannot pass all their tests. In other words, the statistical quality of these PRNGs cannot fulfill the up-to-date standards presented previously. We will show that the CIPRNG can solve this issue.
To illustrate the effects of this CIPRNG in detail, experiments will be divided into three parts:
1068 \item \textbf{Single CIPRNG}: The PRNGs involved in CI computing are of the same category.
1069 \item \textbf{Mixed CIPRNG}: Two different types of PRNGs are mixed during the chaotic iterations process.
1070 \item \textbf{Multiple CIPRNG}: The generator is obtained by repeating the composition of the iteration function as follows: $x^0\in \mathds{B}^{\mathsf{N}}$, and $\forall n\in \mathds{N}^{\ast },\forall i\in \llbracket1;\mathsf{N}\rrbracket,$
1075 x_i^{n-1}~~~~~\text{if}~S^n\neq i \\
1076 \forall j\in \llbracket1;\mathsf{m}\rrbracket,f^m(x^{n-1})_{S^{nm+j}}~\text{if}~S^{nm+j}=i.\end{array} \right. \end{array}
1078 $m$ is called the \emph{functional power}.
1082 We have performed statistical analysis of each of the aforementioned CIPRNGs.
1083 The results are reproduced in Tables~\ref{NIST and DieHARD tests suite passing rate the for PRNGs without CI} and \ref{NIST and DieHARD tests suite passing rate the for single CIPRNGs}.
1084 The scores written in boldface indicate that all the tests have been passed successfully, whereas an asterisk ``*'' means that the considered passing rate has been improved.
1086 \subsubsection{Tests based on the Single CIPRNG}
1089 \renewcommand{\arraystretch}{1.3}
1090 \caption{NIST and DieHARD tests suite passing rates for PRNGs with CI}
1091 \label{NIST and DieHARD tests suite passing rate the for single CIPRNGs}
1093 \begin{tabular}{|l||c|c|c|c|c|c|c|c|c|c|c|c|}
1095 Types of PRNGs & \multicolumn{2}{c|}{Linear PRNGs} & \multicolumn{4}{c|}{Lagged PRNGs} & \multicolumn{1}{c|}{ICG PRNGs} & \multicolumn{3}{c|}{Mixed PRNGs}\\ \hline
1096 \backslashbox{\textbf{$Tests$}} {\textbf{$Single~CIPRNG$}} & LCG & MRG & AWC & SWB & SWC & GFSR & INV& LCG2 & LCG3& MRG2 \\ \hline\hline
1097 Old CIPRNG\\ \hline \hline
1098 NIST & \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} & \textbf{15/15} & \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} *& \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} \\ \hline
1099 DieHARD & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} & \textbf{18/18} * & \textbf{18/18} *& \textbf{18/18} * & \textbf{18/18} *& \textbf{18/18} * \\ \hline
1100 New CIPRNG\\ \hline \hline
1101 NIST & \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} & \textbf{15/15} & \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} *& \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} \\ \hline
1102 DieHARD & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} *& \textbf{18/18} *\\ \hline
1103 Xor CIPRNG\\ \hline\hline
1104 NIST & 14/15*& \textbf{15/15} * & \textbf{15/15} & \textbf{15/15} & 14/15 & \textbf{15/15} * & 14/15& \textbf{15/15} * & \textbf{15/15} *& \textbf{15/15} \\ \hline
1105 DieHARD & 16/18 & 16/18 & 17/18* & \textbf{18/18} * & \textbf{18/18} & \textbf{18/18} * & 16/18 & 16/18 & 16/18& 16/18\\ \hline
The statistical test results of the PRNGs using the single CIPRNG method are given in Table~\ref{NIST and DieHARD tests suite passing rate the for single CIPRNGs}.
1110 We can observe that, except for the Xor CIPRNG, all of the CIPRNGs have passed the 15 tests of the NIST battery and the 18 tests of the DieHARD one.
Moreover, considering these scores, we can deduce that both the single Old CIPRNG and the single New CIPRNG are more stable than the single Xor CIPRNG approach, when applying them to different PRNGs.
1112 However, the Xor CIPRNG is obviously the fastest approach to generate a CI random sequence, and it still improves the statistical properties relative to each generator taken alone, although the test values are not as good as desired.
1114 Therefore, all of these three ways are interesting, for different reasons, in the production of pseudorandom numbers and,
1115 on the whole, the single CIPRNG method can be considered to adapt to or improve all kinds of PRNGs.
To obtain a realization of the Xor CIPRNG that can pass all the tests embedded in the NIST battery, the Xor CIPRNG with multiple functional powers is investigated in Section~\ref{Tests based on Multiple CIPRNG}.
1120 \subsubsection{Tests based on the Mixed CIPRNG}
To compare the previous approach with the CIPRNG design that uses a Mixed CIPRNG, we have taken into account the same inputted generators as in the previous section.
These inputted couples $(PRNG_1,PRNG_2)$ of PRNGs are used in the Mixed approach as follows:
1127 x^0 \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket, S \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket^\mathds{N} \\
1128 \forall n \in \mathds{N}^*, x^n = x^{n-1} \oplus PRNG_1\oplus PRNG_2,
1131 \label{equation Oplus}
1134 With this Mixed CIPRNG approach, both the Old CIPRNG and New CIPRNG continue to pass all the NIST and DieHARD suites.
1135 In addition, we can see that the PRNGs using a Xor CIPRNG approach can pass more tests than previously.
The main reason for this success is that the Mixed Xor CIPRNG has a longer period.
1137 Indeed, let $n_{P}$ be the period of a PRNG $P$, then the period deduced from the single Xor CIPRNG approach is obviously equal to:
1142 n_{P}&\text{if~}x^0=x^{n_{P}}\\
1143 2n_{P}&\text{if~}x^0\neq x^{n_{P}}.\\
Let us now denote by $n_{P1}$ and $n_{P2}$ the periods of the $PRNG_1$ and $PRNG_2$ generators, respectively; then the period of the Mixed Xor CIPRNG will be:
1154 LCM(n_{P1},n_{P2})&\text{if~}x^0=x^{LCM(n_{P1},n_{P2})}\\
1155 2LCM(n_{P1},n_{P2})&\text{if~}x^0\neq x^{LCM(n_{P1},n_{P2})}.\\
In Table~\ref{DieHARD fail mixex CIPRNG}, we only show the results for the Mixed CIPRNGs that cannot pass the whole DieHARD battery (the NIST tests are all passed). It demonstrates that the Mixed Xor CIPRNGs involving LCG, MRG, LCG2, LCG3, MRG2, or INV cannot pass the two following tests, namely the ``Matrix Rank 32x32'' and the ``COUNT-THE-1's'' tests contained in the DieHARD battery. Let us recall their definitions:
\item \textbf{Matrix Rank 32x32.} A random 32x32 binary matrix is formed, each row being a 32-bit random vector. Its rank is an integer that ranges from 0 to 32. Ranks less than 29 must be rare, and their occurrences must be pooled with those of rank 29. To achieve the test, the ranks of 40,000 such random matrices are obtained, and a chi-square test is performed on the counts for ranks 32, 31, 30, and $\leq29$.
\item \textbf{COUNT-THE-1's TEST.} Consider the file under test as a stream of bytes (four per 32-bit integer). Each byte can contain from 0 to 8 1's, with probabilities 1, 8, 28, 56, 70, 56, 28, 8, 1 over 256. Now let the stream of bytes provide a string of overlapping 5-letter words, each ``letter'' taking values A, B, C, D, E. The letters are determined by the number of 1's in a byte: 0, 1, or 2 yield A, 3 yields B, 4 yields C, 5 yields D, and 6, 7, or 8 yield E. Thus we have a monkey at a typewriter hitting five keys with various probabilities (37, 56, 70, 56, 37 over 256). There are $5^5$ possible 5-letter words, and from a string of 256,000 (overlapping) 5-letter words, counts are made on the frequencies of each word. The quadratic form in the weak inverse of the covariance matrix of the cell counts provides a chi-square test: Q5-Q4, the difference of the naive Pearson sums of $(OBS-EXP)^2/EXP$ on counts for 5- and 4-letter cell counts.
The reason for these failures is that the outputs of LCG, LCG2, LCG3, MRG, and MRG2 in these experiments are on 31 bits only. Compared with the Single CIPRNG approach, using different PRNGs to build a CIPRNG seems more efficient in improving the quality of the produced random numbers (the mixed Xor CI can fully pass the NIST battery, whereas the single one cannot).
1172 \renewcommand{\arraystretch}{1.3}
1173 \caption{Scores of mixed Xor CIPRNGs when considering the DieHARD battery}
1174 \label{DieHARD fail mixex CIPRNG}
1176 \begin{tabular}{|l||c|c|c|c|c|c|}
\backslashbox{\textbf{$PRNG_1$}} {\textbf{$PRNG_2$}} & LCG & MRG & INV & LCG2 & LCG3 & MRG2 \\ \hline\hline
1179 LCG &\backslashbox{} {} &16/18&16/18 &16/18 &16/18 &16/18\\ \hline
1180 MRG &16/18 &\backslashbox{} {} &16/18&16/18 &16/18 &16/18\\ \hline
1181 INV &16/18 &16/18&\backslashbox{} {} &16/18 &16/18&16/18 \\ \hline
1182 LCG2 &16/18 &16/18 &16/18 &\backslashbox{} {} &16/18&16/18\\ \hline
1183 LCG3 &16/18 &16/18 &16/18&16/18&\backslashbox{} {} &16/18\\ \hline
1184 MRG2 &16/18 &16/18 &16/18&16/18 &16/18 &\backslashbox{} {} \\ \hline
1188 \subsubsection{Tests based on the Multiple CIPRNG}
1189 \label{Tests based on Multiple CIPRNG}
Until now, the combination of at most two input PRNGs has been investigated.
We now consider the possibility of using a larger number of generators to improve the statistics of the generated pseudorandom numbers, leading to the multiple functional power approach.
For the CIPRNGs that have already passed both the NIST and DieHARD suites with 2 inputted PRNGs (all the Old and New CIPRNGs, and some of the Xor CIPRNGs), it is not meaningful to consider their adaptation to this multiple CIPRNG method; hence only the Multiple Xor CIPRNGs, having the following form, will be investigated.
\begin{equation}
\label{equation Oplus}
\left\{
\begin{array}{l}
x^0 \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket, S \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket^\mathds{N} \\
\forall n \in \mathds{N}^*, x^n = x^{n-1} \oplus S^{nm}\oplus S^{nm+1}\oplus \ldots \oplus S^{nm+m-1}.
\end{array}
\right.
\end{equation}
The question is now to determine the value of the threshold $m$ (the functional power) making the Multiple CIPRNG able to pass the whole NIST battery.
This question is answered in Table~\ref{threshold}.
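As an illustration, one iteration of the Multiple Xor CIPRNG defined above can be sketched in C as follows; \texttt{next\_prng()} is a hypothetical stand-in for the inputted generator (LCG, MRG, xorshift, \ldots) and is not part of the original implementation.

\begin{lstlisting}[language=C]
#include <stdint.h>

/* Hypothetical stand-in for the inputted PRNG (LCG, MRG, xorshift, ...). */
extern uint32_t next_prng(void);

/* One iteration of the Multiple Xor CIPRNG sketched above: the new state
   is the previous state xored with m successive outputs of the input PRNG. */
uint32_t multiple_xor_ciprng_step(uint32_t x, unsigned int m) {
  for (unsigned int i = 0; i < m; i++)
    x ^= next_prng();   /* x^n = x^{n-1} xor S^{nm} xor ... xor S^{nm+m-1} */
  return x;
}
\end{lstlisting}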
\begin{table}
\renewcommand{\arraystretch}{1.3}
\caption{Functional power $m$ making it possible to pass the whole NIST battery}
\label{threshold}
\centering
\begin{tabular}{|l||c|c|c|c|c|c|c|c|}
\hline
Inputted $PRNG$ & LCG & MRG & SWC & GFSR & INV& LCG2 & LCG3 & MRG2 \\ \hline\hline
Threshold value $m$& 19 & 7 & 2& 1 & 11& 9& 3& 4\\ \hline\hline
\end{tabular}
\end{table}
1220 \subsubsection{Results Summary}
We can summarize the obtained results as follows.
\begin{itemize}
\item The CIPRNG method is able to improve the statistical properties of a large variety of PRNGs.
\item Using different PRNGs in the CIPRNG approach is better than considering several instances of one unique PRNG.
\item The statistical quality of the outputs increases with the functional power $m$.
\end{itemize}
1231 \section{Efficient PRNG based on Chaotic Iterations}
1232 \label{sec:efficient PRNG}
1234 Based on the proof presented in the previous section, it is now possible to
1235 improve the speed of the generator formerly presented in~\cite{bgw09:ip,guyeux10}.
1236 The first idea is to consider
that the provided strategy is a pseudorandom Boolean vector obtained by a given PRNG.
1239 An iteration of the system is simply the bitwise exclusive or between
1240 the last computed state and the current strategy.
As the topological properties of disorder exhibited by chaotic
iterations can be inherited by the inputted generator, we hope by doing so to
obtain some statistical improvements while preserving speed.
\lstset{language=C,caption={C code of the sequential PRNG based on chaotic iterations},label=algo:seqCIPRNG}
\begin{lstlisting}
unsigned int CIPRNG() {
  static unsigned int x = 123123123;   /* internal state of the generator */
  unsigned long t1 = xorshift();       /* three 64-bit xor-like PRNGs */
  unsigned long t2 = xor128();
  unsigned long t3 = xorwow();
  x = x^(unsigned int)t1;              /* 32 LSBs of t1 */
  x = x^(unsigned int)(t2>>32);        /* 32 MSBs of t2 */
  x = x^(unsigned int)(t3>>32);        /* 32 MSBs of t3 */
  x = x^(unsigned int)t2;              /* 32 LSBs of t2 */
  x = x^(unsigned int)(t1>>32);        /* 32 MSBs of t1 */
  x = x^(unsigned int)t3;              /* 32 LSBs of t3 */
  return x;
}
\end{lstlisting}
In Listing~\ref{algo:seqCIPRNG} a sequential version of the proposed PRNG based
on chaotic iterations is presented. The xor operator is represented by
\textasciicircum. This function uses three classical 64-bit PRNGs, namely the
\texttt{xorshift}, the \texttt{xor128}, and the
\texttt{xorwow}~\cite{Marsaglia2003}. In the following, we call them ``xor-like
PRNGs''. As each xor-like PRNG uses 64 bits whereas our proposed generator
works with 32 bits, we use the cast \texttt{(unsigned int)}, which selects the
32 least significant bits of a given integer, and the code \texttt{(unsigned
int)(t$>>$32)} in order to obtain the 32 most significant bits of \texttt{t}.
Thus, producing a pseudorandom number needs 6 xor operations on 6 32-bit numbers
that are provided by 3 64-bit PRNGs. This version successfully passes the
stringent BigCrush battery of tests~\cite{LEcuyerS07}.
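For completeness, a minimal 64-bit xorshift in the spirit of~\cite{Marsaglia2003} is sketched below; the shift triple (13, 7, 17) and the seed come from Marsaglia's paper, while the actual \texttt{xorshift}, \texttt{xor128}, and \texttt{xorwow} routines used in Listing~\ref{algo:seqCIPRNG} may be parameterized differently.

\begin{lstlisting}[language=C]
#include <stdint.h>

/* One of Marsaglia's 64-bit xorshift generators (shift triple 13, 7, 17),
   given only as an illustration of the xor-like family. */
static uint64_t xs64_state = 88172645463325252ULL;

uint64_t xorshift64(void) {
  xs64_state ^= xs64_state << 13;
  xs64_state ^= xs64_state >> 7;
  xs64_state ^= xs64_state << 17;
  return xs64_state;
}
\end{lstlisting}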
1312 \section{Efficient PRNGs based on Chaotic Iterations on GPU}
1313 \label{sec:efficient PRNG gpu}
In order to benefit from the computing power of GPUs, a program
needs to have independent blocks of threads that can be computed
simultaneously. In general, the larger the number of threads is, the
more local memory is used and the fewer branching instructions
(if, while, ...) are used, the better the performance on GPU is.
Obviously, having these requirements in mind, it is possible to build
a program similar to the one presented in Listing~\ref{algo:seqCIPRNG},
which computes pseudorandom numbers on GPU. To
do so, we must firstly recall that in the CUDA~\cite{Nvid10}
environment, threads have a local identifier called
\texttt{ThreadIdx}, which is relative to the block containing
them. Furthermore, in CUDA, the parts of the code that are executed by the GPU are
called {\it kernels}.
1330 \subsection{Naive Version for GPU}
It is possible to deduce from the CPU version a quite similar version adapted to GPU.
The simple principle consists in making each thread of the GPU compute the CPU version of our PRNG.
Of course, the three xor-like
PRNGs used in these computations must have different parameters.
In a given thread, these parameters are
randomly picked from another PRNG.
The initialization stage is performed by the CPU.
To do so, the ISAAC PRNG~\cite{Jenkins96} is used to set all the
parameters embedded into each thread.
The implementation of the three
xor-like PRNGs is straightforward once their parameters have been
allocated in the GPU memory. Each xor-like PRNG works with an internal
state $x$ that stores the last generated pseudorandom number. Additionally, the
implementations of the xor128, the xorshift, and the xorwow require
4, 5, and 6 unsigned longs as internal variables, respectively.
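For reference, Marsaglia's original 32-bit \texttt{xor128}~\cite{Marsaglia2003} is recalled below; the naive GPU version described here embeds 64-bit variants of these generators, whereas the improved version presented later relies on this 32-bit form.

\begin{lstlisting}[language=C]
#include <stdint.h>

/* Marsaglia's xor128: four 32-bit words of internal state. */
static uint32_t xs_x = 123456789, xs_y = 362436069,
                xs_z = 521288629, xs_w = 88675123;

uint32_t xor128(void) {
  uint32_t t = xs_x ^ (xs_x << 11);
  xs_x = xs_y; xs_y = xs_z; xs_z = xs_w;
  xs_w = (xs_w ^ (xs_w >> 19)) ^ (t ^ (t >> 8));
  return xs_w;
}
\end{lstlisting}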
\begin{algorithm}
\KwIn{InternalVarXorLikeArray: array with the internal variables of the 3 xor-like
PRNGs in global memory\;
NumThreads: number of threads\;}
\KwOut{NewNb: array containing random numbers in global memory}
\If{threadIdx is concerned by the computation} {
  retrieve data from InternalVarXorLikeArray[threadIdx] in local variables\;
  \For{i=1 to n} {
    compute a new pseudorandom number as in Listing~\ref{algo:seqCIPRNG}\;
    store this new number in NewNb[NumThreads*threadIdx+i]\;
  }
  store internal variables in InternalVarXorLikeArray[threadIdx]\;
}
\caption{Main kernel of the GPU ``naive'' version of the PRNG based on chaotic iterations}
\label{algo:gpu_kernel}
\end{algorithm}
Algorithm~\ref{algo:gpu_kernel} presents a naive implementation of the proposed PRNG on
GPU. Due to the available memory in the GPU and the number of threads
used simultaneously, the number of random numbers that a thread can generate
inside a kernel is limited (\emph{i.e.}, the variable \texttt{n} in
Algorithm~\ref{algo:gpu_kernel}). For instance, if $100,000$ threads are used and
if $n=100$\footnote{In fact, we also need to add the initial seed (a 32-bit number).},
then the memory required to store all of the internal variables of both the xor-like
PRNGs\footnote{We multiply this number by $2$ in order to count 32-bit numbers.}
and the pseudorandom numbers generated by our PRNG is equal to $100,000\times ((4+5+6)\times
2+(1+100))=13,100,000$ 32-bit numbers, that is, approximately $52$~MB.
1383 This generator is able to pass the whole BigCrush battery of tests, for all
1384 the versions that have been tested depending on their number of threads
1385 (called \texttt{NumThreads} in our algorithm, tested up to $5$ million).
The proposed algorithm has the advantage of manipulating independent
PRNGs, so this version is also easily adaptable to a cluster of computers. The only thing
to ensure is to use a single ISAAC PRNG. To achieve this requirement, a simple solution consists in
using a master node for the initialization. This master node computes the initial parameters
for all the different nodes involved in the computation.
1395 \subsection{Improved Version for GPU}
As GPU cards using CUDA have shared memory between the threads of the same block, it
is possible to use this feature in order to simplify the previous algorithm,
i.e., to use fewer than 3 xor-like PRNGs. The solution consists in computing only
one xor-like PRNG per thread, saving it into the shared memory, and then using the results
of some other threads in the same block of threads. In order to define which
1402 thread uses the result of which other one, we can use a combination array that
1403 contains the indexes of all threads and for which a combination has been
1406 In Algorithm~\ref{algo:gpu_kernel2}, two combination arrays are used. The
1407 variable \texttt{offset} is computed using the value of
1408 \texttt{combination\_size}. Then we can compute \texttt{o1} and \texttt{o2}
1409 representing the indexes of the other threads whose results are used by the
current one. In this algorithm, we consider that a 32-bit xor-like PRNG has
been chosen. In practice, we use the xor128 proposed in~\cite{Marsaglia2003} in
which unsigned longs (64 bits) have been replaced by unsigned integers (32 bits).
1415 This version can also pass the whole {\it BigCrush} battery of tests.
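To make the data flow explicit, the following sequential C sketch mimics the per-thread logic of Algorithm~\ref{algo:gpu_kernel2} for one block of threads; the shared memory is modeled as a plain array, synchronization issues and the actual CUDA kernel structure are deliberately ignored, and \texttt{xor\_like()} is a placeholder for the 32-bit xor128 mentioned above.

\begin{lstlisting}[language=C]
#include <stdint.h>

#define BLOCK_SIZE 512
#define COMB_SIZE  8

extern uint32_t xor_like(int tid);  /* placeholder: per-thread xor-like PRNG */

/* One round of the improved scheme, executed sequentially: each "thread"
   tid xors a fresh xor-like value with the values stored by two other
   threads of its combination group, then updates its own state x[tid]. */
void improved_round(uint32_t x[BLOCK_SIZE], uint32_t shmem[BLOCK_SIZE],
                    const int comb1[COMB_SIZE], const int comb2[COMB_SIZE]) {
  for (int tid = 0; tid < BLOCK_SIZE; tid++) {
    int offset = tid % COMB_SIZE;
    int o1 = tid - offset + comb1[offset];
    int o2 = tid - offset + comb2[offset];
    uint32_t t = xor_like(tid) ^ shmem[o1] ^ shmem[o2];
    shmem[tid] = t;   /* models the value made visible to the block */
    x[tid] ^= t;      /* x = x xor t: one chaotic iteration */
  }
}
\end{lstlisting}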
\begin{algorithm}
\KwIn{InternalVarXorLikeArray: array with the internal variables of 1 xor-like PRNG
in global memory\;
NumThreads: number of threads\;
array\_comb1, array\_comb2: arrays containing combinations of size combination\_size\;}
\KwOut{NewNb: array containing random numbers in global memory}
\If{threadId is concerned} {
  retrieve data from InternalVarXorLikeArray[threadId] in local variables including shared memory and x\;
  offset = threadId\%combination\_size\;
  o1 = threadId-offset+array\_comb1[offset]\;
  o2 = threadId-offset+array\_comb2[offset]\;
  \For{i=1 to n} {
    t=xor-like()\;
    t=t\textasciicircum shmem[o1]\textasciicircum shmem[o2]\;
    shmem[threadId]=t\;
    x = x\textasciicircum t\;
    store the new number in NewNb[NumThreads*threadId+i]\;
  }
  store internal variables in InternalVarXorLikeArray[threadId]\;
}
\caption{Main kernel for the efficient GPU version of the chaotic iterations based PRNG}
\label{algo:gpu_kernel2}
\end{algorithm}
1446 \subsection{Theoretical Evaluation of the Improved Version}
1448 A run of Algorithm~\ref{algo:gpu_kernel2} consists in an operation ($x=x\oplus t$) having
1449 the form of Equation~\ref{equation Oplus}, which is equivalent to the iterative
1450 system of Eq.~\ref{eq:generalIC}. That is, an iteration of the general chaotic
1451 iterations is realized between the last stored value $x$ of the thread and a strategy $t$
1452 (obtained by a bitwise exclusive or between a value provided by a xor-like() call
1453 and two values previously obtained by two other threads).
1454 To be certain that we are in the framework of Theorem~\ref{t:chaos des general},
1455 we must guarantee that this dynamical system iterates on the space
1456 $\mathcal{X} = \mathcal{P}\left(\llbracket 1, \mathsf{N} \rrbracket\right)^\mathds{N}\times\mathds{B}^\mathsf{N}$.
1457 The left term $x$ obviously belongs to $\mathds{B}^ \mathsf{N}$.
1458 To prevent from any flaws of chaotic properties, we must check that the right
1459 term (the last $t$), corresponding to the strategies, can possibly be equal to any
1460 integer of $\llbracket 1, \mathsf{N} \rrbracket$.
Such a result is obvious: for the xor-like(), all the
integers belonging to its interval of definition can occur at each iteration, and thus the
last $t$ respects the requirement. Furthermore, it is possible to
prove by an immediate mathematical induction that, as the initial $x$
is uniformly distributed (it is provided by a cryptographically secure PRNG),
the two other stored values shmem[o1] and shmem[o2] are uniformly distributed too
(this is the induction hypothesis), and thus the next $x$ is uniformly distributed as well.
Thus Algorithm~\ref{algo:gpu_kernel2} is a concrete realization of the general
chaotic iterations presented previously, and for this reason, it satisfies
Devaney's formulation of chaotic behavior.
1474 \section{Experiments}
1475 \label{sec:experiments}
1477 Different experiments have been performed in order to measure the generation
speed. We have used a first computer equipped with a Tesla C1060 NVidia GPU card
and an Intel Xeon E5530 clocked at 2.40~GHz, and
a second computer equipped with a smaller CPU and a GeForce GTX 280.
Both GPU cards have 240 cores.
In Figure~\ref{fig:time_xorlike_gpu} we compare the quantity of pseudorandom numbers
generated per second with various xor-like based PRNGs. In this figure, the optimized
versions use the {\it xor64} described in~\cite{Marsaglia2003}, whereas the naive versions
embed the three xor-like PRNGs described in Listing~\ref{algo:seqCIPRNG}. In
order to obtain the best performance, the storage of the pseudorandom numbers
into the GPU memory has been removed. This step is time consuming and slows down the
generation of numbers. Moreover, this storage is completely
useless for applications that consume the pseudorandom
numbers directly after generation. We can see that when the number of threads is greater
than approximately 30,000 and lower than 5 million, the number of pseudorandom numbers generated
per second is almost constant. With the naive version, this value ranges from 2.5 to
3~GSamples/s. With the optimized version, it is approximately equal to
20~GSamples/s. Finally, we can remark that both GPU cards give quite similar results, but in
practice, the Tesla C1060 has more memory than the GTX 280, and this memory
should be of better quality.
As a comparison, Listing~\ref{algo:seqCIPRNG} leads to the generation of about
138~MSamples/s when using one core of the Xeon E5530.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{curve_time_xorlike_gpu.pdf}
\caption{Quantity of pseudorandom numbers generated per second with the xorlike-based PRNG}
\label{fig:time_xorlike_gpu}
\end{figure}
In Figure~\ref{fig:time_bbs_gpu} we highlight the performance of the optimized
BBS-based PRNG on GPU. On the Tesla C1060 we obtain approximately 700~MSamples/s
and on the GTX 280 about 670~MSamples/s, which is obviously slower than the
xorlike-based PRNG on GPU. However, we will show in the next sections that this
new PRNG has a strong level of security, which is necessarily paid by a speed decrease.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{curve_time_bbs_gpu.pdf}
\caption{Quantity of pseudorandom numbers generated per second using the BBS-based PRNG}
\label{fig:time_bbs_gpu}
\end{figure}
All these experiments allow us to conclude that it is possible to
generate a very large quantity of statistically perfect pseudorandom numbers with the xor-like version.
To a certain extent, it is also the case with the secure BBS-based version; the speed deflation is
explained by the fact that the former version has ``only''
chaotic properties and statistical perfection, whereas the latter is also cryptographically secure,
as shown in the next sections.
1543 \section{Security Analysis}
1544 \label{sec:security analysis}
In this section, the concatenation of two strings $u$ and $v$ is classically denoted by $uv$.
1550 In a cryptographic context, a pseudorandom generator is a deterministic
1551 algorithm $G$ transforming strings into strings and such that, for any
1552 seed $s$ of length $m$, $G(s)$ (the output of $G$ on the input $s$) has size
1553 $\ell_G(m)$ with $\ell_G(m)>m$.
1554 The notion of {\it secure} PRNGs can now be defined as follows.
1557 A cryptographic PRNG $G$ is secure if for any probabilistic polynomial time
algorithm $D$, for any positive polynomial $p$, and for all sufficiently large $m$'s,
$$| \mathrm{Pr}[D(G(U_m))=1]-\mathrm{Pr}[D(U_{\ell_G(m)})=1]|< \frac{1}{p(m)},$$
1561 where $U_r$ is the uniform distribution over $\{0,1\}^r$ and the
1562 probabilities are taken over $U_m$, $U_{\ell_G(m)}$ as well as over the
1563 internal coin tosses of $D$.
Intuitively, it means that there is no polynomial time algorithm that can
distinguish a perfect uniform random generator from $G$ with a
non-negligible probability. The interested reader is referred
to~\cite[chapter~3]{Goldreich} for more information. Note that it is
quite easy to change the function $\ell$ into any polynomial
function $\ell^\prime$ satisfying $\ell^\prime(m)>m$~\cite[Chapter 3.3]{Goldreich}.
1573 The generation schema developed in (\ref{equation Oplus}) is based on a
1574 pseudorandom generator. Let $H$ be a cryptographic PRNG. We may assume,
1575 without loss of generality, that for any string $S_0$ of size $N$, the size
1576 of $H(S_0)$ is $kN$, with $k>2$. It means that $\ell_H(N)=kN$.
1577 Let $S_1,\ldots,S_k$ be the
1578 strings of length $N$ such that $H(S_0)=S_1 \ldots S_k$ ($H(S_0)$ is the concatenation of
1579 the $S_i$'s). The cryptographic PRNG $X$ defined in (\ref{equation Oplus})
is the algorithm mapping any string $x_0S_0$ of length $2N$ into the string
$(x_0\oplus S_0 \oplus S_1)(x_0\oplus S_0 \oplus S_1\oplus S_2)\ldots
\left(x_0\oplus\bigoplus_{i=0}^{k}S_i\right)$. In particular, one has $\ell_{X}(2N)=kN=\ell_H(N)$.
1583 We claim now that if this PRNG is secure,
1584 then the new one is secure too.
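As an illustration, the mapping $X$ can be sketched in C as follows, assuming the $k$ blocks $S_1,\ldots,S_k$ of $H(S_0)$ are already available and, for simplicity only, that the block size $N$ is 32 bits.

\begin{lstlisting}[language=C]
#include <stdint.h>
#include <stddef.h>

/* Sketch of X built on top of H: given x0 and S0, and the k blocks
   S[0..k-1] = H(S0), output out[j] = x0 xor S0 xor S[0] xor ... xor S[j]. */
void X_from_H(uint32_t x0, uint32_t S0,
              const uint32_t *S, size_t k, uint32_t *out) {
  uint32_t acc = x0 ^ S0;
  for (size_t j = 0; j < k; j++) {
    acc ^= S[j];     /* running xor x0 ^ S0 ^ S1 ^ ... ^ S_{j+1} */
    out[j] = acc;
  }
}
\end{lstlisting}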
\begin{proposition}
\label{cryptopreuve}
If $H$ is a secure cryptographic PRNG, then $X$ is also a secure cryptographic PRNG.
\end{proposition}
The proposition is proved by contraposition. Assume that $X$ is not
secure. By definition, there exist a polynomial time probabilistic
algorithm $D$ and a positive polynomial $p$ such that, for all $k_0$, there exists
$N\geq \frac{k_0}{2}$ satisfying
$$| \mathrm{Pr}[D(X(U_{2N}))=1]-\mathrm{Pr}[D(U_{kN})=1]|\geq \frac{1}{p(2N)}.$$
We describe a new probabilistic algorithm $D^\prime$ on an input $w$ of size $kN$:
\begin{itemize}
1601 \item Decompose $w$ into $w=w_1\ldots w_{k}$, where each $w_i$ has size $N$.
1602 \item Pick a string $y$ of size $N$ uniformly at random.
1603 \item Compute $z=(y\oplus w_1)(y\oplus w_1\oplus w_2)\ldots (y
1604 \bigoplus_{i=1}^{i=k} w_i).$
\item Return $D(z)$.
\end{itemize}
Consider for each $y\in \mathbb{B}^{N}$ the function $\varphi_{y}$
from $\mathbb{B}^{kN}$ into $\mathbb{B}^{kN}$ mapping $w=w_1\ldots w_k$
(each $w_i$ has length $N$) to
$(y\oplus w_1)(y\oplus w_1\oplus w_2)\ldots (y
\bigoplus_{i=1}^{i=k} w_i).$ By construction, one has for every $w$,
1614 \begin{equation}\label{PCH-1}
D^\prime(w)=D(\varphi_y(w)),
\end{equation}
1617 where $y$ is randomly generated.
1618 Moreover, for each $y$, $\varphi_{y}$ is injective: if
$(y\oplus w_1)(y\oplus w_1\oplus w_2)\ldots (y\bigoplus_{i=1}^{i=k}
1620 w_i)=(y\oplus w_1^\prime)(y\oplus w_1^\prime\oplus w_2^\prime)\ldots
1621 (y\bigoplus_{i=1}^{i=k} w_i^\prime)$, then for every $1\leq j\leq k$,
1622 $y\bigoplus_{i=1}^{i=j} w_i^\prime=y\bigoplus_{i=1}^{i=j} w_i$. It follows,
1623 by a direct induction, that $w_i=w_i^\prime$. Furthermore, since $\mathbb{B}^{kN}$
1624 is finite, each $\varphi_y$ is bijective. Therefore, and using (\ref{PCH-1}),
1626 $\mathrm{Pr}[D^\prime(U_{kN})=1]=\mathrm{Pr}[D(\varphi_y(U_{kN}))=1]$ and,
1628 \begin{equation}\label{PCH-2}
\mathrm{Pr}[D^\prime(U_{kN})=1]=\mathrm{Pr}[D(U_{kN})=1].
\end{equation}
1632 Now, using (\ref{PCH-1}) again, one has for every $x$,
1633 \begin{equation}\label{PCH-3}
D^\prime(H(x))=D(\varphi_y(H(x))),
\end{equation}
1636 where $y$ is randomly generated. By construction, $\varphi_y(H(x))=X(yx)$,
\begin{equation}
D^\prime(H(x))=D(X(yx)),
\end{equation}
1641 where $y$ is randomly generated.
1644 \begin{equation}\label{PCH-4}
\mathrm{Pr}[D^\prime(H(U_{N}))=1]=\mathrm{Pr}[D(X(U_{2N}))=1].
\end{equation}
1647 From (\ref{PCH-2}) and (\ref{PCH-4}), one can deduce that
1648 there exists a polynomial time probabilistic
1649 algorithm $D^\prime$, a positive polynomial $p$, such that for all $k_0$ there exists
1650 $N\geq \frac{k_0}{2}$ satisfying
$$| \mathrm{Pr}[D^\prime(H(U_{N}))=1]-\mathrm{Pr}[D^\prime(U_{kN})=1]|\geq \frac{1}{p(2N)},$$
1652 proving that $H$ is not secure, which is a contradiction.
\section{Cryptographic Applications}
1658 \subsection{A Cryptographically Secure PRNG for GPU}
It is possible to build a cryptographically secure PRNG based on the previous
algorithm (Algorithm~\ref{algo:gpu_kernel2}). Due to Proposition~\ref{cryptopreuve},
it simply consists in replacing
the {\it xor-like} PRNG by a cryptographically secure one.
We have chosen the Blum Blum Shub generator~\cite{BBS} (usually denoted by BBS), which has the form:
$$x_{n+1}=x_n^2~ mod~ M$$ where $M$ is the product of two prime numbers (these
prime numbers need to be congruent to 3 modulo 4). BBS is known to be
very slow, and is thus only used when cryptographic security is required.
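A BBS iteration thus reduces to one modular squaring; a minimal sketch with small illustrative primes ($p=251$ and $q=239$, both congruent to 3 modulo 4, so that $M=pq=59989<2^{16}$, in line with the GPU constraints discussed below) could be:

\begin{lstlisting}[language=C]
#include <stdint.h>

/* One BBS step x <- x^2 mod M with a deliberately small modulus.
   p = 251 and q = 239 are illustrative primes congruent to 3 mod 4;
   a cryptographic use of BBS requires a much larger modulus. */
#define BBS_M (251u * 239u)   /* M = 59989 < 2^16 */

uint32_t bbs_step(uint32_t *x) {
  *x = (*x * *x) % BBS_M;     /* x^2 fits in 32 bits since x < 2^16 */
  return *x;
}
\end{lstlisting}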
The modulus operation is the most time consuming operation for current
GPU cards. So, in order to obtain reasonable performance, it is
required to compute the modulus only on 32-bit integers. Consequently,
$x_n^2$ needs to be less than $2^{32}$, and thus the number $M$ must be
less than $2^{16}$. So, in practice, we can choose prime numbers around
256 that are congruent to 3 modulo 4. With 32-bit numbers, only the
4 least significant bits of $x_n$ can be used (the maximum number of
indistinguishable bits is less than or equal to
$\log_2(\log_2(M))$). In other words, to generate a 32-bit number, we need to apply
the BBS algorithm 8 times, with possibly different values of $M$. This
approach is not sufficient to pass all the tests of TestU01,
as small values of $M$ for the BBS lead to
small periods. So, in order to add randomness, we have proceeded with
the following modifications.
Firstly, we define 16 arrangement arrays instead of 2 (as described in
Algorithm~\ref{algo:gpu_kernel2}), but only 2 of them are used at each call of
the PRNG kernels. In practice, the selection of the combination
arrays to be used is different for each thread; it is determined
by using the three last bits of two internal variables of the BBS.
In Algorithm~\ref{algo:bbs_gpu},
the character \& stands for the bitwise AND. Thus, applying \&7 to a number
gives its last 3 bits, providing a value between 0 and 7.
Secondly, after the generation of the 8 BBS numbers for each thread, we
obtain a 32-bit number whose period is possibly quite small. So,
to add randomness, we generate 4 more BBS numbers to
shift the 32-bit number and add up to 6 new bits. This improvement is
described in Algorithm~\ref{algo:bbs_gpu}. In practice, the last 2 bits
1702 of the first new BBS number are used to make a left shift of at most
1703 3 bits. The last 3 bits of the second new BBS number are added to the
1704 strategy whatever the value of the first left shift. The third and the
1705 fourth new BBS numbers are used similarly to apply a new left shift
Finally, as we use 8 BBS numbers for each thread, the storage of these
numbers at the end of the kernel is performed using a rotation: the
internal variable for BBS number 1 is stored in place 2, the internal
variable for BBS number 2 is stored in place 3, \ldots, and finally the internal
variable for BBS number 8 is stored in place 1.
1717 \KwIn{InternalVarBBSArray: array with internal variables of the 8 BBS
1719 NumThreads: Number of threads\;
1720 array\_comb: 2D Arrays containing 16 combinations (in first dimension) of size combination\_size (in second dimension)\;
1721 array\_shift[4]=\{0,1,3,7\}\;
1724 \KwOut{NewNb: array containing random numbers in global memory}
1725 \If{threadId is concerned} {
1726 retrieve data from InternalVarBBSArray[threadId] in local variables including shared memory and x\;
1727 we consider that bbs1 ... bbs8 represent the internal states of the 8 BBS numbers\;
offset = threadId\%combination\_size\;
o1 = threadId-offset+array\_comb[bbs1\&7][offset]\;
o2 = threadId-offset+array\_comb[8+(bbs2\&7)][offset]\;
\tcp{two new shifts}
shift=BBS3(bbs3)\&3\;
t<<=shift\;
t|=BBS1(bbs1)\&array\_shift[shift]\;
shift=BBS7(bbs7)\&3\;
t<<=shift\;
t|=BBS2(bbs2)\&array\_shift[shift]\;
1744 t=t\textasciicircum shmem[o1]\textasciicircum shmem[o2]\;
shmem[threadId]=t\;
1746 x = x\textasciicircum t\;
store the new number in NewNb[NumThreads*threadId+i]\;
store internal variables in InternalVarBBSArray[threadId] using a rotation\;
\caption{Main kernel for the BBS-based PRNG on GPU}
1754 \label{algo:bbs_gpu}
In Algorithm~\ref{algo:bbs_gpu}, $n$ is the quantity of random numbers that
a thread has to generate. The operation \texttt{t<<=4} performs a left shift of 4 bits
on the variable $t$ and stores the result in $t$, while \texttt{BBS1(bbs1)\&15} selects
the last four bits of the result of $BBS1$. Thus an operation of the form
\texttt{t<<=4; t|=BBS1(bbs1)\&15;} realizes in $t$ a left shift of 4 bits, and then
puts the 4 last bits of $BBS1(bbs1)$ into the 4 last positions of $t$. Let us
remark that initializing $t$ is not necessary, as it is filled 4 bits at a
time until 32 bits have been obtained. The two last new shifts are realized in
order to enlarge the small periods of the BBS used here, that is, to introduce a kind of
variability. In these operations, we make twice a left shift of $t$ of \emph{at
most} 3 bits, represented by \texttt{shift} in the algorithm, and we put
\emph{exactly} the \texttt{shift} last bits from a BBS into the \texttt{shift}
last bits of $t$. For this, an array named \texttt{array\_shift}, which maps each
shift value to the mask used for the \texttt{and} operation (a number whose
\texttt{shift} last bits are set to 1), is used. For example, with a left shift of 0,
we make an and operation with 0, whereas with a left shift of 3, we make an and
operation with 7 (111 in binary).
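To summarize these operations, the construction of $t$ can be sketched in C as follows; \texttt{bbs\_step()} is the modular squaring sketched previously, and the particular BBS states used for the two extra shifts are only illustrative (the kernel uses the ones listed in Algorithm~\ref{algo:bbs_gpu}).

\begin{lstlisting}[language=C]
#include <stdint.h>

extern uint32_t bbs_step(uint32_t *x);  /* x <- x^2 mod M, sketched above */

/* Sketch: build a 32-bit value t from 8 BBS streams (4 bits each), then
   apply the two variable left shifts of at most 3 bits described above. */
uint32_t make_t(uint32_t bbs[8]) {
  static const uint32_t array_shift[4] = {0, 1, 3, 7};
  uint32_t t = 0;
  for (int i = 0; i < 8; i++) {
    t <<= 4;                      /* make room for 4 new bits            */
    t |= bbs_step(&bbs[i]) & 15;  /* append the 4 least significant bits */
  }
  uint32_t shift = bbs_step(&bbs[2]) & 3;       /* first shift: 0 to 3 bits */
  t <<= shift;
  t |= bbs_step(&bbs[0]) & array_shift[shift];  /* exactly `shift' new bits */
  shift = bbs_step(&bbs[6]) & 3;                /* second variable shift    */
  t <<= shift;
  t |= bbs_step(&bbs[1]) & array_shift[shift];
  return t;
}
\end{lstlisting}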
It should be noticed that this generator has once more the form $x^{n+1} = x^n \oplus S^n$,
where $S^n$ is referred to in this algorithm as $t$: each iteration of this
PRNG ends with $x = x \oplus t$. This $S^n$ is only made of
secure bits produced by the BBS generator, and thus, due to
Proposition~\ref{cryptopreuve}, the resulting PRNG is cryptographically
1785 \subsection{Practical Security Evaluation}
1787 Suppose now that the PRNG will work during
1788 $M=100$ time units, and that during this period,
1789 an attacker can realize $10^{12}$ clock cycles.
1790 We thus wonder whether, during the PRNG's
1791 lifetime, the attacker can distinguish this
sequence from a truly random one, with a probability
1793 greater than $\varepsilon = 0.2$.
1794 We consider that $N$ has 900 bits.
1796 The random process is the BBS generator, which
1797 is cryptographically secure. More precisely, it
1798 is $(T,\varepsilon)-$secure: no
1799 $(T,\varepsilon)-$distinguishing attack can be
1800 successfully realized on this PRNG, if~\cite{Fischlin}
\begin{equation}
T \leqslant \dfrac{L(N)}{6 N (\log_2(N))\varepsilon^{-2}M^2}-2^7 N \varepsilon^{-2} M^2 \log_2 (8 N \varepsilon^{-1}M),
\end{equation}
1804 where $M$ is the length of the output ($M=100$ in
our example), and $L(N)$, which is equal to
\begin{equation}
2.8\times 10^{-3}\times \exp\left(1.9229 \times \left(N\,\ln 2\right)^{\frac{1}{3}} \times \left(\ln(N\,\ln 2)\right)^{\frac{2}{3}}\right),
\end{equation}
is the number of clock cycles needed to factor an $N$-bit integer.
1812 A direct numerical application shows that this attacker
1813 cannot achieve its $(10^{12},0.2)$ distinguishing
1814 attack in that context.
1818 \subsection{Toward a Cryptographically Secure and Chaotic Asymmetric Cryptosystem}
1819 \label{Blum-Goldwasser}
We finish this research work by giving some thoughts about the use of
the proposed PRNG in an asymmetric cryptosystem.
This first approach will be further investigated in future work.
1824 \subsubsection{Recalls of the Blum-Goldwasser Probabilistic Cryptosystem}
1826 The Blum-Goldwasser cryptosystem is a cryptographically secure asymmetric key encryption algorithm
1827 proposed in 1984~\cite{Blum:1985:EPP:19478.19501}. The encryption algorithm
1828 implements a XOR-based stream cipher using the BBS PRNG, in order to generate
1829 the keystream. Decryption is done by obtaining the initial seed thanks to
1830 the final state of the BBS generator and the secret key, thus leading to the
1831 reconstruction of the keystream.
1833 The key generation consists in generating two prime numbers $(p,q)$,
1834 randomly and independently of each other, that are
1835 congruent to 3 mod 4, and to compute the modulus $N=pq$.
1836 The public key is $N$, whereas the secret key is the factorization $(p,q)$.
Suppose Bob wishes to send a string $m=(m_0, \dots, m_{L-1})$ of $L$ bits to Alice:
\begin{enumerate}
\item Bob picks an integer $r$ randomly in the interval $\llbracket 1,N\rrbracket$ and computes $x_0 = r^2~mod~N$.
\item He uses the BBS to generate the keystream of $L$ pseudorandom bits $(b_0, \dots, b_{L-1})$ as follows: for $i=0$ to $L-1$,
\begin{itemize}
\item set $b_i$ equal to the least-significant\footnote{As stated previously, BBS can securely output up to $\mathsf{N} = \lfloor \log(\log(N)) \rfloor$ of the least-significant bits of $x_i$ during each round.} bit of $x_i$;
\item compute $x_{i+1} = (x_{i})^2~mod~N$.
\end{itemize}
\item The ciphertext is computed by XORing the plaintext bits $m$ with the keystream: $ c = (c_0, \dots, c_{L-1}) = m \oplus b$. This ciphertext is $[c, y]$, where $y=x_{0}^{2^{L}}~mod~N.$
\end{enumerate}
When Alice receives $\left[(c_0, \dots, c_{L-1}), y\right]$, she can recover $m$ as follows:
\begin{enumerate}
\item Using the secret key $(p,q)$, she computes $r_p = y^{((p+1)/4)^{L}}~mod~p$ and $r_q = y^{((q+1)/4)^{L}}~mod~q$.
\item The initial seed can be obtained using the following procedure: $x_0=q(q^{-1}~{mod}~p)r_p + p(p^{-1}~{mod}~q)r_q~{mod}~N$.
\item She recomputes the bit-vector $b$ by using BBS and $x_0$.
\item Alice finally computes the plaintext by XORing the keystream with the ciphertext: $ m = c \oplus b$.
\end{enumerate}
1865 \subsubsection{Proposal of a new Asymmetric Cryptosystem Adapted from Blum-Goldwasser}
We propose to adapt the Blum-Goldwasser protocol as follows.
Let $\mathsf{N} = \lfloor \log(\log(N)) \rfloor$ be the number of bits that can
be obtained securely with the BBS generator using the public key $N$ of Alice.
Alice will also pick a random $S^0$ in $\llbracket 0, 2^{\mathsf{N}}-1\rrbracket$, and
her new public key will be $(S^0, N)$.
1873 To encrypt his message, Bob will compute
1876 $c = \left(m_0 \oplus (b_0 \oplus S^0), m_1 \oplus (b_0 \oplus b_1 \oplus S^0), \hdots, \right.$
1877 $ \left. m_{L-1} \oplus (b_0 \oplus b_1 \hdots \oplus b_{L-1} \oplus S^0) \right)$
1879 instead of $\left(m_0 \oplus b_0, m_1 \oplus b_1, \hdots, m_{L-1} \oplus b_{L-1} \right)$.
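As a minimal sketch, this encryption amounts to XORing each plaintext word with a running XOR of the keystream initialized by $S^0$; word widths are left abstract here and fixed to 32 bits only for illustration.

\begin{lstlisting}[language=C]
#include <stdint.h>
#include <stddef.h>

/* Sketch of the adapted encryption:
   c_i = m_i xor (b_0 xor ... xor b_i xor S0),
   where the keystream words b[i] are produced by BBS as recalled above. */
void encrypt_adapted(const uint32_t *m, const uint32_t *b, uint32_t S0,
                     uint32_t *c, size_t L) {
  uint32_t acc = S0;
  for (size_t i = 0; i < L; i++) {
    acc ^= b[i];       /* running xor b_0 ^ ... ^ b_i ^ S0 */
    c[i] = m[i] ^ acc;
  }
}
\end{lstlisting}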
1881 The same decryption stage as in Blum-Goldwasser leads to the sequence
1882 $\left(m_0 \oplus S^0, m_1 \oplus S^0, \hdots, m_{L-1} \oplus S^0 \right)$.
1883 Thus, with a simple use of $S^0$, Alice can obtain the plaintext.
1884 By doing so, the proposed generator is used in place of BBS, leading to
1885 the inheritance of all the properties presented in this paper.
1887 \section{Conclusion}
1890 In this paper, a formerly proposed PRNG based on chaotic iterations
1891 has been generalized to improve its speed. It has been proven to be
1892 chaotic according to Devaney.
Efficient implementations on GPU using xor-like PRNGs as input generators
have shown that a very large quantity of pseudorandom numbers can be generated per second (about
20~GSamples/s), and that these proposed PRNGs succeed in passing the hardest battery in TestU01,
namely BigCrush.
Furthermore, we have shown that when the inputted generator is cryptographically
secure, then so is the proposed PRNG, thus making it possible to develop
fast and secure PRNGs using the GPU architecture.
An improvement of the Blum-Goldwasser cryptosystem, making it
behave chaotically, has finally been proposed.
1903 In future work we plan to extend this research, building a parallel PRNG for clusters or
1904 grid computing. Topological properties of the various proposed generators will be investigated,
1905 and the use of other categories of PRNGs as input will be studied too. The improvement
of Blum-Goldwasser will be deepened. Finally, we
will try to increase the quantity of pseudorandom numbers generated per second, either
in a simulation context or in a cryptographic one.
1912 \bibliographystyle{plain}
1913 \bibliography{mabase}