1 %\documentclass{article}
2 \documentclass[10pt,journal,letterpaper,compsoc]{IEEEtran}
3 \usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{amsmath,amssymb}
11 \usepackage[ruled,vlined]{algorithm2e}
13 \usepackage[standard]{ntheorem}
14 \usepackage{algorithmic}
% For mathds: the sets IR, IN, etc.
\usepackage{dsfont}
% For integer intervals
\usepackage{stmaryrd}
% For subfigures within figures
25 \usepackage{subfigure}
29 \newtheorem{notation}{Notation}
31 \newcommand{\X}{\mathcal{X}}
32 \newcommand{\Go}{G_{f_0}}
33 \newcommand{\B}{\mathds{B}}
34 \newcommand{\N}{\mathds{N}}
35 \newcommand{\BN}{\mathds{B}^\mathsf{N}}
38 \newcommand{\alert}[1]{\begin{color}{blue}\textit{#1}\end{color}}
40 \title{Efficient and Cryptographically Secure Generation of Chaotic Pseudorandom Numbers on GPU}
43 \author{Jacques M. Bahi, Rapha\"{e}l Couturier, Christophe
Guyeux, and Pierre-Cyrille Héam\thanks{Authors in alphabetical order}}
47 \IEEEcompsoctitleabstractindextext{
\begin{abstract}
In this paper we present a new pseudorandom number generator (PRNG) on
50 graphics processing units (GPU). This PRNG is based on the so-called chaotic iterations. It
is firstly proven to be chaotic according to Devaney's formulation. We then propose an efficient
implementation for GPU that successfully passes the {\it BigCrush} tests, deemed to be the hardest
battery of tests in TestU01. Experiments show that this PRNG can generate
about 20 billion random numbers per second on Tesla C1060 and NVIDIA GTX 280 cards.
It is then established that, under reasonable assumptions, the proposed PRNG can be cryptographically secure.
A chaotic version of the Blum-Goldwasser asymmetric key encryption scheme is finally proposed.
\end{abstract}
}
66 \IEEEdisplaynotcompsoctitleabstractindextext
67 \IEEEpeerreviewmaketitle
70 \section{Introduction}
72 Randomness is of importance in many fields such as scientific simulations or cryptography.
73 ``Random numbers'' can mainly be generated either by a deterministic and reproducible algorithm
74 called a pseudorandom number generator (PRNG), or by a physical non-deterministic
process having all the characteristics of a random noise, called a truly random number generator (TRNG).
77 In this paper, we focus on reproducible generators, useful for instance in
78 Monte-Carlo based simulators or in several cryptographic schemes.
79 These domains need PRNGs that are statistically irreproachable.
In some fields, such as numerical simulation, speed is a strong requirement
that is usually attained by using parallel architectures. In that case,
a recurrent problem is that a degradation of the statistical qualities is often
reported when a good PRNG is parallelized.
84 This is why ad-hoc PRNGs for each possible architecture must be found to
85 achieve both speed and randomness.
On the other hand, speed is not the main requirement in cryptography: the great
need is to define \emph{secure} generators able to withstand malicious
attacks. Roughly speaking, an attacker should not be able in practice to make
the distinction between numbers obtained with the secure generator and a truly random sequence.
91 Finally, a small part of the community working in this domain focuses on a
third requirement, that is, to define chaotic generators.
The main idea is to take benefit from a chaotic dynamical system to obtain a
generator that is unpredictable, disordered, sensitive to its seed, or in other words chaotic.
The objective is to map a given chaotic dynamics into a sequence that seems random
and unassailable due to chaos.
However, the chaotic maps used as patterns are defined on the real line
whereas computers deal with finite precision numbers.
This distortion leads to a degradation of both the chaotic properties and the speed.
Furthermore, authors of such chaotic generators often claim that their PRNG
is secure due to its chaos properties, but there is no obvious relation
between chaos and security as it is understood in cryptography.
This is why the use of chaos for PRNGs still remains marginal and disputable.
105 The authors' opinion is that topological properties of disorder, as they are
106 properly defined in the mathematical theory of chaos, can reinforce the quality
107 of a PRNG. But they are not substitutable for security or statistical perfection.
Indeed, to the authors' mind, such properties can be useful in the two following situations. On the
one hand, a post-treatment based on a chaotic dynamical system can be applied
to a statistically deficient PRNG, in order to improve its statistical
properties. Such an improvement can be found, for instance, in~\cite{bgw09:ip,bcgr11:ip}.
112 On the other hand, chaos can be added to a fast, statistically perfect PRNG and/or a
cryptographically secure one, in cases where chaos can be of interest,
114 \emph{only if these last properties are not lost during
115 the proposed post-treatment}. Such an assumption is behind this research work.
It leads to attempts to define a
117 family of PRNGs that are chaotic while being fast and statistically perfect,
118 or cryptographically secure.
119 Let us finish this paragraph by noticing that, in this paper,
120 statistical perfection refers to the ability to pass the whole
121 {\it BigCrush} battery of tests, which is widely considered as the most
122 stringent statistical evaluation of a sequence claimed as random.
123 This battery can be found in the well-known TestU01 package~\cite{LEcuyerS07}.
124 Chaos, for its part, refers to the well-established definition of a
125 chaotic dynamical system proposed by Devaney~\cite{Devaney}.
In previous works~\cite{bgw09:ip,guyeux10}, we have proposed a post-treatment on PRNGs making them behave
129 as a chaotic dynamical system. Such a post-treatment leads to a new category of
130 PRNGs. We have shown that proofs of Devaney's chaos can be established for this
131 family, and that the sequence obtained after this post-treatment can pass the
NIST~\cite{Nist10}, DieHARD~\cite{Marsaglia1996}, and TestU01~\cite{LEcuyerS07} batteries of tests, even if the inputted generators cannot.
The proposition of this paper is to greatly improve the speed of the formerly
proposed generator, without any loss of chaos or statistical properties.
In particular, a version of this PRNG on graphics processing units (GPU) is proposed.
Although GPUs were initially designed to accelerate
the manipulation of images, they are nowadays commonly used in many scientific
applications. Therefore, it is important to be able to generate pseudorandom
numbers inside a GPU when a scientific application runs on it. This remark
motivates our proposal of a chaotic and statistically perfect PRNG for GPU.
This proposal allows us to generate almost 20 billion pseudorandom numbers per second.
Furthermore, we show that the proposed post-treatment preserves the
cryptographic security of the inputted PRNG, when this latter has such a property.
148 Last, but not least, we propose a rewriting of the Blum-Goldwasser asymmetric
149 key encryption protocol by using the proposed method.
151 The remainder of this paper is organized as follows. In Section~\ref{section:related
152 works} we review some GPU implementations of PRNGs. Section~\ref{section:BASIC
RECALLS} recalls some basic notions on Devaney's well-known formulation of chaos,
154 and on an iteration process called ``chaotic
155 iterations'' on which the post-treatment is based.
156 The proposed PRNG and its proof of chaos are given in Section~\ref{sec:pseudorandom}.
157 Section~\ref{sec:efficient PRNG} presents an efficient
158 implementation of this chaotic PRNG on a CPU, whereas Section~\ref{sec:efficient PRNG
159 gpu} describes and evaluates theoretically the GPU implementation.
Such generators are experimentally evaluated in
161 Section~\ref{sec:experiments}.
We show in Section~\ref{sec:security analysis} that, if the inputted
generator is cryptographically secure, then so is the
generator provided by the post-treatment.
Such a proof leads to the proposition of a cryptographically secure and
chaotic generator on GPU based on the famous Blum Blum Shub generator
in Section~\ref{sec:CSGPU}, and to an improvement of the
168 Blum-Goldwasser protocol in Sect.~\ref{Blum-Goldwasser}.
This research work ends with a conclusion section, in which the contribution is
summarized and intended future work is outlined.
175 \section{Related works on GPU based PRNGs}
176 \label{section:related works}
Numerous research works on defining GPU based PRNGs have already been proposed in the
literature, so that an exhaustive survey is not possible.
This is why the authors of this document only reference the most significant attempts
in this domain, from their subjective point of view.
182 The quantity of pseudorandom numbers generated per second is mentioned here
183 only when the information is given in the related work.
184 A million numbers per second will be simply written as
185 1MSample/s whereas a billion numbers per second is 1GSample/s.
In \cite{Pang:2008:cec} a PRNG based on cellular automata is defined
with no requirement for high precision integer arithmetic or bitwise
operations. The authors can generate about
3.2MSamples/s on a GeForce 7800 GTX GPU, which is quite an old card now.
191 However, there is neither a mention of statistical tests nor any proof of
192 chaos or cryptography in this document.
194 In \cite{ZRKB10}, the authors propose different versions of efficient GPU PRNGs
195 based on Lagged Fibonacci or Hybrid Taus. They have used these
196 PRNGs for Langevin simulations of biomolecules fully implemented on
197 GPU. Performances of the GPU versions are far better than those obtained with a
CPU, and these PRNGs succeed in passing the {\it BigCrush} battery of TestU01.
199 However the evaluations of the proposed PRNGs are only statistical ones.
202 Authors of~\cite{conf/fpga/ThomasHL09} have studied the implementation of some
203 PRNGs on different computing architectures: CPU, field-programmable gate array
204 (FPGA), massively parallel processors, and GPU. This study is of interest, because
the performances of the same PRNGs on different architectures are compared.
FPGA appears as the fastest and the most
efficient architecture, providing the highest throughput of generated pseudorandom numbers.
209 However, we notice that authors can ``only'' generate between 11 and 16GSamples/s
210 with a GTX 280 GPU, which should be compared with
211 the results presented in this document.
212 We can remark too that the PRNGs proposed in~\cite{conf/fpga/ThomasHL09} are only
able to pass the {\it Crush} battery, which is far easier than the {\it BigCrush} one.
Lastly, NVIDIA has released a CUDA library for the generation of pseudorandom numbers called
Curand~\cite{curand11}. Several PRNGs are implemented, among them
Xorwow~\cite{Marsaglia2003} and some variants of Sobol. The tests reported show that
219 their fastest version provides 15GSamples/s on the new Fermi C2050 card.
But their PRNGs cannot pass the whole TestU01 battery (only one test fails).
223 We can finally remark that, to the best of our knowledge, no GPU implementation has been proven to be chaotic, and the cryptographically secure property has surprisingly never been considered.
225 \section{Basic Recalls}
226 \label{section:BASIC RECALLS}
228 This section is devoted to basic definitions and terminologies in the fields of
229 topological chaos and chaotic iterations. We assume the reader is familiar
230 with basic notions on topology (see for instance~\cite{Devaney}).
233 \subsection{Devaney's Chaotic Dynamical Systems}
235 In the sequel $S^{n}$ denotes the $n^{th}$ term of a sequence $S$ and $V_{i}$
236 denotes the $i^{th}$ component of a vector $V$. $f^{k}=f\circ ...\circ f$
237 is for the $k^{th}$ composition of a function $f$. Finally, the following
238 notation is used: $\llbracket1;N\rrbracket=\{1,2,\hdots,N\}$.
241 Consider a topological space $(\mathcal{X},\tau)$ and a continuous function $f :
242 \mathcal{X} \rightarrow \mathcal{X}$.
\begin{definition}
The function $f$ is said to be \emph{topologically transitive} if, for any pair of open sets
$U,V \subset \mathcal{X}$, there exists $k>0$ such that $f^k(U) \cap V \neq \varnothing$.
\end{definition}
\begin{definition}
An element $x$ is a \emph{periodic point} for $f$ of period $n\in \mathds{N}^*$
if $f^{n}(x)=x$.
\end{definition}
\begin{definition}
The map $f$ is said to be \emph{regular} on $(\mathcal{X}, \tau)$ if the set of periodic
points for $f$ is dense in $\mathcal{X}$: for any point $x$ in $\mathcal{X}$,
any neighborhood of $x$ contains at least one periodic point (not
necessarily of the same period).
\end{definition}
263 \begin{definition}[Devaney's formulation of chaos~\cite{Devaney}]
264 The function $f$ is said to be \emph{chaotic} on $(\mathcal{X},\tau)$ if $f$ is regular and
topologically transitive.
\end{definition}
268 The chaos property is strongly linked to the notion of ``sensitivity'', defined
269 on a metric space $(\mathcal{X},d)$ by:
\begin{definition}
\label{sensitivity} The function $f$ has \emph{sensitive dependence on initial conditions}
if there exists $\delta >0$ such that, for any $x\in \mathcal{X}$ and any
neighborhood $V$ of $x$, there exist $y\in V$ and $n > 0$ such that
$d\left(f^{n}(x), f^{n}(y)\right) >\delta $.
The constant $\delta$ is called the \emph{constant of sensitivity} of $f$.
\end{definition}
280 Indeed, Banks \emph{et al.} have proven in~\cite{Banks92} that when $f$ is
281 chaotic and $(\mathcal{X}, d)$ is a metric space, then $f$ has the property of
282 sensitive dependence on initial conditions (this property was formerly an
283 element of the definition of chaos). To sum up, quoting Devaney
284 in~\cite{Devaney}, a chaotic dynamical system ``is unpredictable because of the
285 sensitive dependence on initial conditions. It cannot be broken down or
286 simplified into two subsystems which do not interact because of topological
287 transitivity. And in the midst of this random behavior, we nevertheless have an
288 element of regularity''. Fundamentally different behaviors are consequently
289 possible and occur in an unpredictable way.
293 \subsection{Chaotic Iterations}
294 \label{sec:chaotic iterations}
297 Let us consider a \emph{system} with a finite number $\mathsf{N} \in
298 \mathds{N}^*$ of elements (or \emph{cells}), so that each cell has a
299 Boolean \emph{state}. Having $\mathsf{N}$ Boolean values for these
300 cells leads to the definition of a particular \emph{state of the
system}. A sequence whose elements belong to $\llbracket 1;\mathsf{N}
302 \rrbracket $ is called a \emph{strategy}. The set of all strategies is
303 denoted by $\llbracket 1, \mathsf{N} \rrbracket^\mathds{N}.$
\begin{definition}
\label{Def:chaotic iterations}
The set $\mathds{B}$ denoting $\{0,1\}$, let
$f:\mathds{B}^{\mathsf{N}}\longrightarrow \mathds{B}^{\mathsf{N}}$ be
a function and $S\in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}$ be a ``strategy''. The so-called
\emph{chaotic iterations} are defined by $x^0\in
\mathds{B}^{\mathsf{N}}$ and
\begin{equation}
\forall n\in \mathds{N}^{\ast }, \forall i\in
\llbracket1;\mathsf{N}\rrbracket ,x_i^n=\left\{
\begin{array}{ll}
x_i^{n-1} & \text{ if }S^n\neq i \\
\left(f(x^{n-1})\right)_{S^n} & \text{ if }S^n=i.
\end{array}\right.
\end{equation}
\end{definition}
322 In other words, at the $n^{th}$ iteration, only the $S^{n}-$th cell is
323 \textquotedblleft iterated\textquotedblright . Note that in a more
324 general formulation, $S^n$ can be a subset of components and
325 $\left(f(x^{n-1})\right)_{S^{n}}$ can be replaced by
326 $\left(f(x^{k})\right)_{S^{n}}$, where $k<n$, describing for example,
transmission delays~\cite{Robert1986,guyeux10}. Finally, let us remark that
328 the term ``chaotic'', in the name of these iterations, has \emph{a
329 priori} no link with the mathematical theory of chaos, presented above.
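For illustration purposes only, a minimal C sketch (which is not part of the proposed generator) of a few chaotic iterations with the vectorial Boolean negation as iterate function is given below; the value of $\mathsf{N}$, the initial state, and the hard-coded strategy are arbitrary assumptions made for this example.
\begin{verbatim}
#include <stdio.h>

#define N 4  /* number of cells (toy value, not from the paper) */

/* Vectorial Boolean negation: every component of the state is flipped. */
static unsigned int neg(unsigned int x) { return (~x) & ((1u << N) - 1); }

int main(void) {
    unsigned int x = 0xA;             /* initial state x^0 (arbitrary choice)     */
    int S[] = {1, 3, 2, 1, 4, 2};     /* a hard-coded strategy in [1,N] (example) */
    int len = sizeof S / sizeof S[0];

    for (int n = 0; n < len; n++) {
        int i = S[n] - 1;             /* cell S^n, moved to 0-based indexing */
        unsigned int fx = neg(x);     /* f(x^{n-1}) */
        /* Only the S^n-th component is replaced by the corresponding
           component of f(x^{n-1}); the other cells are left unchanged. */
        x = (x & ~(1u << i)) | (fx & (1u << i));
        printf("x^%d = 0x%X\n", n + 1, x);
    }
    return 0;
}
\end{verbatim}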
332 Let us now recall how to define a suitable metric space where chaotic iterations
333 are continuous. For further explanations, see, e.g., \cite{guyeux10}.
335 Let $\delta $ be the \emph{discrete Boolean metric}, $\delta
336 (x,y)=0\Leftrightarrow x=y.$ Given a function $f$, define the function
337 $F_{f}: \llbracket1;\mathsf{N}\rrbracket\times \mathds{B}^{\mathsf{N}}
338 \longrightarrow \mathds{B}^{\mathsf{N}}$
\begin{equation*}
(k,E) \longmapsto \left( E_{j}.\delta (k,j)+ f(E)_{k}.\overline{\delta
(k,j)}\right) _{j\in \llbracket1;\mathsf{N}\rrbracket}
\end{equation*}
345 \noindent where + and . are the Boolean addition and product operations.
346 Consider the phase space:
\begin{equation*}
\mathcal{X} = \llbracket 1 ; \mathsf{N} \rrbracket^\mathds{N} \times
\mathds{B}^\mathsf{N},
\end{equation*}
351 \noindent and the map defined on $\mathcal{X}$:
\begin{equation}
G_f\left(S,E\right) = \left(\sigma(S), F_f(i(S),E)\right), \label{Gf}
\end{equation}
355 \noindent where $\sigma$ is the \emph{shift} function defined by $\sigma
356 (S^{n})_{n\in \mathds{N}}\in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}\longrightarrow (S^{n+1})_{n\in
357 \mathds{N}}\in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}$ and $i$ is the \emph{initial function}
358 $i:(S^{n})_{n\in \mathds{N}} \in \llbracket 1, \mathsf{N} \rrbracket^\mathds{N}\longrightarrow S^{0}\in \llbracket
359 1;\mathsf{N}\rrbracket$. Then the chaotic iterations proposed in
360 Definition \ref{Def:chaotic iterations} can be described by the following iterations:
\begin{equation*}
\left\{
\begin{array}{l}
X^0 \in \mathcal{X} \\
X^{k+1}=G_{f}(X^k).
\end{array}
\right.
\end{equation*}
370 With this formulation, a shift function appears as a component of chaotic
iterations. The shift function is a famous example of a chaotic
map~\cite{Devaney}, but its presence is not sufficient to claim $G_f$ as chaotic.
374 To study this claim, a new distance between two points $X = (S,E), Y =
375 (\check{S},\check{E})\in
376 \mathcal{X}$ has been introduced in \cite{guyeux10} as follows:
\begin{equation*}
d(X,Y)=d_{e}(E,\check{E})+d_{s}(S,\check{S}),
\end{equation*}
\noindent where
\begin{equation*}
\begin{array}{lll}
\displaystyle{d_{e}(E,\check{E})} & = & \displaystyle{\sum_{k=1}^{\mathsf{N}}\delta (E_{k},\check{E}_{k})}, \\
\displaystyle{d_{s}(S,\check{S})} & = & \displaystyle{\dfrac{9}{\mathsf{N}}\sum_{k=1}^{\infty }\dfrac{|S^k-\check{S}^k|}{10^{k}}}.
\end{array}
\end{equation*}
393 This new distance has been introduced to satisfy the following requirements.
\begin{itemize}
\item When the number of different cells between two systems increases, then
their distance should increase too.
\item In addition, if two systems present the same cells and their respective
strategies start with the same terms, then the distance between these two points
must be small, because the evolution of the two systems will be the same for a
while. Indeed, both dynamical systems start from the same initial condition and
use the same update function; as the strategies coincide for a while, the
updated components are the same as well.
\end{itemize}
404 The distance presented above follows these recommendations. Indeed, if the floor
405 value $\lfloor d(X,Y)\rfloor $ is equal to $n$, then the systems $E, \check{E}$
406 differ in $n$ cells ($d_e$ is indeed the Hamming distance). In addition, $d(X,Y) - \lfloor d(X,Y) \rfloor $ is a
407 measure of the differences between strategies $S$ and $\check{S}$. More
408 precisely, this floating part is less than $10^{-k}$ if and only if the first
409 $k$ terms of the two strategies are equal. Moreover, if the $k^{th}$ digit is
410 nonzero, then the $k^{th}$ terms of the two strategies are different.
411 The impact of this choice for a distance will be investigated at the end of the document.
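As an illustration of this distance, the C sketch below computes $d_e$ and a truncated approximation of $d_s$ (the infinite sum being cut after an arbitrary number of terms); the state encoding as a bitmask, the value of $\mathsf{N}$, and the truncation order are assumptions made only for this example.
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>

#define N 8   /* number of cells (toy value)                          */
#define K 16  /* truncation order of the infinite sum (approximation) */

/* d_e: Hamming distance between two states stored as N-bit masks. */
static int d_e(unsigned int E, unsigned int Echeck) {
    unsigned int diff = (E ^ Echeck) & ((1u << N) - 1);
    int c = 0;
    while (diff) { c += (int)(diff & 1u); diff >>= 1; }
    return c;
}

/* d_s, truncated after K terms: (9/N) * sum_k |S^k - Scheck^k| / 10^k,
   the strategies being given as arrays of at least K terms.          */
static double d_s(const int *S, const int *Scheck) {
    double sum = 0.0, p = 10.0;
    for (int k = 0; k < K; k++) { sum += abs(S[k] - Scheck[k]) / p; p *= 10.0; }
    return 9.0 / N * sum;
}

int main(void) {
    /* two strategies agreeing on their first two terms only */
    int S[K] = {3, 5, 1}, Scheck[K] = {3, 5, 2};
    printf("d = %f\n", d_e(0x0F, 0x1F) + d_s(S, Scheck));
    return 0;
}
\end{verbatim}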
413 Finally, it has been established in \cite{guyeux10} that,
\begin{proposition}
Let $f$ be a map from $\mathds{B}^\mathsf{N}$ to itself. Then $G_{f}$ is continuous in
the metric space $(\mathcal{X},d)$.
\end{proposition}
The chaotic property of $G_f$ was first established for the vectorial
Boolean negation $f(x_1,\hdots, x_\mathsf{N}) = (\overline{x_1},\hdots, \overline{x_\mathsf{N}})$ \cite{guyeux10}. To obtain a characterization, we then
introduced the notion of asynchronous iteration graph, recalled below.
424 Let $f$ be a map from $\mathds{B}^\mathsf{N}$ to itself. The
425 {\emph{asynchronous iteration graph}} associated with $f$ is the
426 directed graph $\Gamma(f)$ defined by: the set of vertices is
427 $\mathds{B}^\mathsf{N}$; for all $x\in\mathds{B}^\mathsf{N}$ and
428 $i\in \llbracket1;\mathsf{N}\rrbracket$,
429 the graph $\Gamma(f)$ contains an arc from $x$ to $F_f(i,x)$.
430 The relation between $\Gamma(f)$ and $G_f$ is clear: there exists a
431 path from $x$ to $x'$ in $\Gamma(f)$ if and only if there exists a
432 strategy $s$ such that the parallel iteration of $G_f$ from the
433 initial point $(s,x)$ reaches the point $x'$.
434 We have then proven in \cite{bcgr11:ip} that,
438 \label{Th:Caractérisation des IC chaotiques}
439 Let $f:\mathds{B}^\mathsf{N}\to\mathds{B}^\mathsf{N}$. $G_f$ is chaotic (according to Devaney)
440 if and only if $\Gamma(f)$ is strongly connected.
443 Finally, we have established in \cite{bcgr11:ip} that,
\begin{theorem}
Let $f: \mathds{B}^{n} \rightarrow \mathds{B}^{n}$, $\Gamma(f)$ its
iteration graph, $\check{M}$ its adjacency matrix, and $M$
a $2^n\times 2^n$ matrix defined by
$M_{ij} = \frac{1}{n}\check{M}_{ij}$ if $i \neq j$, and
$M_{ii} = 1 - \frac{1}{n} \sum\limits_{j=1, j\neq i}^{2^n} \check{M}_{ij}$ otherwise.

If $\Gamma(f)$ is strongly connected, then
the output of the PRNG detailed in Algorithm~\ref{CI Algorithm} follows
a law that tends to the uniform distribution
if and only if $M$ is a doubly stochastic matrix.
\end{theorem}
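To give a concrete flavor of this criterion, the C sketch below (a toy verification, not part of the original results) builds the matrix $M$ of the theorem for the vectorial negation on a small number of bits, for which $\Gamma(f)$ is the hypercube, and checks that every row and column of $M$ sums to 1; the chosen size is an arbitrary assumption.
\begin{verbatim}
#include <stdio.h>

#define NBITS 3              /* toy value of n (assumption)          */
#define NSTATES (1 << NBITS) /* number of vertices of Gamma(f)       */

int main(void) {
    double M[NSTATES][NSTATES] = {{0.0}};

    /* For the vectorial negation, flipping component i of x gives the
       arc x -> x ^ (1 << i) in Gamma(f); M[x][y] = (1/n) * #arcs x->y,
       and M[x][x] collects the remaining probability mass.           */
    for (int x = 0; x < NSTATES; x++) {
        double out = 0.0;
        for (int i = 0; i < NBITS; i++) {
            int y = x ^ (1 << i);
            if (y != x) { M[x][y] += 1.0 / NBITS; out += 1.0 / NBITS; }
        }
        M[x][x] = 1.0 - out;
    }

    /* Check that every row and every column sums to 1 (doubly stochastic). */
    for (int k = 0; k < NSTATES; k++) {
        double row = 0.0, col = 0.0;
        for (int j = 0; j < NSTATES; j++) { row += M[k][j]; col += M[j][k]; }
        printf("row %d: %.3f  col %d: %.3f\n", k, row, k, col);
    }
    return 0;
}
\end{verbatim}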
461 These results of chaos and uniform distribution have led us to study the possibility of building a
462 pseudorandom number generator (PRNG) based on the chaotic iterations.
463 As $G_f$, defined on the domain $\llbracket 1 ; \mathsf{N} \rrbracket^{\mathds{N}}
464 \times \mathds{B}^\mathsf{N}$, is built from Boolean networks $f : \mathds{B}^\mathsf{N}
465 \rightarrow \mathds{B}^\mathsf{N}$, we can preserve the theoretical properties on $G_f$
466 during implementations (due to the discrete nature of $f$). Indeed, it is as if
467 $\mathds{B}^\mathsf{N}$ represents the memory of the computer whereas $\llbracket 1 ; \mathsf{N}
468 \rrbracket^{\mathds{N}}$ is its input stream (the seeds, for instance, in PRNG, or a physical noise in TRNG).
469 Let us finally remark that the vectorial negation satisfies the hypotheses of both theorems above.
471 \section{Application to Pseudorandomness}
472 \label{sec:pseudorandom}
474 \subsection{A First Pseudorandom Number Generator}
476 We have proposed in~\cite{bgw09:ip} a new family of generators that receives
477 two PRNGs as inputs. These two generators are mixed with chaotic iterations,
478 leading thus to a new PRNG that improves the statistical properties of each
479 generator taken alone. Furthermore, our generator
possesses various chaos properties that none of the generators taken as input possesses.
484 \begin{algorithm}[h!]
\KwIn{a function $f$, an iteration number $b$, an initial configuration $x^0$ ($n$ bits)}
\KwOut{a configuration $x$ ($n$ bits)}
$x\leftarrow x^0$\;
$k\leftarrow b + \textit{XORshift}(b)$\;
\For{$i=0,\dots,k$}
{
$s\leftarrow{\textit{XORshift}(n)}$\;
$x\leftarrow{F_f(s,x)}$\;
}
return $x$\;
\caption{PRNG with chaotic functions}
\label{CI Algorithm}
\end{algorithm}
505 \begin{algorithm}[h!]
507 \KwIn{the internal configuration $z$ (a 32-bit word)}
508 \KwOut{$y$ (a 32-bit word)}
509 $z\leftarrow{z\oplus{(z\ll13)}}$\;
510 $z\leftarrow{z\oplus{(z\gg17)}}$\;
$z\leftarrow{z\oplus{(z\ll5)}}$\;
$y\leftarrow{z}$\;
return $y$\;
\caption{An arbitrary round of the \textit{XORshift} algorithm}
\label{XORshift}
\end{algorithm}
523 This generator is synthesized in Algorithm~\ref{CI Algorithm}.
524 It takes as input: a Boolean function $f$ satisfying Theorem~\ref{Th:Caractérisation des IC chaotiques};
525 an integer $b$, ensuring that the number of executed iterations is at least $b$
526 and at most $2b+1$; and an initial configuration $x^0$.
527 It returns the new generated configuration $x$. Internally, it embeds two
528 \textit{XORshift}$(k)$ PRNGs~\cite{Marsaglia2003} that return integers
529 uniformly distributed
in $\llbracket 1 ; k \rrbracket$.
\textit{XORshift} is a category of very fast PRNGs designed by George Marsaglia,
which repeatedly use the exclusive or (XOR, $\oplus$) operation on a number
with a bit-shifted version of it. This PRNG, which has a period of
534 $2^{32}-1=4.29\times10^9$, is summed up in Algorithm~\ref{XORshift}. It is used
535 in our PRNG to compute the strategy length and the strategy elements.
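For completeness, a direct C transcription of Algorithm~\ref{XORshift} is sketched below; the seed is an arbitrary nonzero constant chosen only for this example.
\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

/* One round of the 32-bit XORshift generator of Algorithm XORshift:
   the state z is xored with shifted copies of itself.              */
static uint32_t xorshift32(uint32_t *z) {
    *z ^= *z << 13;
    *z ^= *z >> 17;
    *z ^= *z << 5;
    return *z;
}

int main(void) {
    uint32_t state = 123456789u;   /* nonzero seed (assumption) */
    for (int i = 0; i < 5; i++)
        printf("%u\n", (unsigned)xorshift32(&state));
    return 0;
}
\end{verbatim}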
This former generator has successfully passed various batteries of statistical tests, such as the NIST~\cite{bcgr11:ip}, DieHARD~\cite{Marsaglia1996}, and TestU01~\cite{LEcuyerS07} ones.
539 \subsection{Improving the Speed of the Former Generator}
541 Instead of updating only one cell at each iteration, we can try to choose a
542 subset of components and to update them together. Such an attempt leads
543 to a kind of merger of the two sequences used in Algorithm
544 \ref{CI Algorithm}. When the updating function is the vectorial negation,
545 this algorithm can be rewritten as follows:
\begin{equation}
\left\{
\begin{array}{l}
x^0 \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket, S \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket^\mathds{N} \\
\forall n \in \mathds{N}^*, x^n = x^{n-1} \oplus S^n,
\end{array}
\right.
\label{equation Oplus}
\end{equation}
556 where $\oplus$ is for the bitwise exclusive or between two integers.
557 This rewriting can be understood as follows. The $n-$th term $S^n$ of the
558 sequence $S$, which is an integer of $\mathsf{N}$ binary digits, presents
559 the list of cells to update in the state $x^n$ of the system (represented
560 as an integer having $\mathsf{N}$ bits too). More precisely, the $k-$th
561 component of this state (a binary digit) changes if and only if the $k-$th
562 digit in the binary decomposition of $S^n$ is 1.
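The following C sketch illustrates Equation~\ref{equation Oplus} with $\mathsf{N}=32$; the strategy terms $S^n$ are produced here by the XORshift round recalled previously, an arbitrary choice made only for this illustration, and the two seeds are arbitrary constants.
\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

/* Strategy source: the XORshift round of the previous sketch; any
   generator of N-bit integers could be substituted (assumption).   */
static uint32_t xorshift32(uint32_t *z) {
    *z ^= *z << 13;  *z ^= *z >> 17;  *z ^= *z << 5;
    return *z;
}

int main(void) {
    uint32_t x = 0xCAFEBABEu;  /* x^0: initial N = 32 bit state (arbitrary) */
    uint32_t s = 362436069u;   /* seed of the strategy generator (arbitrary) */
    for (int n = 1; n <= 5; n++) {
        uint32_t Sn = xorshift32(&s);  /* S^n: the set of cells to flip      */
        x ^= Sn;                       /* x^n = x^{n-1} xor S^n              */
        printf("x^%d = 0x%08X\n", n, (unsigned)x);
    }
    return 0;
}
\end{verbatim}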
564 The single basic component presented in Eq.~\ref{equation Oplus} is of
565 ordinary use as a good elementary brick in various PRNGs. It corresponds
566 to the following discrete dynamical system in chaotic iterations:
\begin{equation}
\forall n\in \mathds{N}^{\ast }, \forall i\in
\llbracket1;\mathsf{N}\rrbracket ,x_i^n=\left\{
\begin{array}{ll}
x_i^{n-1} & \text{ if } i \notin \mathcal{S}^n \\
\left(f(x^{n-1})\right)_{i} & \text{ if }i \in \mathcal{S}^n,
\end{array}
\right.
\end{equation}
577 where $f$ is the vectorial negation and $\forall n \in \mathds{N}$,
578 $\mathcal{S}^n \subset \llbracket 1, \mathsf{N} \rrbracket$ is such that
579 $k \in \mathcal{S}^n$ if and only if the $k-$th digit in the binary
580 decomposition of $S^n$ is 1. Such chaotic iterations are more general
581 than the ones presented in Definition \ref{Def:chaotic iterations} because, instead of updating only one term at each iteration,
582 we select a subset of components to change.
585 Obviously, replacing Algorithm~\ref{CI Algorithm} by
586 Equation~\ref{equation Oplus}, which is possible when the iteration function is
587 the vectorial negation, leads to a speed improvement. However, proofs
588 of chaos obtained in~\cite{bg10:ij} have been established
589 only for chaotic iterations of the form presented in Definition
\ref{Def:chaotic iterations}. The question is now to determine whether the
use of more general chaotic iterations to generate pseudorandom numbers
faster does not weaken their topological chaos properties.
594 \subsection{Proofs of Chaos of the General Formulation of the Chaotic Iterations}
596 Let us consider the discrete dynamical systems in chaotic iterations having
597 the general form: $\forall n\in \mathds{N}^{\ast }$, $ \forall i\in
598 \llbracket1;\mathsf{N}\rrbracket $,
\begin{equation}
\label{general CIs}
x_i^n=\left\{
\begin{array}{ll}
x_i^{n-1} & \text{ if } i \notin \mathcal{S}^n \\
\left(f(x^{n-1})\right)_{i} & \text{ if }i \in \mathcal{S}^n.
\end{array}
\right.
\end{equation}
In other words, at the $n^{th}$ iteration, only the cells whose indices belong
to the set $\mathcal{S}^{n}$ are iterated.
Let us now rewrite these general chaotic iterations as a usual discrete dynamical
system of the form $X^{n+1}=f(X^n)$ on an ad hoc metric space. Such a formulation
614 is required in order to study the topological behavior of the system.
616 Let us introduce the following function:
\begin{equation*}
\begin{array}{lrll}
\chi: & \llbracket 1; \mathsf{N} \rrbracket \times \mathcal{P}\left(\llbracket 1; \mathsf{N} \rrbracket\right) & \longrightarrow & \mathds{B}\\
& (i,X) & \longmapsto & \left\{ \begin{array}{ll} 0 & \textrm{if }i \notin X, \\ 1 & \textrm{if }i \in X, \end{array}\right.
\end{array}
\end{equation*}
623 where $\mathcal{P}\left(X\right)$ is for the powerset of the set $X$, that is, $Y \in \mathcal{P}\left(X\right) \Longleftrightarrow Y \subset X$.
625 Given a function $f:\mathds{B}^\mathsf{N} \longrightarrow \mathds{B}^\mathsf{N} $, define the function:
626 $F_{f}: \mathcal{P}\left(\llbracket1;\mathsf{N}\rrbracket \right) \times \mathds{B}^{\mathsf{N}}
627 \longrightarrow \mathds{B}^{\mathsf{N}}$
\begin{equation*}
\begin{array}{rll}
(P,E) & \longmapsto & \left( E_{j}.\overline{\chi (j,P)}+f(E)_{j}.\chi(j,P)\right) _{j\in \llbracket1;\mathsf{N}\rrbracket}
\end{array}
\end{equation*}
633 where + and . are the Boolean addition and product operations, and $\overline{x}$
634 is the negation of the Boolean $x$.
635 Consider the phase space:
\begin{equation*}
\mathcal{X} = \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N} \times
\mathds{B}^\mathsf{N},
\end{equation*}
640 \noindent and the map defined on $\mathcal{X}$:
\begin{equation*}
G_f\left(S,E\right) = \left(\sigma(S), F_f(i(S),E)\right),
\end{equation*}
644 \noindent where $\sigma$ is the \emph{shift} function defined by $\sigma
645 (S^{n})_{n\in \mathds{N}}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}\longrightarrow (S^{n+1})_{n\in
646 \mathds{N}}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}$ and $i$ is the \emph{initial function}
647 $i:(S^{n})_{n\in \mathds{N}} \in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}\longrightarrow S^{0}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)$.
648 Then the general chaotic iterations defined in Equation \ref{general CIs} can
649 be described by the following discrete dynamical system:
\begin{equation*}
\left\{
\begin{array}{l}
X^0 \in \mathcal{X} \\
X^{k+1}=G_{f}(X^k).
\end{array}
\right.
\end{equation*}

Once more, a shift function appears as a component of these general chaotic iterations.
To study Devaney's chaos property, a distance between two points
663 $X = (S,E), Y = (\check{S},\check{E})$ of $\mathcal{X}$ must be defined.
\begin{equation}
\label{nouveau d}
d(X,Y)=d_{e}(E,\check{E})+d_{s}(S,\check{S}),
\end{equation}
669 \noindent where $ \displaystyle{d_{e}(E,\check{E})} = \displaystyle{\sum_{k=1}^{\mathsf{N}%
670 }\delta (E_{k},\check{E}_{k})}$ is once more the Hamming distance, and
671 $ \displaystyle{d_{s}(S,\check{S})} = \displaystyle{\dfrac{9}{\mathsf{N}}%
\sum_{k=1}^{\infty }\dfrac{|S^k\Delta \check{S}^k|}{10^{k}}}$,
where $|X|$ denotes the cardinality of a set $X$ and $A\Delta B$ is the symmetric difference, defined for sets $A$, $B$ as
$A\,\Delta\,B = (A \setminus B) \cup (B \setminus A)$.
\begin{proposition}
The function $d$ defined in Eq.~\ref{nouveau d} is a metric on $\mathcal{X}$.
\end{proposition}
\begin{proof}
$d_e$ is the Hamming distance. We will prove that $d_s$ is a distance
too; thus $d$, being the sum of two distances, will also be a distance.
\begin{itemize}
\item Obviously, $d_s(S,\check{S})\geqslant 0$, and if $S=\check{S}$, then
697 $d_s(S,\check{S})=0$. Conversely, if $d_s(S,\check{S})=0$, then
$\forall k \in \mathds{N}, |S^k\Delta \check{S}^k|=0$, and so $\forall k, S^k=\check{S}^k$.
699 \item $d_s$ is symmetric
700 ($d_s(S,\check{S})=d_s(\check{S},S)$) due to the commutative property
701 of the symmetric difference.
702 \item Finally, $|S \Delta S''| = |(S \Delta \varnothing) \Delta S''|= |S \Delta (S'\Delta S') \Delta S''|= |(S \Delta S') \Delta (S' \Delta S'')|\leqslant |S \Delta S'| + |S' \Delta S''|$,
703 and so for all subsets $S,S',$ and $S''$ of $\llbracket 1, \mathsf{N} \rrbracket$,
we have $d_s(S,S'') \leqslant d_s(S,S')+d_s(S',S'')$, and the triangle
inequality is obtained.
\end{itemize}
\end{proof}
710 Before being able to study the topological behavior of the general
711 chaotic iterations, we must first establish that:
\begin{proposition}
For all $f:\mathds{B}^\mathsf{N} \longrightarrow \mathds{B}^\mathsf{N} $, the function $G_f$ is continuous on
$\left( \mathcal{X},d\right)$.
\end{proposition}
\begin{proof}
We use the sequential continuity.
721 Let $(S^n,E^n)_{n\in \mathds{N}}$ be a sequence of the phase space $%
722 \mathcal{X}$, which converges to $(S,E)$. We will prove that $\left(
723 G_{f}(S^n,E^n)\right) _{n\in \mathds{N}}$ converges to $\left(
724 G_{f}(S,E)\right) $. Let us remark that for all $n$, $S^n$ is a strategy,
thus, we consider a sequence of strategies (\emph{i.e.}, a sequence of sequences).
727 As $d((S^n,E^n);(S,E))$ converges to 0, each distance $d_{e}(E^n,E)$ and $d_{s}(S^n,S)$ converges
728 to 0. But $d_{e}(E^n,E)$ is an integer, so $\exists n_{0}\in \mathds{N},$ $%
729 d_{e}(E^n,E)=0$ for any $n\geqslant n_{0}$.\newline
730 In other words, there exists a threshold $n_{0}\in \mathds{N}$ after which no
731 cell will change its state:
732 $\exists n_{0}\in \mathds{N},n\geqslant n_{0}\Rightarrow E^n = E.$
734 In addition, $d_{s}(S^n,S)\longrightarrow 0,$ so $\exists n_{1}\in %
735 \mathds{N},d_{s}(S^n,S)<10^{-1}$ for all indexes greater than or equal to $%
736 n_{1}$. This means that for $n\geqslant n_{1}$, all the $S^n$ have the same
737 first term, which is $S^0$: $\forall n\geqslant n_{1},S_0^n=S_0.$
739 Thus, after the $max(n_{0},n_{1})^{th}$ term, states of $E^n$ and $E$ are
740 identical and strategies $S^n$ and $S$ start with the same first term.\newline
741 Consequently, states of $G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ are equal,
742 so, after the $max(n_0, n_1)^{th}$ term, the distance $d$ between these two points is strictly less than 1.\newline
743 \noindent We now prove that the distance between $\left(
744 G_{f}(S^n,E^n)\right) $ and $\left( G_{f}(S,E)\right) $ is convergent to
745 0. Let $\varepsilon >0$. \medskip
\begin{itemize}
\item If $\varepsilon \geqslant 1$, we see that the distance
748 between $\left( G_{f}(S^n,E^n)\right) $ and $\left( G_{f}(S,E)\right) $ is
749 strictly less than 1 after the $max(n_{0},n_{1})^{th}$ term (same state).
751 \item If $\varepsilon <1$, then $\exists k\in \mathds{N},10^{-k}\geqslant
752 \varepsilon > 10^{-(k+1)}$. But $d_{s}(S^n,S)$ converges to 0, so
754 \exists n_{2}\in \mathds{N},\forall n\geqslant
755 n_{2},d_{s}(S^n,S)<10^{-(k+2)},
757 thus after $n_{2}$, the $k+2$ first terms of $S^n$ and $S$ are equal.
759 \noindent As a consequence, the $k+1$ first entries of the strategies of $%
760 G_{f}(S^n,E^n)$ and $G_{f}(S,E)$ are the same ($G_{f}$ is a shift of strategies) and due to the definition of $d_{s}$, the floating part of
761 the distance between $(S^n,E^n)$ and $(S,E)$ is strictly less than $%
762 10^{-(k+1)}\leqslant \varepsilon $.
\end{itemize}
\medskip
\[
\forall \varepsilon >0,\ \exists N_{0}=\max(n_{0},n_{1},n_{2})\in \mathds{N},\
\forall n\geqslant N_{0},\
d\left( G_{f}(S^n,E^n);G_{f}(S,E)\right)
\leqslant \varepsilon .
\]
$G_{f}$ is consequently continuous.
\end{proof}
776 It is now possible to study the topological behavior of the general chaotic
777 iterations. We will prove that,
\begin{theorem}
\label{t:chaos des general}
The general chaotic iterations defined in Equation~\ref{general CIs} satisfy
Devaney's property of chaos.
\end{theorem}
785 Let us firstly prove the following lemma.
787 \begin{lemma}[Strong transitivity]
789 For all couples $X,Y \in \mathcal{X}$ and any neighborhood $V$ of $X$, we can
find $n \in \mathds{N}^*$ and $X' \in V$ such that $G_f^n(X')=Y$.
\end{lemma}
\begin{proof}
Let $X=(S,E)$, $\varepsilon>0$, and $k_0 = \lfloor -\log_{10}(\varepsilon) \rfloor + 1$.
Any point $X'=(S',E')$ such that $E'=E$ and $\forall k \leqslant k_0, S'^k=S^k$,
is in the open ball $\mathcal{B}\left(X,\varepsilon\right)$. Let us define
$\check{X} = \left(\check{S},\check{E}\right)$, where $\check{X}= G_f^{k_0}(X)$.
798 We denote by $s\subset \llbracket 1; \mathsf{N} \rrbracket$ the set of coordinates
799 that are different between $\check{E}$ and the state of $Y$. Thus each point $X'$ of
800 the form $(S',E')$ where $E'=E$ and $S'$ starts with
801 $(S^0, S^1, \hdots, S^{k_0},s,\hdots)$, verifies the following properties:
\begin{itemize}
\item $X'$ is in $\mathcal{B}\left(X,\varepsilon\right)$,
\item the state of $G_f^{k_0+1}(X')$ is the state of $Y$.
\end{itemize}
806 Finally the point $\left(\left(S^0, S^1, \hdots, S^{k_0},s,s^0, s^1, \hdots\right); E\right)$,
807 where $(s^0,s^1, \hdots)$ is the strategy of $Y$, satisfies the properties
claimed in the lemma.
\end{proof}
811 We can now prove the Theorem~\ref{t:chaos des general}.
813 \begin{proof}[Theorem~\ref{t:chaos des general}]
814 Firstly, strong transitivity implies transitivity.
816 Let $(S,E) \in\mathcal{X}$ and $\varepsilon >0$. To
817 prove that $G_f$ is regular, it is sufficient to prove that
818 there exists a strategy $\tilde S$ such that the distance between
819 $(\tilde S,E)$ and $(S,E)$ is less than $\varepsilon$, and such that
820 $(\tilde S,E)$ is a periodic point.
822 Let $t_1=\lfloor-\log_{10}(\varepsilon)\rfloor$, and let $E'$ be the
823 configuration that we obtain from $(S,E)$ after $t_1$ iterations of
824 $G_f$. As $G_f$ is strongly transitive, there exists a strategy $S'$
825 and $t_2\in\mathds{N}$ such
826 that $E$ is reached from $(S',E')$ after $t_2$ iterations of $G_f$.
828 Consider the strategy $\tilde S$ that alternates the first $t_1$ terms
829 of $S$ and the first $t_2$ terms of $S'$:
\[
\tilde S=(S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots).
\]
It
833 is clear that $(\tilde S,E)$ is obtained from $(\tilde S,E)$ after
834 $t_1+t_2$ iterations of $G_f$. So $(\tilde S,E)$ is a periodic
835 point. Since $\tilde S_t=S_t$ for $t<t_1$, by the choice of $t_1$, we
have $d((S,E),(\tilde S,E))<\varepsilon$.
\end{proof}
841 \section{Improving Statistical Properties Using Chaotic Iterations}
844 \subsection{The CIPRNG family}
846 Three categories of PRNGs have been derived from chaotic iterations. They are
847 recalled in what follows.
849 \subsubsection{Old CIPRNG}
Let $\mathsf{N} = 4$. Some chaotic iterations are performed to generate a sequence $\left(x^n\right)_{n\in\mathds{N}} \in \left(\mathds{B}^4\right)^\mathds{N}$ of Boolean vectors: the successive states of the iterated system. Some of these vectors are randomly extracted and their components constitute our pseudorandom bit flow~\cite{bgw09:ip}.
Chaotic iterations are realized as follows. The initial state $x^0 \in \mathds{B}^4$ is a Boolean vector taken as a seed and the chaotic strategy $\left(S^n\right)_{n\in\mathds{N}}\in \llbracket 1, 4 \rrbracket^\mathds{N}$ is constructed with $PRNG_2$. Lastly, the iterate function $f$ is the vectorial Boolean negation.
At each iteration, only the $S^n$-th component of state $x^n$ is updated. Finally, some $x^n$ are selected by a sequence $m^n$, provided by a second generator $PRNG_1$, as the pseudorandom bit sequence of our generator.
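A possible C sketch of one round of this Old CI generator is given below; the two stand-in input generators and the seed are arbitrary choices made only for this illustration, not the generators used in the experiments.
\begin{verbatim}
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-ins for PRNG_1 and PRNG_2; in the paper these are two input
   generators (e.g. XORshift), rand() is used here only to keep the
   sketch short (assumption).                                         */
static uint32_t prng1(void) { return (uint32_t)rand(); }
static uint32_t prng2(void) { return (uint32_t)rand(); }

/* One round of the Old CI generator on a 4-cell state x (bits 0..3):
   m+1 chaotic iterations of the vectorial negation, one cell flipped
   per iteration, then the state is returned as the output r.         */
static uint8_t old_ci_round(uint8_t *x) {
    uint32_t a = prng1();
    uint32_t m = a % 2 + 13;        /* number of iterations, as recalled above */
    for (uint32_t i = 0; i <= m; i++) {
        uint32_t S = prng2() % 4;   /* strategy term: which cell to update     */
        *x ^= (uint8_t)(1u << S);   /* x_S <- not x_S                          */
    }
    return *x;                      /* r <- x                                  */
}

int main(void) {
    uint8_t x = 0x5;                /* seed x^0 in B^4 (assumption) */
    for (int k = 0; k < 5; k++)
        printf("r = 0x%X\n", (unsigned)(old_ci_round(&x) & 0xF));
    return 0;
}
\end{verbatim}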
855 The basic design procedure of the Old CI generator is summed up in Algorithm~\ref{Chaotic iteration}.
The internal state is $x$, the output array is $r$, and $a$ and $b$ are the values computed by $PRNG_1$ and $PRNG_2$, respectively.
\begin{algorithm}[h!]
\textbf{Input:} the internal state $x$ (an array of 4-bit words)\\
861 \textbf{Output:} an array $r$ of 4-bit words
862 \begin{algorithmic}[1]
864 \STATE$a\leftarrow{PRNG_1()}$;
865 \STATE$m\leftarrow{a~mod~2+13}$;
\WHILE{$i=0,\dots,m$}
\STATE$b\leftarrow{PRNG_2()}$;
\STATE$S\leftarrow{b~mod~4}$;
\STATE$x_S\leftarrow{ \overline{x_S}}$;
\ENDWHILE
\STATE$r\leftarrow{x}$;
\end{algorithmic}
874 \caption{An arbitrary round of the old CI generator}
\label{Chaotic iteration}
\end{algorithm}
879 \subsubsection{New CIPRNG}
881 The New CI generator is designed by the following process~\cite{bg10:ip}. First of all, some chaotic iterations have to be done to generate a sequence $\left(x^n\right)_{n\in\mathds{N}} \in \left(\mathds{B}^{32}\right)^\mathds{N}$ of Boolean vectors, which are the successive states of the iterated system. Some of these vectors will be randomly extracted and our pseudo-random bit flow will be constituted by their components. Such chaotic iterations are realized as follows. Initial state $x^0 \in \mathds{B}^{32}$ is a Boolean vector taken as a seed and chaotic strategy $\left(S^n\right)_{n\in\mathds{N}}\in \llbracket 1, 32 \rrbracket^\mathds{N}$ is
882 an \emph{irregular decimation} of $PRNG_2$ sequence, as described in Algorithm~\ref{Chaotic iteration1}.
Again, at each iteration, only the $S^n$-th component of state $x^n$ is updated, as follows: $x_i^n = x_i^{n-1}$ if $i \neq S^n$, else $x_i^n = \overline{x_i^{n-1}}$.
885 Finally, some $x^n$ are selected
886 by a sequence $m^n$ as the pseudo-random bit sequence of our generator.
887 $(m^n)_{n \in \mathds{N}} \in \mathcal{M}^\mathds{N}$ is computed from $PRNG_1$, where $\mathcal{M}\subset \mathds{N}^*$ is a finite nonempty set of integers.
889 The basic design procedure of the New CI generator is summarized in Algorithm~\ref{Chaotic iteration1}.
890 The internal state is $x$, the output state is $r$. $a$ and $b$ are those computed by the two input
891 PRNGs. Lastly, the value $g_1(a)$ is an integer defined as in Eq.~\ref{Formula}.
\begin{equation}
\label{Formula}
g_1(y^n)=
\left\{
\begin{array}{l}
0 \text{ if }0 \leqslant{y^n}<{C^0_{32}},\\
1 \text{ if }{C^0_{32}} \leqslant{y^n}<\sum_{i=0}^1{C^i_{32}},\\
2 \text{ if }\sum_{i=0}^1{C^i_{32}} \leqslant{y^n}<\sum_{i=0}^2{C^i_{32}},\\
\vdots \\
N \text{ if }\sum_{i=0}^{N-1}{C^i_{32}}\leqslant{y^n}<1.
\end{array}
\right.
\end{equation}
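A possible C realization of this decimation function is sketched below; it assumes that $y^n$ is provided as a raw 32-bit integer, so that the comparisons are performed against cumulative binomial coefficients rather than their normalized values.
\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

/* Sketch of the decimation function g_1 of Eq. (Formula): the 32-bit
   output y of PRNG_1 is compared with cumulative binomial coefficients
   C(32,i), so that g_1 follows the distribution of the number of 1s in
   a random 32-bit word (normalization by 2^32 is kept implicit).       */
static int g1(uint32_t y) {
    uint64_t threshold = 0, binom = 1;     /* binom = C(32,i), C(32,0)=1 */
    for (int i = 0; i <= 32; i++) {
        threshold += binom;
        if ((uint64_t)y < threshold) return i;
        binom = binom * (32 - i) / (i + 1); /* C(32,i+1) from C(32,i)    */
    }
    return 32;  /* unreachable: the thresholds sum to 2^32 */
}

int main(void) {
    printf("%d %d %d\n", g1(0u), g1(0x80000000u), g1(0xFFFFFFFFu));
    return 0;   /* expected output: 0 16 32 */
}
\end{verbatim}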
\begin{algorithm}[h!]
\textbf{Input:} the internal state $x$ (32 bits)\\
909 \textbf{Output:} a state $r$ of 32 bits
910 \begin{algorithmic}[1]
\FOR{$i=0,\dots,\mathsf{N}$}
\STATE$d_i\leftarrow{0}$\;
\ENDFOR
\STATE$a\leftarrow{PRNG_1()}$\;
\STATE$m\leftarrow{g_1(a)}$\;
\STATE$k\leftarrow{m}$\;
\WHILE{$i=0,\dots,k$}
\STATE$b\leftarrow{PRNG_2()~mod~\mathsf{N}}$\;
\STATE$S\leftarrow{b}$\;
\IF{$d_S=0$}
\STATE $x_S\leftarrow{ \overline{x_S}}$\;
\STATE $d_S\leftarrow{1}$\;
\ELSIF{$d_S=1$}
\STATE $k\leftarrow{ k+1}$\;
\ENDIF
\ENDWHILE
\STATE $r\leftarrow{x}$\;
\end{algorithmic}
937 \caption{An arbitrary round of the new CI generator}
\label{Chaotic iteration1}
\end{algorithm}
943 \subsubsection{Xor CIPRNG}
945 Instead of updating only one cell at each iteration as Old CI and New CI, we can try to choose a
946 subset of components and to update them together. Such an attempt leads
947 to a kind of merger of the two random sequences. When the updating function is the vectorial negation,
948 this algorithm can be rewritten as follows~\cite{arxivRCCGPCH}:
\begin{equation}
\left\{
\begin{array}{l}
x^0 \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket, S \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket^\mathds{N} \\
\forall n \in \mathds{N}^*, x^n = x^{n-1} \oplus S^n.
\end{array}
\right.
\end{equation}
966 The single basic component presented in Eq.~\ref{equation Oplus} is of
967 ordinary use as a good elementary brick in various PRNGs. It corresponds
968 to the discrete dynamical system in chaotic iterations.
970 \subsection{About some Well-known PRNGs}
971 \label{The generation of pseudo-random sequence}
Let us now illustrate the fact that chaos appears to improve statistical properties.
978 \subsection{Details of some Existing Generators}
Here are the modules of PRNGs we have chosen to experiment with.
\subsubsection{LCG}
This module implements either the simple or the combined linear congruential generators (LCGs). The simple LCG is defined by the recurrence:
\begin{equation}
x^n = (ax^{n-1} + c)~mod~m,
\end{equation}
where $a$, $c$, and $x^0$ must be, among other things, non-negative and less than $m$~\cite{testU01}. In what follows, 2LCGs and 3LCGs refer to combinations of two (resp. three) such LCGs.
For further details, see~\cite{combined_lcg}.
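As an illustration, one step of such a simple LCG can be written in C as follows; the constants are the classical MINSTD parameters, chosen only for this example and not the parameters used in our experiments.
\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

/* One step of the LCG recurrence x^n = (a*x^{n-1} + c) mod m.
   MINSTD parameters (c = 0, m = 2^31 - 1), used here only as an example. */
static uint64_t lcg(uint64_t x) {
    const uint64_t a = 16807, c = 0, m = 2147483647ULL;
    return (a * x + c) % m;
}

int main(void) {
    uint64_t x = 1;   /* seed x^0 (must be nonzero for this choice of c) */
    for (int n = 0; n < 5; n++) {
        x = lcg(x);
        printf("%llu\n", (unsigned long long)x);
    }
    return 0;
}
\end{verbatim}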
\subsubsection{MRG}
This module implements multiple recursive generators (MRGs), based on a linear recurrence of order $k$, modulo $m$~\cite{testU01}:
\begin{equation}
x^n = (a^1x^{n-1}+~...~+a^kx^{n-k})~mod~m.
\end{equation}
A combination of two MRGs (referred to as 2MRG) is also used in this paper.
999 \subsubsection{UCARRY}
Generators based on linear recurrences with carry are implemented in this module. This includes the add-with-carry (AWC) generator, based on the recurrence:
\begin{equation}
\begin{array}{l}
x^n = (x^{n-r} + x^{n-s} + c^{n-1})~mod~m, \\
c^n= (x^{n-r} + x^{n-s} + c^{n-1}) / m, \end{array}\end{equation}
the SWB generator, having the recurrence:
\begin{equation}
\begin{array}{l}
x^n = (x^{n-r} - x^{n-s} - c^{n-1})~mod~m, \\
c^n=\left\{
\begin{array}{l}
1 ~~~~~\text{if}~ (x^{n-r} - x^{n-s} - c^{n-1})<0\\
0 ~~~~~\text{else},\end{array} \right. \end{array}\end{equation}
and the SWC generator designed by R. Couture, which is based on the following recurrence:
\begin{equation}
\begin{array}{l}
x^n = (a^1x^{n-1} \oplus ~...~ \oplus a^rx^{n-r} \oplus c^{n-1}) ~ mod ~ 2^w, \\
c^n = (a^1x^{n-1} \oplus ~...~ \oplus a^rx^{n-r} \oplus c^{n-1}) ~ / ~ 2^w. \end{array}\end{equation}
1022 \subsubsection{GFSR}
This module implements the generalized feedback shift register (GFSR) generator, that is:
\begin{equation}
x^n = x^{n-r} \oplus x^{n-k}.
\end{equation}
\subsubsection{INV}
Finally, this module implements the nonlinear inversive generator, as defined in~\cite{testU01}, which is:
\begin{equation}
z^n=\left\{
\begin{array}{ll}
(a^1 + a^2 / z^{n-1})~mod~m & \text{if}~ z^{n-1} \neq 0, \\
a^1 & \text{if}~ z^{n-1} = 0 .\end{array} \right. \end{equation}
1045 \subsection{Statistical tests}
1046 \label{Security analysis}
1048 %A theoretical proof for the randomness of a generator is impossible to give, therefore statistical inference based on observed sample sequences produced by the generator seems to be the best option.
1049 Considering the properties of binary random sequences, various statistical tests can be designed to evaluate the assertion that the sequence is generated by a perfectly random source. We have performed some statistical tests for the CIPRNGs proposed here. These tests include NIST suite~\cite{ANDREW2008} and DieHARD battery of tests~\cite{DieHARD}. For completeness and for reference, we give in the following subsection a brief description of each of the aforementioned tests.
1053 \subsubsection{NIST statistical tests suite}
Among the numerous standard tests for pseudo-randomness, a convincing way to show the randomness of the produced sequences is to confront them to the NIST (National Institute of Standards and Technology) statistical tests, an up-to-date test suite proposed by the Information Technology Laboratory (ITL). A new version of the statistical test suite was released on August 11, 2010.
1057 The NIST tests suite SP 800-22 is a statistical package consisting of 15 tests. They were developed to test the randomness of binary sequences produced by hardware or software based cryptographic pseudorandom number generators. These tests focus on a variety of different types of non-randomness that could exist in a sequence.
For each statistical test, a set of $P$-values (corresponding to the set of sequences) is produced.
The interpretation of empirical results can be conducted in various ways.
In this paper, the examination of the distribution of $P$-values to check for uniformity ($P\textrm{-value}_{T}$) is used.
If $P\textrm{-value}_{T} \geqslant 0.0001$, then the sequences can be considered to be uniformly distributed.

In our experiments, 100 sequences (s = 100), each 1,000,000 bits long, are generated and tested. If the $P\textrm{-value}_{T}$ of any test is smaller than 0.0001, the sequences are considered to be not good enough and the generating algorithm is not suitable for usage.
1071 \subsubsection{DieHARD battery of tests}
1072 The DieHARD battery of tests has been the most sophisticated standard for over a decade. Because of the stringent requirements in the DieHARD tests suite, a generator passing this battery of
1073 tests can be considered good as a rule of thumb.
1075 The DieHARD battery of tests consists of 18 different independent statistical tests. This collection
1076 of tests is based on assessing the randomness of bits comprising 32-bit integers obtained from
1077 a random number generator. Each test requires $2^{23}$ 32-bit integers in order to run the full set
of tests. Most of the tests in DieHARD return a $P$-value, which should be uniform on $[0,1)$ if the input file
contains truly independent random bits. These $P$-values are obtained by
$P=F(X)$, where $F$ is the assumed distribution of the sample random variable $X$ (often normal).
But that assumed $F$ is just an asymptotic approximation, for which the fit will be worst
in the tails. Thus occasional $P$-values near 0 or 1, such as 0.0012 or 0.9983, can occur.
An individual test is considered failed if its $P$-value approaches 1 closely, for example $P>0.9999$.
1086 \subsection{Results and discussion}
1087 \label{Results and discussion}
\begin{table*}
\renewcommand{\arraystretch}{1.3}
1090 \caption{NIST and DieHARD tests suite passing rates for PRNGs without CI}
1091 \label{NIST and DieHARD tests suite passing rate the for PRNGs without CI}
1093 \begin{tabular}{|l||c|c|c|c|c|c|c|c|c|c|}
1095 Types of PRNGs & \multicolumn{2}{c|}{Linear PRNGs} & \multicolumn{4}{c|}{Lagged PRNGs} & \multicolumn{1}{c|}{ICG PRNGs} & \multicolumn{3}{c|}{Mixed PRNGs}\\ \hline
1096 \backslashbox{\textbf{$Tests$}} {\textbf{$PRNG$}} & LCG& MRG& AWC & SWB & SWC & GFSR & INV & LCG2& LCG3& MRG2 \\ \hline
1097 NIST & 11/15 & 14/15 &\textbf{15/15} & \textbf{15/15} & 14/15 & 14/15 & 14/15 & 14/15& 14/15& 14/15 \\ \hline
DieHARD & 16/18 & 16/18 & 15/18 & 16/18 & \textbf{18/18} & 16/18 & 16/18 & 16/18& 16/18& 16/18\\ \hline
\end{tabular}
\end{table*}
1102 Table~\ref{NIST and DieHARD tests suite passing rate the for PRNGs without CI} shows the results on the batteries recalled above, indicating that almost all the PRNGs cannot pass all their tests. In other words, the statistical quality of these PRNGs cannot fulfill the up-to-date standards presented previously. We will show that the CIPRNG can solve this issue.
1104 To illustrate the effects of this CIPRNG in detail, experiments will be divided in three parts:
\begin{itemize}
\item \textbf{Single CIPRNG}: The PRNGs involved in CI computing are of the same category.
1107 \item \textbf{Mixed CIPRNG}: Two different types of PRNGs are mixed during the chaotic iterations process.
1108 \item \textbf{Multiple CIPRNG}: The generator is obtained by repeating the composition of the iteration function as follows: $x^0\in \mathds{B}^{\mathsf{N}}$, and $\forall n\in \mathds{N}^{\ast },\forall i\in \llbracket1;\mathsf{N}\rrbracket,$
\begin{equation*}
\begin{array}{l}
x_i^n= \left\{
\begin{array}{l}
x_i^{n-1}~~~~~\text{if}~S^n\neq i \\
\forall j\in \llbracket1;\mathsf{m}\rrbracket,f^m(x^{n-1})_{S^{nm+j}}~\text{if}~S^{nm+j}=i.\end{array} \right. \end{array}
\end{equation*}
$m$ is called the \emph{functional power}.
\end{itemize}
1120 We have performed statistical analysis of each of the aforementioned CIPRNGs.
1121 The results are reproduced in Tables~\ref{NIST and DieHARD tests suite passing rate the for PRNGs without CI} and \ref{NIST and DieHARD tests suite passing rate the for single CIPRNGs}.
1122 The scores written in boldface indicate that all the tests have been passed successfully, whereas an asterisk ``*'' means that the considered passing rate has been improved.
1124 \subsubsection{Tests based on the Single CIPRNG}
\begin{table*}
\renewcommand{\arraystretch}{1.3}
1128 \caption{NIST and DieHARD tests suite passing rates for PRNGs with CI}
1129 \label{NIST and DieHARD tests suite passing rate the for single CIPRNGs}
\begin{tabular}{|l||c|c|c|c|c|c|c|c|c|c|}
1133 Types of PRNGs & \multicolumn{2}{c|}{Linear PRNGs} & \multicolumn{4}{c|}{Lagged PRNGs} & \multicolumn{1}{c|}{ICG PRNGs} & \multicolumn{3}{c|}{Mixed PRNGs}\\ \hline
1134 \backslashbox{\textbf{$Tests$}} {\textbf{$Single~CIPRNG$}} & LCG & MRG & AWC & SWB & SWC & GFSR & INV& LCG2 & LCG3& MRG2 \\ \hline\hline
1135 Old CIPRNG\\ \hline \hline
1136 NIST & \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} & \textbf{15/15} & \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} *& \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} \\ \hline
1137 DieHARD & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} & \textbf{18/18} * & \textbf{18/18} *& \textbf{18/18} * & \textbf{18/18} *& \textbf{18/18} * \\ \hline
1138 New CIPRNG\\ \hline \hline
1139 NIST & \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} & \textbf{15/15} & \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} *& \textbf{15/15} * & \textbf{15/15} * & \textbf{15/15} \\ \hline
1140 DieHARD & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} * & \textbf{18/18} *& \textbf{18/18} *\\ \hline
1141 Xor CIPRNG\\ \hline\hline
1142 NIST & 14/15*& \textbf{15/15} * & \textbf{15/15} & \textbf{15/15} & 14/15 & \textbf{15/15} * & 14/15& \textbf{15/15} * & \textbf{15/15} *& \textbf{15/15} \\ \hline
DieHARD & 16/18 & 16/18 & 17/18* & \textbf{18/18} * & \textbf{18/18} & \textbf{18/18} * & 16/18 & 16/18 & 16/18& 16/18\\ \hline
\end{tabular}
\end{table*}
1147 The statistical tests results of the PRNGs using the single CIPRNG method are given in Table~\ref{NIST and DieHARD tests suite passing rate the for single CIPRNGs}.
1148 We can observe that, except for the Xor CIPRNG, all of the CIPRNGs have passed the 15 tests of the NIST battery and the 18 tests of the DieHARD one.
1149 Moreover, considering these scores, we can deduce that both the single Old CIPRNG and the single New CIPRNG are relatively steadier than the single Xor CIPRNG approach, when applying them to different PRNGs.
1150 However, the Xor CIPRNG is obviously the fastest approach to generate a CI random sequence, and it still improves the statistical properties relative to each generator taken alone, although the test values are not as good as desired.
1152 Therefore, all of these three ways are interesting, for different reasons, in the production of pseudorandom numbers and,
1153 on the whole, the single CIPRNG method can be considered to adapt to or improve all kinds of PRNGs.
To obtain a realization of the Xor CIPRNG that can pass all the tests embedded in the NIST battery, the Xor CIPRNG with multiple functional powers is investigated in Section~\ref{Tests based on Multiple CIPRNG}.
1158 \subsubsection{Tests based on the Mixed CIPRNG}
To compare the previous approach with the CIPRNG design that uses a Mixed CIPRNG, we have taken into account the same inputted generators as in the previous section.
These inputted couples $(PRNG_1,PRNG_2)$ of PRNGs are used in the Mixed approach as follows:
\begin{equation}
\left\{
\begin{array}{l}
x^0 \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket, S \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket^\mathds{N} \\
\forall n \in \mathds{N}^*, x^n = x^{n-1} \oplus PRNG_1\oplus PRNG_2.
\end{array}
\right.
\end{equation}
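A minimal C sketch of this Mixed Xor construction is given below; the two stand-in generators are arbitrary assumptions (any couple $(PRNG_1,PRNG_2)$ of 32-bit generators could be plugged in instead), and the seed is chosen only for this example.
\begin{verbatim}
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-ins for the two inputted generators PRNG_1 and PRNG_2
   (assumption: rand() only keeps the sketch short; note that rand()
   usually produces 31-bit values).                                  */
static uint32_t prng1(void) { return (uint32_t)rand(); }
static uint32_t prng2(void) { return (uint32_t)rand(); }

int main(void) {
    uint32_t x = 0x12345678u;            /* x^0 (assumption)                  */
    for (int n = 1; n <= 5; n++) {
        x = x ^ prng1() ^ prng2();       /* x^n = x^{n-1} xor PRNG_1 xor PRNG_2 */
        printf("x^%d = 0x%08X\n", n, (unsigned)x);
    }
    return 0;
}
\end{verbatim}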
1172 With this Mixed CIPRNG approach, both the Old CIPRNG and New CIPRNG continue to pass all the NIST and DieHARD suites.
1173 In addition, we can see that the PRNGs using a Xor CIPRNG approach can pass more tests than previously.
1174 The main reason of this success is that the Mixed Xor CIPRNG has a longer period.
1175 Indeed, let $n_{P}$ be the period of a PRNG $P$, then the period deduced from the single Xor CIPRNG approach is obviously equal to:
\begin{equation}
\left\{
\begin{array}{ll}
n_{P}&\text{if~}x^0=x^{n_{P}}\\
2n_{P}&\text{if~}x^0\neq x^{n_{P}}.\\
\end{array}
\right.
\end{equation}
1187 Let us now denote by $n_{P1}$ and $n_{P2}$ the periods of respectively the $PRNG_1$ and $PRNG_2$ generators, then the period of the Mixed Xor CIPRNG will be:
\begin{equation}
\left\{
\begin{array}{ll}
LCM(n_{P1},n_{P2})&\text{if~}x^0=x^{LCM(n_{P1},n_{P2})}\\
2LCM(n_{P1},n_{P2})&\text{if~}x^0\neq x^{LCM(n_{P1},n_{P2})}.\\
\end{array}
\right.
\end{equation}
In Table~\ref{DieHARD fail mixex CIPRNG}, we only show the results for the Mixed CIPRNGs that cannot pass all the DieHARD tests (the NIST tests are all passed). It demonstrates that the Mixed Xor CIPRNGs involving LCG, MRG, LCG2, LCG3, MRG2, or INV cannot pass the two following tests, namely the ``Matrix Rank 32x32'' and the ``COUNT-THE-1's'' tests contained in the DieHARD battery. Let us recall their definitions:
1202 \item \textbf{Matrix Rank 32x32.} A random 32x32 binary matrix is formed, each row having a 32-bit random vector. Its rank is an integer that ranges from 0 to 32. Ranks less than 29 must be rare, and their occurences must be pooled with those of rank 29. To achieve the test, ranks of 40,000 such random matrices are obtained, and a chisquare test is performed on counts for ranks 32,31,30 and for ranks $\leq29$.
\item \textbf{COUNT-THE-1's TEST.} Consider the file under test as a stream of bytes (four per 32-bit integer). Each byte can contain from 0 to 8 1's, with probabilities 1, 8, 28, 56, 70, 56, 28, 8, 1 over 256. Now let the stream of bytes provide a string of overlapping 5-letter words, each ``letter'' taking values A, B, C, D, E. The letters are determined by the number of 1's in a byte: 0, 1, or 2 yield A, 3 yields B, 4 yields C, 5 yields D, and 6, 7, or 8 yield E. Thus we have a monkey at a typewriter hitting five keys with various probabilities (37, 56, 70, 56, 37 over 256). There are $5^5$ possible 5-letter words, and from a string of 256,000 (overlapping) 5-letter words, counts are made on the frequencies of each word. The quadratic form in the weak inverse of the covariance matrix of the cell counts provides a chi-square test: Q5-Q4, the difference of the naive Pearson sums of $(OBS-EXP)^2/EXP$ on counts for 5- and 4-letter cell counts.
The reason for these failures is that the outputs of LCG, LCG2, LCG3, MRG, and MRG2 in these experiments are 31-bit numbers. Compared with the single CIPRNG approach, using different PRNGs to build a CIPRNG seems more efficient at improving the quality of the produced random numbers (the Mixed Xor CI passes 100\% of the NIST battery, whereas the single one does not).
1210 \renewcommand{\arraystretch}{1.3}
1211 \caption{Scores of mixed Xor CIPRNGs when considering the DieHARD battery}
1212 \label{DieHARD fail mixex CIPRNG}
1214 \begin{tabular}{|l||c|c|c|c|c|c|}
\backslashbox{\textbf{$PRNG_1$}} {\textbf{$PRNG_2$}} & LCG & MRG & INV & LCG2 & LCG3 & MRG2 \\ \hline\hline
1217 LCG &\backslashbox{} {} &16/18&16/18 &16/18 &16/18 &16/18\\ \hline
1218 MRG &16/18 &\backslashbox{} {} &16/18&16/18 &16/18 &16/18\\ \hline
1219 INV &16/18 &16/18&\backslashbox{} {} &16/18 &16/18&16/18 \\ \hline
1220 LCG2 &16/18 &16/18 &16/18 &\backslashbox{} {} &16/18&16/18\\ \hline
1221 LCG3 &16/18 &16/18 &16/18&16/18&\backslashbox{} {} &16/18\\ \hline
1222 MRG2 &16/18 &16/18 &16/18&16/18 &16/18 &\backslashbox{} {} \\ \hline
1226 \subsubsection{Tests based on the Multiple CIPRNG}
1227 \label{Tests based on Multiple CIPRNG}
Until now, the combination of at most two input PRNGs has been investigated.
We now consider the possibility of using a larger number of generators to improve the statistics of the generated pseudorandom numbers, leading to the multiple functional power approach.
For the CIPRNGs that have already passed both the NIST and DieHARD suites with 2 inputted PRNGs (all the Old and New CIPRNGs, and some of the Xor CIPRNGs), it is not meaningful to consider their adaptation to this multiple CIPRNG method; hence only the Multiple Xor CIPRNGs, which have the following form, will be investigated.
\begin{equation}
\left\{
\begin{array}{l}
x^0 \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket, S \in \llbracket 0, 2^\mathsf{N}-1 \rrbracket^\mathds{N} \\
\forall n \in \mathds{N}^*, x^n = x^{n-1} \oplus S^{nm} \oplus S^{nm+1} \oplus \ldots \oplus S^{nm+m-1},
\end{array}
\right.
\label{equation multiple Xor}
\end{equation}
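A minimal C sketch of this multiple functional power approach is given below; \texttt{inputted\_prng()} is a hypothetical placeholder for the generator producing the terms $S^k$ of the strategy.

\begin{lstlisting}[language=C,caption={Illustrative sketch of the Multiple Xor CIPRNG},label=lst:multipleXorSketch]
extern unsigned int inputted_prng(void);  /* produces S^0, S^1, ... (placeholder) */

/* one iteration: xor the state with m successive outputs of the inputted PRNG */
unsigned int multiple_xor_ciprng(unsigned int *x, int m) {
  for (int i = 0; i < m; i++)
    *x ^= inputted_prng();                /* S^{nm}, S^{nm+1}, ..., S^{nm+m-1} */
  return *x;
}
\end{lstlisting}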
The question is now to determine the value of the threshold $m$ (the functional power) making the multiple CIPRNG able to pass the whole NIST battery.
1243 Such a question is answered in Table~\ref{threshold}.
1247 \renewcommand{\arraystretch}{1.3}
1248 \caption{Functional power $m$ making it possible to pass the whole NIST battery}
1251 \begin{tabular}{|l||c|c|c|c|c|c|c|c|}
1253 Inputted $PRNG$ & LCG & MRG & SWC & GFSR & INV& LCG2 & LCG3 & MRG2 \\ \hline\hline
1254 Threshold value $m$& 19 & 7 & 2& 1 & 11& 9& 3& 4\\ \hline\hline
1258 \subsubsection{Results Summary}
1260 We can summarize the obtained results as follows.
1262 \item The CIPRNG method is able to improve the statistical properties of a large variety of PRNGs.
1263 \item Using different PRNGs in the CIPRNG approach is better than considering several instances of one unique PRNG.
1264 \item The statistical quality of the outputs increases with the functional power $m$.
1269 \section{Efficient PRNG based on Chaotic Iterations}
1270 \label{sec:efficient PRNG}
Based on the proof presented in the previous section, it is now possible to
improve the speed of the generator formerly presented in~\cite{bgw09:ip,guyeux10}.
The first idea is to consider
that the provided strategy is a pseudorandom Boolean vector obtained by a given PRNG.
An iteration of the system is simply the bitwise exclusive or between
the last computed state and the current strategy.
As the topological properties of disorder exhibited by chaotic
iterations can be inherited by the inputted generator, we hope by doing so to
obtain some statistical improvements while preserving speed.
1314 \lstset{language=C,caption={C code of the sequential PRNG based on chaotic iterations},label=algo:seqCIPRNG}
1318 unsigned int CIPRNG() {
1319 static unsigned int x = 123123123;
1320 unsigned long t1 = xorshift();
1321 unsigned long t2 = xor128();
1322 unsigned long t3 = xorwow();
1323 x = x^(unsigned int)t1;
1324 x = x^(unsigned int)(t2>>32);
1325 x = x^(unsigned int)(t3>>32);
1326 x = x^(unsigned int)t2;
1327 x = x^(unsigned int)(t1>>32);
x = x^(unsigned int)t3;
 return x;
}
In Listing~\ref{algo:seqCIPRNG} a sequential version of the proposed PRNG based
on chaotic iterations is presented. The xor operator is represented by
\textasciicircum. This function uses three classical 64-bit PRNGs, namely the
\texttt{xorshift}, the \texttt{xor128}, and the
\texttt{xorwow}~\cite{Marsaglia2003}. In the following, we call them ``xor-like
PRNGs''. As each xor-like PRNG produces 64-bit numbers whereas our proposed generator
works with 32-bit ones, we use the cast \texttt{(unsigned int)}, which selects the
32 least significant bits of a given integer, and the expression \texttt{(unsigned
int)(t$>>$32)} to obtain the 32 most significant bits of \texttt{t}.

Thus, producing a pseudorandom number requires 6 xor operations on 6 32-bit numbers
provided by 3 64-bit PRNGs. This version successfully passes the
stringent BigCrush battery of tests~\cite{LEcuyerS07}.
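For the sake of illustration, one possible 64-bit xorshift generator written in the style of~\cite{Marsaglia2003} is sketched below; the shift triple and the seed actually used in our implementation may differ.

\begin{lstlisting}[language=C,caption={A possible 64-bit xorshift generator (illustrative)},label=lst:xorshift64Sketch]
unsigned long long xorshift64() {
  /* one of Marsaglia's published shift triples (13, 7, 17) and seeds */
  static unsigned long long x = 88172645463325252ULL;
  x ^= x << 13;
  x ^= x >> 7;
  x ^= x << 17;
  return x;
}
\end{lstlisting}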
1350 \section{Efficient PRNGs based on Chaotic Iterations on GPU}
1351 \label{sec:efficient PRNG gpu}
In order to take advantage of the computing power of GPUs, a program
needs to have independent blocks of threads that can be computed
simultaneously. In general, the larger the number of threads is, the
more local memory is used, and the fewer branching instructions (if,
while, ...) are executed, the better the performance on the GPU is.
Obviously, having these requirements in mind, it is possible to build
a program similar to the one presented in
Listing~\ref{algo:seqCIPRNG}, which computes pseudorandom numbers on GPU. To
do so, we must first recall that in the CUDA~\cite{Nvid10}
environment, threads have a local identifier called
\texttt{ThreadIdx}, which is relative to the block containing
them. Furthermore, in CUDA, the parts of the code that are executed by the GPU
are called {\it kernels}.
1368 \subsection{Naive Version for GPU}
It is possible to deduce from the CPU version a quite similar version adapted to GPU.
The simple principle consists in making each thread of the GPU compute the CPU version of our PRNG.
Of course, the three xor-like
PRNGs used in these computations must have different parameters.
In a given thread, these parameters are
randomly picked from another PRNG.
The initialization stage is performed by the CPU.
To do so, the ISAAC PRNG~\cite{Jenkins96} is used to set all the
parameters embedded into each thread.
The implementation of the three
xor-like PRNGs is straightforward once their parameters have been
allocated in the GPU memory. Each xor-like PRNG works with an internal
number $x$ that stores the last generated pseudorandom number. Additionally, the
implementations of the xor128, the xorshift, and the xorwow respectively require
4, 5, and 6 unsigned longs as internal variables.
1391 \KwIn{InternalVarXorLikeArray: array with internal variables of the 3 xor-like
1392 PRNGs in global memory\;
1393 NumThreads: number of threads\;}
1394 \KwOut{NewNb: array containing random numbers in global memory}
1395 \If{threadIdx is concerned by the computation} {
retrieve data from InternalVarXorLikeArray[threadIdx] in local variables\;
\For{$i=0$ \KwTo $n-1$} {
compute a new PRNG as in Listing~\ref{algo:seqCIPRNG}\;
store the new PRNG in NewNb[NumThreads*threadIdx+i]\;
}
store internal variables in InternalVarXorLikeArray[threadIdx]\;
1404 \caption{Main kernel of the GPU ``naive'' version of the PRNG based on chaotic iterations}
1405 \label{algo:gpu_kernel}
Algorithm~\ref{algo:gpu_kernel} presents a naive implementation of the proposed PRNG on
GPU. Due to the available memory in the GPU and the number of threads
used simultaneously, the number of random numbers that a thread can generate
inside a kernel is limited (\emph{i.e.}, the variable \texttt{n} in
Algorithm~\ref{algo:gpu_kernel}). For instance, if $100,000$ threads are used and
if $n=100$\footnote{in fact, we need to add the initial seed (a 32-bit number)},
then the memory required to store all the internal variables of the xor-like
PRNGs\footnote{we multiply this number by $2$ in order to count 32-bit numbers}
and the pseudorandom numbers generated by our PRNG is equal to $100,000\times ((4+5+6)\times
2+(1+100))=13,100,000$ 32-bit numbers, that is, approximately $52$MB.
1421 This generator is able to pass the whole BigCrush battery of tests, for all
1422 the versions that have been tested depending on their number of threads
1423 (called \texttt{NumThreads} in our algorithm, tested up to $5$ million).
The proposed algorithm has the advantage of manipulating independent
PRNGs, so this version is also easily adaptable to a cluster of computers. The only thing
to ensure is to use a single ISAAC PRNG. To achieve this requirement, a simple solution consists in
using a master node for the initialization. This master node computes the initial parameters
for all the different nodes involved in the computation.
1433 \subsection{Improved Version for GPU}
As GPU cards using CUDA have shared memory between threads of the same block, it
is possible to use this feature in order to simplify the previous algorithm,
i.e., to use fewer than 3 xor-like PRNGs. The solution consists in computing only
one xor-like PRNG per thread, saving it into the shared memory, and then using the results
of some other threads in the same block of threads. In order to define which
thread uses the result of which other one, we can use a combination array that
contains the indexes of all the threads for which a combination has been defined.
In Algorithm~\ref{algo:gpu_kernel2}, two combination arrays are used. The
variable \texttt{offset} is computed using the value of
\texttt{combination\_size}. Then we can compute \texttt{o1} and \texttt{o2},
which represent the indexes of the other threads whose results are used by the
current one. In this algorithm, we consider that a 32-bit xor-like PRNG has
been chosen. In practice, we use the xor128 proposed in~\cite{Marsaglia2003}, in
which unsigned longs (64 bits) have been replaced by unsigned integers (32 bits).
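As a reminder, Marsaglia's xor128 rewritten with 32-bit unsigned integers can be sketched as follows (the seeds below are the illustrative ones of~\cite{Marsaglia2003}).

\begin{lstlisting}[language=C,caption={xor128 with 32-bit unsigned integers},label=lst:xor128Sketch]
unsigned int xor128() {
  /* the four 32-bit words of the internal state */
  static unsigned int x = 123456789, y = 362436069, z = 521288629, w = 88675123;
  unsigned int t = x ^ (x << 11);
  x = y; y = z; z = w;
  w = (w ^ (w >> 19)) ^ (t ^ (t >> 8));
  return w;
}
\end{lstlisting}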
1453 This version can also pass the whole {\it BigCrush} battery of tests.
\KwIn{InternalVarXorLikeArray: array with the internal variables of 1 xor-like PRNG in global memory\;
1459 NumThreads: Number of threads\;
1460 array\_comb1, array\_comb2: Arrays containing combinations of size combination\_size\;}
1462 \KwOut{NewNb: array containing random numbers in global memory}
1463 \If{threadId is concerned} {
1464 retrieve data from InternalVarXorLikeArray[threadId] in local variables including shared memory and x\;
offset = threadId\%combination\_size\;
o1 = threadId-offset+array\_comb1[offset]\;
o2 = threadId-offset+array\_comb2[offset]\;
\For{$i=0$ \KwTo $n-1$} {
t = xor-like()\;
t = t\textasciicircum shmem[o1]\textasciicircum shmem[o2]\;
shmem[threadId] = t\;
x = x\textasciicircum t\;
store the new PRNG in NewNb[NumThreads*threadId+i]\;
}
store internal variables in InternalVarXorLikeArray[threadId]\;
\caption{Main kernel for the chaotic iterations based PRNG GPU efficient version}
1481 \label{algo:gpu_kernel2}
1484 \subsection{Theoretical Evaluation of the Improved Version}
1486 A run of Algorithm~\ref{algo:gpu_kernel2} consists in an operation ($x=x\oplus t$) having
1487 the form of Equation~\ref{equation Oplus}, which is equivalent to the iterative
1488 system of Eq.~\ref{eq:generalIC}. That is, an iteration of the general chaotic
1489 iterations is realized between the last stored value $x$ of the thread and a strategy $t$
1490 (obtained by a bitwise exclusive or between a value provided by a xor-like() call
1491 and two values previously obtained by two other threads).
1492 To be certain that we are in the framework of Theorem~\ref{t:chaos des general},
1493 we must guarantee that this dynamical system iterates on the space
1494 $\mathcal{X} = \mathcal{P}\left(\llbracket 1, \mathsf{N} \rrbracket\right)^\mathds{N}\times\mathds{B}^\mathsf{N}$.
1495 The left term $x$ obviously belongs to $\mathds{B}^ \mathsf{N}$.
To prevent any flaw in the chaotic properties, we must check that the right
term (the last $t$), corresponding to the strategies, can possibly be equal to any
integer of $\llbracket 1, \mathsf{N} \rrbracket$.
Such a result is obvious since, for the xor-like(), all the
integers belonging to its interval of definition can occur at each iteration, and thus the
last $t$ respects the requirement. Furthermore, it is possible to
prove by an immediate mathematical induction that, as the initial $x$
is uniformly distributed (it is provided by a cryptographically secure PRNG),
the two other stored values shmem[o1] and shmem[o2] are uniformly distributed too
(this is the induction hypothesis), and thus the next $x$ is finally uniformly distributed.
1508 Thus Algorithm~\ref{algo:gpu_kernel2} is a concrete realization of the general
1509 chaotic iterations presented previously, and for this reason, it satisfies the
1510 Devaney's formulation of a chaotic behavior.
1512 \section{Experiments}
1513 \label{sec:experiments}
1515 Different experiments have been performed in order to measure the generation
speed. We have used a first computer equipped with a Tesla C1060 NVidia GPU card
and an Intel Xeon E5530 clocked at 2.40 GHz, and
a second computer equipped with a smaller CPU and a GeForce GTX 280.
Both GPU cards have 240 cores.
1523 In Figure~\ref{fig:time_xorlike_gpu} we compare the quantity of pseudorandom numbers
1524 generated per second with various xor-like based PRNGs. In this figure, the optimized
1525 versions use the {\it xor64} described in~\cite{Marsaglia2003}, whereas the naive versions
1526 embed the three xor-like PRNGs described in Listing~\ref{algo:seqCIPRNG}. In
order to obtain optimal performance, the storage of the pseudorandom numbers
into the GPU memory has been removed. This step is time consuming and slows down
the generation of the numbers. Moreover, this storage is completely
useless for applications that consume the pseudorandom
numbers directly after their generation. We can see that when the number of threads is greater
1532 than approximately 30,000 and lower than 5 million, the number of pseudorandom numbers generated
1533 per second is almost constant. With the naive version, this value ranges from 2.5 to
1534 3GSamples/s. With the optimized version, it is approximately equal to
1535 20GSamples/s. Finally we can remark that both GPU cards are quite similar, but in
1536 practice, the Tesla C1060 has more memory than the GTX 280, and this memory
1537 should be of better quality.
1538 As a comparison, Listing~\ref{algo:seqCIPRNG} leads to the generation of about
1539 138MSample/s when using one core of the Xeon E5530.
1541 \begin{figure}[htbp]
1543 \includegraphics[width=\columnwidth]{curve_time_xorlike_gpu.pdf}
1545 \caption{Quantity of pseudorandom numbers generated per second with the xorlike-based PRNG}
1546 \label{fig:time_xorlike_gpu}
1553 In Figure~\ref{fig:time_bbs_gpu} we highlight the performances of the optimized
1554 BBS-based PRNG on GPU. On the Tesla C1060 we obtain approximately 700MSample/s
1555 and on the GTX 280 about 670MSample/s, which is obviously slower than the
1556 xorlike-based PRNG on GPU. However, we will show in the next sections that this
new PRNG has a strong level of security, which is necessarily paid by a speed penalty.
1560 \begin{figure}[htbp]
1562 \includegraphics[width=\columnwidth]{curve_time_bbs_gpu.pdf}
1564 \caption{Quantity of pseudorandom numbers generated per second using the BBS-based PRNG}
1565 \label{fig:time_bbs_gpu}
All these experiments allow us to conclude that it is possible to
generate a very large quantity of statistically perfect pseudorandom numbers with the xor-like version.
To a certain extent, it is also the case with the secure BBS-based version, the speed reduction being
explained by the fact that the former version has ``only''
chaotic properties and statistical perfection, whereas the latter is also cryptographically secure,
as shown in the next sections.
1581 \section{Security Analysis}
1582 \label{sec:security analysis}
In this section, the concatenation of two strings $u$ and $v$ is classically denoted by $uv$.
1588 In a cryptographic context, a pseudorandom generator is a deterministic
1589 algorithm $G$ transforming strings into strings and such that, for any
1590 seed $s$ of length $m$, $G(s)$ (the output of $G$ on the input $s$) has size
1591 $\ell_G(m)$ with $\ell_G(m)>m$.
1592 The notion of {\it secure} PRNGs can now be defined as follows.
A cryptographic PRNG $G$ is secure if for any probabilistic polynomial time
algorithm $D$, for any positive polynomial $p$, and for all sufficiently large $m$,
$$| \mathrm{Pr}[D(G(U_m))=1]-\mathrm{Pr}[D(U_{\ell_G(m)})=1]|< \frac{1}{p(m)},$$
1599 where $U_r$ is the uniform distribution over $\{0,1\}^r$ and the
1600 probabilities are taken over $U_m$, $U_{\ell_G(m)}$ as well as over the
1601 internal coin tosses of $D$.
Intuitively, it means that there is no polynomial time algorithm that can
distinguish a perfect uniform random generator from $G$ with a non-negligible
probability. The interested reader is referred
to~\cite[chapter~3]{Goldreich} for more information. Note that it is
quite easy to change the function $\ell$ into any polynomial
function $\ell^\prime$ satisfying $\ell^\prime(m)>m$~\cite[Chapter 3.3]{Goldreich}.
The generation schema developed in (\ref{equation Oplus}) is based on a
pseudorandom generator. Let $H$ be a cryptographic PRNG. We may assume,
without loss of generality, that for any string $S_0$ of size $N$, the size
of $H(S_0)$ is $kN$, with $k>2$. It means that $\ell_H(N)=kN$.
Let $S_1,\ldots,S_k$ be the
strings of length $N$ such that $H(S_0)=S_1 \ldots S_k$ ($H(S_0)$ is the concatenation of
the $S_i$'s). The cryptographic PRNG $X$ defined in (\ref{equation Oplus})
is the algorithm mapping any string $x_0S_0$ of length $2N$ into the string
$(x_0\oplus S_0 \oplus S_1)(x_0\oplus S_0 \oplus S_1\oplus S_2)\ldots
(x_0\bigoplus_{i=0}^{i=k}S_i)$. In particular, one has $\ell_{X}(2N)=kN=\ell_H(N)$.
We now claim that if $H$ is secure,
then $X$ is secure too.
1625 \label{cryptopreuve}
If $H$ is a secure cryptographic PRNG, then $X$ is a secure cryptographic PRNG.
The proposition is proved by contraposition. Assume that $X$ is not
secure. By definition, there exist a polynomial time probabilistic
algorithm $D$ and a positive polynomial $p$ such that for all $k_0$ there exists
$N\geq \frac{k_0}{2}$ satisfying
$$| \mathrm{Pr}[D(X(U_{2N}))=1]-\mathrm{Pr}[D(U_{kN})=1]|\geq \frac{1}{p(2N)}.$$
We describe a new probabilistic algorithm $D^\prime$ on an input $w$ of size $kN$:
1639 \item Decompose $w$ into $w=w_1\ldots w_{k}$, where each $w_i$ has size $N$.
1640 \item Pick a string $y$ of size $N$ uniformly at random.
1641 \item Compute $z=(y\oplus w_1)(y\oplus w_1\oplus w_2)\ldots (y
1642 \bigoplus_{i=1}^{i=k} w_i).$
1643 \item Return $D(z)$.
Consider for each $y\in \mathbb{B}^{N}$ the function $\varphi_{y}$
from $\mathbb{B}^{kN}$ into $\mathbb{B}^{kN}$ mapping $w=w_1\ldots w_k$
(where each $w_i$ has length $N$) to
$(y\oplus w_1)(y\oplus w_1\oplus w_2)\ldots (y
\bigoplus_{i=1}^{i=k} w_i).$ By construction, one has for every $w$,
1652 \begin{equation}\label{PCH-1}
1653 D^\prime(w)=D(\varphi_y(w)),
1655 where $y$ is randomly generated.
Moreover, for each $y$, $\varphi_{y}$ is injective: if
$(y\oplus w_1)(y\oplus w_1\oplus w_2)\ldots (y\bigoplus_{i=1}^{i=k}
w_i)=(y\oplus w_1^\prime)(y\oplus w_1^\prime\oplus w_2^\prime)\ldots
(y\bigoplus_{i=1}^{i=k} w_i^\prime)$, then for every $1\leq j\leq k$,
$y\bigoplus_{i=1}^{i=j} w_i^\prime=y\bigoplus_{i=1}^{i=j} w_i$. It follows,
1661 by a direct induction, that $w_i=w_i^\prime$. Furthermore, since $\mathbb{B}^{kN}$
is finite, each $\varphi_y$ is bijective. Therefore, using (\ref{PCH-1}), one has
$\mathrm{Pr}[D^\prime(U_{kN})=1]=\mathrm{Pr}[D(\varphi_y(U_{kN}))=1]$ and,
1666 \begin{equation}\label{PCH-2}
1667 \mathrm{Pr}[D^\prime(U_{kN})=1]=\mathrm{Pr}[D(U_{kN})=1].
1670 Now, using (\ref{PCH-1}) again, one has for every $x$,
1671 \begin{equation}\label{PCH-3}
1672 D^\prime(H(x))=D(\varphi_y(H(x))),
where $y$ is randomly generated. By construction, $\varphi_y(H(x))=X(yx)$, thus
\begin{equation}
D^\prime(H(x))=D(X(yx)),
where $y$ is randomly generated. It follows that
\begin{equation}\label{PCH-4}
\mathrm{Pr}[D^\prime(H(U_{N}))=1]=\mathrm{Pr}[D(X(U_{2N}))=1].
From (\ref{PCH-2}) and (\ref{PCH-4}), one can deduce that
there exist a polynomial time probabilistic
algorithm $D^\prime$ and a positive polynomial $p$ such that for all $k_0$ there exists
$N\geq \frac{k_0}{2}$ satisfying
$$| \mathrm{Pr}[D^\prime(H(U_{N}))=1]-\mathrm{Pr}[D^\prime(U_{kN})=1]|\geq \frac{1}{p(2N)},$$
proving that $H$ is not secure, which is a contradiction.
1694 \section{Cryptographical Applications}
1696 \subsection{A Cryptographically Secure PRNG for GPU}
1699 It is possible to build a cryptographically secure PRNG based on the previous
1700 algorithm (Algorithm~\ref{algo:gpu_kernel2}). Due to Proposition~\ref{cryptopreuve},
1701 it simply consists in replacing
1702 the {\it xor-like} PRNG by a cryptographically secure one.
We have chosen the Blum Blum Shub generator~\cite{BBS} (usually denoted by BBS), which has the form
$$x_{n+1}=x_n^2~ \mathrm{mod}~ M,$$ where $M$ is the product of two prime numbers (these
prime numbers need to be congruent to 3 modulo 4). BBS is known to be
very slow and to be usable only for cryptographic applications.
The modulus operation is the most time consuming operation on current
GPU cards. So, in order to obtain reasonable performance, it is
required to use only modulus operations on 32-bit integers. Consequently
$x_n^2$ needs to be smaller than $2^{32}$, and thus the number $M$ must be
smaller than $2^{16}$. So in practice we can choose prime numbers around
256 that are congruent to 3 modulo 4. With 32-bit numbers, only the
4 least significant bits of $x_n$ can be kept (the maximum number of
indistinguishable bits is lower than or equal to
$\log_2(\log_2(M))$). In other words, to generate a 32-bit number, we need to call
the BBS algorithm 8 times, with possibly different values of $M$. This
approach is not sufficient to pass all the tests of TestU01,
as small values of $M$ for the BBS lead to
small periods. So, in order to add randomness, we have made
the following modifications.
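To fix ideas, a minimal C sketch of this construction is proposed below; \texttt{bbs\_state} and \texttt{bbs\_modulus} are hypothetical names, and each modulus is assumed to be smaller than $2^{16}$ and to be a product of two primes congruent to 3 modulo 4.

\begin{lstlisting}[language=C,caption={Illustrative sketch: a 32-bit number built from 8 small BBS generators},label=lst:bbs32Sketch]
/* one BBS step: x_{n+1} = x_n^2 mod M, with M < 2^16 so that x_n^2 fits in 32 bits */
unsigned int bbs_step(unsigned int *x, unsigned int M) {
  *x = ((*x) * (*x)) % M;
  return (*x) & 15;              /* keep only the 4 least significant bits */
}

/* a 32-bit number obtained from 8 BBS generators, 4 bits at a time */
unsigned int bbs32(unsigned int bbs_state[8], const unsigned int bbs_modulus[8]) {
  unsigned int t = 0;
  for (int i = 0; i < 8; i++) {
    t <<= 4;
    t |= bbs_step(&bbs_state[i], bbs_modulus[i]);
  }
  return t;
}
\end{lstlisting}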
Firstly, we define 16 arrangement arrays instead of 2 (as described in
Algorithm~\ref{algo:gpu_kernel2}), but only 2 of them are used at each call of
the PRNG kernels. In practice, the selection of the combination
arrays to be used is different for each thread. It is determined
by using the three last bits of two internal variables used by BBS.
In Algorithm~\ref{algo:bbs_gpu},
the character \& stands for the bitwise AND. Thus applying \&7 to a number
gives its last 3 bits, providing a number between 0 and 7.
Secondly, after the generation of the 8 BBS numbers for each thread, we
have a 32-bit number whose period is possibly quite small. So,
to add randomness, we generate 4 more BBS numbers in order to
shift the 32-bit number and add up to 6 new bits. This improvement is
described in Algorithm~\ref{algo:bbs_gpu}. In practice, the last 2 bits
of the first new BBS number are used to make a left shift of at most
3 bits. The last 3 bits of the second new BBS number are added to the
strategy, whatever the value of the first left shift. The third and the
fourth new BBS numbers are used similarly to apply a second left shift and to add up to 3 new bits.
Finally, as we use 8 BBS numbers for each thread, the storage of these
numbers at the end of the kernel is performed using a rotation. So, the
internal variable for BBS number 1 is stored in place 2, the internal
variable for BBS number 2 is stored in place 3, ..., and finally, the internal
variable for BBS number 8 is stored in place 1.
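This rotation can be sketched in C as follows (array indices start at 0 in this illustration).

\begin{lstlisting}[language=C,caption={Illustrative rotation of the 8 BBS internal states},label=lst:bbsRotationSketch]
/* the state of BBS number i is stored in place i+1, and state 8 goes back to place 1 */
void rotate_bbs_states(unsigned int s[8]) {
  unsigned int last = s[7];
  for (int i = 7; i > 0; i--)
    s[i] = s[i - 1];
  s[0] = last;
}
\end{lstlisting}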
1755 \KwIn{InternalVarBBSArray: array with internal variables of the 8 BBS
1757 NumThreads: Number of threads\;
1758 array\_comb: 2D Arrays containing 16 combinations (in first dimension) of size combination\_size (in second dimension)\;
1759 array\_shift[4]=\{0,1,3,7\}\;
1762 \KwOut{NewNb: array containing random numbers in global memory}
1763 \If{threadId is concerned} {
1764 retrieve data from InternalVarBBSArray[threadId] in local variables including shared memory and x\;
1765 we consider that bbs1 ... bbs8 represent the internal states of the 8 BBS numbers\;
1766 offset = threadIdx\%combination\_size\;
1767 o1 = threadIdx-offset+array\_comb[bbs1\&7][offset]\;
o2 = threadIdx-offset+array\_comb[8+(bbs2\&7)][offset]\;
1775 \tcp{two new shifts}
shift=BBS3(bbs3)\&3\;
t<<=shift\;
t|=BBS1(bbs1)\&array\_shift[shift]\;
shift=BBS7(bbs7)\&3\;
t<<=shift\;
t|=BBS2(bbs2)\&array\_shift[shift]\;
1782 t=t\textasciicircum shmem[o1]\textasciicircum shmem[o2]\;
1783 shared\_mem[threadId]=t\;
1784 x = x\textasciicircum t\;
1786 store the new PRNG in NewNb[NumThreads*threadId+i]\;
store internal variables in InternalVarBBSArray[threadId] using a rotation\;
\caption{Main kernel for the BBS-based PRNG on GPU}
1792 \label{algo:bbs_gpu}
In Algorithm~\ref{algo:bbs_gpu}, $n$ is the quantity of random numbers that
a thread has to generate. The operation \texttt{t<<=4} performs a left shift of 4 bits
on the variable $t$ and stores the result in $t$, and \texttt{BBS1(bbs1)\&15} selects
the last four bits of the result of $BBS1$. Thus an operation of the form
\texttt{t<<=4; t|=BBS1(bbs1)\&15;} realizes in $t$ a left shift of 4 bits, and then
puts the 4 last bits of $BBS1(bbs1)$ into the four last positions of $t$. Let us
remark that the initialization of $t$ is not necessary, as we fill it 4 bits by 4
bits until 32 bits have been obtained. The two last new shifts are realized in
order to enlarge the small periods of the BBS generators used here and to introduce a kind of
variability. In these operations, we make twice a left shift of $t$ of \emph{at
most} 3 bits, represented by \texttt{shift} in the algorithm, and we put
\emph{exactly} the \texttt{shift} last bits from a BBS into the \texttt{shift}
last bits of $t$. For this, an array named \texttt{array\_shift} is used; it gives,
for each shift value, the corresponding mask (\texttt{shift} bits set to 1) used in
the \texttt{and} operation. For example, with a left shift of 0
we make an \texttt{and} operation with 0, and with a left shift of 3 we make an \texttt{and}
operation with 7 (represented by 111 in binary).
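This mechanism can be summarized by the following C sketch, in which \texttt{bbs\_out} stands for the output of the corresponding BBS call.

\begin{lstlisting}[language=C,caption={Illustrative sketch of the variable left shift},label=lst:arrayShiftSketch]
static const unsigned int array_shift[4] = {0, 1, 3, 7};  /* masks 0, 1, 11, 111 in binary */

/* shift t left by 'shift' (0..3) bits, then insert exactly 'shift' bits of bbs_out */
unsigned int add_shifted_bits(unsigned int t, unsigned int shift, unsigned int bbs_out) {
  t <<= shift;
  t |= bbs_out & array_shift[shift];
  return t;
}
\end{lstlisting}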
It should be noticed that this generator has once more the form $x^{n+1} = x^n \oplus S^n$,
where $S^n$ is referred to in this algorithm as $t$: each iteration of this
PRNG ends with $x = x \oplus t$. This $S^n$ is only constituted
of secure bits produced by the BBS generator, and thus, due to
Proposition~\ref{cryptopreuve}, the resulting PRNG is cryptographically secure.
1823 \subsection{Practical Security Evaluation}
1825 Suppose now that the PRNG will work during
1826 $M=100$ time units, and that during this period,
1827 an attacker can realize $10^{12}$ clock cycles.
1828 We thus wonder whether, during the PRNG's
1829 lifetime, the attacker can distinguish this
sequence from a truly random one, with a probability
1831 greater than $\varepsilon = 0.2$.
1832 We consider that $N$ has 900 bits.
1834 The random process is the BBS generator, which
1835 is cryptographically secure. More precisely, it
1836 is $(T,\varepsilon)-$secure: no
1837 $(T,\varepsilon)-$distinguishing attack can be
1838 successfully realized on this PRNG, if~\cite{Fischlin}
$$T \leqslant \dfrac{L(N)}{6 N (\log_2(N))\varepsilon^{-2}M^2}-2^7 N \varepsilon^{-2} M^2 \log_2 (8 N \varepsilon^{-1}M),$$
where $M$ is the length of the output ($M=100$ in
our example), and where
$$L(N) = 2.8\times 10^{-3} \exp\left(1.9229 \times (N \ln 2)^{\frac{1}{3}} \times \left(\ln(N \ln 2)\right)^{\frac{2}{3}}\right)$$
is the number of clock cycles required to factor an $N$-bit integer.
1850 A direct numerical application shows that this attacker
1851 cannot achieve its $(10^{12},0.2)$ distinguishing
1852 attack in that context.
1856 \subsection{Toward a Cryptographically Secure and Chaotic Asymmetric Cryptosystem}
1857 \label{Blum-Goldwasser}
1858 We finish this research work by giving some thoughts about the use of
1859 the proposed PRNG in an asymmetric cryptosystem.
1860 This first approach will be further investigated in a future work.
1862 \subsubsection{Recalls of the Blum-Goldwasser Probabilistic Cryptosystem}
1864 The Blum-Goldwasser cryptosystem is a cryptographically secure asymmetric key encryption algorithm
1865 proposed in 1984~\cite{Blum:1985:EPP:19478.19501}. The encryption algorithm
implements an XOR-based stream cipher using the BBS PRNG, in order to generate
1867 the keystream. Decryption is done by obtaining the initial seed thanks to
1868 the final state of the BBS generator and the secret key, thus leading to the
1869 reconstruction of the keystream.
1871 The key generation consists in generating two prime numbers $(p,q)$,
1872 randomly and independently of each other, that are
congruent to 3 mod 4, and in computing the modulus $N=pq$.
1874 The public key is $N$, whereas the secret key is the factorization $(p,q)$.
1877 Suppose Bob wishes to send a string $m=(m_0, \dots, m_{L-1})$ of $L$ bits to Alice:
1879 \item Bob picks an integer $r$ randomly in the interval $\llbracket 1,N\rrbracket$ and computes $x_0 = r^2~mod~N$.
1880 \item He uses the BBS to generate the keystream of $L$ pseudorandom bits $(b_0, \dots, b_{L-1})$, as follows. For $i=0$ to $L-1$,
1883 \item While $i \leqslant L-1$:
\item Set $b_i$ equal to the least-significant\footnote{As signaled previously, BBS can securely output up to $\mathsf{N} = \lfloor \log_2(\log_2(N)) \rfloor$ of the least-significant bits of $x_i$ during each round.} bit of $x_i$,
1887 \item $x_i = (x_{i-1})^2~mod~N.$
1890 \item The ciphertext is computed by XORing the plaintext bits $m$ with the keystream: $ c = (c_0, \dots, c_{L-1}) = m \oplus b$. This ciphertext is $[c, y]$, where $y=x_{0}^{2^{L}}~mod~N.$
1894 When Alice receives $\left[(c_0, \dots, c_{L-1}), y\right]$, she can recover $m$ as follows:
1896 \item Using the secret key $(p,q)$, she computes $r_p = y^{((p+1)/4)^{L}}~mod~p$ and $r_q = y^{((q+1)/4)^{L}}~mod~q$.
\item The initial seed can be obtained using the following procedure: $x_0=\left(q(q^{-1}~{mod}~p)r_p + p(p^{-1}~{mod}~q)r_q\right)~{mod}~N$.
1898 \item She recomputes the bit-vector $b$ by using BBS and $x_0$.
1899 \item Alice finally computes the plaintext by XORing the keystream with the ciphertext: $ m = c \oplus b$.
1903 \subsubsection{Proposal of a new Asymmetric Cryptosystem Adapted from Blum-Goldwasser}
We propose to adapt the Blum-Goldwasser protocol as follows.
Let $\mathsf{N} = \lfloor \log_2(\log_2(N)) \rfloor$ be the number of bits that can
be obtained securely with the BBS generator using the public key $N$ of Alice.
Alice will also pick randomly $S^0$ in $\llbracket 0, 2^{\mathsf{N}}-1\rrbracket$, and
her new public key will be $(S^0, N)$.
1911 To encrypt his message, Bob will compute
1914 $c = \left(m_0 \oplus (b_0 \oplus S^0), m_1 \oplus (b_0 \oplus b_1 \oplus S^0), \hdots, \right.$
1915 $ \left. m_{L-1} \oplus (b_0 \oplus b_1 \hdots \oplus b_{L-1} \oplus S^0) \right)$
1917 instead of $\left(m_0 \oplus b_0, m_1 \oplus b_1, \hdots, m_{L-1} \oplus b_{L-1} \right)$.
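A minimal C sketch of this keystream combination is given below; bits are stored one per array cell for clarity, and the function name is ours.

\begin{lstlisting}[language=C,caption={Illustrative computation of the modified ciphertext},label=lst:bgEncryptSketch]
/* c_i = m_i xor (b_0 xor ... xor b_i xor S^0), one bit per array cell */
void encrypt_bits(const unsigned char *m, const unsigned char *b,
                  unsigned char S0, unsigned char *c, int L) {
  unsigned char acc = S0;        /* running value S^0 xor b_0 xor ... xor b_i */
  for (int i = 0; i < L; i++) {
    acc ^= b[i];
    c[i] = m[i] ^ acc;
  }
}
\end{lstlisting}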
1919 The same decryption stage as in Blum-Goldwasser leads to the sequence
1920 $\left(m_0 \oplus S^0, m_1 \oplus S^0, \hdots, m_{L-1} \oplus S^0 \right)$.
1921 Thus, with a simple use of $S^0$, Alice can obtain the plaintext.
1922 By doing so, the proposed generator is used in place of BBS, leading to
1923 the inheritance of all the properties presented in this paper.
1925 \section{Conclusion}
1928 In this paper, a formerly proposed PRNG based on chaotic iterations
1929 has been generalized to improve its speed. It has been proven to be
1930 chaotic according to Devaney.
Efficient implementations on GPU using xor-like PRNGs as input generators
have shown that a very large quantity of pseudorandom numbers can be generated per second (about
20GSamples/s), and that these proposed PRNGs succeed in passing the hardest battery in TestU01,
namely BigCrush.
Furthermore, we have shown that when the inputted generator is cryptographically
secure, then it is also the case for the PRNG we propose, thus leading to
the possibility of developing fast and secure PRNGs using the GPU architecture.
An improvement of the Blum-Goldwasser cryptosystem, making it
behave chaotically, has finally been proposed.
1941 In future work we plan to extend this research, building a parallel PRNG for clusters or
1942 grid computing. Topological properties of the various proposed generators will be investigated,
1943 and the use of other categories of PRNGs as input will be studied too. The improvement
1944 of Blum-Goldwasser will be deepened. Finally, we
will try to increase the quantity of pseudorandom numbers generated per second, either
in a simulation context or in a cryptographic one.
1950 \bibliographystyle{plain}
1951 \bibliography{mabase}