-\documentclass{article}
+%\documentclass{article}
+\documentclass[10pt,journal,letterpaper,compsoc]{IEEEtran}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{fullpage}
\usepackage{amscd}
\usepackage{moreverb}
\usepackage{commath}
-\usepackage{algorithm2e}
+\usepackage[ruled,vlined]{algorithm2e}
\usepackage{listings}
\usepackage[standard]{ntheorem}
\begin{document}
\author{Jacques M. Bahi, Rapha\"{e}l Couturier, Christophe
-Guyeux, and Pierre-Cyrille Heam\thanks{Authors in alphabetic order}}
+Guyeux, and Pierre-Cyrille Héam\thanks{Authors in alphabetic order}}
-\maketitle
+\IEEEcompsoctitleabstractindextext{
\begin{abstract}
In this paper we present a new pseudorandom number generator (PRNG) on
graphics processing units (GPU). This PRNG is based on the so-called chaotic iterations. It
is firstly proven to be chaotic according to Devaney's formulation. We thus propose an efficient
implementation for GPU that successfully passes the {\it BigCrush} tests, deemed to be the hardest
battery of tests in TestU01. Experiments show that this PRNG can generate
-about 20 billions of random numbers per second on Tesla C1060 and NVidia GTX280
+about 20 billion random numbers per second on Tesla C1060 and NVidia GTX280
cards.
-It is finally established that, under reasonable assumptions, the proposed PRNG can be cryptographically
+It is then established that, under reasonable assumptions, the proposed PRNG can be cryptographically
secure.
+A chaotic version of the Blum-Goldwasser asymmetric key encryption scheme is finally proposed.
\end{abstract}
+}
+
+\maketitle
+
+\IEEEdisplaynotcompsoctitleabstractindextext
+\IEEEpeerreviewmaketitle
+
\section{Introduction}
-Randomness is of importance in many fields as scientific simulations or cryptography.
+Randomness is of importance in many fields such as scientific simulations or cryptography.
``Random numbers'' can mainly be generated either by a deterministic and reproducible algorithm
called a pseudorandom number generator (PRNG), or by a physical non-deterministic
process having all the characteristics of a random noise, called a truly random number
In this paper, we focus on reproducible generators, useful for instance in
Monte-Carlo based simulators or in several cryptographic schemes.
These domains need PRNGs that are statistically irreproachable.
-On some fields as in numerical simulations, speed is a strong requirement
+In some fields such as in numerical simulations, speed is a strong requirement
that is usually attained by using parallel architectures. In that case,
-a recurrent problem is that a deflate of the statistical qualities is often
+a recurrent problem is that a deflation of the statistical qualities is often
reported when a good PRNG is parallelized.
This is why ad-hoc PRNGs for each possible architecture must be found to
achieve both speed and randomness.
On the other hand, speed is not the main requirement in cryptography: the primary
-need is to define \emph{secure} generators being able to withstand malicious
+need is to define \emph{secure} generators able to withstand malicious
attacks. Roughly speaking, an attacker should not be able in practice to make
the distinction between numbers obtained with the secure generator and a true random
sequence.
-Finally, a small part of the community working in this domain focus on a
+Finally, a small part of the community working in this domain focuses on a
third requirement, that is to define chaotic generators.
The main idea is to take benefits from a chaotic dynamical system to obtain a
-generator that is unpredictable, disordered, sensible to its seed, or in other words chaotic.
+generator that is unpredictable, disordered, sensitive to its seed, or in other words chaotic.
Their desire is to map a given chaotic dynamics into a sequence that seems random
and unassailable due to chaos.
However, the chaotic maps used as a pattern are defined in the real line
The authors' opinion is that topological properties of disorder, as they are
properly defined in the mathematical theory of chaos, can reinforce the quality
of a PRNG. But they are not substitutable for security or statistical perfection.
-Indeed, to the authors' point of view, such properties can be useful in the two following situations. On the
+Indeed, in the authors' view, such properties can be useful in the two following situations. On the
one hand, a post-treatment based on a chaotic dynamical system can be applied
to a statistically deficient PRNG, in order to improve its statistical
properties. Such an improvement can be found, for instance, in~\cite{bgw09:ip,bcgr11:ip}.
statistical perfection refers to the ability to pass the whole
{\it BigCrush} battery of tests, which is widely considered as the most
stringent statistical evaluation of a sequence claimed as random.
-This battery can be found into the well-known TestU01 package~\cite{LEcuyerS07}.
+This battery can be found in the well-known TestU01 package~\cite{LEcuyerS07}.
Chaos, for its part, refers to the well-established definition of a
chaotic dynamical system proposed by Devaney~\cite{Devaney}.
numbers inside a GPU when a scientific application runs in it. This remark
motivates our proposal of a chaotic and statistically perfect PRNG for GPU.
Such a device
-allows us to generated almost 20 billions of pseudorandom numbers per second.
-Last, but not least, we show that the proposed post-treatment preserves the
+allows us to generate almost 20 billion pseudorandom numbers per second.
+Furthermore, we show that the proposed post-treatment preserves the
cryptographic security of the inputted PRNG, when the latter has such a
property.
+Last, but not least, we propose a rewriting of the Blum-Goldwasser asymmetric
+key encryption protocol by using the proposed method.
The remainder of this paper is organized as follows. In Section~\ref{section:related
works} we review some GPU implementations of PRNGs. Section~\ref{section:BASIC
RECALLS} gives some basic recalls on the well-known Devaney's formulation of chaos,
and on an iteration process called ``chaotic
iterations'' on which the post-treatment is based.
-Proofs of chaos are given in Section~\ref{sec:pseudorandom}.
+The proposed PRNG and its proof of chaos are given in Section~\ref{sec:pseudorandom}.
Section~\ref{sec:efficient PRNG} presents an efficient
implementation of this chaotic PRNG on a CPU, whereas Section~\ref{sec:efficient PRNG
- gpu} describes the GPU implementation.
+ gpu} describes and evaluates theoretically the GPU implementation.
Such generators are experimentally evaluated in
Section~\ref{sec:experiments}.
We show in Section~\ref{sec:security analysis} that, if the inputted
generator provided by the post-treatment.
Such a proof leads to the proposition of a cryptographically secure and
chaotic generator on GPU based on the famous Blum Blum Shub
-in Section~\ref{sec:CSGPU}.
+in Section~\ref{sec:CSGPU}, and to an improvement of the
+Blum-Goldwasser protocol in Section~\ref{Blum-Goldwasser}.
This research work ends with a conclusion section, in which the contribution is
summarized and intended future work is presented.
\section{Related works on GPU based PRNGs}
\label{section:related works}
-Numerous research works on defining GPU based PRNGs have yet been proposed in the
-literature, so that completeness is impossible.
+Numerous research works on defining GPU based PRNGs have already been proposed in the
+literature, so that exhaustivity is impossible.
This is why the authors of this document only give references to the most significant attempts
in this domain, from their subjective point of view.
The quantity of pseudorandom numbers generated per second is mentioned here
In \cite{ZRKB10}, the authors propose different versions of efficient GPU PRNGs
based on Lagged Fibonacci or Hybrid Taus. They have used these
PRNGs for Langevin simulations of biomolecules fully implemented on
-GPU. Performance of the GPU versions are far better than those obtained with a
+GPU. The performance of the GPU versions is far better than that obtained with a
CPU, and these PRNGs succeed in passing the {\it BigCrush} battery of TestU01.
However, the evaluations of the proposed PRNGs are only statistical ones.
FPGA appears as the fastest and the most
efficient architecture, providing the largest number of generated pseudorandom numbers
per joule.
-However, we can notice that authors can ``only'' generate between 11 and 16GSamples/s
+However, we notice that the authors can ``only'' generate between 11 and 16GSamples/s
with a GTX 280 GPU, which should be compared with
the results presented in this document.
We can also remark that the PRNGs proposed in~\cite{conf/fpga/ThomasHL09} are only
-able to pass the {\it Crush} battery, which is very easy compared to the {\it Big Crush} one.
+able to pass the {\it Crush} battery, which is far easier than the {\it Big Crush} one.
Lastly, the CUDA toolkit provides a library for the generation of pseudorandom numbers called
Curand~\cite{curand11}. Several PRNGs are implemented, among
But their PRNGs cannot pass the whole TestU01 battery (only one test fails).
\newline
\newline
-We can finally remark that, to the best of our knowledge, no GPU implementation have been proven to be chaotic, and the cryptographically secure property is surprisingly never regarded.
+We can finally remark that, to the best of our knowledge, no GPU implementation has been proven to be chaotic, and the cryptographically secure property has surprisingly never been considered.
\section{Basic Recalls}
\label{section:BASIC RECALLS}
This section is devoted to basic definitions and terminologies in the fields of
-topological chaos and chaotic iterations.
+topological chaos and chaotic iterations. We assume the reader is familiar
+with basic notions of topology (see for instance~\cite{Devaney}).
+
+
\subsection{Devaney's Chaotic Dynamical Systems}
In the sequel $S^{n}$ denotes the $n^{th}$ term of a sequence $S$ and $V_{i}$
\mathcal{X} \rightarrow \mathcal{X}$.
\begin{definition}
-$f$ is said to be \emph{topologically transitive} if, for any pair of open sets
+The function $f$ is said to be \emph{topologically transitive} if, for any pair of open sets
$U,V \subset \mathcal{X}$, there exists $k>0$ such that $f^k(U) \cap V \neq
\varnothing$.
\end{definition}
\begin{definition}[Devaney's formulation of chaos~\cite{Devaney}]
-$f$ is said to be \emph{chaotic} on $(\mathcal{X},\tau)$ if $f$ is regular and
+The function $f$ is said to be \emph{chaotic} on $(\mathcal{X},\tau)$ if $f$ is regular and
topologically transitive.
\end{definition}
on a metric space $(\mathcal{X},d)$ by:
\begin{definition}
-\label{sensitivity} $f$ has \emph{sensitive dependence on initial conditions}
+\label{sensitivity} The function $f$ has \emph{sensitive dependence on initial conditions}
if there exists $\delta >0$ such that, for any $x\in \mathcal{X}$ and any
neighborhood $V$ of $x$, there exist $y\in V$ and $n > 0$ such that
$d\left(f^{n}(x), f^{n}(y)\right) >\delta $.
-$\delta$ is called the \emph{constant of sensitivity} of $f$.
+The constant $\delta$ is called the \emph{constant of sensitivity} of $f$.
\end{definition}
Indeed, Banks \emph{et al.} have proven in~\cite{Banks92} that when $f$ is
Let $\delta $ be the \emph{discrete Boolean metric}, $\delta
(x,y)=0\Leftrightarrow x=y.$ Given a function $f$, define the function:
+%%RAPH: here I split the last line in two, it is not pretty but it works
\begin{equation}
\begin{array}{lrll}
F_{f}: & \llbracket1;\mathsf{N}\rrbracket\times \mathds{B}^{\mathsf{N}} &
\longrightarrow & \mathds{B}^{\mathsf{N}} \\
-& (k,E) & \longmapsto & \left( E_{j}.\delta (k,j)+f(E)_{k}.\overline{\delta
+& (k,E) & \longmapsto & \left( E_{j}.\delta (k,j)+ \right.\\
+& & & \left. f(E)_{k}.\overline{\delta
(k,j)}\right) _{j\in \llbracket1;\mathsf{N}\rrbracket},%
\end{array}%
\end{equation}%
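+As an illustration, a minimal C sketch of $F_{f}$ (assuming $\mathsf{N} \leqslant 32$
+and a configuration stored as a bit vector) is given below: only the component $k$
+takes the value $f(E)_{k}$, all the other components are left unchanged.
+\begin{small}
+\begin{lstlisting}[language=C,caption={A possible realization of the function F\_f (illustration only)},label=lst:Ff]
+#include <stdint.h>
+
+/* f is any Boolean map B^N -> B^N, given as a function pointer. */
+typedef uint32_t (*boolean_map)(uint32_t);
+
+/* F_f(k,E): component k (bit k-1) takes f(E)_k, the others keep E_j. */
+uint32_t F_f(boolean_map f, int k, uint32_t E) {
+  uint32_t mask = 1u << (k - 1);        /* selects component k        */
+  return (E & ~mask) | (f(E) & mask);   /* update only this component */
+}
+\end{lstlisting}
+\end{small}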
\item In addition, if two systems present the same cells and their respective
strategies start with the same terms, then the distance between these two points
must be small because the evolution of the two systems will be the same for a
-while. Indeed, the two dynamical systems start with the same initial condition,
-use the same update function, and as strategies are the same for a while, then
-components that are updated are the same too.
+while. Indeed, both dynamical systems start with the same initial condition,
+use the same update function, and as the strategies are the same for a while,
+the updated components are the same as well.
\end{itemize}
The distance presented above follows these recommendations. Indeed, if the floor
value $\lfloor d(X,Y)\rfloor $ is equal to $n$, then the systems $E, \check{E}$
precisely, this floating part is less than $10^{-k}$ if and only if the first
$k$ terms of the two strategies are equal. Moreover, if the $k^{th}$ digit is
nonzero, then the $k^{th}$ terms of the two strategies are different.
-The impact of this choice for a distance will be investigate at the end of the document.
+The impact of this choice for a distance will be investigated at the end of the document.
Finally, it has been established in \cite{guyeux10} that,
path from $x$ to $x'$ in $\Gamma(f)$ if and only if there exists a
strategy $s$ such that the parallel iteration of $G_f$ from the
initial point $(s,x)$ reaches the point $x'$.
-
-We have finally proven in \cite{bcgr11:ip} that,
+We have then proven in \cite{bcgr11:ip} that,
\begin{theorem}
if and only if $\Gamma(f)$ is strongly connected.
\end{theorem}
-This result of chaos has lead us to study the possibility to build a
+Finally, we have established in \cite{bcgr11:ip} that,
+\begin{theorem}
+ Let $f: \mathds{B}^{n} \rightarrow \mathds{B}^{n}$, $\Gamma(f)$ its
+ iteration graph, $\check{M}$ its adjacency
+  matrix, and $M$ an $n\times n$ matrix defined by
+  $M_{ij} = \frac{1}{n}\check{M}_{ij}$ if $i \neq j$, and
+ $M_{ii} = 1 - \frac{1}{n} \sum\limits_{j=1, j\neq i}^n \check{M}_{ij}$ otherwise.
+
+ If $\Gamma(f)$ is strongly connected, then
+ the output of the PRNG detailed in Algorithm~\ref{CI Algorithm} follows
+ a law that tends to the uniform distribution
+  if and only if $M$ is a doubly stochastic matrix.
+\end{theorem}
+
+
+These results of chaos and uniform distribution have led us to study the possibility of building a
pseudorandom number generator (PRNG) based on the chaotic iterations.
As $G_f$, defined on the domain $\llbracket 1 ; \mathsf{N} \rrbracket^{\mathds{N}}
-\times \mathds{B}^\mathsf{N}$, is build from Boolean networks $f : \mathds{B}^\mathsf{N}
+\times \mathds{B}^\mathsf{N}$, is built from Boolean networks $f : \mathds{B}^\mathsf{N}
\rightarrow \mathds{B}^\mathsf{N}$, we can preserve the theoretical properties on $G_f$
-during implementations (due to the discrete nature of $f$). It is as if
+during implementations (due to the discrete nature of $f$). Indeed, it is as if
$\mathds{B}^\mathsf{N}$ represents the memory of the computer whereas $\llbracket 1 ; \mathsf{N}
\rrbracket^{\mathds{N}}$ is its input stream (the seeds, for instance, in PRNG, or a physical noise in TRNG).
+Let us finally remark that the vectorial negation satisfies the hypotheses of both theorems above.
\section{Application to Pseudorandomness}
\label{sec:pseudorandom}
possesses various chaos properties that none of the generators used as input
present.
+
\begin{algorithm}[h!]
-%\begin{scriptsize}
+\begin{small}
\KwIn{a function $f$, an iteration number $b$, an initial configuration $x^0$
($n$ bits)}
\KwOut{a configuration $x$ ($n$ bits)}
$x\leftarrow{F_f(s,x)}$\;
}
return $x$\;
-%\end{scriptsize}
+\end{small}
\caption{PRNG with chaotic functions}
\label{CI Algorithm}
\end{algorithm}
+
+
+
\begin{algorithm}[h!]
+\begin{small}
\KwIn{the internal configuration $z$ (a 32-bit word)}
\KwOut{$y$ (a 32-bit word)}
$z\leftarrow{z\oplus{(z\ll13)}}$\;
$z\leftarrow{z\oplus{(z\ll5)}}$\;
$y\leftarrow{z}$\;
return $y$\;
-\medskip
+\end{small}
\caption{An arbitrary round of \textit{XORshift} algorithm}
\label{XORshift}
\end{algorithm}
an integer $b$, ensuring that the number of executed iterations is at least $b$
and at most $2b+1$; and an initial configuration $x^0$.
It returns the new generated configuration $x$. Internally, it embeds two
-\textit{XORshift}$(k)$ PRNGs~\cite{Marsaglia2003} that returns integers
+\textit{XORshift}$(k)$ PRNGs~\cite{Marsaglia2003} that return integers
uniformly distributed
in $\llbracket 1 ; k \rrbracket$.
\textit{XORshift} is a category of very fast PRNGs designed by George Marsaglia,
$2^{32}-1=4.29\times10^9$, is summed up in Algorithm~\ref{XORshift}. It is used
in our PRNG to compute the strategy length and the strategy elements.
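+For the sake of completeness, a possible C realization of such a round, assuming
+the classical $(13,17,5)$ triple of shifts of~\cite{Marsaglia2003} and a 32-bit
+internal state, is sketched below.
+\begin{small}
+\begin{lstlisting}[language=C,caption={A 32-bit XORshift round (sketch)},label=lst:xorshift32]
+#include <stdint.h>
+
+/* One round of a 32-bit XORshift generator with the (13,17,5) shifts;
+   the state z must be seeded with a nonzero value.                    */
+uint32_t xorshift32(uint32_t *z) {
+  *z ^= *z << 13;
+  *z ^= *z >> 17;
+  *z ^= *z << 5;
+  return *z;
+}
+\end{lstlisting}
+\end{small}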
-
-We have proven in \cite{bcgr11:ip} that,
-\begin{theorem}
- Let $f: \mathds{B}^{n} \rightarrow \mathds{B}^{n}$, $\Gamma(f)$ its
- iteration graph, $\check{M}$ its adjacency
- matrix and $M$ a $n\times n$ matrix defined as in the previous lemma.
- If $\Gamma(f)$ is strongly connected, then
- the output of the PRNG detailed in Algorithm~\ref{CI Algorithm} follows
- a law that tends to the uniform distribution
- if and only if $M$ is a double stochastic matrix.
-\end{theorem}
-
-This former generator as successively passed various batteries of statistical tests, as the NIST~\cite{bcgr11:ip}, DieHARD~\cite{Marsaglia1996}, and TestU01~\cite{LEcuyerS07}.
+This former generator has successfully passed various batteries of statistical tests, such as the NIST~\cite{bcgr11:ip}, DieHARD~\cite{Marsaglia1996}, and TestU01~\cite{LEcuyerS07} ones.
\subsection{Improving the Speed of the Former Generator}
\label{equation Oplus}
\end{equation}
where $\oplus$ is for the bitwise exclusive or between two integers.
-This rewritten can be understood as follows. The $n-$th term $S^n$ of the
+This rewriting can be understood as follows. The $n-$th term $S^n$ of the
sequence $S$, which is an integer of $\mathsf{N}$ binary digits, presents
the list of cells to update in the state $x^n$ of the system (represented
as an integer having $\mathsf{N}$ bits too). More precisely, the $k-$th
$\mathcal{S}^n \subset \llbracket 1, \mathsf{N} \rrbracket$ is such that
$k \in \mathcal{S}^n$ if and only if the $k-$th digit in the binary
decomposition of $S^n$ is 1. Such chaotic iterations are more general
-than the ones presented in Definition \ref{Def:chaotic iterations} for
-the fact that, instead of updating only one term at each iteration,
+than the ones presented in Definition \ref{Def:chaotic iterations} because, instead of updating only one term at each iteration,
we select a subset of components to change.
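+To see why this rewriting holds for the vectorial negation $f_0(x)=\overline{x}$,
+consider the following C sketch (assuming $\mathsf{N} \leqslant 32$, the state and
+the strategy being stored as bit vectors): updating exactly the components selected
+by $S^n$ reduces, for this particular $f_0$, to a bitwise exclusive or.
+\begin{small}
+\begin{lstlisting}[language=C,caption={One iteration of the rewritten chaotic iterations with the vectorial negation (illustration only)},label=lst:oplus]
+#include <stdint.h>
+
+/* The 1-bits of S select the components that take the value f0(x)_k,
+   where f0 is the vectorial negation.                                 */
+uint32_t iterate_negation(uint32_t x, uint32_t S) {
+  uint32_t fx = ~x;                 /* vectorial negation f0(x)        */
+  return (x & ~S) | (fx & S);       /* equals x ^ S                    */
+}
+\end{lstlisting}
+\end{small}
+For other iteration functions $f$, the general component-wise update has to be used instead.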
Obviously, replacing Algorithm~\ref{CI Algorithm} by
-Equation~\ref{equation Oplus}, possible when the iteration function is
+Equation~\ref{equation Oplus}, which is possible when the iteration function is
the vectorial negation, leads to a speed improvement. However, proofs
of chaos obtained in~\cite{bg10:ij} have been established
only for chaotic iterations of the form presented in Definition
where $\mathcal{P}\left(X\right)$ is for the powerset of the set $X$, that is, $Y \in \mathcal{P}\left(X\right) \Longleftrightarrow Y \subset X$.
Given a function $f:\mathds{B}^\mathsf{N} \longrightarrow \mathds{B}^\mathsf{N} $, define the function:
+%%RAPH: I split the last line in two, it is not pretty
\begin{equation}
\begin{array}{lrll}
F_{f}: & \mathcal{P}\left(\llbracket1;\mathsf{N}\rrbracket \right) \times \mathds{B}^{\mathsf{N}} &
\longrightarrow & \mathds{B}^{\mathsf{N}} \\
-& (P,E) & \longmapsto & \left( E_{j}.\chi (j,P)+f(E)_{j}.\overline{\chi
-(j,P)}\right) _{j\in \llbracket1;\mathsf{N}\rrbracket},%
+& (P,E) & \longmapsto & \left( E_{j}.\chi (j,P)+\right.\\
+& & &\left.f(E)_{j}.\overline{\chi(j,P)}\right) _{j\in \llbracket1;\mathsf{N}\rrbracket},%
\end{array}%
\end{equation}%
where + and . are the Boolean addition and product operations, and $\overline{x}$
\end{equation}
\noindent and the map defined on $\mathcal{X}$:
\begin{equation}
-G_f\left(S,E\right) = \left(\sigma(S), F_f(i(S),E)\right), \label{Gf}
G_f\left(S,E\right) = \left(\sigma(S), F_f(i(S),E)\right), %\label{Gf} %%RAPH: I removed this label, it already exists earlier
\end{equation}
\noindent where $\sigma$ is the \emph{shift} function defined by $\sigma
(S^{n})_{n\in \mathds{N}}\in \mathcal{P}\left(\llbracket 1 ; \mathsf{N} \rrbracket\right)^\mathds{N}\longrightarrow (S^{n+1})_{n\in
\right.
\end{equation}%
-Another time, a shift function appears as a component of these general chaotic
+Once more, a shift function appears as a component of these general chaotic
iterations.
To study Devaney's chaos property, a distance between two points
d(X,Y)=d_{e}(E,\check{E})+d_{s}(S,\check{S}),
\label{nouveau d}
\end{equation}
-\noindent where
-\begin{equation}
-\left\{
-\begin{array}{lll}
-\displaystyle{d_{e}(E,\check{E})} & = & \displaystyle{\sum_{k=1}^{\mathsf{N}%
-}\delta (E_{k},\check{E}_{k})}\textrm{ is another time the Hamming distance}, \\
-\displaystyle{d_{s}(S,\check{S})} & = & \displaystyle{\dfrac{9}{\mathsf{N}}%
-\sum_{k=1}^{\infty }\dfrac{|S^k\Delta {S}^k|}{10^{k}}}.%
-\end{array}%
-\right.
-\end{equation}
+\noindent where $ \displaystyle{d_{e}(E,\check{E})} = \displaystyle{\sum_{k=1}^{\mathsf{N}%
+ }\delta (E_{k},\check{E}_{k})}$ is once more the Hamming distance, and
+$ \displaystyle{d_{s}(S,\check{S})} = \displaystyle{\dfrac{9}{\mathsf{N}}%
+  \sum_{k=1}^{\infty }\dfrac{|S^k\Delta \check{S}^k|}{10^{k}}}$,
+%%RAPH: here I removed all the line breaks
+%% \begin{equation}
+%% \left\{
+%% \begin{array}{lll}
+%% \displaystyle{d_{e}(E,\check{E})} & = & \displaystyle{\sum_{k=1}^{\mathsf{N}%
+%% }\delta (E_{k},\check{E}_{k})} \textrm{ is once more the Hamming distance}, \\
+%% \displaystyle{d_{s}(S,\check{S})} & = & \displaystyle{\dfrac{9}{\mathsf{N}}%
+%% \sum_{k=1}^{\infty }\dfrac{|S^k\Delta {S}^k|}{10^{k}}}.%
+%% \end{array}%
+%% \right.
+%% \end{equation}
where $|X|$ is the cardinality of a set $X$ and $A\Delta B$ is for the symmetric difference, defined for sets A, B as
$A\,\Delta\,B = (A \setminus B) \cup (B \setminus A)$.
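+As a small worked example (with values of our own), take $\mathsf{N}=4$ and two
+strategies $S$, $\check{S}$ such that $S^1=\check{S}^1$, $S^2=\{1,2\}$,
+$\check{S}^2=\{1,3\}$, and $S^k=\check{S}^k$ for every $k \geqslant 3$. Then
+$d_{s}(S,\check{S}) = \frac{9}{4}\cdot\frac{|\{2,3\}|}{10^{2}} = 0.045$: this value
+is lower than $10^{-1}$, reflecting that the first terms of the two strategies agree,
+whereas its second digit is nonzero, reflecting that their second terms differ.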
\begin{proof}
$d_e$ is the Hamming distance. We will prove that $d_s$ is a distance
-too, thus $d$ will be a distance as sum of two distances.
+too; thus $d$, being the sum of two distances, will also be a distance.
\begin{itemize}
\item Obviously, $d_s(S,\check{S})\geqslant 0$, and if $S=\check{S}$, then
$d_s(S,\check{S})=0$. Conversely, if $d_s(S,\check{S})=0$, then
Before being able to study the topological behavior of the general
-chaotic iterations, we must firstly establish that:
+chaotic iterations, we must first establish that:
\begin{proposition}
For all $f:\mathds{B}^\mathsf{N} \longrightarrow \mathds{B}^\mathsf{N} $, the function $G_f$ is continuous on
G_{f}(S^n,E^n)\right) $ and $\left( G_{f}(S,E)\right) $ is convergent to
0. Let $\varepsilon >0$. \medskip
\begin{itemize}
-\item If $\varepsilon \geqslant 1$, we see that distance
+\item If $\varepsilon \geqslant 1$, we see that the distance
between $\left( G_{f}(S^n,E^n)\right) $ and $\left( G_{f}(S,E)\right) $ is
strictly less than 1 after the $max(n_{0},n_{1})^{th}$ term (same state).
\medskip
the distance between $(S^n,E^n)$ and $(S,E)$ is strictly less than $%
10^{-(k+1)}\leqslant \varepsilon $.\bigskip \newline
In conclusion,
-$$
+%%RAPH: I added a line break here
+\begin{flushleft}$$
\forall \varepsilon >0,\exists N_{0}=max(n_{0},n_{1},n_{2})\in \mathds{N}%
-,\forall n\geqslant N_{0},
- d\left( G_{f}(S^n,E^n);G_{f}(S,E)\right)
+,\forall n\geqslant N_{0},$$
+$$ d\left( G_{f}(S^n,E^n);G_{f}(S,E)\right)
\leqslant \varepsilon .
$$
+\end{flushleft}
$G_{f}$ is consequently continuous.
\end{proof}
claimed in the lemma.
\end{proof}
-We can now prove the Theorem~\ref{t:chaos des general}...
+We can now prove Theorem~\ref{t:chaos des general}.
\begin{proof}[Theorem~\ref{t:chaos des general}]
Firstly, strong transitivity implies transitivity.
that $E$ is reached from $(S',E')$ after $t_2$ iterations of $G_f$.
Consider the strategy $\tilde S$ that alternates the first $t_1$ terms
-of $S$ and the first $t_2$ terms of $S'$: $$\tilde
-S=(S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots).$$ It
+of $S$ and the first $t_2$ terms of $S'$:
+%%RAPH: I split the line in two
+$$\tilde
+S=(S_0,\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,$$$$\dots,S_{t_1-1},S'_0,\dots,S'_{t_2-1},S_0,\dots).$$ It
is clear that $(\tilde S,E)$ is obtained from $(\tilde S,E)$ after
$t_1+t_2$ iterations of $G_f$. So $(\tilde S,E)$ is a periodic
point. Since $\tilde S_t=S_t$ for $t<t_1$, by the choice of $t_1$, we
An iteration of the system is simply the bitwise exclusive or between
the last computed state and the current strategy.
Topological properties of disorder exhibited by chaotic
-iterations can be inherited by the inputted generator, hoping by doing so to
+iterations can be inherited by the inputted generator; we hope by doing so to
obtain some statistical improvements while preserving speed.
+%%RAPH: I removed all of this
+%% Let us give an example using 16-bits numbers, to clearly understand how the bitwise xor operations
+%% are
+%% done.
+%% Suppose that $x$ and the strategy $S^i$ are given as
+%% binary vectors.
+%% Table~\ref{TableExemple} shows the result of $x \oplus S^i$.
+
+%% \begin{table}
+%% \begin{scriptsize}
+%% $$
+%% \begin{array}{|cc|cccccccccccccccc|}
+%% \hline
+%% x &=&1&0&1&1&1&0&1&0&1&0&0&1&0&0&1&0\\
+%% \hline
+%% S^i &=&0&1&1&0&0&1&1&0&1&1&1&0&0&1&1&1\\
+%% \hline
+%% x \oplus S^i&=&1&1&0&1&1&1&0&0&0&1&1&1&0&1&0&1\\
+%% \hline
+
+%% \hline
+%% \end{array}
+%% $$
+%% \end{scriptsize}
+%% \caption{Example of an arbitrary round of the proposed generator}
+%% \label{TableExemple}
+%% \end{table}
-Let us give an example using 16-bits numbers, to clearly understand how the bitwise xor operations
-are
-done.
-Suppose that $x$ and the strategy $S^i$ are given as
-binary vectors.
-Table~\ref{TableExemple} shows the result of $x \oplus S^i$.
-
-\begin{table}
-$$
-\begin{array}{|cc|cccccccccccccccc|}
-\hline
-x &=&1&0&1&1&1&0&1&0&1&0&0&1&0&0&1&0\\
-\hline
-S^i &=&0&1&1&0&0&1&1&0&1&1&1&0&0&1&1&1\\
-\hline
-x \oplus S^i&=&1&1&0&1&1&1&0&0&0&1&1&1&0&1&0&1\\
-\hline
-
-\hline
- \end{array}
-$$
-\caption{Example of an arbitrary round of the proposed generator}
-\label{TableExemple}
-\end{table}
\lstset{language=C,caption={C code of the sequential PRNG based on chaotic iterations},label=algo:seqCIPRNG}
+\begin{small}
\begin{lstlisting}
+
unsigned int CIPRNG() {
static unsigned int x = 123123123;
unsigned long t1 = xorshift();
return x;
}
\end{lstlisting}
+\end{small}
+In Listing~\ref{algo:seqCIPRNG} a sequential version of the proposed PRNG based
+on chaotic iterations is presented. The xor operator is represented by
+\textasciicircum. This function uses three classical 64-bits PRNGs, namely the
+\texttt{xorshift}, the \texttt{xor128}, and the
+\texttt{xorwow}~\cite{Marsaglia2003}. In the following, we call them ``xor-like
+PRNGs''. As each xor-like PRNG uses 64-bits whereas our proposed generator
+works with 32-bits, we use the cast \texttt{(unsigned int)}, which selects the
+32 least significant bits of a given integer, and the code \texttt{(unsigned
+ int)(t$>>$32)} in order to obtain the 32 most significant bits of \texttt{t}.
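+For instance, the following sketch (using an arbitrary constant in place of a
+xor-like output, and \texttt{unsigned long long} to make the 64-bits width
+explicit) shows the effect of these two casts.
+\begin{small}
+\begin{lstlisting}[language=C,caption={Splitting a 64-bits value into its two 32-bits parts (illustration only)},label=lst:split64]
+#include <stdio.h>
+
+int main(void) {
+  /* arbitrary 64-bits value standing for a xor-like output */
+  unsigned long long t = 0x1122334455667788ULL;
+  unsigned int low  = (unsigned int) t;         /* 32 least significant bits */
+  unsigned int high = (unsigned int)(t >> 32);  /* 32 most significant bits  */
+  printf("%08x %08x\n", high, low);             /* prints 11223344 55667788  */
+  return 0;
+}
+\end{lstlisting}
+\end{small}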
-
-In Listing~\ref{algo:seqCIPRNG} a sequential version of the proposed PRNG based on chaotic iterations
- is presented. The xor operator is represented by \textasciicircum.
-This function uses three classical 64-bits PRNGs, namely the \texttt{xorshift}, the
-\texttt{xor128}, and the \texttt{xorwow}~\cite{Marsaglia2003}. In the following, we call them
-``xor-like PRNGs''.
-As
-each xor-like PRNG uses 64-bits whereas our proposed generator works with 32-bits,
-we use the command \texttt{(unsigned int)}, that selects the 32 least significant bits of a given integer, and the code
-\texttt{(unsigned int)(t3$>>$32)} in order to obtain the 32 most significant bits of \texttt{t}.
-
-So producing a pseudorandom number needs 6 xor operations
-with 6 32-bits numbers that are provided by 3 64-bits PRNGs. This version successfully passes the
+Thus producing a pseudorandom number needs 6 xor operations with 6 32-bits numbers
+that are provided by 3 64-bits PRNGs. This version successfully passes the
stringent BigCrush battery of tests~\cite{LEcuyerS07}.
\section{Efficient PRNGs based on Chaotic Iterations on GPU}
more local memory is used, and the fewer branching instructions are
used (if, while, ...), the better the performance on GPU is.
Obviously, having these requirements in mind, it is possible to build
-a program similar to the one presented in Algorithm
+a program similar to the one presented in Listing
\ref{algo:seqCIPRNG}, which computes pseudorandom numbers on GPU. To
do so, we must firstly recall that in the CUDA~\cite{Nvid10}
environment, threads have a local identifier called
\texttt{ThreadIdx}, which is relative to the block containing
-them. With CUDA parts of the code which are executed by the GPU are
+them. Furthermore, in CUDA, parts of the code that are executed by the GPU are
called {\it kernels}.
It is possible to deduce from the CPU version a quite similar version adapted to GPU.
-The simple principle consists to make each thread of the GPU computing the CPU version of our PRNG.
+The simple principle consists in making each thread of the GPU compute the CPU version of our PRNG.
Of course, the three xor-like
PRNGs used in these computations must have different parameters.
-In a given thread, these lasts are
+In a given thread, these parameters are
randomly picked from another PRNG.
The initialization stage is performed by the CPU.
To do it, the ISAAC PRNG~\cite{Jenkins96} is used to set all the
implementation of the xor128, the xorshift, and the xorwow respectively require
4, 5, and 6 unsigned long as internal variables.
-\begin{algorithm}
+\begin{algorithm}
+\begin{small}
\KwIn{InternalVarXorLikeArray: array with internal variables of the 3 xor-like
PRNGs in global memory\;
NumThreads: number of threads\;}
}
store internal variables in InternalVarXorLikeArray[threadIdx]\;
}
-
+\end{small}
\caption{Main kernel of the GPU ``naive'' version of the PRNG based on chaotic iterations}
\label{algo:gpu_kernel}
\end{algorithm}
+
+
Algorithm~\ref{algo:gpu_kernel} presents a naive implementation of the proposed PRNG on
GPU. Due to the available memory in the GPU and the number of threads
-used simultenaously, the number of random numbers that a thread can generate
+used simultaneously, the number of random numbers that a thread can generate
inside a kernel is limited (\emph{i.e.}, the variable \texttt{n} in
Algorithm~\ref{algo:gpu_kernel}). For instance, if $100,000$ threads are used and
if $n=100$\footnote{in fact, we need to add the initial seed (a 32-bits number)},
This generator is able to pass the whole BigCrush battery of tests, for all
the versions that have been tested depending on their number of threads
-(called \texttt{NumThreads} in our algorithm, tested until $10$ millions).
+(called \texttt{NumThreads} in our algorithm, tested up to $5$ million).
\begin{remark}
-The proposed algorithm has the advantage to manipulate independent
+The proposed algorithm has the advantage of manipulating independent
PRNGs, so this version is easily adaptable on a cluster of computers too. The only thing
to ensure is to use a single ISAAC PRNG. To achieve this requirement, a simple solution consists in
using a master node for the initialization. This master node computes the initial parameters
-for all the differents nodes involves in the computation.
+for all the different nodes involved in the computation.
\end{remark}
\subsection{Improved Version for GPU}
contains the indexes of all threads and for which a combination has been
performed.
-In Algorithm~\ref{algo:gpu_kernel2}, two combination arrays are used.
-The variable \texttt{offset} is computed using the value of
+In Algorithm~\ref{algo:gpu_kernel2}, two combination arrays are used. The
+variable \texttt{offset} is computed using the value of
\texttt{combination\_size}. Then we can compute \texttt{o1} and \texttt{o2}
-representing the indexes of the other threads whose results are used
-by the current one. In this algorithm, we consider that a 64-bits xor-like
-PRNG has been chosen, and so its two 32-bits parts are used.
+representing the indexes of the other threads whose results are used by the
+current one. In this algorithm, we consider that a 32-bits xor-like PRNG has
+been chosen. In practice, we use the xor128 proposed in~\cite{Marsaglia2003} in
+which unsigned longs (64 bits) have been replaced by unsigned integers (32
+bits).
-This version also can pass the whole {\it BigCrush} battery of tests.
+This version can also pass the whole {\it BigCrush} battery of tests.
\begin{algorithm}
-
+\begin{small}
\KwIn{InternalVarXorLikeArray: array with internal variables of 1 xor-like PRNG
in global memory\;
NumThreads: Number of threads\;
-tab1, tab2: Arrays containing combinations of size combination\_size\;}
+array\_comb1, array\_comb2: Arrays containing combinations of size combination\_size\;}
\KwOut{NewNb: array containing random numbers in global memory}
\If{threadId is concerned} {
retrieve data from InternalVarXorLikeArray[threadId] in local variables including shared memory and x\;
offset = threadIdx\%combination\_size\;
- o1 = threadIdx-offset+tab1[offset]\;
- o2 = threadIdx-offset+tab2[offset]\;
+ o1 = threadIdx-offset+array\_comb1[offset]\;
+ o2 = threadIdx-offset+array\_comb2[offset]\;
\For{i=1 to n} {
t=xor-like()\;
- t=t $\wedge$ shmem[o1] $\wedge$ shmem[o2]\;
+ t=t\textasciicircum shmem[o1]\textasciicircum shmem[o2]\;
shared\_mem[threadId]=t\;
- x = x $\wedge$ t\;
+ x = x\textasciicircum t\;
store the new PRNG in NewNb[NumThreads*threadId+i]\;
}
store internal variables in InternalVarXorLikeArray[threadId]\;
}
-
-\caption{main kernel for the chaotic iterations based PRNG GPU efficient
-version}
-\label{algo:gpu_kernel2}
+\end{small}
+\caption{Main kernel for the efficient GPU version of the chaotic iterations
+based PRNG\label{IR}}
+\label{algo:gpu_kernel2}
\end{algorithm}
\subsection{Theoretical Evaluation of the Improved Version}
To be certain that we are in the framework of Theorem~\ref{t:chaos des general},
we must guarantee that this dynamical system iterates on the space
$\mathcal{X} = \mathcal{P}\left(\llbracket 1, \mathsf{N} \rrbracket\right)^\mathds{N}\times\mathds{B}^\mathsf{N}$.
-The left term $x$ obviously belongs into $\mathds{B}^ \mathsf{N}$.
+The left term $x$ obviously belongs to $\mathds{B}^ \mathsf{N}$.
To prevent any flaw in the chaotic properties, we must check that the right
term (the last $t$), corresponding to the strategies, can possibly be equal to any
integer of $\llbracket 1, \mathsf{N} \rrbracket$.
prove by an immediate mathematical induction that, as the initial $x$
is uniformly distributed (it is provided by a cryptographically secure PRNG),
the two other stored values shmem[o1] and shmem[o2] are uniformly distributed too,
-(this can be stated by an immediate mathematical
-induction), and thus the next $x$ is finally uniformly distributed.
+(this is the induction hypothesis), and thus the next $x$ is finally uniformly distributed.
Thus Algorithm~\ref{algo:gpu_kernel2} is a concrete realization of the general
chaotic iterations presented previously, and for this reason, it satisfies the
cards have 240 cores.
In Figure~\ref{fig:time_xorlike_gpu} we compare the quantity of pseudorandom numbers
-generated per second with various xor-like based PRNG. In this figure, the optimized
+generated per second with various xor-like based PRNGs. In this figure, the optimized
versions use the {\it xor64} described in~\cite{Marsaglia2003}, whereas the naive versions
embed the three xor-like PRNGs described in Listing~\ref{algo:seqCIPRNG}. In
order to obtain the optimal performances, the storage of pseudorandom numbers
generation. Moreover this storage is completely
useless, in case of applications that consume the pseudorandom
numbers directly after generation. We can see that when the number of threads is greater
-than approximately 30,000 and lower than 5 millions, the number of pseudorandom numbers generated
+than approximately 30,000 and lower than 5 million, the number of pseudorandom numbers generated
per second is almost constant. With the naive version, this value ranges from 2.5 to
3GSamples/s. With the optimized version, it is approximately equal to
20GSamples/s. Finally we can remark that both GPU cards are quite similar, but in
\begin{figure}[htbp]
\begin{center}
- \includegraphics[scale=.7]{curve_time_xorlike_gpu.pdf}
+ \includegraphics[width=\columnwidth]{curve_time_xorlike_gpu.pdf}
\end{center}
\caption{Quantity of pseudorandom numbers generated per second with the xorlike-based PRNG}
\label{fig:time_xorlike_gpu}
-In Figure~\ref{fig:time_bbs_gpu} we highlight the performances of the optimized
-BBS-based PRNG on GPU. On the Tesla C1060 we
-obtain approximately 700MSample/s and on the GTX 280 about 670MSample/s, which is
-obviously slower than the xorlike-based PRNG on GPU. However, we will show in the
-next sections that
-this new PRNG has a strong level of security, which is necessary paid by a speed
-reduction.
+In Figure~\ref{fig:time_bbs_gpu} we highlight the performances of the optimized
+BBS-based PRNG on GPU. On the Tesla C1060 we obtain approximately 700MSample/s
+and on the GTX 280 about 670MSample/s, which is obviously slower than the
+xorlike-based PRNG on GPU. However, we will show in the next sections that this
+new PRNG has a strong level of security, which is necessarily paid by a speed
+reduction.
\begin{figure}[htbp]
\begin{center}
- \includegraphics[scale=.7]{curve_time_bbs_gpu.pdf}
+ \includegraphics[width=\columnwidth]{curve_time_bbs_gpu.pdf}
\end{center}
\caption{Quantity of pseudorandom numbers generated per second using the BBS-based PRNG}
\label{fig:time_bbs_gpu}
All these experiments allow us to conclude that it is possible to
generate a very large quantity of statistically perfect pseudorandom numbers with the xor-like version.
-In a certain extend, it is the case too with the secure BBS-based version, the speed deflation being
+To a certain extent, it is also the case with the secure BBS-based version, the speed deflation being
explained by the fact that the former version has ``only''
chaotic properties and statistical perfection, whereas the latter is also cryptographically secure,
as it is shown in the next sections.
denoted by $uv$.
In a cryptographic context, a pseudorandom generator is a deterministic
algorithm $G$ transforming strings into strings and such that, for any
-seed $k$ of length $k$, $G(k)$ (the output of $G$ on the input $k$) has size
-$\ell_G(k)$ with $\ell_G(k)>k$.
+seed $s$ of length $m$, $G(s)$ (the output of $G$ on the input $s$) has size
+$\ell_G(m)$ with $\ell_G(m)>m$.
The notion of {\it secure} PRNGs can now be defined as follows.
\begin{definition}
A cryptographic PRNG $G$ is secure if for any probabilistic polynomial time
algorithm $D$, for any positive polynomial $p$, and for all sufficiently
-large $k$'s,
-$$| \mathrm{Pr}[D(G(U_k))=1]-Pr[D(U_{\ell_G(k)})=1]|< \frac{1}{p(k)},$$
+large $m$'s,
+$$| \mathrm{Pr}[D(G(U_m))=1]-Pr[D(U_{\ell_G(m)})=1]|< \frac{1}{p(m)},$$
where $U_r$ is the uniform distribution over $\{0,1\}^r$ and the
-probabilities are taken over $U_N$, $U_{\ell_G(N)}$ as well as over the
+probabilities are taken over $U_m$, $U_{\ell_G(m)}$ as well as over the
internal coin tosses of $D$.
\end{definition}
negligible probability. The interested reader is referred
to~\cite[chapter~3]{Goldreich} for more information. Note that it is
quite easily possible to change the function $\ell$ into any polynomial
-function $\ell^\prime$ satisfying $\ell^\prime(N)>N)$~\cite[Chapter 3.3]{Goldreich}.
+function $\ell^\prime$ satisfying $\ell^\prime(m)>m$~\cite[Chapter 3.3]{Goldreich}.
The generation scheme developed in (\ref{equation Oplus}) is based on a
pseudorandom generator. Let $H$ be a cryptographic PRNG. We may assume,
the $S_i$'s). The cryptographic PRNG $X$ defined in (\ref{equation Oplus})
is the algorithm mapping any string of length $2N$ $x_0S_0$ into the string
$(x_0\oplus S_0 \oplus S_1)(x_0\oplus S_0 \oplus S_1\oplus S_2)\ldots
-(x_o\bigoplus_{i=0}^{i=k}S_i)$. Particularly one has $\ell_{X}(2N)=kN=\ell_H(N)$.
+(x_0\bigoplus_{i=0}^{i=k}S_i)$. In particular, one has $\ell_{X}(2N)=kN=\ell_H(N)$.
We claim now that if this PRNG is secure,
then the new one is secure too.
by a direct induction, that $w_i=w_i^\prime$. Furthermore, since $\mathbb{B}^{kN}$
is finite, each $\varphi_y$ is bijective. Therefore, and using (\ref{PCH-1}),
one has
+$\mathrm{Pr}[D^\prime(U_{kN})=1]=\mathrm{Pr}[D(\varphi_y(U_{kN}))=1]$ and,
+therefore,
\begin{equation}\label{PCH-2}
-\mathrm{Pr}[D^\prime(U_{kN})=1]=\mathrm{Pr}[D(\varphi_y(U_{kN}))=1]=\mathrm{Pr}[D(U_{kN})=1].
+\mathrm{Pr}[D^\prime(U_{kN})=1]=\mathrm{Pr}[D(U_{kN})=1].
\end{equation}
Now, using (\ref{PCH-1}) again, one has for every $x$,
\end{equation}
where $y$ is randomly generated. By construction, $\varphi_y(H(x))=X(yx)$,
thus
-\begin{equation}\label{PCH-3}
+\begin{equation}%\label{PCH-3} %%RAPH: I removed this label, it already exists 3 lines above
D^\prime(H(x))=D(yx),
\end{equation}
where $y$ is randomly generated.
\mathrm{Pr}[D^\prime(H(U_{N}))=1]=\mathrm{Pr}[D(U_{2N})=1].
\end{equation}
From (\ref{PCH-2}) and (\ref{PCH-4}), one can deduce that
-there exist a polynomial time probabilistic
+there exists a polynomial time probabilistic
algorithm $D^\prime$, a positive polynomial $p$, such that for all $k_0$ there exists
$N\geq \frac{k_0}{2}$ satisfying
$$| \mathrm{Pr}[D^\prime(H(U_{N}))=1]-\mathrm{Pr}[D^\prime(U_{kN})=1]|\geq \frac{1}{p(2N)},$$
-proving that $H$ is not secure, a contradiction.
+proving that $H$ is not secure, which is a contradiction.
\end{proof}
The modulus operation is the most time consuming operation for current
GPU cards. So in order to obtain quite reasonable performances, it is
-required to use only modulus on 32 bits integer numbers. Consequently
+required to use the modulo operation only on 32-bits integer numbers. Consequently
$x_n^2$ needs to be less than $2^{32}$, and thus the number $M$ must be
less than $2^{16}$. So in practice we can choose prime numbers around
-256 that are congruent to 3 modulus 4. With 32 bits numbers, only the
+256 that are congruent to 3 modulo 4. With 32-bits numbers, only the
4 least significant bits of $x_n$ can be chosen (the maximum number of
indistinguishable bits is less than or equal to
-$log_2(log_2(M))$). In other words, to generate a 32 bits number, we need to use
+$\log_2(\log_2(M))$). In other words, to generate a 32-bits number, we need to use
8 times the BBS algorithm with possibly different combinations of $M$. This
-approach is not sufficient to be able to pass all the TestU01,
+approach is not sufficient to be able to pass all the tests of TestU01,
as small values of $M$ for the BBS lead to
- small periods. So, in order to add randomness we proceed with
+ small periods. So, in order to add randomness we have proceeded with
the following modifications.
\begin{itemize}
\item
Firstly, we define 16 arrangement arrays instead of 2 (as described in
Algorithm \ref{algo:gpu_kernel2}), but only 2 of them are used at each call of
-the PRNG kernels. In practice, the selection of combinations
+the PRNG kernels. In practice, the selection of combination
arrays to be used is different for all the threads. It is determined
by using the three last bits of two internal variables used by BBS.
%This approach adds more randomness.
In Algorithm~\ref{algo:bbs_gpu},
the character \& is for the bitwise AND. Thus using \&7 with a number
-gives the last 3 bits, providing so a number between 0 and 7.
+gives the last 3 bits, thus providing a number between 0 and 7.
\item
Secondly, after the generation of the 8 BBS numbers for each thread, we
-have a 32 bits number whose period is possibly quite small. So
+have a 32-bits number whose period is possibly quite small. So
to add randomness, we generate 4 more BBS numbers to
-shift the 32 bits numbers, and add up to 6 new bits. This improvement is
+shift the 32-bits numbers, and add up to 6 new bits. This improvement is
described in Algorithm~\ref{algo:bbs_gpu}. In practice, the last 2 bits
of the first new BBS number are used to make a left shift of at most
-3 bits. The last 3 bits of the second new BBS number are add to the
+3 bits. The last 3 bits of the second new BBS number are added to the
strategy whatever the value of the first left shift. The third and the
fourth new BBS numbers are used similarly to apply a new left shift
and add 3 new bits.
\end{itemize}
\begin{algorithm}
-
+\begin{small}
\KwIn{InternalVarBBSArray: array with internal variables of the 8 BBS
in global memory\;
NumThreads: Number of threads\;
-tab: 2D Arrays containing 16 combinations (in first dimension) of size combination\_size (in second dimension)\;}
+array\_comb: 2D Arrays containing 16 combinations (in first dimension) of size combination\_size (in second dimension)\;
+array\_shift[4]=\{0,1,3,7\}\;
+}
\KwOut{NewNb: array containing random numbers in global memory}
\If{threadId is concerned} {
retrieve data from InternalVarBBSArray[threadId] in local variables including shared memory and x\;
we consider that bbs1 ... bbs8 represent the internal states of the 8 BBS numbers\;
offset = threadIdx\%combination\_size\;
- o1 = threadIdx-offset+tab[bbs1\&7][offset]\;
- o2 = threadIdx-offset+tab[8+bbs2\&7][offset]\;
+ o1 = threadIdx-offset+array\_comb[bbs1\&7][offset]\;
+ o2 = threadIdx-offset+array\_comb[8+bbs2\&7][offset]\;
\For{i=1 to n} {
- t<<=4\;
+ t$<<$=4\;
t|=BBS1(bbs1)\&15\;
...\;
- t<<=4\;
+ t$<<$=4\;
t|=BBS8(bbs8)\&15\;
- //two new shifts\;
- t<<=BBS3(bbs3)\&3\;
- t|=BBS1(bbs1)\&7\;
- t<<=BBS7(bbs7)\&3\;
- t|=BBS2(bbs2)\&7\;
- t=t $\wedge$ shmem[o1] $\wedge$ shmem[o2]\;
+ \tcp{two new shifts}
+ shift=BBS3(bbs3)\&3\;
+ t$<<$=shift\;
+ t|=BBS1(bbs1)\&array\_shift[shift]\;
+ shift=BBS7(bbs7)\&3\;
+ t$<<$=shift\;
+ t|=BBS2(bbs2)\&array\_shift[shift]\;
+ t=t\textasciicircum shmem[o1]\textasciicircum shmem[o2]\;
shared\_mem[threadId]=t\;
- x = x $\wedge$ t\;
+ x = x\textasciicircum t\;
store the new PRNG in NewNb[NumThreads*threadId+i]\;
}
store internal variables in InternalVarXorLikeArray[threadId] using a rotation\;
}
-
+\end{small}
\caption{Main kernel for the BBS-based PRNG on GPU}
\label{algo:bbs_gpu}
\end{algorithm}
-In Algorithm~\ref{algo:bbs_gpu}, $n$ is for the quantity
-of random numbers that a thread has to generate.
-The operation t<<=4 performs a left shift of 4 bits
-on the variable $t$ and stores the result in $t$, and
-$BBS1(bbs1)\&15$ selects
-the last four bits of the result of $BBS1$.
-Thus an operation of the form $t<<=4; t|=BBS1(bbs1)\&15\;$
-realizes in $t$ a left shift of 4 bits, and then puts
-the 4 last bits of $BBS1(bbs1)$ in the four last
-positions of $t$.
-Let us remark that to initialize $t$ is not a necessity as we
-fill it 4 bits by 4 bits, until having obtained 32 bits.
-The two last new shifts are realized in order to enlarge
-the small periods of the BBS used here, to introduce a variability.
-In these operations, we make twice a left shift of $t$ of \emph{at most}
-3 bits and we put \emph{exactly} the 3 last bits from a BBS into
-the 3 last bits of $t$, leading possibly to a loss of a few
-bits of $t$.
-
-It should be noticed that this generator has another time the form $x^{n+1} = x^n \oplus S^n$,
+In Algorithm~\ref{algo:bbs_gpu}, $n$ is for the quantity of random numbers that
+a thread has to generate. The operation t$<<$=4 performs a left shift of 4 bits
+on the variable $t$ and stores the result in $t$, and $BBS1(bbs1)\&15$ selects
+the last four bits of the result of $BBS1$. Thus an operation of the form
+$t<<=4; t|=BBS1(bbs1)\&15\;$ realizes in $t$ a left shift of 4 bits, and then
+puts the 4 last bits of $BBS1(bbs1)$ in the four last positions of $t$. Let us
+remark that the initialization of $t$ is not a necessity as we fill it 4 bits by
+4 bits, until having obtained 32 bits. The two last new shifts are realized in
+order to enlarge the small periods of the BBS used here, to introduce a kind of
+variability. In these operations, we make twice a left shift of $t$ of \emph{at
+ most} 3 bits, represented by \texttt{shift} in the algorithm, and we put
+\emph{exactly} the \texttt{shift} last bits from a BBS into the \texttt{shift}
+last bits of $t$. For this, an array named \texttt{array\_shift} is used; it gives,
+for each value of \texttt{shift}, the mask made of \texttt{shift} ones that is used
+in the \texttt{and} operation. For example, with a left shift of 0, we make an
+\texttt{and} operation with 0, whereas with a left shift of 3, we make an
+\texttt{and} operation with 7 (represented by 111 in binary mode).
+
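+To make these bit manipulations more concrete, here is a standalone C sketch of the
+assembly of one 32-bits value $t$ from the eight BBS generators; the modulus and the
+seeds below are arbitrary illustrative choices, not the ones of our GPU implementation.
+\begin{small}
+\begin{lstlisting}[language=C,caption={CPU-side sketch of the assembly of t from the 8 BBS (illustration only)},label=lst:bbsassembly]
+#include <stdint.h>
+#include <stdio.h>
+
+static const uint32_t M = 251u * 239u;  /* two primes = 3 mod 4, M < 2^16 */
+static uint32_t bbs[8] = {11, 23, 35, 47, 59, 61, 73, 87};
+
+/* one squaring step of the i-th BBS; x*x < 2^32 since x < 2^16 */
+static uint32_t bbs_next(int i) {
+  bbs[i] = (bbs[i] * bbs[i]) % M;
+  return bbs[i];
+}
+
+int main(void) {
+  const uint32_t array_shift[4] = {0, 1, 3, 7};
+  uint32_t t = 0, shift;
+  for (int i = 0; i < 8; i++) {         /* 8 times 4 bits from the 8 BBS */
+    t <<= 4;
+    t |= bbs_next(i) & 15u;
+  }
+  /* the two additional shifts, to enlarge the small periods */
+  shift = bbs_next(2) & 3u;             /* left shift of at most 3 bits  */
+  t <<= shift;
+  t |= bbs_next(0) & array_shift[shift];
+  shift = bbs_next(6) & 3u;
+  t <<= shift;
+  t |= bbs_next(1) & array_shift[shift];
+  printf("t = %08x\n", (unsigned) t);   /* t plays the role of S^n       */
+  return 0;
+}
+\end{lstlisting}
+\end{small}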
+It should be noticed that this generator has once more the form $x^{n+1} = x^n \oplus S^n$,
where $S^n$ is referred to as $t$ in this algorithm: each iteration of this
-PRNG ends with $x = x \wedge t;$. This $S^n$ is only constituted
+PRNG ends with $x = x \oplus t$. This $S^n$ is only constituted
by secure bits produced by the BBS generator, and thus, due to
Proposition~\ref{cryptopreuve}, the resulted PRNG is cryptographically
-secure
+secure.
\subsection{Toward a Cryptographically Secure and Chaotic Asymmetric Cryptosystem}
-
+\label{Blum-Goldwasser}
We finish this research work by giving some thoughts about the use of
the proposed PRNG in an asymmetric cryptosystem.
This first approach will be further investigated in a future work.
\item $i=0$.
\item While $i \leqslant L-1$:
\begin{itemize}
-\item Set $b_i$ equal to the least-significant\footnote{BBS can securely output up to $\mathsf{N} = \lfloor log(log(N)) \rfloor$ of the least-significant bits of $x_i$ during each round.} bit of $x_i$,
+\item Set $b_i$ equal to the least-significant\footnote{As signaled previously, BBS can securely output up to $\mathsf{N} = \lfloor log(log(N)) \rfloor$ of the least-significant bits of $x_i$ during each round.} bit of $x_i$,
\item $i=i+1$,
\item $x_i = (x_{i-1})^2~mod~N.$
\end{itemize}
\item Using the secret key $(p,q)$, she computes $r_p = y^{((p+1)/4)^{L}}~mod~p$ and $r_q = y^{((q+1)/4)^{L}}~mod~q$.
\item The initial seed can be obtained using the following procedure: $x_0=q(q^{-1}~{mod}~p)r_p + p(p^{-1}~{mod}~q)r_q~{mod}~N$.
\item She recomputes the bit-vector $b$ by using BBS and $x_0$.
-\item Alice computes finally the plaintext by XORing the keystream with the ciphertext: $ m = c \oplus b$.
+\item Alice finally computes the plaintext by XORing the keystream with the ciphertext: $ m = c \oplus b$.
\end{enumerate}
her new public key will be $(S^0, N)$.
To encrypt his message, Bob will compute
-\begin{equation}
-c = \left(m_0 \oplus (b_0 \oplus S^0), m_1 \oplus (b_0 \oplus b_1 \oplus S^0), \hdots, m_{L-1} \oplus (b_0 \oplus b_1 \hdots \oplus b_{L-1} \oplus S^0) \right)
-\end{equation}
+%%RAPH: here I used simple inline math
+%\begin{equation}
+$c = \left(m_0 \oplus (b_0 \oplus S^0), m_1 \oplus (b_0 \oplus b_1 \oplus S^0), \hdots, \right.$
+$ \left. m_{L-1} \oplus (b_0 \oplus b_1 \hdots \oplus b_{L-1} \oplus S^0) \right)$
+%%\end{equation}
instead of $\left(m_0 \oplus b_0, m_1 \oplus b_1, \hdots, m_{L-1} \oplus b_{L-1} \right)$.
The same decryption stage as in Blum-Goldwasser leads to the sequence
$\left(m_0 \oplus S^0, m_1 \oplus S^0, \hdots, m_{L-1} \oplus S^0 \right)$.
-Thus, with a simple use of $S^0$, Alice can obtained the plaintext.
+Thus, with a simple use of $S^0$, Alice can obtain the plaintext.
By doing so, the proposed generator is used in place of BBS, leading to
the inheritance of all the properties presented in this paper.
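+As a simple illustration of this modified encryption stage, the following C sketch
+(treating, for readability, each $m_i$, $b_i$, and $S^0$ as a 32-bits word instead of
+a single bit) computes the ciphertext blocks $c_i = m_i \oplus (b_0 \oplus \hdots \oplus b_i \oplus S^0)$.
+\begin{small}
+\begin{lstlisting}[language=C,caption={Sketch of the modified Blum-Goldwasser encryption stage (illustration only)},label=lst:bgenc]
+#include <stdint.h>
+#include <stddef.h>
+
+/* c[i] = m[i] ^ (b[0] ^ ... ^ b[i] ^ S0): m is the plaintext, b the
+   keystream produced by BBS, and S0 the public value S^0.            */
+void encrypt_with_S0(const uint32_t *m, const uint32_t *b, uint32_t S0,
+                     uint32_t *c, size_t L) {
+  uint32_t acc = S0;                /* running xor of S^0, b_0, ..., b_i */
+  for (size_t i = 0; i < L; i++) {
+    acc ^= b[i];
+    c[i] = m[i] ^ acc;
+  }
+}
+\end{lstlisting}
+\end{small}
+On Alice's side, the usual Blum-Goldwasser decryption followed by a final exclusive or
+with $S^0$ recovers the plaintext, as stated above.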
\section{Conclusion}
-In this paper we have presented a new class of PRNGs based on chaotic
-iterations. We have proven that these PRNGs are chaotic in the sense of Devaney.
-We also propose a PRNG cryptographically secure and its implementation on GPU.
-
-An efficient implementation on GPU based on a xor-like PRNG allows us to
-generate a huge number of pseudorandom numbers per second (about
-20Gsamples/s). This PRNG succeeds to pass the hardest batteries of TestU01.
-
-In future work we plan to extend this work for parallel PRNG for clusters or
-grid computing.
+In this paper, a formerly proposed PRNG based on chaotic iterations
+has been generalized to improve its speed. It has been proven to be
+chaotic according to Devaney.
+Efficient implementations on GPU using xor-like PRNGs as input generators
+have shown that a very large quantity of pseudorandom numbers can be generated per second (about
+20Gsamples/s), and that these proposed PRNGs succeed in passing the hardest battery in TestU01,
+namely the BigCrush.
+Furthermore, we have shown that when the inputted generator is cryptographically
+secure, then so is the proposed PRNG, thus leading to
+the possibility of developing fast and secure PRNGs using the GPU architecture.
+Thoughts about an improvement of the Blum-Goldwasser cryptosystem, using the
+proposed method, have finally been given.
+
+In future work we plan to extend this research, building a parallel PRNG for clusters or
+grid computing. Topological properties of the various proposed generators will be investigated,
+and the use of other categories of PRNGs as input will be studied too. The improvement
+of Blum-Goldwasser will be deepened. Finally, we
+will try to increase the quantity of pseudorandom numbers generated per second either
+in a simulation context or in a cryptographic one.