\chapter{The family of CI PRNGs}
2 \label{ThefamilyofCIPRNG}
4 \section{Mapping matrix}
The chaotic iterations introduced above can be described using the mapping matrix defined below.
9 Let $f:\mathds{B}^{\mathsf{N}}\longrightarrow \mathds{B}^{\mathsf{N}}$
10 be an iteration function, then its associated \emph{mapping matrix}
$\mathsf{f}$ is the matrix of size $\mathsf{N} \times 2^\mathsf{N}$ whose element $\mathsf{f}_{p,q}$ is the integer having the following binary decomposition: $q_\mathsf{N}, \hdots, q_{\mathsf{N}-p+2}, f(q)_{\mathsf{N}-p+1}, q_{\mathsf{N}-p}, \hdots, q_1 $, where $q_i$ (resp. $f(q)_i$) is the $i$-th binary digit of $q$ (resp. of $f(q)$).
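For illustration, the following minimal Python sketch (the names \texttt{mapping\_matrix} and \texttt{f0} are ours) computes this matrix for any iteration function given as a map on $\llbracket 0; 2^\mathsf{N}-1 \rrbracket$, under the bit-numbering convention of the definition above; its first row reproduces the first row of Table~\ref{negation_output}.

\begin{verbatim}
def mapping_matrix(f, N):
    """Mapping matrix of f: row p (1-indexed) contains, for each state q,
    the integer q with its (N-p+1)-th bit (counted from the right, starting
    at 1) replaced by the corresponding bit of f(q)."""
    M = [[0] * (2 ** N) for _ in range(N)]
    for p in range(1, N + 1):
        i = N - p                        # 0-based position of the modified bit
        for q in range(2 ** N):
            bit = (f(q) >> i) & 1
            M[p - 1][q] = (q & ~(1 << i)) | (bit << i)
    return M

# Vectorial Boolean negation on N = 4 bits: every bit of q is flipped.
f0 = lambda q: q ^ 0b1111
print(mapping_matrix(f0, 4)[0])   # [8, 9, 10, ..., 15, 0, 1, ..., 7]
\end{verbatim}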
32 The relation between $\mathsf{f}$ and chaotic iterations of $f$ can be
33 understood as follows. If the current state of the system is $q$ and the
34 strategy is $p$, then the next state (under the chaotic iterations of $f
35 $) will be $\mathsf{f}_{p,q}$. Finally, the vector $\mathcal{F}(f)=(f(0),f(1),\ldots,f(2^{\mathsf{N}}-1)) \in \llbracket 0 ; 2^{\mathsf{N}}-1 \rrbracket^{2^{\mathsf{N}}}$ is called \emph{vector of images}.
36 An example is shown for the vectorial Boolean negation $f_0 (x_1, \hdots, x_\mathsf{N}) = (\overline{x_1}, \hdots, \overline{x_\mathsf{N}})$ in Table~
37 \ref{negation_output}.
39 \renewcommand{\arraystretch}{1.2}
40 \caption{The matrix $\mathsf{f}$ associated to $f_0$}
41 \label{negation_output}
43 \begin{tabular}{c|c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c}%\hline
44 \backslashbox{p}{q} &~~0 & ~~1~~ & ~~2~~ & ~~3~~ & ~~4~~ & ~~5~~& ~~6~~ & ~~7~~ &~~ 8~~ & ~~9~~ & ~~10~~ & ~~11~~
45 & ~~12~~ & ~~13~~ & ~~14~~ &15 \\ \hline
46 $(f_0(q)_1,q_2,q_3,q_4)$ &\multicolumn{1}{@{}r@{}}{\multirow{4}*{$\left(
47 \begin{array}{@{}c@{}}
53 \right.$}}&9&10&11&12&13&14&15&0&1&2&3&4&5&6&
54 \multicolumn{1}{@{}l@{}}{\multirow{4}*{$\left.
55 \begin{array}{@{}c@{}}
63 $(q_1,f_0(q)_2,q_3,q_4)$&&5&6&7&0&1&2&3&12&13&14&15&8&9&10& \\
64 $(q_1,q_2,f_0(q)_3,q_4)$&&3&0&1&6&7&4&5&10&11&8&9&14&15&12& \\
65 $(q_1,q_2,q_3,f_0(q)_4)$& &0&3&2&5&4&7&6&9&8&11&10&13&12&15&\\\hline
66 $\mathcal{F}(f_0)$ &(15,&14,&13,&12,&11,&10,&9,&8,&7,&6,&5,&4,&3,&2,&1,&0)\\
70 \section{Characterizing and Computing Functions for PRNG}\label{sec:instantiating}
71 This section presents other functions that theoretically could replace the
72 negation function $\neg$ in the previous algorithms.
In the previous algorithms, from the graph point of view,
iterating the function $G_f$ from a configuration $x^0$
and according to a strategy $(S^t)_{t \in \mathds{N}}$
consists in traversing the directed iteration graph $\Gamma(f)$
from the vertex $x^0$, following the edges labelled $S^0$, $S^1$, \ldots
Obviously, if some vertices cannot be reached from other ones,
the corresponding numbers cannot be output by the generator.
The \emph{strong connectivity of $\Gamma(f)$}
(\textit{i.e.}, the existence of a path from each vertex to every other one),
denoted by SCC in the following~\cite{ita09},
is thus a necessary condition on the function $f$.
The following result shows that this condition is also sufficient to make the
iterations of $G_f$ chaotic.
88 \begin{theorem}[Theorem III.6, p. 91 in~\cite{GuyeuxThese10}]
90 Let $f$ be a function from $\mathds{B}^{n}$ to $\mathds{B}^{n}$. Then
91 $G_f$ is chaotic according to Devaney iff the graph
92 $\Gamma(f)$ is strongly connected.
Any function $f$ such that the graph $\Gamma(f)$ is strongly connected
is thus a candidate for being iterated in $G_f$
for pseudorandom number generation.
Let us now show how to compute a map $f$
with a strongly connected iteration graph $\Gamma(f)$.
We first consider the negation function $\neg$. The iteration graph
$\Gamma(\neg)$ is obviously strongly connected:
since each configuration $(x_1,\ldots, x_n)$ can reach any of its $n$ neighbors
(the configurations that differ from it in exactly one bit),
there is a bit-by-bit path from any
$(x_1,\ldots, x_n)$ to any $(x'_1,\ldots, x'_n)$.
Let then $\Gamma$ be a graph, initialized with $\Gamma(\neg)$;
the algorithm iteratively performs the two following stages:
\item randomly select an edge of the current iteration graph $\Gamma$, and
\item check whether the current iteration graph without that edge
remains strongly connected (using Tarjan's algorithm~\cite{Tarjanscc72}, for instance); in the positive case, the edge is removed from $\Gamma$,
until the rate $r$ of removed edges becomes greater
than a threshold given by the user.
Formally, if $r$ is close to $0\%$ (\textit{i.e.}, few edges are removed),
there should remain about $n\times 2^n$ edges (let us recall that $2^n$ is the number of nodes).
In the opposite case, if $r$ is close to $100\%$, only about $2^n$ edges are left,
since at least $2^n$ edges are needed to keep a graph on $2^n$ vertices strongly connected.
In all cases, this step returns the last graph $\Gamma$
that is strongly connected.
The function $f$ whose iteration graph is this final $\Gamma$ is then returned, as illustrated by the sketch below.
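The following Python sketch (all names are ours) illustrates this removal process on $\Gamma(\neg)$: edges are drawn at random and discarded only when the graph remains strongly connected; for these small graphs, a plain double reachability check is used instead of Tarjan's algorithm.

\begin{verbatim}
import random

def gamma_of_negation(n):
    """Edges of Gamma(neg): updating any component of x flips that bit."""
    return {(x, x ^ (1 << i)) for x in range(2 ** n) for i in range(n)}

def strongly_connected(n, edges):
    """Every vertex is reachable from 0 and reaches 0 (search both ways)."""
    def covers_all(succ):
        seen, todo = {0}, [0]
        while todo:
            for v in succ.get(todo.pop(), ()):
                if v not in seen:
                    seen.add(v)
                    todo.append(v)
        return len(seen) == 2 ** n
    fwd, bwd = {}, {}
    for u, v in edges:
        fwd.setdefault(u, []).append(v)
        bwd.setdefault(v, []).append(u)
    return covers_all(fwd) and covers_all(bwd)

def thin_graph(n, threshold, max_tries=10000):
    """Randomly remove edges from Gamma(neg), keeping it strongly connected,
    until the rate of removed edges exceeds `threshold` (or tries run out)."""
    edges, total = gamma_of_negation(n), n * 2 ** n
    for _ in range(max_tries):
        if 1 - len(edges) / total > threshold:
            break
        e = random.choice(tuple(edges))
        if strongly_connected(n, edges - {e}):
            edges.remove(e)
    return edges

print(len(thin_graph(4, 0.2)))   # about 80% of the 64 initial edges remain
\end{verbatim}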
However, such an approach generates many functions with similar
behaviors, due to the similarity of their iteration graphs. More formally, let us recall
the definition of graph isomorphism, which resolves this issue.
128 Two directed graphs $\Gamma_1$ and $\Gamma_2$
129 are \emph{isomorphic}
130 if there exists a permutation $p$ from the vertices of
131 $\Gamma_1$ to the vertices of $\Gamma_2$ such that
132 there is an arc from vertex $u$ to vertex $v$ in $\Gamma_1$
iff there is an arc from vertex $p(u)$ to vertex $p(v)$ in $\Gamma_2$.
137 Then, let $f$ be a function, $\Gamma(f)$
138 be its iteration graph, and $p$
139 be a permutation of vertices of $\Gamma(f)$.
Since $p(\Gamma(f))$ and $\Gamma(f)$ are isomorphic, iterating $G_f$
(\textit{i.e.}, traversing $\Gamma(f)$) from the initial configuration $c$
143 amounts to iterating the function whose iteration graph is $p(\Gamma(f))$
144 from the configuration $p(c)$.
Graph isomorphism being an equivalence relation, the sequel only
considers the quotient set of functions by this relation on their iteration graphs.
In other words, two functions are considered distinct if and only if their iteration
graphs are not isomorphic.
154 \begin{tabular}{|c|c|c|}
156 Function $f$ & $f(x)$, for $x$ in $(0,1,2,\hdots,15)$ & Rate\\
158 $\neg$&(15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0)&0\%\\
160 $\textcircled{a}$&(15,14,13,12,11,10,9,8,7,6,7,4,3,2,1,0)&2.1\%\\
162 $\textcircled{b}$&(14,15,13,12,11,10,9,8,7,6,5,4,3,2,1,0)&4.1\%\\
164 $\textcircled{c}$&(15,14,13,12,11,10,9,8,7,7,5,12,3,0,1,0)&6.25\%\\
166 $\textcircled{d}$&(14,15,13,12,9,10,11,0,7,2,5,4,3,6,1,8)&16.7\%\\
168 $\textcircled{e}$&(11,2,13,12,11,14,9,8,7,14,5,4,1,2,1,9)&16.7\%\\
170 $\textcircled{f}$&(13,10,15,12,3,14,9,8,6,7,4,5,11,2,1,0)&20.9\%\\
172 $\textcircled{g}$&(13,7,13,10,11,10,1,10,7,14,4,4,2,2,1,0)&20.9\%\\
174 $\textcircled{h}$&(7,12,14,12,11,4,1,13,4,4,15,6,8,3,15,2)&50\%\\
176 $\textcircled{i}$&(12,0,6,4,14,15,7,15,11,1,14,2,7,4,7,9)&75\%\\
179 \caption{Functions with SCC graph of iterations\label{table:nc}}
Table~\ref{table:nc} presents some generated functions,
ordered by the rate of removed edges in their
iteration graph compared to the iteration graph $\Gamma(\neg)$
of the Boolean negation function $\neg$.
For instance, let us consider the function $\textcircled{g}$ from $\mathds{B}^4$ to $\mathds{B}^4$
defined by the following images:
$[13,7,13,10,11,10,1,10,7,14,4,4,2,2,1,0]$.
In other words, the image of $3 ~(0011)$ by $\textcircled{g}$ is $10 ~(1010)$: it is obtained
as the binary value of the fourth element of this list of images
(namely~10). It is not hard to verify that $\Gamma(\textcircled{g})$ is strongly connected.
The next section gives practical evaluations of these functions.
195 \section{Modifying the PRNG Algorithm}\label{sec:modif}
A coarse attempt could directly embed each function of Table~\ref{table:nc}
197 in the $\textit{iterate\_G}$ function defined in Algorithm~\ref{algo:it}.
201 \textbf{Input:} a function $f$, a PRNG $r$,
a number of iterations $k$, a binary number $x^0$ ($n$ bits)\\
203 \textbf{Output:} a binary number $x$ ($n$ bits)
204 \begin{algorithmic}[1]
205 \STATE$x\leftarrow x^0$\;
206 \STATE S = $\textit{sample}(r,k,n)$\;
207 \FOR{$i=0,\dots,k-1$}
209 \STATE$s \leftarrow S[i]$\;
210 \STATE $x\leftarrow F_f(s,x) $\;
214 \caption{The \textit{iterate\_G} function. }
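As an illustration, here is a minimal Python transcription of \textit{iterate\_G} (all names are ours); $F_f(s,x)$ is interpreted as the replacement of the $s$-th bit of $x$ by the corresponding bit of $f(x)$, with components numbered from 0 for simplicity, and the strategy is sampled with Python's \texttt{random} module as a stand-in for the PRNG $r$.

\begin{verbatim}
import random

def F_f(f, s, x):
    """Replace bit s of x (0-indexed) by the corresponding bit of f(x)."""
    return (x & ~(1 << s)) | (f[x] & (1 << s))

def iterate_G(f, r, k, x0, n):
    """k chaotic iterations of f from x0, with strategy sampled by the PRNG r."""
    x = x0
    S = [r.randrange(n) for _ in range(k)]   # the call sample(r, k, n)
    for s in S:
        x = F_f(f, s, x)
    return x

# Example with the function (b) of the table above, on n = 4 bits.
f_b = [14, 15, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
print(iterate_G(f_b, random.Random(42), k=10, x0=0b0100, n=4))
\end{verbatim}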
Let us show the drawbacks of this approach on a simpler example.
Let us consider for instance $n=2$, the negation function on $\mathds{B}^2$, and
the function $f$ defined by the list $[1,3,0,2]$ (\textit{i.e.}, $f(0,0) = (0,1)$, $f(0,1) = (1,1)$, $f(1,0) = (0,0)$, and $f(1,1)=(1,0)$), whose iteration graphs are represented
in Fig.~\ref{fig:xplgraph}.
The two graphs are strongly connected, and thus the vectorial negation function
could theoretically be replaced by the function $f$.
231 \subfigure[Negation]{
232 \includegraphics[width=1.33cm]{images/neg2.eps}
235 \subfigure[$(1,3,0,2)$]{
236 \includegraphics[width=1.62cm]{images/f2.eps}
239 \caption{Graphs of Iterations}
243 %In what follows, we suppose that
In the graph of iterations $\Gamma({\neg})$ (Fig.~\ref{fig:comp:n}),
let us compute the probability $P^t_{\neg}(X)$ of reaching the node $X$ in $t$ iterations
from the node 00. Let $X_0$, $X_1$, $X_2$, $X_3$ be the nodes
$00$, $01$, $11$, and $10$.
For $i\in \llbracket 0,3 \rrbracket$, the probabilities $P^1_{\neg}(X_i)$ are respectively equal to
0.0, 0.5, 0.0, 0.5, while the probabilities $P^2_{\neg}(X_i)$
are 0.5, 0.0, 0.5, 0.0.
It is easy to establish that
$P^{2t}_{\neg}(X_i) = P^{2}_{\neg}(X_i)$ and $P^{2t+1}_{\neg}(X_i) = P^{1}_{\neg}(X_i)$ for any $t\geq 1$.
Hence, by iterating either $k$ or $k+1$ times, as done in the generator (each case being equiprobable), every node is finally reached with probability 0.25.
Let us apply a similar reasoning to the function $f$ defined by $[1,3,0,2]$.
In its iteration graph $\Gamma(f)$ (Fig.~\ref{fig:comp:f}),
and with $X_i$ defined as above,
the probabilities $P^1_{f}(X_i)$ to reach the node $X_i$
in one iteration from the node 00
are respectively equal to 0.5, 0.5, 0.0, and 0.0.
Next, the probabilities $P^2_{f}(X_i)$ are 0.25, 0.5, 0.25, 0.0,
and the probabilities $P^3_{f}(X_i)$ are 0.125, 0.375, 0.375, 0.125.
For each iteration $t$, we compute the average deviation rate $R^t$
with respect to 0.25 as follows:
\[
R^t= \dfrac{ \sum_{i=0}^3 \left| P^t_{f}(X_i)-0.25 \right|}{4}.
\]
The higher this rate, the farther the generator is from reaching each $X_i$ uniformly from $00$.
For this example, it is necessary to iterate 14 times in order to
observe a sufficiently small deviation from 0.25.
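A minimal Python sketch of this computation is given below (function names are ours); it propagates the distribution of the states through the chaotic iterations and evaluates the deviation rate, with the normalization by the number of nodes used in the formula above.

\begin{verbatim}
def step(dist, f, n):
    """One chaotic iteration: a uniformly chosen bit of x is replaced
    by the corresponding bit of f(x)."""
    nxt = {}
    for x, p in dist.items():
        for i in range(n):
            y = (x & ~(1 << i)) | (f[x] & (1 << i))
            nxt[y] = nxt.get(y, 0.0) + p / n
    return nxt

def deviation(f, n, t, start=0):
    """Average deviation R^t of the t-step distribution from the uniform one."""
    dist = {start: 1.0}
    for _ in range(t):
        dist = step(dist, f, n)
    u = 1.0 / 2 ** n
    return sum(abs(dist.get(x, 0.0) - u) for x in range(2 ** n)) / 2 ** n

# Example of the text: f = [1, 3, 0, 2] on two bits, starting from node 00.
f = [1, 3, 0, 2]
print([round(deviation(f, 2, t), 4) for t in range(1, 15)])
\end{verbatim}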
A similar reasoning has been applied to all the functions listed in Table~\ref{table:nc}.
Table~\ref{tab:dev} summarizes their deviations from the uniform distribution, together with the
smallest number of iterations for which this deviation is obtained.
280 \begin{tabular}{|c|r|r|}
282 Name & Deviation & Suff. number of it. \\
285 $\textcircled{a}$ & 8.1\% & 167 \\
287 $\textcircled{b}$ & 1\% & 105 \\
289 $\textcircled{c}$ & 18\% & 58 \\
291 $\textcircled{d}$ & 1\% & 22 \\
293 $\textcircled{e}$ & 24\% & 19 \\
295 $\textcircled{f}$ & 1\% & 14 \\
297 $\textcircled{g}$ & 20\% & 6 \\
299 $\textcircled{h}$ & 45.3\% & 7 \\
301 $\textcircled{i}$ & 53.2\%& 14 \\
304 \caption{Deviation with Uniform Distribution \label{tab:dev}}
With that material, we present in Algorithm~\ref{CIs Algorithm}
the method that allows any of these chaotic functions to be embedded as
the iteration function.
Among its parameters, it takes the minimal number $b$ of iterations
that have to be executed to get a distribution close to the uniform one.
For our experiments, $b$ is set to the value
given in the third column of Table~\ref{tab:dev}.
318 \textbf{Input:} a function $f$, an iteration number $b$, an initial state $x^0$ ($n$ bits)\\
319 \textbf{Output:} a state $x$ ($n$ bits)
320 \begin{algorithmic}[1]
321 \STATE$x\leftarrow x^0$\;
322 \STATE$k\leftarrow b + (\textit{XORshift}() \mod 2)$\;
323 \FOR{$i=0,\dots,k-1$}
325 \STATE$s\leftarrow{\textit{XORshift}() \mod n}$\;
326 \STATE$x\leftarrow{F_f(s,x)}$\;
330 \caption{modified PRNG with various functions}
331 \label{CIs Algorithm}
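A minimal Python transcription of this algorithm is sketched below; the class \texttt{XorShift32} is a standard 32-bit xorshift used as an assumed stand-in for the \textit{XORshift} generator of the text, and all the other names are ours.

\begin{verbatim}
import time

class XorShift32:
    """Marsaglia's 32-bit xorshift, used as a stand-in for XORshift()."""
    def __init__(self, seed):
        self.state = (seed & 0xFFFFFFFF) or 1
    def next(self):
        x = self.state
        x ^= (x << 13) & 0xFFFFFFFF
        x ^= x >> 17
        x ^= (x << 5) & 0xFFFFFFFF
        self.state = x
        return x

def ci_prng_step(f, n, b, x, rng):
    """One output of the modified PRNG: iterate F_f either b or b+1 times."""
    k = b + (rng.next() % 2)
    for _ in range(k):
        s = rng.next() % n                         # component to update
        x = (x & ~(1 << s)) | (f[x] & (1 << s))    # x <- F_f(s, x)
    return x

# Example with the function (f) of the table above and its sufficient
# number of iterations b = 14.
f_f = [13, 10, 15, 12, 3, 14, 9, 8, 6, 7, 4, 5, 11, 2, 1, 0]
rng, x, outputs = XorShift32(int(time.time())), 0b0100, []
for _ in range(5):
    x = ci_prng_step(f_f, 4, 14, x, rng)
    outputs.append(x)
print(outputs)
\end{verbatim}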
Compared to Algorithm~\ref{Chaotic iteration1},
the parameters of this one are
the function $f$ to embed and
the smallest number $b$ of time steps for which $G_f$ has to be iterated.
First, the number of iterations is set to either $b$ or $b+1$, depending on the
parity of the next \textit{XORshift} output.
Next, a loop that iterates $G_f$ is executed.
In the following example, $n$ and $b$ are both equal to $4$, for the sake of simplicity.
The initial state of the system $x^0$ can be seeded by the decimal part of the current time.
For example, if the current time in seconds since the Epoch is 1237632934.484088,
then $t = 484088$, and $x^0 = t \mod 16$ written with binary digits, that is, $x^0 = 0100$.
The sequences $k$ and $s$ can now be computed with \textit{XORshift}:
366 \item $f$ = [14,15,13,12,11,10,9,8,7,6,5,4,3,2,1,0]
367 \item $k$ = 4, 5, 4,\ldots
368 \item $s$ = 2, 4, 2, 3, , 4, 1, 1, 4, 2, , 0, 2, 3, 1,\ldots
Chaotic iterations are then done with the initial state $x^0$,
the iteration function $f$, and the strategy $s^1$, $s^2$, \ldots
The successive stages are detailed in Table~\ref{table application example}.
Let us recall that the sequence $k$ gives the states $x^t$ to return:
$x^4$, $x^{4+5}$, $x^{4+5+4}$, \ldots
377 %\renewcommand{\arraystretch}{1.3}
379 \begin{tabular}{|c|c@{}c@{}c@{}c@{}c|c@{}c@{}c@{}c@{}c@{}c|c@{}c@{}c@{}c@{}c|}
381 $k$ & \multicolumn{5}{|c|}{4} & \multicolumn{6}{|c|}{5} & \multicolumn{5}{|c|}{4}\\
383 $s$ & 2 & 4 & 2 & 3 & & 4 & 1 & 1 & 4 & 2 & & 0 & 2 & 3 & 1 & \\ \hline
384 % [14,15,13,12,11,10,9,8,7,6,5,4,3,2,1,0]
385 &$f(4)$&$f(0)$&$f(0)$&$f(4)$ & &$f(6)$ &$f(7)$ &$f(15)$ &$f(7)$ &$f(7)$ &
386 &$f(2)$ &$f(0)$ & $f(4)$& $f(6)$& \\
390 & 1& \textbf{1} & \textbf{0} &1 &1 &
391 &1 & 1& 1&\textbf{1} &
394 & \textbf{0} & 1& \textbf{1} & 0 &
395 &0 &0& 0&0 & \textbf{0}&
396 &1 &\textbf{1} & 0&0 & \\
398 &1 & 1& 1& \textbf{1}&
400 & \textbf{0} & 1& \textbf{1} & 0 &
403 &1 &\textbf{0} &0 &1 &
404 &\textbf{1} &0 &0 &\textbf{0} &0 &
407 $x^{0}$ & & & & & $x^{4}$ & & & & & & $x^{9}$ & & & & & $x^{13}$ \\
408 4 &0 &0 &4 &6&6 &7 &15 &7 &7 &7 &2&0 &4 &6 &14 &14 \\
409 & & & & & && & & & & & & & & & \\
412 0 & & $\xrightarrow{1} 1$ & $\xrightarrow{1} 0$ & & &
413 0 & & & & $\xrightarrow{1} 1$ &
416 1 & $\xrightarrow{2} 0$ & & $\xrightarrow{2} 1$ & &
417 1 & & & & & $\xrightarrow{2} 0$ &
418 0 & & $\xrightarrow{2} 1$ & & &
421 0 & & & & $\xrightarrow{3} 1$ &
423 1 & $\xrightarrow{3} 0$ & & $\xrightarrow{3} 1$ & &
426 0 & & $\xrightarrow{4} 0$ & & &
427 0 &$\xrightarrow{4} 1$ & & & $\xrightarrow{4} 0$& &
432 % Binary Output: $x_1^{0}x_2^{0}x_3^{0}x_4^{0}x_5^{0}x_1^{4}x_2^{4}x_3^{4}x_4^{4}x_5^{4}x_1^{9}x_2^{9}x_3^{9}x_4^{9}x_5^{9}x_1^{13}x_2^{13}... = 0100011001110001...$
434 % $x^{0},x^{0},x^{4},x^{6},x^{8}... = 6,7,1...$
435 \caption{Application example}
436 \label{table application example}
439 % So, in this example, the generated binary digits are: 0100011001110001... Or the integers are: 6, 2, 14\ldots
To illustrate the deviation, Figures~\ref{fig:f5} and~\ref{fig:f6} represent
the simulation outputs of 5120 executions with $b$ equal to $40$,
for $\textcircled{e}$ and $\textcircled{f}$ respectively.
In these two figures, a point $(x,y,z)$ can be understood as follows:
$z$ is the number of times the value $x$ has been followed by the value $y$ in the
considered generator.
These two figures explicitly confirm that the outputs of the function $\textcircled{f}$ are
more uniform than those of the function $\textcircled{e}$:
in the former, each number $x$ is followed about 20 times by each number $y$, whereas
in the latter, the counts vary from 10 to more than 50.
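The counts plotted in these figures can be obtained with a few lines of Python (a sketch with our own naming, applicable to any sequence of generator outputs):

\begin{verbatim}
from collections import Counter

def succession_counts(outputs):
    """z[(x, y)] = number of times the value x is immediately followed by y."""
    return Counter(zip(outputs, outputs[1:]))

# Toy usage on an arbitrary output sequence (stand-in for a generator run).
print(succession_counts([4, 6, 14, 6, 4, 14, 6]))
\end{verbatim}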
457 \subfigure[Function $\textcircled{e}$]{
458 \includegraphics[width=10cm]{images/f5.eps}
462 \subfigure[Function $\textcircled{f}$]{
463 \includegraphics[width=10cm]{images/f6.eps}
\caption{Distribution of the function outputs.} \label{fig:fs}
473 \section{Experiments}
476 \begin{sidewaystable}
477 \renewcommand{\arraystretch}{1.3}
478 \caption{NIST SP 800-22 test results ($\mathbb{P}_T$)}
479 \label{The passing rate3}
481 \begin{tabular}{|l||c|c|c|c|c|c|c|c|c|}
483 Method &$\textcircled{a}$& $\textcircled{b}$ & $\textcircled{c}$ & $\textcircled{d}$ & $\textcircled{e}$ & $\textcircled{f}$ & $\textcircled{g}$ & $\textcircled{h}$ & $\textcircled{i}$\\ \hline\hline
485 %$w^{j}$ & $\{1,..,8\}$ & $\{1,..,8\}$ & $\{1,..,8\}$ & $\{1,..,5\}$ & $\{1,..,5\}$ &$\{1,..,5\}$ \\ \hline \hline
486 Frequency (Monobit) Test &0.00000 & 0.45593 & 0.00000 & 0.38382 & 0.00000 & 0.61630 & 0.00000 & 0.00000 & 0.00000 \\ \hline
487 Frequency Test within a Block &0.00000 & 0.55442 & 0.00000 & 0.03517 & 0.00000 & 0.73991 & 0.00000 & 0.00000 & 0.00000 \\ \hline
488 Cumulative Sums (Cusum) Test* &0.00000 & 0.56521 & 0.00000 & 0.19992 & 0.00000 & 0.70923 & 0.00000 & 0.00000 & 0.00000 \\ \hline
489 Runs Test &0.00000 & 0.59554 & 0.00000 & 0.14532 & 0.00000 & 0.24928 & 0.00000 & 0.00000 & 0.00000 \\ \hline
490 Test for the Longest Run of Ones in a Block &0.20226 & 0.17186 & 0.00000 & 0.38382 & 0.00000 & 0.40119 & 0.00000 & 0.00000 & 0.00000 \\ \hline
491 Binary Matrix Rank Test &0.63711 & 0.69931 & 0.05194 & 0.16260 & 0.79813 & 0.03292 & 0.85138 & 0.12962 & 0.07571 \\ \hline
492 Discrete Fourier Transform (Spectral) Test &0.00009 & 0.09657 & 0.00000 & 0.93571 & 0.00000 & 0.93571 & 0.00000 & 0.00000 & 0.00000 \\ \hline
493 Non-overlapping Template Matching Test* &0.12009 & 0.52365 & 0.05426 & 0.50382 & 0.02628 & 0.50326 & 0.06479 & 0.00854 & 0.00927 \\ \hline
494 Overlapping Template Matching Test &0.00000 & 0.73991 & 0.00000 & 0.55442 & 0.00000 & 0.45593 & 0.00000 & 0.00000 & 0.00000 \\ \hline
495 Universal Statistical Test &0.00000 & 0.71974 & 0.00000 & 0.77918 & 0.00000 & 0.47498 & 0.00000 & 0.00000 & 0.00000 \\ \hline
496 Approximate Entropy Test &0.00000 & 0.10252 & 0.00000 & 0.28966 & 0.00000 & 0.14532 & 0.00000 & 0.00000 & 0.00000\\ \hline
497 Random Excursions Test* &NaN & 0.58707 &NaN & 0.41184 &NaN & 0.25174 &NaN &NaN &NaN \\ \hline
498 Random Excursions Variant Test* &NaN & 0.32978 &NaN & 0.57832 &NaN & 0.31028 &NaN &NaN &NaN \\ \hline
499 Serial Test* (m=10) &0.11840 & 0.95107 & 0.01347 & 0.57271 & 0.00000 & 0.82837 & 0.00000 & 0.00000 & 0.00000 \\ \hline
500 Linear Complexity Test & 0.91141 & 0.43727 & 0.59554 & 0.43727 & 0.55442 & 0.43727 & 0.59554 & 0.69931 & 0.08558 \\ \hline
501 Success &5/15&15/15&4/15&15/15&3/15&15/15&3/15&3/15&3/15 \\ \hline
502 Computational time &66.0507&47.0466&32.6808&21.6940&20.5759&19.2052&16.4945&16.8846&19.0256\\ \hline
Table~\ref{The passing rate3} shows the $\mathbb{P}_T$ values of the sequences based on
discrete chaotic iterations using different ``iteration'' functions. If there
are at least two statistical values in a test, the test is marked with an
asterisk and the average value is computed to characterize the statistical
values. Here, NaN means that the test is not applicable because of an
insufficient number of cycles. The computational time (in seconds) is the duration
needed by each algorithm to generate a sequence of $10^8$ bits. The tests have
been conducted on the same computer, with the same compiler and the same optimization
settings for all generators, in order to make the comparison as fair as possible.
Firstly, the computational time in seconds increases with the sufficient
number of iterations given in Table~\ref{tab:dev}.
For instance, the fastest generator is $\textcircled{g}$, since each new
number only requires 6 iterations to be generated.
Next, concerning the NIST test results,
the best situations are obtained with $\textcircled{b}$, $\textcircled{d}$, and
$\textcircled{f}$. On the contrary, it can be observed that, among the 15 tests,
at most 5 are passed by the other functions.
Thus, we can conclude that $\textcircled{b}$, $\textcircled{d}$,
and $\textcircled{f}$ are qualified to be good PRNGs with the chaos property.
These NIST results are not a surprise:
$\textcircled{b}$, $\textcircled{d}$, and $\textcircled{f}$ indeed have a deviation from
the uniform distribution of only 1\%, as already stated in Table~\ref{tab:dev}.
The rate of removed edges in the graph $\Gamma(\neg)$ is thus not a pertinent
criterion compared to the deviation from the uniform distribution:
the function $\textcircled{a}$, whose graph $\Gamma(\textcircled{a})$ is $\Gamma(\neg)$ without the
edge $1010 \rightarrow 1000$ (\textit{i.e.}, with only one edge less than
$\Gamma(\neg)$), has dramatically worse results than the function
$\textcircled{f}$, which has many fewer edges.
Let us then try to give a characterization of convenient functions.
By comparing them with the other functions, we notice that
$\textcircled{b}$, $\textcircled{d}$, and $\textcircled{f}$ are composed of all the elements of
$\llbracket 0;15 \rrbracket$.
It means that $\textcircled{b}$, $\textcircled{d}$, and $\textcircled{f}$, like the vectorial Boolean
negation function, are arrangements of
$\llbracket 0;2^n-1 \rrbracket$ ($n=4$ in this chapter) into a
sequence in which each element occurs exactly once; in other words, they are permutations of $\llbracket 0;2^n-1 \rrbracket$.
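This characterization is straightforward to check programmatically; the following sketch (names are ours) verifies whether a vector of images of Table~\ref{table:nc} is such an arrangement, \textit{i.e.}, a permutation of $\llbracket 0;2^n-1 \rrbracket$.

\begin{verbatim}
def is_permutation(images, n):
    """True if the vector of images contains each value of 0 .. 2^n - 1 once."""
    return sorted(images) == list(range(2 ** n))

f_b = [14, 15, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]   # function (b)
f_g = [13, 7, 13, 10, 11, 10, 1, 10, 7, 14, 4, 4, 2, 2, 1, 0]  # function (g)
print(is_permutation(f_b, 4), is_permutation(f_g, 4))          # True False
\end{verbatim}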
546 \section{Description of the selection scheme}
547 \label{section:description}
This section explains how the iteration function $f_0$ can be replaced without losing chaos and randomness.
551 \subsection{Strong connectivity and chaos}
554 Let $f:\mathds{B}^\mathsf{N} \rightarrow \mathds{B}^\mathsf{N}$. Its
555 {\emph{iteration graph}} $\Gamma(f)$ is the directed graph defined as follows.
556 The set of vertices is
557 $\mathds{B}^\mathsf{N}$, and $\forall x\in\mathds{B}^\mathsf{N}, \forall i\in \llbracket1;\mathsf{N}\rrbracket$,
558 $\Gamma(f)$ contains an arc labeled $i$ from $x = (x_1, \hdots, x_\mathsf{N})$ to $(x_1, \hdots, x_{i-1}, f(x)_i, x_{i+1}, \hdots, x_\mathsf{N})$.
559 We have proven in~\cite{GuyeuxThese10} that:
563 \label{IC_chaotiques}
564 The $CI_f(PRNG1,PRNG2)$ generator is chaotic according to Devaney if and only if the graph $\Gamma(f)$ is strongly connected.
Theorem \ref{IC_chaotiques} only focuses on the topological chaos property.
However, it is possible to find chaotic sequences with bad statistical properties, in particular when the iteration function is unbalanced.
570 % \subsection{The balance property}
571 % \label{The Rule For Choosing Balance Mapping}
574 % \renewcommand{\arraystretch}{1.2}
575 % \caption{The ratio of 0 in binary data for different functions}
578 % \begin{tabular}{cccccccccccccc}
580 % iteration times & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 &13\\ \hline
581 % $\neg$ &0.500&0.500&0.500&0.500&0.500&0.500&0.500&0.500&0.500&0.500&0.500 &0.500&0.500\\ \hline
583 % $\textcircled{a}$ &0.496&0.494 &0.492 &0.491 &0.491 &0.490 &0.490 &0.490 &0.490 &0.490 &0.490 &0.490 &0.490\\ \hline
585 % $\textcircled{b}$ &0.500&0.500 &0.500 &0.500 &0.500 &0.500 &0.500 &0.500 &0.500 &0.500 &0.500 &0.500&0.500\\ \hline
587 % $\textcircled{c}$ &0.496&0.494 &0.492 &0.491 &0.490 &0.490 &0.490 &0.490 &0.489 &0.489 &0.489 &0.489 &0.489\\ \hline
589 % $\textcircled{d}$&0.500&0.500 &0.500 &0.500 &0.500 &0.500 &0.500 &0.500 &0.500 &0.500 &0.500 &0.500&0.500\\ \hline
591 % $\textcircled{e}$ &0.500&0.498 &0.497 &0.496 &0.496 &0.495 &0.495 &0.495 &0.495 &0.494 &0.494 &0.494 &0.494\\ \hline
593 % $\textcircled{f}$&0.500&0.500 &0.500 &0.500 &0.500 &0.500 &0.500 &0.500 &0.500 &0.500 &0.500 &0.500&0.500\\ \hline
595 % $\textcircled{g}$ &0.508&0.513 &0.516 &0.519 &0.521 &0.522 &0.523 &0.523 &0.524 &0.524 &0.524 &0.524 &0.524\\ \hline
597 % $\textcircled{h}$ &0.492&0.487 &0.483 &0.479 &0.475 &0.473 &0.471 &0.470 &0.469 &0.469 &0.468 &0.468 &0.468\\ \hline
599 % $\textcircled{i}$&0.484 &0.473 &0.459 &0.446 &0.434 &0.423 &0.413 &0.405 &0.398 &0.392 &0.387 &0.383 &0.380\\ \hline
605 % By analyzing the sequences generated by the new functions which are randomly selected by using Graph with strongly connected components as a selection criterion, the authors have realized that the randomicity of
606 % some sequence are not very ideal.
608 % Let $\mathsf{N} = 4$, then the 10 functions are cited from ~\cite{}. Set initial values from 0 to 15, and at each iteration, every cell will be updated once to create one new input. The maps are Iterated 13 times. Table~\ref{ratio_0} shows the percentages of differences between zeros and ones for all output at each iterations. The balance property of illustrates that the sequence generated directly by
609 % $\neg$, $\textcircled{b}$,$\textcircled{d}$ and $\textcircled{f}$ fit strictly the uniform
610 % distribution. Therefore, effective selection criterion for chaotic iterate functions should be made to
611 % enhance the random statistical properties.
613 % Based on a large number of experiments, the authors propose the following method to select the chaotic
616 \subsection{Obtaining Balanced Maps}
617 \label{The generation of pseudo-random sequence}
619 % The design of the pseudo-random number generator based on discrete chaotic iterations is proposed
620 % in this section, its performance is evaluated in the next one.
621 %\subsection{The output distribution of negation mapping}
623 We now explain how to find balanced iterate functions.
Let $j \in \llbracket 1; 2^\mathsf{N} \rrbracket$ and let $F = \mathcal{F}(f_0)$ be the vector of images of the (balanced) vectorial Boolean negation, that is, $F_{j}=2^\mathsf{N}-j$.
If $F' = \mathcal{F}(f)$, the vector of images of a \emph{balanced} iteration function $f$, is such that its $j$-th component differs from $F_j$ only by its $i$-th bit (starting from the right), then $F'_{2^\mathsf{N}-F'_j}=2^\mathsf{N}-j$.
As $F'_j$ only differs from $F_j$ by its $i$-th bit, we have: $F'_j=F_j-F_j\&2^{i-1}+(j-1)\&2^{i-1}.$
633 %Therefore, the value output using the iteration function $F'$ for its $i-th$ bit in $j-th$ element is computed as:
634 Therefore, the value $\mathsf{f}'_{i,j}$ of the mapping matrix of $F'$ can be computed as follows:
638 \mathsf{f}'_{i,j}=j_\mathsf{N}j_{\mathsf{N}-1}...f(j)_i...j_1 \\
639 =(j-1)-(j-1)\&2^{i-1}+F'_j\&2^{i-1}\\
640 =(j-1)-(j-1)\&2^{i-1}+F_j\&2^{i-1}-F_j\&2^{i-1}+(j-1)\&2^{i-1}\\
644 %In the original situation, the values in $\mathsf{f}$ are effectively distributed according to the standard uniform distribution. Now, there are two values of $j-1$ and no value of $\mathsf{f}_{i,j}$ in $i$-th row of the matrix $\mathsf{f}$. The balance of the outputs distribution has been broken, and they are not uniform any more. To fix it, the output value via $F$ equals $j-1$ should be found and changed it to $\mathsf{f}_{i,j}$, then:
The values in $\mathsf{f}$ are uniformly distributed over each row.
However, in the new matrix $\mathsf{f}'$, the value $j-1$ now appears twice in the $i$-th row, while the value $\mathsf{f}_{i,j}$ no longer appears in it: the uniform distribution is lost.
To restore the balance, the other occurrence of $j-1$ must be found and replaced by $\mathsf{f}_{i,j}$. Let $k$ be the column index such that $\mathsf{f}_{i,k}=j-1$; we thus require $\mathsf{f}'_{i,k}=\mathsf{f}_{i,j}$.
649 %\mathsf{f}_{i,k}=j-1,
653 %\mathsf{f}'_{i,k}=\mathsf{f}_{i,j}
As $\mathsf{f}'_{i,k}=\mathsf{f}_{i,j}$, we have in particular:
659 \mathsf{f}'_{i,k}\&(2^\mathsf{N}-1-2^{i-1})=\mathsf{f}_{i,j}\&(2^\mathsf{N}-1-2^{i-1}).
662 We can thus transform the equation $\mathsf{f}_{i,k}=j-1$ as follows:
666 \mathsf{f}_{i,k}&=&j-1\\
667 (k-1)-(k-1)\&2^{i-1}+F_k\&2^{i-1}& =&j-1\\
668 (k-1)\&(2^\mathsf{N}-1-2^{i-1})+F_k\&2^{i-1}&=&j-1.
Moreover, since $F_k=2^\mathsf{N}-k=(2^\mathsf{N}-1)-(k-1)$, we obtain:
F_k\&2^{i-1}&=&(j-1)\&2^{i-1} \\
(2^\mathsf{N}-k)\&2^{i-1}&=&(j-1)\&2^{i-1}\\
(k-1)\&2^{i-1}&=&2^{i-1}-(j-1)\&2^{i-1}.
682 According to Equations (\ref{eq6}) and (\ref{eq7}), we have:
k-1=(j-1)+(k-1)\&2^{i-1}-F_k\&2^{i-1}, \\
\text{where } F_k=2^\mathsf{N}-k \\
=(j-1)+(k-1)\&2^{i-1}-2^{i-1}+(k-1)\&2^{i-1}\\
=(j-1)-2^{i-1}+((k-1)\&2^{i-1})*2. \\
\text{But, due to Equation (\ref{eq8}), we have:} \\
=(j-1)-2^{i-1}+(2^{i-1}-(j-1)\&2^{i-1})*2 \\
=(j-1)+2^{i-1}-((j-1)\&2^{i-1})*2 \\
=(j-1)-(j-1)\&2^{i-1}+(2^\mathsf{N}-j)\&2^{i-1},\\
\text{where } F_j=2^\mathsf{N}-j \\
=(j-1)-(j-1)\&2^{i-1}+F_j\&2^{i-1} \\
702 \mathsf{f}'_{i,k}=(k-1)-(k-1)\&2^{i-1}+F'_k\&2^{i-1}\\
704 and according to Equation (\ref{eq8}), we thus have:
708 F'_k\&2^{i-1} &=(k-1)\&2^{i-1}.
Now, from Equation (\ref{eq5}), we can deduce that:
716 \mathsf{f}'_{i,k}\&(2^\mathsf{N}-1-2^{i-1})=\mathsf{f}_{i,j}\&(2^\mathsf{N}-1-2^{i-1}) \\
717 \multicolumn{2}{l}{((k-1)-(k-1)\&2^{i-1}+F'_k\&2^{i-1})\&(2^\mathsf{N}-1-2^{i-1})=}\\
718 ~~~~((k-1)-(k-1)\&2^{i-1}+F_k\&2^{i-1})\&(2^\mathsf{N}-1-2^{i-1}) \\
F'_k\&(2^\mathsf{N}-1-2^{i-1})=F_k\&(2^\mathsf{N}-1-2^{i-1}).\\
723 By using both Equations (\ref{eq9}) and (\ref{eq10}), we obtain:
727 F'_k&=(k-1)\&2^{i-1}+F_k\&(2^\mathsf{N}-1-2^{i-1})\\
728 &=(k-1)\&2^{i-1}+F_k-F_k\&2^{i-1}\\
729 &=\mathsf{f}_{i,j} \& 2^{i-1}+(2^N-1-\mathsf{f}_{i,j})-(2^N-1-\mathsf{f}_{i,j}) \& 2^{i-1}\\
730 &=(2^N-1)-[\mathsf{f}_{i,j}+2^{i-1}-2 \times (\mathsf{f}_{i,j} \& 2^{i-1})] \\
731 &=(2^N-1)-((j-1)-(j-1)\&2^{i-1}+F_j\&2^{i-1}\\
&+2^{i-1}-2 \times [((j-1)-(j-1) \& 2^{i-1}+F_j \& 2^{i-1})\& 2^{i-1}])\\
733 &=(2^N-1)-((j-1)-(j-1)\&2^{i-1}-F_j\&2^{i-1}+2^{i-1})\\
734 &=(2^N-1)-((j-1)+2^{i-1}\&(2^N-j)-F_j\&2^{i-1})\\
735 &=(2^N-1)-((j-1)+F_j\&2^{i-1}-F_j\&2^{i-1})\\
742 Finally, from Equation (\ref{eq8}), we can conclude that:
746 k&=\mathsf{f}_{i,j}+1\\
&=(j-1)-(j-1)\&2^{i-1}+F_j\&2^{i-1}+1 \\
&=2^N-(2^N-j)-(j-1)\&2^{i-1}+F_j\&2^{i-1}\\
&=2^N-(F_j-F_j\&2^{i-1}+(j-1)\&2^{i-1}) \\
With such equations (namely, Equations (\ref{eq11}) and (\ref{eq12})), the balance of the new mapping matrix can be preserved while computing the new mapping values. In other words, there is still a bijection from the set of inputs $x$ onto the set of output values $F'(x)$.
Let us give an example. Table~\ref{negation_output} gives the mapping matrix of the vectorial Boolean negation for $\mathsf{N}=4$. Obviously, the values in $\mathsf{f}$ are uniformly distributed: each integer from 0 to 15 occurs exactly once per row.
Now, if we want to set $F'_1$ to 14, then $\mathsf{f}'_{4,1}=0$: there would be two 0s and no 1 in the fourth row of $\mathsf{f}'$. Due to the previous study, we know that $F'_2$ must also be set to 15, which leads to $\mathsf{f}'_{4,2}=1$: the balance is recovered.
760 To sum up, we can determine whether the modification of a bit in the vector of images of the negation function preserves the balance of the outputs or not, by using the following rule (necessary condition):
763 \item then $C = F_j-F_j\&2^{i-1}+(j-1)\&2^{i-1}$,
\item and also $F'_{2^N-C} = 2^N-j$.
This rule, which we call the ``Balance Iteration Mapping Rule'', can be used as a criterion to find iteration functions leading to good CI PRNGs, as depicted in Algorithm \ref{Chaotic iteration}.
Let us finally remark that, with such a process, it is possible to find new iteration functions by changing more than one couple of values in the vectorial Boolean negation $F$. Indeed, 2, 3, 4, and even 8 couples of values can obviously be changed using the Balance Iteration Mapping Rule.
769 For instance, Table~\ref{New vectors of images} contains 8 vectors of images obtained by using Algorithm \ref{Chaotic iteration} one or more times. All of these functions satisfy the hypothesis of Theorem \ref{IC_chaotiques} too, and thus their dynamical systems behave chaotically.
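To illustrate the rule, here is a minimal Python sketch (names are ours) that applies one paired modification to the vector of images of the negation, following Equations (\ref{eq11}) and (\ref{eq12}), and checks that every row of the resulting mapping matrix is still a permutation; it reproduces the vector $F'1$ of Table~\ref{New vectors of images}.

\begin{verbatim}
def balanced_bit_change(F, j, i, N):
    """Switch the i-th bit (1-indexed, from the right) of the j-th image
    (1-indexed) of the negation vector F, and modify the paired image so
    that the mapping matrix stays balanced (Balance Iteration Mapping Rule)."""
    Fp = list(F)
    C = F[j - 1] - (F[j - 1] & 2 ** (i - 1)) + ((j - 1) & 2 ** (i - 1))
    Fp[j - 1] = C                        # new value F'_j
    Fp[(2 ** N - C) - 1] = 2 ** N - j    # paired change F'_{2^N - C}
    return Fp

def is_balanced(F, N):
    """Each row of the mapping matrix must be a permutation of 0 .. 2^N - 1."""
    return all(
        len({(q & ~(1 << i)) | (F[q] & (1 << i)) for q in range(2 ** N)}) == 2 ** N
        for i in range(N))

# Start from the negation (F_j = 2^N - j) and set F'_1 to 14, as in the text.
N = 4
F = [2 ** N - j for j in range(1, 2 ** N + 1)]
Fp = balanced_bit_change(F, j=1, i=1, N=N)
print(Fp, is_balanced(Fp, N))   # the vector F'1 of the table, True
\end{verbatim}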
774 %\subsection{Algorithm for selection}
776 % \renewcommand{\arraystretch}{1.2}
778 % \label{negation_output}
780 % \begin{tabular}{ccccccccccccccccc}
782 % initial & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ \hline
783 % 1st bit &14&15&11& 13& 10& 11& 8& 9& 6& 7& 4& 5& 2& 3& 0 &1\\
784 % 2nd bit &13&12&15& 14& 9 &8 &11& 10& 5 &4& 7& 6& 1& 0& 3& 2\\
785 % 3rd bit &11&10&9& 8& 15& 14& 13& 12& 3& 2& 1& 0& 7& 6& 5& 4\\
786 % 4th bit &7&6&5& 4& 3& 2& 1& 0& 15& 14& 13& 12& 11& 10& 9& 8\\ \hline
791 \textbf{Input:} a vector of images $F$\\
792 \textbf{Output:} a vector of images $r$ or 0
793 \begin{algorithmic}[1]
794 \FOR{$i=0,\dots,2^\mathsf{N}-1$}
799 \IF{$F(i+1) \neq F(i+1)-F(i+1) \& 2^j+i \& 2^j$}
802 \IF{$F(i+1) \neq 2^N-1-i$}
811 \IF{$F(2^N-F(i)) \neq 2^N-i$}
819 \caption{The Balance Iteration Mapping Rule.}
820 \label{Chaotic iteration}
860 \begin{tabular}{|c|l|}
864 $F$&[15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0]\\
866 $F'1$&[14,15,13,12,11,10,9,8,7,6,5,4,3,2,1,0]\\
868 $F'2$&[14,15,13,12,9,10,11,8,7,6,5,4,3,2,1,0]\\
870 $F'3$&[14,15,9,4,11,8,13,10,7,6,5,12,3,2,1,0]\\
872 $F'4$&[14,15,9,12,3,8,13,10,7,6,5,4,11,2,1,0]\\
874 $F'5$&[14,15,9,4,11,8,13,10,7,6,5,12,3,2,0,1]\\
876 $F'6$&[14,15,9,4,11,8,13,10,3,6,5,12,7,2,0,1]\\
878 $F'7$&[14,15,9,4,3,8,13,10,5,2,7,12,11,6,1,0]\\
880 $F'8$&[14,15,5,8,9,2,11,12,3,4,13,6,7,10,0,1]\\
883 \caption{New vectors of images}
884 \label{New vectors of images}
887 \section{Statistical analysis}
889 % \subsection{NIST statistical test suite}
890 % Among the numerous standard tests for pseudo-randomness, a convincing way to prove the quality of the produced sequences is to confront them with the NIST (National Institute of Standards and Technology) Statistical Test Suite SP 800-22, %\footnote{A new version of the Statistical Test Suite (Version 2.0) has been released in August 25, 2008.}
893 % The NIST test suite, SP 800-22, is a statistical package consisting of 15 tests. They were developed to measure the randomness of (arbitrarily long) binary sequences produced by either hardware or software based cryptographic pseudorandom number generators. These tests focus on a variety of different types of non-randomness that could occur in such sequences. These 15 tests include in the NIST test suite are described in \cite{nist}.
897 % \subsection{DieHARD battery of tests}
898 % The DieHARD battery of tests has been the most sophisticated standard for over a decade. Because of
899 % the stringent requirements in the DieHARD test suite, a generator passing this battery of
900 % tests can be considered good as a rule of thumb.
902 % The DieHARD battery of tests consists of 18 different independent statistical tests. This collection
903 % of tests is based on assessing the randomness of bits comprising 32-bit integers obtained from
904 % a random number generator. Each test requires $2^{23}$ 32-bit integers in order to run the full set
906 % Most of the tests in DieHARD return a p-value, which should be uniform on $[0,1)$ if the input file
907 % contains truly independent random bits. Those $p-value$s are obtained by
908 % $p=F(X)$, where $F$ is the assumed distribution of the sample random variable $X$ \textendash often normal.
909 % But that assumed $F$ is just an asymptotic approximation, for which the fit will be worst
910 % in the tails. Thus occasional $p-value$s near 0 or 1, such as 0.0012 or 0.9983, can occur.
911 % An individual test is considered to be failed if $p$ value approaches 1 closely, for example $p>0.9999$.
913 % \subsection{Test results}
We can conclude from Table \ref{The passing rate12} and Table \ref{The passing rate34} that all the generators based on the new iteration functions have successfully passed both the NIST and DieHARD batteries of tests.
916 These results show the good statistical properties of the proposed PRNGs, and thus the interest of the theoretical approach presented in this paper.
919 \renewcommand{\arraystretch}{1.3}
920 \caption{Results through NIST SP 800-22 and DieHARD batteries of tests ($\mathbb{P}_T$ values)}
921 \label{The passing rate12}
923 \begin{tabular}{|l@{}||c@{}|c@{}|c@{}|c@{}|}
925 Method & $F'_1$ & $F'_2$ & $F'_3$ & $F'_4$ \\ \hline\hline
928 %$w^{j}$ & $\{1,..,8\}$ & $\{1,..,8\}$ & $\{1,..,8\}$ & $\{1,..,5\}$ & $\{1,..,5\}$ &$\{1,..,5\}$ \\ \hline \hline
929 Frequency (Monobit) Test & 0.102526 & 0.017912 & 0.171867 & 0.779188 \\ \hline
930 Frequency Test within a Block & 0.085587 & 0.657933 & 0.779188 & 0.897763 \\ \hline
931 Cumulative Sums (Cusum) Test* & 0.264576 & 0.185074 & 0.228927 & 0.736333 \\ \hline
932 Runs Test & 0.739918 & 0.334538 & 0.798139 & 0.834308 \\ \hline
933 Test for the Longest Run of Ones in a Block & 0.678686 & 0.474986 & 0.637119 & 0.037566 \\ \hline
934 Binary Matrix Rank Test & 0.816537 & 0.534146 & 0.249284 & 0.883171 \\ \hline
935 Discrete Fourier Transform (Spectral) Test & 0.798139 & 0.474986 & 0.014550 & 0.366918 \\ \hline
936 Non-overlapping Template Matching Test* & 0.489304 & 0.507177 & 0.477005 & 0.557597 \\ \hline
937 Overlapping Template Matching Test & 0.514124 & 0.171867 & 0.162606 & 0.816537 \\ \hline
Maurer's ``Universal Statistical'' Test & 0.249284 & 0.171867 & 0.096578 & 0.419021 \\ \hline
939 Approximate Entropy Test & 0.236810 & 0.514124 & 0.262249 & 0.816537 \\ \hline
940 Random Excursions Test* & 0.353142 & 0.403219 & 0.229832 & 0.481025 \\ \hline
941 Random Excursions Variant Test* & 0.412987 & 0.369181 & 0.313171 & 0.513679 \\ \hline
942 Serial Test* (m=10) & 0.304324 & 0.102735 & 0.270033 & 0.384058 \\ \hline
943 Linear Complexity Test &0.759756 & 0.153763 & 0.883171 & 0.171867 \\ \hline
944 Success &15/15&15/15&15/15&15/15 \\ \hline\hline
945 Diehard Test &pass&pass&pass&pass\\ \hline
950 \renewcommand{\arraystretch}{1.3}
951 \caption{Results through NIST SP 800-22 and DieHARD batteries of tests ($\mathbb{P}_T$ values)}
952 \label{The passing rate34}
954 \begin{tabular}{|l@{}|c@{}|c@{}|c@{}|c@{}|}
956 Method & $F'_5$ & $F'_6$ & $F'_7$ & $F'_8$\\ \hline\hline
959 %$w^{j}$ & $\{1,..,8\}$ & $\{1,..,8\}$ & $\{1,..,8\}$ & $\{1,..,5\}$ & $\{1,..,5\}$ &$\{1,..,5\}$ \\ \hline \hline
960 Frequency (Monobit) Test & 0.971699 & 0.275709 & 0.137282 & 0.699313 \\ \hline
961 Frequency Test within a Block & 0.851383 & 0.383827 & 0.262249 & 0.122325 \\ \hline
962 Cumulative Sums (Cusum) Test* & 0.462694 & 0.169816 & 0.391715 & 0.729111\\ \hline
963 Runs Test & 0.153763 & 0.719747 & 0.534146 & 0.262249 \\ \hline
964 Test for the Longest Run of Ones in a Block & 0.366918 & 0.739918 & 0.236810 & 0.759756 \\ \hline
965 Binary Matrix Rank Test & 0.739918 & 0.037566 & 0.798139 & 0.867692 \\ \hline
966 Discrete Fourier Transform (Spectral) Test & 0.595549 & 0.115387 & 0.798139 & 0.153763 \\ \hline
967 Non-overlapping Template Matching Test* & 0.452278 & 0.505673 & 0.541034 & 0.497140 \\ \hline
968 Overlapping Template Matching Test & 0.319084 & 0.678686 & 0.534146 & 0.798139 \\ \hline
Maurer's ``Universal Statistical'' Test & 0.171867 & 0.798139 & 0.115387 & 0.275709 \\ \hline
970 Approximate Entropy Test & 0.474986 & 0.080519 & 0.000001 & 0.779188\\ \hline
971 Random Excursions Test* & 0.317506 & 0.602978 & 0.362746 & 0.416274 \\ \hline
972 Random Excursions Variant Test* & 0.274813 & 0.391166 & 0.454157 & 0.341012 \\ \hline
973 Serial Test* (m=10) & 0.456684 & 0.125973 & 0.404429 & 0.253197 \\ \hline
974 Linear Complexity Test & 0.366918 & 0.319084 & 0.678686 & 0.075719 \\ \hline
975 Success &15/15&15/15&15/15&15/15 \\ \hline\hline
976 Diehard Test &pass&pass&pass&pass\\ \hline