\rrbracket^{\mathds{N}} \times \mathds{B}^n$.
Theoretically speaking, such iterations of $F_f$ are thus a formal
model of this kind of recurrent neural network. In the rest of this
paper, we will call such multilayer perceptrons ``CI-MLP($f$)'', which
stands for ``Chaotic Iterations based MultiLayer Perceptron''.
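As an illustration only, the iterations underlying CI-MLP($f$) can be sketched in a few lines. This toy assumes the usual chaotic-iterations update (at each step, only the cell designated by the strategy term is refreshed by $f$) and uses the vectorial negation as the example function; all names are ours, not the authors' implementation.

```python
# Toy sketch of chaotic iterations over B^n (assumed update rule:
# at each step only the cell chosen by the strategy term is updated by f).
def chaotic_iterations(f, x0, strategy):
    """f: B^n -> B^n, x0: initial Boolean vector,
    strategy: sequence of cell indices in [[1, n]]."""
    x = list(x0)
    for s in strategy:
        x[s - 1] = f(x)[s - 1]  # refresh only the s-th component
        yield tuple(x)

# Example with the vectorial negation f(x) = (not x_1, ..., not x_n)
neg = lambda x: [1 - b for b in x]
orbit = list(chaotic_iterations(neg, (0, 0, 1), [1, 3, 2]))
# orbit == [(1, 0, 1), (1, 0, 0), (1, 1, 0)]
```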
Checking if CI-MLP($f$) behaves chaotically according to Devaney's
Thereafter we give, for the different learning setups and data sets,
the mean prediction success rate obtained for each output. Such a rate
represents the percentage of input-output pairs belonging to the test
subset for which the corresponding output value was correctly
predicted. These values are averaged over 10~trainings with random
subset construction and random weight and bias initialization.
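The averaging protocol described above can be sketched as follows; the data layout and function name are illustrative assumptions, not the authors' code:

```python
# Toy sketch: per-output prediction success rate (percentage of test pairs
# whose output value is correctly predicted), averaged over several trainings.
def success_rates(runs):
    """runs: one (predicted, expected) pair per training; each entry is a
    list of output vectors over the test subset."""
    n_outputs = len(runs[0][0][0])
    totals = [0.0] * n_outputs
    for predicted, expected in runs:
        for i in range(n_outputs):
            hits = sum(p[i] == e[i] for p, e in zip(predicted, expected))
            totals[i] += 100.0 * hits / len(expected)
    return [t / len(runs) for t in totals]  # mean percentage per output

rates = success_rates([
    ([(0, 1), (1, 1)],   # predicted outputs on the test subset
     [(0, 0), (1, 1)]),  # expected outputs
])
# rates == [100.0, 50.0]
```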
Firstly, neural networks having 10 and 25~hidden neurons are trained,
with a maximum number of epochs that takes its value in
first one the number of inputs follows the increase of the Boolean
vectors coding the configurations. In this latter case, the coding gives
finer information on the configuration evolution.
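Since the configurations of Table~\ref{tab2} are expressed with Gray code, a minimal sketch of the standard reflected-binary encoding may be useful; the function names are ours. Its defining property, that consecutive indices yield codewords differing in exactly one bit, is what makes the coding a finer description of the configuration evolution.

```python
# Standard reflected-binary (Gray) encoding of a configuration index.
def to_gray(n: int) -> int:
    return n ^ (n >> 1)

# Most-significant-bit-first Boolean vector of the Gray codeword.
def gray_bits(n: int, width: int):
    g = to_gray(n)
    return [(g >> (width - 1 - i)) & 1 for i in range(width)]

# Consecutive codes differ in a single bit: 0 -> 1 -> 3 -> 2 for n = 0..3.
codes = [to_gray(k) for k in range(4)]
# codes == [0, 1, 3, 2]
```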
\begin{table}[b]
\caption{Prediction success rates for configurations expressed with Gray code}
\label{tab2}
systematically. Therefore, we provide a refinement of the second
scheme: each output is learned by a different ANN. Table~\ref{tab3}
presents the results for this approach. In any case, whatever the
considered feedforward network topologies, the maximum epoch number,
and the kind of iterations, the configuration success rate is slightly
improved. Moreover, the strategy prediction rates reach almost
12\%, whereas in Table~\ref{tab2} they never exceed 1.5\%. Despite
when facing a truly chaotic phenomenon. A comparison between learning
success rate and prediction quality will be carried out. Concrete
consequences in the biology, physics, and computer security fields
will then be stated.
% \appendix{}