compute the new output $\left(x^{t+1}_1,\dots,x^{t+1}_n\right)$, while
the remaining input receives a new integer value $S^t \in
\llbracket1;n\rrbracket$ provided by the outside world (for instance,
a user or an environment that selects which component to update).
\end{itemize}
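The update performed at each time step can be made explicit. The following displayed formula is the usual chaotic-iterations rule that this construction relies on, written with the notation above; it is a sketch, and the paper's own definition of $F_f$ should be preferred if it differs.

```latex
% Chaotic iterations: only the component designated by the strategy S^t
% is updated through f = (f_1, ..., f_n); the others are kept unchanged.
\begin{equation*}
x^{t+1}_i =
\begin{cases}
f_i\left(x^t_1, \dots, x^t_n\right) & \text{if } i = S^t, \\
x^t_i & \text{otherwise,}
\end{cases}
\qquad i \in \llbracket 1; n \rrbracket .
\end{equation*}
```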
The topological behavior of these particular neural networks can be
studied through the iterations of $F_f$ on $\llbracket 1; n
\rrbracket^{\mathds{N}} \times \mathds{B}^n$.
Theoretically speaking, such iterations of $F_f$ are thus a formal
model of these kind of recurrent neural networks. In the rest of this
paper, we will call such multilayer perceptrons ``CI-MLP($f$)'', which
stands for ``Chaotic Iterations based MultiLayer Perceptron''.
Checking whether CI-MLP($f$) behaves chaotically according to
Devaney's definition is intrinsically complicated: this property
cannot be decomposed or simplified.
Furthermore, these recurrent neural networks exhibit the instability
property:
\begin{definition}
A dynamical system $\left( \mathcal{X}, f\right)$ is {\bf unstable}
if, for all $x \in \mathcal{X}$, the orbit $\gamma_x \colon n \in
\mathds{N} \longmapsto f^n(x)$ is unstable, that is: $\exists
\varepsilon > 0$, $\forall \delta > 0$, $\exists y \in \mathcal{X}$,
$\exists n \in \mathds{N}$, $d(x,y) < \delta$ and
$d\left(\gamma_x(n), \gamma_y(n)\right) \geqslant \varepsilon$.
are compared.
Thereafter we give, for the different learning setups and data sets,
the mean prediction success rate obtained for each output. Such a rate
represents the percentage of input-output pairs belonging to the test
subset for which the corresponding output value was correctly
predicted. These values are computed over 10~trainings, each with a
random construction of the subsets and a random initialization of the
weights and biases.
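As an illustration, the success rate described above can be sketched as follows; the function names and the toy data here are ours, for exposition only, and are not taken from the experimental code.

```python
def success_rate(predicted, expected):
    """Percentage of test pairs whose output value is correctly predicted."""
    hits = sum(p == e for p, e in zip(predicted, expected))
    return 100.0 * hits / len(expected)

def mean_success_rate(runs):
    """Average the rate over several trainings (10 in the experiments)."""
    rates = [success_rate(p, e) for p, e in runs]
    return sum(rates) / len(rates)

# Toy example with two "trainings" evaluated on four test pairs each.
runs = [([0, 1, 1, 0], [0, 1, 0, 0]),   # 3/4 correct -> 75%
        ([1, 1, 0, 0], [1, 1, 0, 1])]   # 3/4 correct -> 75%
print(mean_success_rate(runs))          # -> 75.0
```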
\centering {\small
\begin{tabular}{|c|c||c|c|c|}
\hline
\multicolumn{5}{|c|}{Networks topology: 6~inputs, 5~outputs, and one hidden layer} \\
\hline
\hline
\multicolumn{2}{|c||}{Hidden neurons} & \multicolumn{3}{c|}{10 neurons} \\
25~neurons and 500~epochs). We also notice that the learning of
outputs~(2) and~(3) is more difficult. Conversely, for the
non-chaotic case the simplest training setup is enough to predict
configurations. For all these feedforward network topologies and all
outputs the obtained results for the non-chaotic case outperform the
chaotic ones. Finally, the rates for the strategies show that the
different networks are unable to learn them.
first one, the number of inputs follows the increase of the Boolean
vectors coding the configurations. In the latter case, the coding gives
finer information on the configuration evolution.
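Table~\ref{tab2} below reports rates for configurations expressed with Gray code. For reference, the conversion between a plain binary index and its reflected Gray code is the usual one-line transform; this helper is illustrative and not taken from the experiments.

```python
def to_gray(b: int) -> int:
    """Reflected binary Gray code of a non-negative integer."""
    return b ^ (b >> 1)

def from_gray(g: int) -> int:
    """Inverse transform: recover the plain binary value."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Successive configurations differ in exactly one bit under Gray coding.
codes = [to_gray(i) for i in range(8)]
print([format(c, "03b") for c in codes])
# -> ['000', '001', '011', '010', '110', '111', '101', '100']
```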
\begin{table}[b]
\caption{Prediction success rates for configurations expressed with Gray code}
\label{tab2}
\centering
\begin{tabular}{|c|c||c|c|c|}
\hline
\multicolumn{5}{|c|}{Networks topology: 3~inputs, 2~outputs, and one hidden layer} \\
\hline
\hline
& Hidden neurons & \multicolumn{3}{c|}{10 neurons} \\
systematically. Therefore, we provide a refinement of the second
scheme: each output is learned by a different ANN. Table~\ref{tab3}
presents the results for this approach. In any case, whatever the
considered feedforward network topologies, the maximum epoch number,
and the kind of iterations, the configuration success rate is slightly
improved. Moreover, the strategies predictions rates reach almost
12\%, whereas in Table~\ref{tab2} they never exceed 1.5\%. Despite
\centering
\begin{tabular}{|c||c|c|c|}
\hline
\multicolumn{4}{|c|}{Networks topology: 3~inputs, 1~output, and one hidden layer} \\
\hline
\hline
Epochs & 125 & 250 & 500 \\
when facing a truly chaotic phenomenon. A comparison between the
learning success rate and the prediction quality will be carried
out. Concrete
consequences in biology, physics, and computer science security fields
will then be stated.
% \appendix{}