-configurations. For all network topologies and all outputs the
-obtained results for the non-chaotic case outperform the chaotic
-ones. Finally, the rates for the strategies show that the different
-networks are unable to learn them.
-
-For the second coding scheme (\textit{i.e.}, with Gray Codes)
-Table~\ref{tab2} shows that any network
-learns about five times more non-chaotic configurations than chaotic
-ones. As in the previous scheme, the strategies cannot be predicted.
+configurations. For all these feedforward network topologies and all
+outputs, the results obtained in the non-chaotic case outperform those
+obtained in the chaotic one. Finally, the rates for the strategies
+show that the different feedforward networks are unable to learn them.
+
+For the second coding scheme (\textit{i.e.}, with Gray Codes)
+Table~\ref{tab2} shows that every network learns about five times as
+many non-chaotic configurations as chaotic ones. As in the previous
+scheme, the strategies cannot be predicted.
+Figures~\ref{Fig:chaotic_predictions} and
+\ref{Fig:non-chaotic_predictions} present the predictions given by two
+feedforward multilayer perceptrons that were respectively trained to
+learn chaotic and non-chaotic data, using the second coding scheme.
+Each figure shows, for each sample of the test subset (577~samples,
+\textit{i.e.}, 25\% of the 2304~samples), both the configuration that
+should have been predicted and the one given by the multilayer
+perceptron. For the chaotic data the predictions clearly lie far from
+the expected configurations, whereas the better predictions obtained
+for the non-chaotic data reflect their regularity.
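The second coding scheme relies on reflected binary Gray codes, in which consecutive integers differ in exactly one bit. A minimal sketch of this encoding (function names are illustrative, not taken from the paper's implementation):

```python
def to_gray(n: int) -> int:
    # Reflected binary Gray code: XOR the value with itself shifted right.
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    # Invert the encoding by cumulatively XOR-ing the shifted value.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Successive Gray codes differ in exactly one bit, which is the
# property that makes this coding attractive for the networks' inputs.
codes = [to_gray(i) for i in range(8)]
assert codes == [0, 1, 3, 2, 6, 7, 5, 4]
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
assert all(from_gray(to_gray(i)) == i for i in range(256))
```

Because a unit step in the encoded value flips a single input bit, small changes between consecutive configurations produce small changes at the network's inputs, which is a plausible reason the scheme is considered alongside the plain binary one.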